query_id (stringlengths 32–32) | query (stringlengths 5–5.38k) | positive_passages (listlengths 1–23) | negative_passages (listlengths 7–25) | subset (stringclasses, 5 values) |
---|---|---|---|---|
7c944862dfcc3f89cd284ac16b50f486
|
Grouping Synonymous Sentences from a Parallel Corpus
|
[
{
"docid": "4361b4d2d77d22f46b9cd5920a4822c8",
"text": "While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text. Our approach yields phrasal and single word lexical paraphrases as well as syntactic paraphrases.",
"title": ""
}
] |
[
{
"docid": "ee5eb52575cf01b825b244d9391c6f5c",
"text": "We present a data-driven framework called generative adversarial privacy (GAP). Inspired by recent advancements in generative adversarial networks (GANs), GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. We show that for appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. We also evaluate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI face database. KeywordsData Privacy, Differential Privacy, Adversarial Learning, Generative Adversarial Networks, Minimax Games, Information Theory",
"title": ""
},
{
"docid": "30ba59e335d9b448b29d2528b5e08a5c",
"text": "Classification of alcoholic electroencephalogram (EEG) signals is a challenging job in biomedical research for diagnosis and treatment of brain diseases of alcoholic people. The aim of this study was to introduce a robust method that can automatically identify alcoholic EEG signals based on time–frequency (T–F) image information as they convey key characteristics of EEG signals. In this paper, we propose a new hybrid method to classify automatically the alcoholic and control EEG signals. The proposed scheme is based on time–frequency images, texture image feature extraction and nonnegative least squares classifier (NNLS). In T–F analysis, the spectrogram of the short-time Fourier transform is considered. The obtained T–F images are then converted into 8-bit grayscale images. Co-occurrence of the histograms of oriented gradients (CoHOG) and Eig(Hess)-CoHOG features are extracted from T–F images. Finally, obtained features are fed into NNLS classifier as input for classify alcoholic and control EEG signals. To verify the effectiveness of the proposed approach, we replace the NNLS classifier by artificial neural networks, k-nearest neighbor, linear discriminant analysis and support vector machine classifier separately, with the same features. Experimental outcomes along with comparative evaluations with the state-of-the-art algorithms manifest that the proposed method outperforms competing algorithms. The experimental outcomes are promising, and it can be anticipated that upon its implementation in clinical practice, the proposed scheme will alleviate the onus of the physicians and expedite neurological diseases diagnosis and research.",
"title": ""
},
{
"docid": "37c005b87b3ccdfad86c760ecba7b8de",
"text": "Intelligent processing of complex signals such as images is often performed by a hierarchy of nonlinear processing layers, such as a deep net or an object recognition cascade. Joint estimation of the parameters of all the layers is a difficult nonconvex optimization. We describe a general strategy to learn the parameters and, to some extent, the architecture of nested systems, which we call themethod of auxiliary coordinates (MAC) . This replaces the original problem involving a deeply nested function with a constrained problem involving a different function in an augmented space without nesting. The constrained problem may be solved with penalty-based methods using alternating optimization over the parameters and the auxiliary coordinates. MAC has provable convergence, is easy to implement reusing existing algorithms for single layers, can be parallelized trivially and massively, applies even when parameter derivatives are not available or not desirable, can perform some model selection on the fly, and is competitive with stateof-the-art nonlinear optimizers even in the serial computation setting, often providing reasonable models within a few iterations. The continued increase in recent years in data availability and processing power has enabled the development and practical applicability of ever more powerful models in sta tistical machine learning, for example to recognize faces o r speech, or to translate natural language. However, physical limitations in serial computation suggest that scalabl e processing will require algorithms that can be massively parallelized, so they can profit from the thousands of inexpensive processors available in cloud computing. We focus on hierarchical, or nested, processing architectures. As a particular but important example, consider deep neuAppearing in Proceedings of the 17 International Conference on Artificial Intelligence and Statistics (AISTATS) 2014, Reykjavik, Iceland. JMLR: W&CP volume 33. Copyright 2014 by the authors. ral nets (fig. 1), which were originally inspired by biological systems such as the visual and auditory cortex in the mammalian brain (Serre et al., 2007), and which have been proven very successful at learning sophisticated task s, such as recognizing faces or speech, when trained on data.",
"title": ""
},
{
"docid": "82535c102f41dc9d47aa65bd71ca23be",
"text": "We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human and computer-controlled entities, and that the existence of a virtual image increases tele-presence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met.",
"title": ""
},
{
"docid": "10d8bbea398444a3fb6e09c4def01172",
"text": "INTRODUCTION\nRecent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011.\n\n\nMETHOD\nThe current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005-2009.\n\n\nRESULTS\nResults show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because of inattentive and risky driving.",
"title": ""
},
{
"docid": "0798ed2ff387823bcd7572a9ddf6a5e1",
"text": "We present a novel algorithm for point cloud segmentation using group convolutions. Our approach uses a radial basis function (RBF) based variational autoencoder (VAE) network. We transform unstructured point clouds into regular voxel grids and use subvoxels within each voxel to encode the local geometry using a VAE architecture. In order to handle sparse distribution of points within each voxel, we use RBF to compute a local, continuous representation within each subvoxel. We extend group equivariant convolutions to 3D point cloud processing and increase the expressive capacity of the neural network. The combination of RBF and VAE results in a good volumetric representation that can handle noisy point cloud datasets and is more robust for learning. We highlight the performance on standard benchmarks and compare with prior methods. In practice, our approach outperforms state-of-the-art segmentation algorithms on the ShapeNet and S3DIS datasets.",
"title": ""
},
{
"docid": "3f9e5be7bfe8c28291758b0670afc61c",
"text": "Grayscale error di usion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. In color error di usion what color to render is a major concern in addition to nding optimal dot patterns. This article presents a survey of key methods for artifact reduction in grayscale and color error di usion. The linear gain model by Kite et al. replaces the thresholding quantizer with a scalar gain plus additive noise. They show that the sharpening is proportional to the scalar gain. Kite et al. derive the sharpness control parameter value in threshold modulation (Eschbach and Knox, 1991) to compensate linear distortion. False textures at mid-gray (Fan and Eschbach, 1994) are due to limit cycles, which can be broken up by using a deterministic bit ipping quantizer (Damera-Venkata and Evans, 2001). Several other variations on grayscale error di usion have been proposed to reduce false textures in shadow and highlight regions, including green noise halftoning (Levien, 1993) and tone-dependent error di usion (Li and Allebach, 2002). Color error di usion ideally requires the quantization error to be di used to frequencies and colors, to which the HVS is least sensitive. We review the following approaches: color plane separable (Kolpatzik and Bouman 1992) design; perceptual quantization (Shaked et al. 1996, Haneishi et al. 1996) ; green noise extensions (Lau et al. 2000); and matrix-valued error lters (Damera-Venkata and Evans, 2001).",
"title": ""
},
{
"docid": "ebaf73ec27127016f3327e6a0b88abff",
"text": "A hospital is a health care organization providing patient treatment by expert physicians, surgeons and equipments. A report from a health care accreditation group says that miscommunication between patients and health care providers is the reason for the gap in providing emergency medical care to people in need. In developing countries, illiteracy is the major key root for deaths resulting from uncertain diseases constituting a serious public health problem. Mentally affected, differently abled and unconscious patients can’t communicate about their medical history to the medical practitioners. Also, Medical practitioners can’t edit or view DICOM images instantly. Our aim is to provide palm vein pattern recognition based medical record retrieval system, using cloud computing for the above mentioned people. Distributed computing technology is coming in the new forms as Grid computing and Cloud computing. These new forms are assured to bring Information Technology (IT) as a service. In this paper, we have described how these new forms of distributed computing will be helpful for modern health care industries. Cloud Computing is germinating its benefit to industrial sectors especially in medical scenarios. In Cloud Computing, IT-related capabilities and resources are provided as services, via the distributed computing on-demand. This paper is concerned with sprouting software as a service (SaaS) by means of Cloud computing with an aim to bring emergency health care sector in an umbrella with physical secured patient records. In framing the emergency healthcare treatment, the crucial thing considered necessary to decide about patients is their previous health conduct records. Thus a ubiquitous access to appropriate records is essential. Palm vein pattern recognition promises a secured patient record access. Likewise our paper reveals an efficient means to view, edit or transfer the DICOM images instantly which was a challenging task for medical practitioners in the past years. We have developed two services for health care. 1. Cloud based Palm vein recognition system 2. Distributed Medical image processing tools for medical practitioners.",
"title": ""
},
{
"docid": "4eb1e28d62af4a47a2e8dc795b89cc09",
"text": "This paper describes a new computational finance approach. This approach combines pattern recognition techniques with an evolutionary computation kernel applied to financial markets time series in order to optimize trading strategies. Moreover, for pattern matching a template-based approach is used in order to describe the desired trading patterns. The parameters for the pattern templates, as well as, for the decision making rules are optimized using a genetic algorithm kernel. The approach was tested considering actual data series and presents a robust profitable trading strategy which clearly beats the market, S&P 500 index, reducing the investment risk significantly.",
"title": ""
},
{
"docid": "764eba2c2763db6dce6c87170e06d0f8",
"text": "Kansei Engineering was developed as a consumer-oriented technology for new product development. It is defined as \"translating technology of a consumer's feeling and image for a product into design elements\". Kansei Engineering (KE) technology is classified into three types, KE Type I, II, and III. KE Type I is a category classification on the new product toward the design elements. Type II utilizes the current computer technologies such as Expert System, Neural Network Model and Genetic Algorithm. Type III is a model using a mathematical structure. Kansei Engineering has permeated Japanese industries, including automotive, electrical appliance, construction, clothing and so forth. The successful companies using Kansei Engineering benefited from good sales regarding the new consumer-oriented products. Relevance to industry Kansei Engineering is utilized in the automotive, electrical appliance, construction, clothing and other industries. This paper provides help to potential product designers in these industries.",
"title": ""
},
{
"docid": "132880bc2af0e8ce5e0dc04b0ff397f6",
"text": "The need to have equitable access to quality healthcare is enshrined in the United Nations (UN) Sustainable Development Goals (SDGs), which defines the developmental agenda of the UN for the next 15 years. In particular, the third SDG focuses on the need to “ensure healthy lives and promote well-being for all at all ages”. In this paper, we build the case that 5G wireless technology, along with concomitant emerging technologies (such as IoT, big data, artificial intelligence and machine learning), will transform global healthcare systems in the near future. Our optimism around 5G-enabled healthcare stems from a confluence of significant technical pushes that are already at play: apart from the availability of high-throughput low-latency wireless connectivity, other significant factors include the democratization of computing through cloud computing; the democratization of Artificial Intelligence (AI) and cognitive computing (e.g., IBM Watson); and the commoditization of data through crowdsourcing and digital exhaust. These technologies together can finally crack a dysfunctional healthcare system that has largely been impervious to technological innovations. We highlight the persistent deficiencies of the current healthcare system and then demonstrate how the 5G-enabled healthcare revolution can fix these deficiencies. We also highlight open technical research challenges, and potential pitfalls, that may hinder the development of such a 5G-enabled health revolution.",
"title": ""
},
{
"docid": "066eef8e511fac1f842c699f8efccd6b",
"text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "cd8cad6445b081e020d90eb488838833",
"text": "Heavy metal pollution has become one of the most serious environmental problems today. The treatment of heavy metals is of special concern due to their recalcitrance and persistence in the environment. In recent years, various methods for heavy metal removal from wastewater have been extensively studied. This paper reviews the current methods that have been used to treat heavy metal wastewater and evaluates these techniques. These technologies include chemical precipitation, ion-exchange, adsorption, membrane filtration, coagulation-flocculation, flotation and electrochemical methods. About 185 published studies (1988-2010) are reviewed in this paper. It is evident from the literature survey articles that ion-exchange, adsorption and membrane filtration are the most frequently studied for the treatment of heavy metal wastewater.",
"title": ""
},
{
"docid": "062149cd37d1e9f04f32bd6b713f10ab",
"text": "Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an ``inverse model,'' a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website (https://github.com/ToniCreswell/InvertingGAN).",
"title": ""
},
{
"docid": "8bdd02547be77f4c825c9aed8016ddf8",
"text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.",
"title": ""
},
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
},
{
"docid": "c3eaaa0812eb9ab7e5402339733daa28",
"text": "BACKGROUND\nHypovitaminosis D and a low calcium intake contribute to increased parathyroid function in elderly persons. Calcium and vitamin D supplements reduce this secondary hyperparathyroidism, but whether such supplements reduce the risk of hip fractures among elderly people is not known.\n\n\nMETHODS\nWe studied the effects of supplementation with vitamin D3 (cholecalciferol) and calcium on the frequency of hip fractures and other nonvertebral fractures, identified radiologically, in 3270 healthy ambulatory women (mean [+/- SD] age, 84 +/- 6 years). Each day for 18 months, 1634 women received tricalcium phosphate (containing 1.2 g of elemental calcium) and 20 micrograms (800 IU) of vitamin D3, and 1636 women received a double placebo. We measured serial serum parathyroid hormone and 25-hydroxyvitamin D (25(OH)D) concentrations in 142 women and determined the femoral bone mineral density at base line and after 18 months in 56 women.\n\n\nRESULTS\nAmong the women who completed the 18-month study, the number of hip fractures was 43 percent lower (P = 0.043) and the total number of nonvertebral fractures was 32 percent lower (P = 0.015) among the women treated with vitamin D3 and calcium than among those who received placebo. The results of analyses according to active treatment and according to intention to treat were similar. In the vitamin D3-calcium group, the mean serum parathyroid hormone concentration had decreased by 44 percent from the base-line value at 18 months (P < 0.001) and the serum 25(OH)D concentration had increased by 162 percent over the base-line value (P < 0.001). The bone density of the proximal femur increased 2.7 percent in the vitamin D3-calcium group and decreased 4.6 percent in the placebo group (P < 0.001).\n\n\nCONCLUSIONS\nSupplementation with vitamin D3 and calcium reduces the risk of hip fractures and other nonvertebral fractures among elderly women.",
"title": ""
},
{
"docid": "0ff3e49a700a776c1a8f748d78bc4b73",
"text": "Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.",
"title": ""
},
{
"docid": "895f912a24f00984922c586880f77dee",
"text": "Massive multiple-input multiple-output technology has been considered a breakthrough in wireless communication systems. It consists of equipping a base station with a large number of antennas to serve many active users in the same time-frequency block. Among its underlying advantages is the possibility to focus transmitted signal energy into very short-range areas, which will provide huge improvements in terms of system capacity. However, while this new concept renders many interesting benefits, it brings up new challenges that have called the attention of both industry and academia: channel state information acquisition, channel feedback, instantaneous reciprocity, statistical reciprocity, architectures, and hardware impairments, just to mention a few. This paper presents an overview of the basic concepts of massive multiple-input multiple-output, with a focus on the challenges and opportunities, based on contemporary research.",
"title": ""
},
{
"docid": "122e31e413efd0f96860661d461ce780",
"text": "Recent years have seen a dramatic increase in research and development of scientific workflow systems. These systems promise to make scientists more productive by automating data-driven and computeintensive analyses. Despite many early achievements, the long-term success of scientific workflow technology critically depends on making these systems useable by ‘‘mere mortals’’, i.e., scientists who have a very good idea of the analysis methods they wish to assemble, but who are neither software developers nor scripting-language experts. With these users in mind, we identify a set of desiderata for scientific workflow systems crucial for enabling scientists to model and design the workflows they wish to automate themselves. As a first step towards meeting these requirements, we also show how the collection-oriented modeling and design (comad) approach for scientific workflows, implemented within the Kepler system, can help provide these critical, design-oriented capabilities to scientists. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
6c0a4d1ad7c8f4d369cb866fda7e4777
|
EduRank: A Collaborative Filtering Approach to Personalization in E-learning
|
[
{
"docid": "45d57f01218522609d6ef93de61ea491",
"text": "We consider the problem of finding a ranking of a set of elements that is “closest to” a given set of input rankings of the elements; more precisely, we want to find a permutation that minimizes the Kendall-tau distance to the input rankings, where the Kendall-tau distance is defined as the sum over all input rankings of the number of pairs of elements that are in a different order in the input ranking than in the output ranking. If the input rankings are permutations, this problem is known as the Kemeny rank aggregation problem. This problem arises for example in building meta-search engines for Web search, aggregating viewers’ rankings of movies, or giving recommendations to a user based on several different criteria, where we can think of having one ranking of the alternatives for each criterion. Many of the approximation algorithms and heuristics that have been proposed in the literature are either positional, comparison sort or local search algorithms. The rank aggregation problem is a special case of the (weighted) feedback arc set problem, but in the feedback arc set problem we use only information about the preferred relative ordering of pairs of elements to find a ranking of the elements, whereas in the case of the rank aggregation problem, we have additional information in the form of the complete input rankings. The positional methods are the only algorithms that use this additional information. Since the rank aggregation problem is NP-hard, none of these algorithms is guaranteed to find the optimal solution, and different algorithms will provide different solutions. We give theoretical and practical evidence that a combination of these different approaches gives algorithms that are superior to the individual algorithms. Theoretically, we give lower bounds on the performance for many of the “pure” methods. Practically, we perform an extensive evaluation of the “pure” algorithms and ∗Institute for Theoretical Computer Science, Tsinghua University, Beijing, China. frans@mail.tsinghua.edu.cn. Research performed in part while the author was at Nature Source Genetics, Ithaca, NY. †Institute for Theoretical Computer Science, Tsinghua University, Beijing, China. anke@mail.tsinghua.edu.cn. Research partly supported by NSF grant CCF-0514628 and performed in part while the author was at the School of Operations Research and Information Engineering at Cornell University, Ithaca, NY. combinations of different approaches. We give three recommendations for which (combination of) methods to use based on whether a user wants to have a very fast, fast or reasonably fast algorithm.",
"title": ""
}
] |
[
{
"docid": "e73c560679c9d856390c672ebc66d571",
"text": "{ This paper describes a complete coverage path planning and guidance methodology for a mobile robot, having the automatic oor cleaning of large industrial areas as a target application. The proposed algorithms rely on the a priori knowledge of a 2D map of the environment and cope with unexpected obstacles not represented on the map. A template based approach is used to control the path execution, thus incorporating, in a natural way, the kinematic and the geometric model of the mobile robot on the path planning procedure. The novelty of the proposed approach is the capability of the path planner to deal with a priori mapped or unexpected obstacles in the middle of the working space. If unmapped obstacles permanently block the planned trajectory, the path tracking control avoids these obstacles. The paper presents experimental results with a LABMATE mobile robot, connrming the feasibility of the total coverage path and the robustness of the path tracking behaviour based control.",
"title": ""
},
{
"docid": "47790125ba78325a4455fcdbae96058a",
"text": "Today solar energy became an important resource of energy generation. But the efficiency of solar system is very low. To increase its efficiency MPPT techniques are used. The main disadvantage of solar system is its variable voltage. And to obtained a stable voltage from solar panels DC-DC converters are used . DC-DC converters are of mainly three types buck, boost and cuk. This paper presents use of cuk converter with MPPT technique. Generally buck and boost converters used. But by using cuk converter we can step up or step down the voltage level according to the load requirement. The circuit has been simulated by MATLAB and Simulink softwares.",
"title": ""
},
{
"docid": "fc3af1e7ebc13605938d8f8238d9c8bd",
"text": "Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different \"detectability\" patterns caused by deformations, occlusion and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves state-of-the-art (by 4.1% AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.",
"title": ""
},
{
"docid": "0d23abee044cf8c793a285146f0669a5",
"text": "Water cycle algorithm (WCA) is a new population-based meta-heuristic technique. It is originally inspired by idealized hydrological cycle observed in natural environment. The conventional WCA is capable to demonstrate a superior performance compared to other well-established techniques in solving constrained and also unconstrained problems. Similar to other meta-heuristics, premature convergence to local optima may still be happened in dealing with some specific optimization tasks. Similar to chaos in real water cycle behavior, this article incorporates chaotic patterns into stochastic processes of WCA to improve the performance of conventional algorithm and to mitigate its premature convergence problem. First, different chaotic signal functions along with various chaotic-enhanced WCA strategies (totally 39 meta-heuristics) are implemented, and the best signal is preferred as the most appropriate chaotic technique for modification of WCA. Second, the chaotic algorithm is employed to tackle various benchmark problems published in the specialized literature and also training of neural networks. The comparative statistical results of new technique vividly demonstrate that premature convergence problem is relieved significantly. Chaotic WCA with sinusoidal map and chaotic-enhanced operators not only can exploit high-quality solutions efficiently but can outperform WCA optimizer and other investigated algorithms.",
"title": ""
},
{
"docid": "5a2c6049e23473a5845b17da4101ab41",
"text": "This paper discusses the design of a battery-less wirelessly-powered UWB system-on-a-chip (SoC) tag for area-constrained localization applications. An antenna-rectifier co-design methodology optimizes sensitivity and increases range under tag area constraints. A low-voltage (0.8-V) UWB TX enables high rectifier sensitivity by reducing required rectifier output voltage. The 2.4GHz rectifier, power-management unit and 8GHz UWB TX are integrated in 65nm CMOS and the rectifier demonstrates state-of-the-art -30.7dBm sensitivity for 1V output with only 1.3cm2 antenna area, representing a 2.3× improvement in sensitivity over previously published work, at 2.6× higher frequency with 9× smaller antenna area. Measurements in an office corridor demonstrate 20m range with 36dBm TX EIRP. The 0.8-V 8GHz UWB TX consumes 64pJ/pulse at 28MHz pulse repetition rate and achieves 2.4GHz -10dB bandwidth. Wireless measurements demonstrate sub-10cm range resolution at range > 10m.",
"title": ""
},
{
"docid": "d3dde75d07ad4ed79ff1da2c3a601e1d",
"text": "In open trials, 1-Hz repetitive transcranial magnetic stimulation (rTMS) to the supplementary motor area (SMA) improved symptoms and normalized cortical hyper-excitability of patients with obsessive-compulsive disorder (OCD). Here we present the results of a randomized sham-controlled double-blind study. Medication-resistant OCD patients (n=21) were assigned 4 wk either active or sham rTMS to the SMA bilaterally. rTMS parameters consisted of 1200 pulses/d, at 1 Hz and 100% of motor threshold (MT). Eighteen patients completed the study. Response to treatment was defined as a > or = 25% decrease on the Yale-Brown Obsessive Compulsive Scale (YBOCS). Non-responders to sham and responders to active or sham rTMS were offered four additional weeks of open active rTMS. After 4 wk, the response rate in the completer sample was 67% (6/9) with active and 22% (2/9) with sham rTMS. At 4 wk, patients receiving active rTMS showed on average a 25% reduction in the YBOCS compared to a 12% reduction in those receiving sham. In those who received 8-wk active rTMS, OCD symptoms improved from 28.2+/-5.8 to 14.5+/-3.6. In patients randomized to active rTMS, MT measures on the right hemisphere increased significantly over time. At the end of 4-wk rTMS the abnormal hemispheric laterality found in the group randomized to active rTMS normalized. The results of the first randomized sham-controlled trial of SMA stimulation in the treatment of resistant OCD support further investigation into the potential therapeutic applications of rTMS in this disabling condition.",
"title": ""
},
{
"docid": "b150e9aef47001e1b643556f64c5741d",
"text": "BACKGROUND\nMany adolescents have poor mental health literacy, stigmatising attitudes towards people with mental illness, and lack skills in providing optimal Mental Health First Aid to peers. These could be improved with training to facilitate better social support and increase appropriate help-seeking among adolescents with emerging mental health problems. teen Mental Health First Aid (teen MHFA), a new initiative of Mental Health First Aid International, is a 3 × 75 min classroom based training program for students aged 15-18 years.\n\n\nMETHODS\nAn uncontrolled pilot of the teen MHFA course was undertaken to examine the feasibility of providing the program in Australian secondary schools, to test relevant measures of student knowledge, attitudes and behaviours, and to provide initial evidence of program effects.\n\n\nRESULTS\nAcross four schools, 988 students received the teen MHFA program. 520 students with a mean age of 16 years completed the baseline questionnaire, 345 completed the post-test and 241 completed the three-month follow-up. Statistically significant improvements were found in mental health literacy, confidence in providing Mental Health First Aid to a peer, help-seeking intentions and student mental health, while stigmatising attitudes significantly reduced.\n\n\nCONCLUSIONS\nteen MHFA appears to be an effective and feasible program for training high school students in Mental Health First Aid techniques. Further research is required with a randomized controlled design to elucidate the causal role of the program in the changes observed.",
"title": ""
},
{
"docid": "2de4de4a7b612fd8d87a40780acdd591",
"text": "In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture; in terms of both data structures and algorithms. We discuss how vertically fragmented data structures optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical model that incorporates memory access cost. Experiments that validate this model were performed on the Monet database system. We obtained exact statistics on events like TLB misses, L1 and L2 cache misses, by using hardware performance counters found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms make them perform well, which is confirmed by experimental results. *This work was carried out when the author was at the University of Amsterdam, supported by SION grant 612-23-431 Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999.",
"title": ""
},
{
"docid": "df106178a7c928318cf116b608ca31b3",
"text": "Toothpaste is a paste or gel to be used with a toothbrush to maintain and improve oral health and aesthetics. Since their introduction several thousand years ago, toothpaste formulations have evolved considerably - from suspensions of crushed egg shells or ashes to complex formulations with often more than 20 ingredients. Among these can be compounds to combat dental caries, gum disease, malodor, calculus, erosion and dentin hypersensitivity. Furthermore, toothpastes contain abrasives to clean and whiten teeth, flavors for the purpose of breath freshening and dyes for better visual appeal. Effective toothpastes are those that are formulated for maximum bioavailability of their actives. This, however, can be challenging as compromises will have to be made when several different actives are formulated in one phase. Toothpaste development is by no means complete as many challenges and especially the poor oral substantivity of most active ingredients are yet to overcome.",
"title": ""
},
{
"docid": "7c1691fd1140b3975b61f8e2ce3dcd9b",
"text": "In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars.We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.",
"title": ""
},
{
"docid": "3ba87a9a84f317ef3fd97c79f86340c1",
"text": "Programmers often need to reason about how a program evolved between two or more program versions. Reasoning about program changes is challenging as there is a significant gap between how programmers think about changes and how existing program differencing tools represent such changes. For example, even though modification of a locking protocol is conceptually simple and systematic at a code level, diff extracts scattered text additions and deletions per file. To enable programmers to reason about program differences at a high level, this paper proposes a rule-based program differencing approach that automatically discovers and represents systematic changes as logic rules. To demonstrate the viability of this approach, we instantiated this approach at two different abstraction levels in Java: first at the level of application programming interface (API) names and signatures, and second at the level of code elements (e.g., types, methods, and fields) and structural dependences (e.g., method-calls, field-accesses, and subtyping relationships). The benefit of this approach is demonstrated through its application to several open source projects as well as a focus group study with professional software engineers from a large e-commerce company.",
"title": ""
},
{
"docid": "b231f2c6b19d5c38b8aa99ec1b1e43da",
"text": "Many models of social network formation implicitly assume that network properties are static in steady-state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals’ degree rankings change considerably over time, whereas network-level metrics such as network diameter and clustering coefficient are relatively stable. There have been some attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours, but can also manipulate their network connections to strategic advantage. However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores (“indirect reciprocity”), which is known to play an important role in many economic interactions. In order to account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. By so doing we are able to simultaneously account for the effect of both direct reciprocity (e.g. “tit-for-tat”) as well as indirect reciprocity (helping strangers in order to increase one’s reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks which are dynamic at the individual level but stable at the network level.",
"title": ""
},
{
"docid": "e088ad55f29634e036f291a6131ac669",
"text": "In this paper, we present a novel anomaly detection framework which integrates motion and appearance cues to detect abnormal objects and behaviors in video. For motion anomaly detection, we employ statistical histograms to model the normal motion distributions and propose a notion of “cut-bin” in histograms to distinguish unusual motions. For appearance anomaly detection, we develop a novel scheme based on Support Vector Data Description (SVDD), which obtains a spherically shaped boundary around the normal objects to exclude abnormal objects. The two complementary cues are finally combined to achieve more comprehensive detection results. Experimental results show that the proposed approach can effectively locate abnormal objects in multiple public video scenarios, achieving comparable performance to other state-of-the-art anomaly detection techniques. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "be722a19b56ef604d6fe24012470e61f",
"text": "In this paper, we derive optimality results for greedy Bayesian-network search algo rithms that perform single-edge modifica tions at each step and use asymptotically consistent scoring criteria. Our results ex tend those of Meek (1997) and Chickering (2002), who demonstrate that in the limit of large datasets, if the generative distribu tion is perfect with respect to a DAG defined over the observable variables, such search al gorithms will identify this optimal (i.e. gen erative) DAG model. We relax their assump tion about the generative distribution, and assume only that this distribution satisfies the composition property over the observable variables, which is a more realistic assump tion for real domains. Under this assump tion, we guarantee that the search algorithms identify an inclusion-optimal model; that is, a model that (1) contains the generative dis tribution and (2) has no sub-model that con tains this distribution. In addition, we show that the composition property is guaranteed to hold whenever the dependence relation ships in the generative distribution can be characterized by paths between singleton el ements in some generative graphical model (e.g. a DAG, a chain graph, or a Markov network) even when the generative model in cludes unobserved variables, and even when the observed data is subject to selection bias.",
"title": ""
},
{
"docid": "f16ab00d323e4169117eecb72bcb330e",
"text": "Despite the availability of various substance abuse treatments, alcohol and drug misuse and related negative consequences remain prevalent. Vipassana meditation (VM), a Buddhist mindfulness-based practice, provides an alternative for individuals who do not wish to attend or have not succeeded with traditional addiction treatments. In this study, the authors evaluated the effectiveness of a VM course on substance use and psychosocial outcomes in an incarcerated population. Results indicate that after release from jail, participants in the VM course, as compared with those in a treatment-as-usual control condition, showed significant reductions in alcohol, marijuana, and crack cocaine use. VM participants showed decreases in alcohol-related problems and psychiatric symptoms as well as increases in positive psychosocial outcomes. The utility of mindfulness-based treatments for substance use is discussed.",
"title": ""
},
{
"docid": "0c6afb06f8d230943c6855dcb4dd4392",
"text": "The home computer user is often said to be the weakest link in computer security. They do not always follow security advice, and they take actions, as in phishing, that compromise themselves. In general, we do not understand why users do not always behave safely, which would seem to be in their best interest. This paper reviews the literature of surveys and studies of factors that influence security decisions for home computer users. We organize the review in four sections: understanding of threats, perceptions of risky behavior, efforts to avoid security breaches and attitudes to security interventions. We find that these studies reveal a lot of reasons why current security measures may not match the needs or abilities of home computer users and suggest future work needed to inform how security is delivered to this user group.",
"title": ""
},
{
"docid": "9b85f81f50cf94a3a076b202ba94ab82",
"text": "Growing accuracy and robustness of Deep Neural Networks (DNN) models are accompanied by growing model capacity (going deeper or wider). However, high memory requirements of those models make it difficult to execute the training process in one GPU. To address it, we first identify the memory usage characteristics for deep and wide convolutional networks, and demonstrate the opportunities of memory reuse on both intra-layer and inter-layer levels. We then present Layrub, a runtime data placement strategy that orchestrates the execution of training process. It achieves layer-centric reuse to reduce memory consumption for extreme-scale deep learning that cannot be run on one single GPU.",
"title": ""
},
{
"docid": "05e8879a48e3a9808ec74b5bf225c562",
"text": "Although peribronchial lymphatic drainage of the lung has been well characterized, lymphatic drainage in the visceral pleura is less well understood. The objective of the present study was to evaluate the lymphatic drainage of lung segments in the visceral pleura. Adult, European cadavers were examined. Cadavers with a history of pleural or pulmonary disease were excluded. The cadavers had been refrigerated but not embalmed. The lungs were surgically removed and re-warmed. Blue dye was injected into the subpleural area and into the first draining visceral pleural lymphatic vessel of each lung segment. Twenty-one cadavers (7 males and 14 females; mean age 80.9 years) were dissected an average of 9.8 day postmortem. A total of 380 dye injections (in 95 lobes) were performed. Lymphatic drainage of the visceral pleura followed a segmental pathway in 44.2% of the injections (n = 168) and an intersegmental pathway in 55.8% (n = 212). Drainage was found to be both intersegmental and interlobar in 2.6% of the injections (n = 10). Lymphatic drainage in the visceral pleura followed an intersegmental pathway in 22.8% (n = 13) of right upper lobe injections, 57.9% (n = 22) of right middle lobe injections, 83.3% (n = 75) of right lower lobe injections, 21% (n = 21) of left upper lobe injections, and 85.3% (n = 81) of left lower lobe injections. In the lung, lymphatic drainage in the visceral pleura appears to be more intersegmental than the peribronchial pathway is—especially in the lower lobes. The involvement of intersegmental lymphatic drainage in the visceral pleura should now be evaluated during pulmonary resections (and especially sub-lobar resections) for lung cancer.",
"title": ""
},
{
"docid": "ac808ecd75ccee74fff89d03e3396f26",
"text": "This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real world difficulties in a greenhouse which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and volume is performed accordingly, and classification is done according to size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB Cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted with the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint. Keywords—Agricultural engineering, computer vision, image processing, flower detection.",
"title": ""
},
{
"docid": "6e77a99b6b0ddf18560580fed1ca5bbe",
"text": "Theoretical analysis of the connection between taxation and risktaking has mainly been concerned with the effect of taxes on portfolio decisions of consumers, Mossin (1968b) and Stiglitz (1969). However, there are some problems which are not naturally classified under this heading and which, although of considerable practical interest, have been left out of the theoretical discussions. One such problem is tax evasion. This takes many forms, and one can hardly hope to give a completely general analysis of all these. Our objective in this paper is therefore the more limited one of analyzing the individual taxpayer’s decision on whether and to what extent to avoid taxes by deliberate underreporting. On the one hand our approach is related to the studies of economics of criminal activity, as e.g. in the papers by Becker ( 1968) and by Tulkens and Jacquemin (197 1). On the other hand it is related to the analysis of optimal portfolio and insurance policies in the economics of uncertainty, as in the work by Arrow ( 1970), Mossin ( 1968a) and several others. We shall start by considering a simple static model where this decision is the only one with which the individual is concerned, so that we ignore the interrelationships that probably exist with other types of economic choices. After a detailed study of this simple case (sections",
"title": ""
}
] |
scidocsrr
|
3489d1d49350cc9ce296c29ba1c5d1cf
|
Economics of Internet of Things (IoT): An Information Market Approach
|
[
{
"docid": "24a164e7d6392b052f8a36e20e9c4f69",
"text": "The initial vision of the Internet of Things was of a world in which all physical objects are tagged and uniquely identified by RFID transponders. However, the concept has grown into multiple dimensions, encompassing sensor networks able to provide real-world intelligence and goal-oriented collaboration of distributed smart objects via local networks or global interconnections such as the Internet. Despite significant technological advances, difficulties associated with the evaluation of IoT solutions under realistic conditions in real-world experimental deployments still hamper their maturation and significant rollout. In this article we identify requirements for the next generation of IoT experimental facilities. While providing a taxonomy, we also survey currently available research testbeds, identify existing gaps, and suggest new directions based on experience from recent efforts in this field.",
"title": ""
}
] |
[
{
"docid": "1885ee33c09d943736b03895f41cea06",
"text": "Since the late 1990s, there has been a burst of research on robotic devices for poststroke rehabilitation. Robot-mediated therapy produced improvements on recovery of motor capacity; however, so far, the use of robots has not shown qualitative benefit over classical therapist-led training sessions, performed on the same quantity of movements. Multidegree-of-freedom robots, like the modern upper-limb exoskeletons, enable a distributed interaction on the whole assisted limb and can exploit a large amount of sensory feedback data, potentially providing new capabilities within standard rehabilitation sessions. Surprisingly, most publications in the field of exoskeletons focused only on mechatronic design of the devices, while little details were given to the control aspects. On the contrary, we believe a paramount aspect for robots potentiality lies on the control side. Therefore, the aim of this review is to provide a taxonomy of currently available control strategies for exoskeletons for neurorehabilitation, in order to formulate appropriate questions toward the development of innovative and improved control strategies.",
"title": ""
},
{
"docid": "2683c65d587e8febe45296f1c124e04d",
"text": "We present a new autoencoder-type architecture, that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.",
"title": ""
},
{
"docid": "48096a9a7948a3842afc082fa6e223a6",
"text": "We present a method for using previously-trained ‘teacher’ agents to kickstart the training of a new ‘student’ agent. To this end, we leverage ideas from policy distillation (Rusu et al., 2015; Parisotto et al., 2015) and population based training (Jaderberg et al., 2017). Our method places no constraints on the architecture of the teacher or student agents, and it regulates itself to allow the students to surpass their teachers in performance. We show that, on a challenging and computationally-intensive multi-task benchmark (Beattie et al., 2016), kickstarted training improves the data efficiency of new agents, making it significantly easier to iterate on their design. We also show that the same kickstarting pipeline can allow a single student agent to leverage multiple ‘expert’ teachers which specialise on individual tasks. In this setting kickstarting yields surprisingly large gains, with the kickstarted agent matching the performance of an agent trained from scratch in almost 10× fewer steps, and surpassing its final performance by 42%. Kickstarting is conceptually simple and can easily be incorporated into reinforcement learning experiments.",
"title": ""
},
{
"docid": "9694bc859dd5295c40d36230cf6fd1b9",
"text": "In the past two decades, the synthetic style and fashion drug \"crystal meth\" (\"crystal\", \"meth\"), chemically representing the crystalline form of the methamphetamine hydrochloride, has become more and more popular in the United States, in Eastern Europe, and just recently in Central and Western Europe. \"Meth\" is cheap, easy to synthesize and to market, and has an extremely high potential for abuse and dependence. As a strong sympathomimetic, \"meth\" has the potency to switch off hunger, fatigue and, pain while simultaneously increasing physical and mental performance. The most relevant side effects are heart and circulatory complaints, severe psychotic attacks, personality changes, and progressive neurodegeneration. Another effect is \"meth mouth\", defined as serious tooth and oral health damage after long-standing \"meth\" abuse; this condition may become increasingly relevant in dentistry and oral- and maxillofacial surgery. There might be an association between general methamphetamine abuse and the development of osteonecrosis, similar to the medication-related osteonecrosis of the jaws (MRONJ). Several case reports concerning \"meth\" patients after tooth extractions or oral surgery have presented clinical pictures similar to MRONJ. This overview summarizes the most relevant aspect concerning \"crystal meth\" abuse and \"meth mouth\".",
"title": ""
},
{
"docid": "c0dbd6356ead3a9542c9ec20dd781cc7",
"text": "This paper aims to address the importance of supportive teacher–student interactions within the learning environment. This will be explored through the three elements of the NSW Quality Teaching Model; Intellectual Quality, Quality Learning Environment and Significance. The paper will further observe the influences of gender on the teacher–student relationship, as well as the impact that this relationship has on student academic outcomes and behaviour. Teacher–student relationships have been found to have immeasurable effects on students’ learning and their schooling experience. This paper examines the ways in which educators should plan to improve their interactions with students, in order to allow for quality learning. This journal article is available in Journal of Student Engagement: Education Matters: http://ro.uow.edu.au/jseem/vol2/iss1/2 Journal of Student Engagement: Education matters 2012, 2 (1), 2–9 Lauren Liberante 2 The importance of teacher–student relationships, as explored through the lens of the NSW Quality Teaching Model",
"title": ""
},
{
"docid": "bb482edabdb07f412ca13a728b7fd25c",
"text": "This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. We train the cuboid model jointly and discriminatively. In inference we slide and rotate the box in 3D to score the object hypotheses. We evaluate our approach in indoor and outdoor scenarios, and show that our approach outperforms the state-of-the-art in both 2D [1] and 3D object detection [3].",
"title": ""
},
{
"docid": "dd51cc2138760f1dcdce6e150cabda19",
"text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.",
"title": ""
},
{
"docid": "a839016be99c3cb93d30fa48403086d8",
"text": "At synapses of the mammalian central nervous system, release of neurotransmitter occurs at rates transiently as high as 100 Hz, putting extreme demands on nerve terminals with only tens of functional vesicles at their disposal. Thus, the presynaptic vesicle cycle is particularly critical to maintain neurotransmission. To understand vesicle cycling at the most fundamental level, we studied single vesicles undergoing exo/endocytosis and tracked the fate of newly retrieved vesicles. This was accomplished by minimally stimulating boutons in the presence of the membrane-fluorescent styryl dye FM1-43, then selecting for terminals that contained only one dye-filled vesicle. We then observed the kinetics of dye release during single action potential stimulation. We found that most vesicles lost only a portion of their total dye during a single fusion event, but were able to fuse again soon thereafter. We interpret this as direct evidence of \"kiss-and-run\" followed by rapid reuse. Other interpretations such as \"partial loading\" and \"endosomal splitting\" were largely excluded on the basis of multiple lines of evidence. Our data placed an upper bound of <1.4 s on the lifetime of the kiss-and-run fusion event, based on the assumption that aqueous departitioning is rate limiting. The repeated use of individual vesicles held over a range of stimulus frequencies up to 30 Hz and was associated with neurotransmitter release. A small percentage of fusion events did release a whole vesicle's worth of dye in one action potential, consistent with a classical picture of exocytosis as fusion followed by complete collapse or at least very slow retrieval.",
"title": ""
},
{
"docid": "342bcd2509b632480c4f4e8059cfa6a1",
"text": "This paper introduces the design and development of a novel axial-flux permanent magnet generator (PMG) using a printed circuit board (PCB) stator winding. This design has the mechanical rigidity, high efficiency and zero cogging torque required for a low speed water current turbine. The PCB stator has simplified the design and construction and avoids any slip rings. The flexible PCB winding represents an ultra thin electromagnetic exciting source where coils are wound in a wedge shape. The proposed multi-poles generator can be used for various low speed applications especially in small marine current energy conversion systems.",
"title": ""
},
{
"docid": "abbafaaf6a93e2a49a692690d4107c9a",
"text": "Virtual teams have become a ubiquitous form of organizing, but the impact of social structures within and between teams on group performance remains understudied. This paper uses the case study of a massively multiplayer online game and server log data from over 10,000 players to examine the connection between group social capital (operationalized through guild network structure measures) and team effectiveness, given a variety of in-game social networks. Three different networks, social, task, and exchange networks, are compared and contrasted while controlling for group size, group age, and player experience. Team effectiveness is maximized at a roughly moderate level of closure across the networks, suggesting that this is the optimal level of the groupâs network density. Guilds with high brokerage, meaning they have diverse connections with other groups, were more effective in achievement-oriented networks. In addition, guilds with central leaders were more effective when they teamed up with other guild leaders.",
"title": ""
},
{
"docid": "3d7eb095e68a9500674493ee58418789",
"text": "Hundreds of scholarly studies have investigated various aspects of the immensely popular Wikipedia. Although a number of literature reviews have provided overviews of this vast body of research, none of them has specifically focused on the readers of Wikipedia and issues concerning its readership. In this systematic literature review, we review 99 studies to synthesize current knowledge regarding the readership of Wikipedia and also provide an analysis of research methods employed. The scholarly research has found that Wikipedia is popular not only for lighter topics such as entertainment, but also for more serious topics such as health information and legal background. Scholars, librarians and students are common users of Wikipedia, and it provides a unique opportunity for educating students in digital",
"title": ""
},
{
"docid": "763983ae894e3b98932233ef0b465164",
"text": "In the rapidly developing world of information technology, computers have been used in various settings for clinical medicine application. Studies have focused on computerized physician order entry (CPOE) system interface design and functional development to achieve a successful technology adoption process. Therefore, the purpose of this study was to evaluate physician satisfaction with the CPOE system. This survey included user attitude toward interface design, operation functions/usage effectiveness, interface usability, and user satisfaction. We used questionnaires for data collection from June to August 2008, and 225 valid questionnaires were returned with a response rate of 84.5 %. Canonical correlation was applied to explore the relationship of personal attributes and usability with user satisfaction. The results of the data analysis revealed that certain demographic groups showed higher acceptance and satisfaction levels, especially residents, those with less pressure when using computers or those with less experience with the CPOE systems. Additionally, computer use pressure and usability were the best predictors of user satisfaction. Based on the study results, it is suggested that future CPOE development should focus on interface design and content links, as well as providing educational training programs for the new users; since a learning curve period should be considered as an indespensible factor for CPOE adoption.",
"title": ""
},
{
"docid": "f94ff39136c71cf2a36253381a042195",
"text": "We present Autonomous Rssi based RElative poSitioning and Tracking (ARREST), a new robotic sensing system for tracking and following a moving, RF-emitting object, which we refer to as the Leader, solely based on signal strength information. Our proposed tracking agent, which we refer to as the TrackBot, uses a single rotating, off-the-shelf, directional antenna, novel angle and relative speed estimation algorithms, and Kalman filtering to continually estimate the relative position of the Leader with decimeter level accuracy (which is comparable to a state-of-the-art multiple access point based RF-localization system) and the relative speed of the Leader with accuracy on the order of 1 m/s. The TrackBot feeds the relative position and speed estimates into a Linear Quadratic Gaussian (LQG) controller to generate a set of control outputs to control the orientation and the movement of the TrackBot. We perform an extensive set of real world experiments with a full-fledged prototype to demonstrate that the TrackBot is able to stay within 5m of the Leader with: (1) more than 99% probability in line of sight scenarios, and (2) more than 75% probability in no line of sight scenarios, when it moves 1.8X faster than the Leader.",
"title": ""
},
{
"docid": "e14b936ecee52765078d77088e76e643",
"text": "In this paper, a novel code division multiplexing (CDM) algorithm-based reversible data hiding (RDH) scheme is presented. The covert data are denoted by different orthogonal spreading sequences and embedded into the cover image. The original image can be completely recovered after the data have been extracted exactly. The Walsh Hadamard matrix is employed to generate orthogonal spreading sequences, by which the data can be overlappingly embedded without interfering each other, and multilevel data embedding can be utilized to enlarge the embedding capacity. Furthermore, most elements of different spreading sequences are mutually cancelled when they are overlappingly embedded, which maintains the image in good quality even with a high embedding payload. A location-map free method is presented in this paper to save more space for data embedding, and the overflow/underflow problem is solved by shrinking the distribution of the image histogram on both the ends. This would further improve the embedding performance. Experimental results have demonstrated that the CDM-based RDH scheme can achieve the best performance at the moderate-to-high embedding capacity compared with other state-of-the-art schemes.",
"title": ""
},
{
"docid": "50d63f05e453468f8e5234910e3d86d1",
"text": "0167-8655/$ see front matter 2011 Published by doi:10.1016/j.patrec.2011.08.019 ⇑ Corresponding author. Tel.: +44 (0) 2075940990; E-mail addresses: gordon.ross03@ic.ac.uk, gr203@i ic.ac.uk (N.M. Adams), d.tasoulis@ic.ac.uk (D.K. Tas Hand). Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an exponentially weighted moving average (EWMA) chart to monitor the misclassification rate of an streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time. 2011 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "bbad2fa7a85b7f90d9589adee78a08d7",
"text": "Haze has becoming a yearly occurrence in Malaysia. There exist three dimensionsto the problems associated with air pollution: public ignorance on quality of air, impact of air pollution towards health, and difficulty in obtaining information related to air pollution. This research aims to analyse and visually identify areas and associated level of air pollutant. This study applies the air pollutant index (API) data retrieved from Malaysia Department of Environment (DOE) and Geographic Information System (GIS) via Inverse Distance Weighted (IDW) interpolation methodin ArcGIS 10.1 software to enable haze monitoring visualisation. In this research, study area is narrowed to five major cities in Selangor, Malaysia.",
"title": ""
},
{
"docid": "78ce9ddb8fbfeb801455a76a3a6b0af2",
"text": "Deeply embedded domain-specific languages (EDSLs) intrinsically compromise programmer experience for improved program performance. Shallow EDSLs complement them by trading program performance for good programmer experience. We present Yin-Yang, a framework for DSL embedding that uses Scala macros to reliably translate shallow EDSL programs to the corresponding deep EDSL programs. The translation allows program prototyping and development in the user friendly shallow embedding, while the corresponding deep embedding is used where performance is important. The reliability of the translation completely conceals the deep em- bedding from the user. For the DSL author, Yin-Yang automatically generates the deep DSL embeddings from their shallow counterparts by reusing the core translation. This obviates the need for code duplication and leads to reliability by construction.",
"title": ""
},
{
"docid": "72e6d897e8852fca481d39237cf04e36",
"text": "CONTEXT\nPrimary care physicians report high levels of distress, which is linked to burnout, attrition, and poorer quality of care. Programs to reduce burnout before it results in impairment are rare; data on these programs are scarce.\n\n\nOBJECTIVE\nTo determine whether an intensive educational program in mindfulness, communication, and self-awareness is associated with improvement in primary care physicians' well-being, psychological distress, burnout, and capacity for relating to patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBefore-and-after study of 70 primary care physicians in Rochester, New York, in a continuing medical education (CME) course in 2007-2008. The course included mindfulness meditation, self-awareness exercises, narratives about meaningful clinical experiences, appreciative interviews, didactic material, and discussion. An 8-week intensive phase (2.5 h/wk, 7-hour retreat) was followed by a 10-month maintenance phase (2.5 h/mo).\n\n\nMAIN OUTCOME MEASURES\nMindfulness (2 subscales), burnout (3 subscales), empathy (3 subscales), psychosocial orientation, personality (5 factors), and mood (6 subscales) measured at baseline and at 2, 12, and 15 months.\n\n\nRESULTS\nOver the course of the program and follow-up, participants demonstrated improvements in mindfulness (raw score, 45.2 to 54.1; raw score change [Delta], 8.9; 95% confidence interval [CI], 7.0 to 10.8); burnout (emotional exhaustion, 26.8 to 20.0; Delta = -6.8; 95% CI, -4.8 to -8.8; depersonalization, 8.4 to 5.9; Delta = -2.5; 95% CI, -1.4 to -3.6; and personal accomplishment, 40.2 to 42.6; Delta = 2.4; 95% CI, 1.2 to 3.6); empathy (116.6 to 121.2; Delta = 4.6; 95% CI, 2.2 to 7.0); physician belief scale (76.7 to 72.6; Delta = -4.1; 95% CI, -1.8 to -6.4); total mood disturbance (33.2 to 16.1; Delta = -17.1; 95% CI, -11 to -23.2), and personality (conscientiousness, 6.5 to 6.8; Delta = 0.3; 95% CI, 0.1 to 5 and emotional stability, 6.1 to 6.6; Delta = 0.5; 95% CI, 0.3 to 0.7). Improvements in mindfulness were correlated with improvements in total mood disturbance (r = -0.39, P < .001), perspective taking subscale of physician empathy (r = 0.31, P < .001), burnout (emotional exhaustion and personal accomplishment subscales, r = -0.32 and 0.33, respectively; P < .001), and personality factors (conscientiousness and emotional stability, r = 0.29 and 0.25, respectively; P < .001).\n\n\nCONCLUSIONS\nParticipation in a mindful communication program was associated with short-term and sustained improvements in well-being and attitudes associated with patient-centered care. Because before-and-after designs limit inferences about intervention effects, these findings warrant randomized trials involving a variety of practicing physicians.",
"title": ""
},
{
"docid": "3d23e7b9d8c0e1a3b4916c069bf6f7d6",
"text": "In recent years, depth cameras have become a widely available sensor type that captures depth images at real-time frame rates. Even though recent approaches have shown that 3D pose estimation from monocular 2.5D depth images has become feasible, there are still challenging problems due to strong noise in the depth data and self-occlusions in the motions being captured. In this paper, we present an efficient and robust pose estimation framework for tracking full-body motions from a single depth image stream. Following a data-driven hybrid strategy that combines local optimization with global retrieval techniques, we contribute several technical improvements that lead to speed-ups of an order of magnitude compared to previous approaches. In particular, we introduce a variant of Dijkstra's algorithm to efficiently extract pose features from the depth data and describe a novel late-fusion scheme based on an efficiently computable sparse Hausdorff distance to combine local and global pose estimates. Our experiments show that the combination of these techniques facilitates real-time tracking with stable results even for fast and complex motions, making it applicable to a wide range of inter-active scenarios.",
"title": ""
},
{
"docid": "b18d03e17f05cb0a2bb7a852a53df8cc",
"text": "Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.",
"title": ""
}
] |
scidocsrr
|
0c0dbdd3593239ff7941c8219d15c1bd
|
The topology of dark networks
|
[
{
"docid": "8afd1ab45198e9960e6a047091a2def8",
"text": "We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied.",
"title": ""
}
] |
[
{
"docid": "0bfad59874eb7a52c123bb6cd7bc1c16",
"text": "A 12-year-old patient sustained avulsions of both permanent maxillary central incisors. Subsequently, both teeth developed replacement resorption. The left incisor was extracted alio loco. The right incisor was treated by decoronation (removal of crown and pulp, but preservation of the root substance). Comparison of both sites demonstrated complete preservation of the height and width of the alveolar bone at the decoronation site, whereas the tooth extraction site showed considerable bone loss. In addition, some vertical bone apposition was found on top of the decoronated root. Decoronation is a simple and safe surgical procedure for preservation of alveolar bone prior to implant placement. It must be considered as a treatment option for teeth affected by replacement resorption if tooth transplantation is not feasible.",
"title": ""
},
{
"docid": "1ecf01e0c9aec4159312406368ceeff0",
"text": "Image phylogeny is the problem of reconstructing the structure that represents the history of generation of semantically similar images (e.g., near-duplicate images). Typical image phylogeny approaches break the problem into two steps: (1) estimating the dissimilarity between each pair of images and (2) reconstructing the phylogeny structure. Given that the dissimilarity calculation directly impacts the phylogeny reconstruction, in this paper, we propose new approaches to the standard formulation of the dissimilarity measure employed in image phylogeny, aiming at improving the reconstruction of the tree structure that represents the generational relationships between semantically similar images. These new formulations exploit a different method of color adjustment, local gradients to estimate pixel differences and mutual information as a similarity measure. The results obtained with the proposed formulation remarkably outperform the existing counterparts in the literature, allowing a much better analysis of the kinship relationships in a set of images, allowing for more accurate deployment of phylogeny solutions to tackle traitor tracing, copyright enforcement and digital forensics problems.",
"title": ""
},
{
"docid": "22881dd1a1a17441b3a914117e134a28",
"text": "Remote sensing of the reflectance photoplethysmogram using a video camera typically positioned 1 m away from the patient's face is a promising method for monitoring the vital signs of patients without attaching any electrodes or sensors to them. Most of the papers in the literature on non-contact vital sign monitoring report results on human volunteers in controlled environments. We have been able to obtain estimates of heart rate and respiratory rate and preliminary results on changes in oxygen saturation from double-monitored patients undergoing haemodialysis in the Oxford Kidney Unit. To achieve this, we have devised a novel method of cancelling out aliased frequency components caused by artificial light flicker, using auto-regressive (AR) modelling and pole cancellation. Secondly, we have been able to construct accurate maps of the spatial distribution of heart rate and respiratory rate information from the coefficients of the AR model. In stable sections with minimal patient motion, the mean absolute error between the camera-derived estimate of heart rate and the reference value from a pulse oximeter is similar to the mean absolute error between two pulse oximeter measurements at different sites (finger and earlobe). The activities of daily living affect the respiratory rate, but the camera-derived estimates of this parameter are at least as accurate as those derived from a thoracic expansion sensor (chest belt). During a period of obstructive sleep apnoea, we tracked changes in oxygen saturation using the ratio of normalized reflectance changes in two colour channels (red and blue), but this required calibration against the reference data from a pulse oximeter.",
"title": ""
},
{
"docid": "da988486b0a3e82ce5f7fb8aa5467779",
"text": "The benefits of Domain Specific Modeling Languages (DSML), for modeling and design of cyber physical systems, have been acknowledged in previous years. In contrast to general purpose modeling languages, such as Unified Modeling Language, DSML facilitates the modeling of domain specific concepts. The objective of this work is to develop a simple graphical DSML for cyber physical systems, which allow the unified modeling of the structural and behavioral aspects of a system in a single model, and provide model transformation and design verification support in future. The proposed DSML was defined in terms of its abstract and concrete syntax. The applicability of the proposed DSML was demonstrated by its application in two case studies: Traffic Signal and Arbiter case studies. The results showed that the proposed DSML produce simple and unified models with possible model transformation and verification support.",
"title": ""
},
{
"docid": "6851e4355ab4825b0eb27ac76be2329f",
"text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.",
"title": ""
},
{
"docid": "a2223d57a866b0a0ef138e52fb515b84",
"text": "This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges with detecting paraphrases in user generated short texts, such as Twitter, which often contain language irregularity and noise, and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which enables to create an informative semantic representation of each sentence by (1) using CNN to extract the local region information in form of important n-grams from the sentence, and (2) applying RNN to capture the long-term dependency information. In addition, we perform a comparative study on stateof-the-art approaches within paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results in both types of texts, thus making it more robust and generic than the existing approaches.",
"title": ""
},
{
"docid": "6a541e92e92385c27ceec1e55a50b46e",
"text": "BACKGROUND\nWe retrospectively studied the outcome of Pavlik harness treatment in late-diagnosed hip dislocation in infants between 6 and 24 months of age (Graf type 3 and 4 or dislocated hips on radiographs) treated in our hospital between 1984 and 2004. The Pavlik harness was progressively applied to improve both flexion and abduction of the dislocated hip. In case of persistent adduction contracture, an abduction splint was added temporarily to improve the abduction.\n\n\nMETHODS\nWe included 24 patients (26 hips) between 6 and 24 months of age who presented with a dislocated hip and primarily treated by Pavlik harness in our hospital between 1984 and 2004. The mean age at diagnosis was 9 months (range 6 to 23 mo). The average follow-up was 6 years 6 months (2 to 12 y). Ultrasound images and radiographs were assessed at the time of diagnosis, one year after reposition and at last follow-up.\n\n\nRESULTS\nTwelve of the twenty-six hips (46%) were successfully reduced with Pavlik harness after an average treatment of 14 weeks (4 to 28 wk). One patient (9%) needed a secondary procedure 1 year 9 months after reposition because of residual dysplasia (Pelvis osteotomy). Seventeen of the 26 hips were primary diagnosed by Ultrasound according to the Graf classification. Ten had a Graf type 3 hip and 7 hips were classified as Graf type 4. The success rate was 60% for the type 3 hips and 0% for the type 4 hips. (P=0.035). None of the hips that were reduced with the Pavlik harness developed an avascular necrosis (AVN). Of the hips that failed the Pavlik harness treatment, three hips showed signs of AVN, 1 after closed reposition and 2 after open reposition.\n\n\nCONCLUSION\nThe use of a Pavlik harness in the late-diagnosed hip dislocation type Graf 3 can be a successful treatment option in the older infant. We have noticed few complications in these patients maybe due to progressive and gentle increase of abduction and flexion, with or without temporary use of an abduction splint. The treatment should be abandoned if the hips are not reduced after 6 weeks. None of the Graf 4 hips could be reduced successfully by Pavlik harness. This was significantly different from the success rate for the Graf type 3 hips.\n\n\nLEVEL OF EVIDENCE\nTherapeutic study, clinical case series: Level IV.",
"title": ""
},
{
"docid": "eb639439559f3e4e3540e3e98de7a741",
"text": "This paper presents a deformable model for automatically segmenting brain structures from volumetric magnetic resonance (MR) images and obtaining point correspondences, using geometric and statistical information in a hierarchical scheme. Geometric information is embedded into the model via a set of affine-invariant attribute vectors, each of which characterizes the geometric structure around a point of the model from a local to a global scale. The attribute vectors, in conjunction with the deformation mechanism of the model, warrant that the model not only deforms to nearby edges, as is customary in most deformable surface models, but also that it determines point correspondences based on geometric similarity at different scales. The proposed model is adaptive in that it initially focuses on the most reliable structures of interest, and gradually shifts focus to other structures as those become closer to their respective targets and, therefore, more reliable. The proposed techniques have been used to segment boundaries of the ventricles, the caudate nucleus, and the lenticular nucleus from volumetric MR images.",
"title": ""
},
{
"docid": "e4ade1f0baea7c50d0dff4470bbbfcd9",
"text": "Ad networks for mobile apps require inspection of the visual layout of their ads to detect certain types of placement frauds. Doing this manually is error prone, and does not scale to the sizes of today’s app stores. In this paper, we design a system called DECAF to automatically discover various placement frauds scalably and effectively. DECAF uses automated app navigation, together with optimizations to scan through a large number of visual elements within a limited time. It also includes a framework for efficiently detecting whether ads within an app violate an extensible set of rules that govern ad placement and display. We have implemented DECAF for Windows-based mobile platforms, and applied it to 1,150 tablet apps and 50,000 phone apps in order to characterize the prevalence of ad frauds. DECAF has been used by the ad fraud team in Microsoft and has helped find many instances of ad frauds.",
"title": ""
},
{
"docid": "1836f3cf9c6243b57fd23b8d84b859d1",
"text": "While most Reinforcement Learning work utilizes temporal discounting to evaluate performance, the reasons for this are unclear. Is it out of desire or necessity? We argue that it is not out of desire, and seek to dispel the notion that temporal discounting is necessary by proposing a framework for undiscounted optimization. We present a metric of undiscounted performance and an algorithm for finding action policies that maximize that measure. The technique, which we call Rlearning, is modelled after the popular Q-learning algorithm [17]. Initial experimental results are presented which attest to a great improvement over Q-learning in some simple cases.",
"title": ""
},
{
"docid": "c1d7990c2c94ffd3ed16cce5947e4e27",
"text": "The introduction of online social networks (OSN) has transformed the way people connect and interact with each other as well as share information. OSN have led to a tremendous explosion of network-centric data that could be harvested for better understanding of interesting phenomena such as sociological and behavioural aspects of individuals or groups. As a result, online social network service operators are compelled to publish the social network data for use by third party consumers such as researchers and advertisers. As social network data publication is vulnerable to a wide variety of reidentification and disclosure attacks, developing privacy preserving mechanisms are an active research area. This paper presents a comprehensive survey of the recent developments in social networks data publishing privacy risks, attacks, and privacy-preserving techniques. We survey and present various types of privacy attacks and information exploited by adversaries to perpetrate privacy attacks on anonymized social network data. We present an in-depth survey of the state-of-the-art privacy preserving techniques for social network data publishing, metrics for quantifying the anonymity level provided, and information loss as well as challenges and new research directions. The survey helps readers understand the threats, various privacy preserving mechanisms, and their vulnerabilities to privacy breach attacks in social network data publishing as well as observe common themes and future directions.",
"title": ""
},
{
"docid": "0d6d2413cbaaef5354cf2bcfc06115df",
"text": "Bibliometric and “tech mining” studies depend on a crucial foundation—the search strategy used to retrieve relevant research publication records. Database searches for emerging technologies can be problematic in many respects, for example the rapid evolution of terminology, the use of common phraseology, or the extent of “legacy technology” terminology. Searching on such legacy terms may or may not pick up R&D pertaining to the emerging technology of interest. A challenge is to assess the relevance of legacy terminology in building an effective search model. Common-usage phraseology additionally confounds certain domains in which broader managerial, public interest, or other considerations are prominent. In contrast, searching for highly technical topics is relatively straightforward. In setting forth to analyze “Big Data,” we confront all three challenges—emerging terminology, common usage phrasing, and intersecting legacy technologies. In response, we have devised a systematic methodology to help identify research relating to Big Data. This methodology uses complementary search approaches, starting with a Boolean search model and subsequently employs contingency term sets to further refine the selection. The four search approaches considered are: (1) core lexical query, (2) expanded lexical query, (3) specialized journal search, and (4) cited reference analysis. Of special note here is the use of a “Hit-Ratio” that helps distinguish Big Data elements from less relevant legacy technology terms. We believe that such a systematic search development positions us to do meaningful analyses of Big Data research patterns, connections, and trajectories. Moreover, we suggest that such a systematic search approach can help formulate more replicable searches with high recall and satisfactory precision for other emerging technology studies.",
"title": ""
},
{
"docid": "329343cec99c221e6f6ce8e3f1dbe83f",
"text": "Artificial Neural Networks (ANN) play a very vital role in making stock market predictions. As per the literature survey, various researchers have used various approaches to predict the prices of stock market. Some popular approaches used by researchers are Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic, Auto Regressive Models and Support Vector Machines. This study presents ANN based computational approach for predicting the one day ahead closing prices of companies from the three different sectors:IT Sector (Wipro, TCS and Infosys), Automobile Sector (Maruti Suzuki Ltd.) and Banking Sector (ICICI Bank). Different types of artificial neural networks based models like Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), Generalized Regression Neural Network (GRNN) and Layer Recurrent Neural Network (LRNN) have been studied and used to forecast the short term and long term share prices of Wipro, TCS, Infosys, Maruti Suzuki and ICICI Bank. All the networks were trained with the 1100 days of trading data and predicted the prices up to next 6 months. Predicted output was generated through available historical data. Experimental results show that BPNN model gives minimum error (MSE) as compared to the RBFNN and GRNN models. GRNN model performs better as compared to RBFNN model. Forecasting performance of LRNN model is found to be much better than other three models. Keywordsartificial intelligence, back propagation, mean square error, artificial neural network.",
"title": ""
},
{
"docid": "b2e62194ce1eb63e0d13659a546db84b",
"text": "The rapid advance of mobile computing technology and wireless networking, there is a significant increase of mobile subscriptions. This drives a strong demand for mobile cloud applications and services for mobile device users. This brings out a great business and research opportunity in mobile cloud computing (MCC). This paper first discusses the market trend and related business driving forces and opportunities. Then it presents an overview of MCC in terms of its concepts, distinct features, research scope and motivations, as well as advantages and benefits. Moreover, it discusses its opportunities, issues and challenges. Furthermore, the paper highlights a research roadmap for MCC.",
"title": ""
},
{
"docid": "062f6ecc9d26310de82572f500cb5f05",
"text": "The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.",
"title": ""
},
{
"docid": "3d8df2c8fcbdc994007104b8d21d7a06",
"text": "The purpose of this research was to analysis the efficiency of global strategies. This paper identified six key strategies necessary for firms to be successful when expanding globally. These strategies include differentiation, marketing, distribution, collaborative strategies, labor and management strategies, and diversification. Within this analysis, we chose to focus on the Coca-Cola Company because they have proven successful in their international operations and are one of the most recognized brands in the world. We performed an in-depth review of how effectively or ineffectively Coca-Cola has used each of the six strategies. The paper focused on Coca-Cola's operations in the United States, China, Belarus, Peru, and Morocco. The author used electronic journals from the various countries to determine how effective Coca-Cola was in these countries. The paper revealed that Coca-Cola was very successful in implementing strategies regardless of the country. However, the author learned that Coca-Cola did not effectively utilize all of the strategies in each country.",
"title": ""
},
{
"docid": "c7160083cc96253d305b127929e25107",
"text": "This paper considers the task of matching images and sentences. The challenge consists in discriminatively embedding the two modalities onto a shared visual-textual space. Existing work in this field largely uses Recurrent Neural Networks (RNN) for text feature learning and employs off-the-shelf Convolutional Neural Networks (CNN) for image feature extraction. Our system, in comparison, differs in two key aspects. Firstly, we build a convolutional network amenable for fine-tuning the visual and textual representations, where the entire network only contains four components, i.e., convolution layer, pooling layer, rectified linear unit function (ReLU), and batch normalisation. Endto-end learning allows the system to directly learn from the data and fully utilise the supervisions. Secondly, we propose instance loss according to viewing each multimodal data pair as a class. This works with a large margin objective to learn the inter-modal correspondence between images and their textual descriptions. Experiments on two generic retrieval datasets (Flickr30k and MSCOCO) demonstrate that our method yields competitive accuracy compared to state-of-the-art methods. Moreover, in language person retrieval, we improve the state of the art by a large margin. Code is available at https://github.com/layumi/ Image-Text-Embedding",
"title": ""
},
{
"docid": "e34b8fd3e1fba5306a88e4aac38c0632",
"text": "1 Jomo was an Assistant Secretary General in the United Nations system responsible for economic research during 2005-2015.; Chowdhury (Chief, Multi-Stakeholder Engagement & Outreach, Financing for Development Office, UN-DESA); Sharma (Senior Economic Affairs Officer, Financing for Development Office, UN-DESA); Platz (Economic Affairs Officer, Financing for Development Office, UN-DESA); corresponding author: Anis Chowdhury (chowdhury4@un.org; anis.z.chowdhury@gmail.com). Thanks to colleagues at the Financing for Development Office of UN-DESA and an anonymous referee for their helpful comments. Thanks also to Alexander Kucharski for his excellent support in gathering data and producing figure charts and to Jie Wei for drawing the flow charts. However, the usual caveats apply. ABSTRACT",
"title": ""
},
{
"docid": "5cbc93a9844fcd026a1705ee031c6530",
"text": "Accompanying the rapid urbanization, many developing countries are suffering from serious air pollution problem. The demand for predicting future air quality is becoming increasingly more important to government's policy-making and people's decision making. In this paper, we predict the air quality of next 48 hours for each monitoring station, considering air quality data, meteorology data, and weather forecast data. Based on the domain knowledge about air pollution, we propose a deep neural network (DNN)-based approach (entitled DeepAir), which consists of a spatial transformation component and a deep distributed fusion network. Considering air pollutants' spatial correlations, the former component converts the spatial sparse air quality data into a consistent input to simulate the pollutant sources. The latter network adopts a neural distributed architecture to fuse heterogeneous urban data for simultaneously capturing the factors affecting air quality, e.g. meteorological conditions. We deployed DeepAir in our AirPollutionPrediction system, providing fine-grained air quality forecasts for 300+ Chinese cities every hour. The experimental results on the data from three-year nine Chinese-city demonstrate the advantages of DeepAir beyond 10 baseline methods. Comparing with the previous online approach in AirPollutionPrediction system, we have 2.4%, 12.2%, 63.2% relative accuracy improvements on short-term, long-term and sudden changes prediction, respectively.",
"title": ""
},
{
"docid": "3512d0a45a764330c8a66afab325d03d",
"text": "Self-concept clarity (SCC) references a structural aspect oftbe self-concept: the extent to which selfbeliefs are clearly and confidently defined, internally consistent, and stable. This article reports the SCC Scale and examines (a) its correlations with self-esteem (SE), the Big Five dimensions, and self-focused attention (Study l ); (b) its criterion validity (Study 2); and (c) its cultural boundaries (Study 3 ). Low SCC was independently associated with high Neuroticism, low SE, low Conscientiousness, low Agreeableness, chronic self-analysis, low internal state awareness, and a ruminative form of self-focused attention. The SCC Scale predicted unique variance in 2 external criteria: the stability and consistency of self-descriptions. Consistent with theory on Eastern and Western selfconstruals, Japanese participants exhibited lower levels of SCC and lower correlations between SCC and SE than did Canadian participants.",
"title": ""
}
] |
scidocsrr
|
f83afd4bc31cef68fee3dd74e299d978
|
Understanding consumer acceptance of mobile payment services: An empirical analysis
|
[
{
"docid": "57b945df75d8cd446caa82ae02074c3a",
"text": "A key issue facing information systems researchers and practitioners has been the difficulty in creating favorable user reactions to new technologies. Insufficient or ineffective training has been identified as one of the key factors underlying this disappointing reality. Among the various enhancements to training being examined in research, the role of intrinsic motivation as a lever to create favorable user perceptions has not been sufficiently exploited. In this research, two studies were conducted to compare a traditional training method with a training method that included a component aimed at enhancing intrinsic motivation. The results strongly favored the use of an intrinsic motivator during training. Key implications for theory and practice are discussed. 1Allen Lee was the accepting senior editor for this paper. Sometimes when I am at my computer, I say to my wife, \"1'11 be done in just a minute\" and the next thing I know she's standing over me saying, \"It's been an hour!\" (Collins 1989, p. 11). Investment in emerging information technology applications can lead to productivity gains only if they are accepted and used. Several theoretical perspectives have emphasized the importance of user perceptions of ease of use as a key factor affecting acceptance of information technology. Favorable ease of use perceptions are necessary for initial acceptance (Davis et al. 1989), which of course is essential for adoption and continued use. During the early stages of learning and use, ease of use perceptions are significantly affected by training (e.g., Venkatesh and Davis 1996). Investments in training by organizations have been very high and have continued to grow rapidly. Kelly (1982) reported a figure of $100B, which doubled in about a decade (McKenna 1990). In spite of such large investments in training , only 10% of training leads to a change in behavior On trainees' jobs (Georgenson 1982). Therefore, it is important to understand the most effective training methods (e.g., Facteau et al. 1995) and to improve existing training methods in order to foster favorable perceptions among users about the ease of use of a technology, which in turn should lead to acceptance and usage. Prior research in psychology (e.g., Deci 1975) suggests that intrinsic motivation during training leads to beneficial outcomes. However, traditional training methods in information systems research have tended to emphasize imparting knowledge to potential users (e.g., Nelson and Cheney 1987) while not paying Sufficient attention to intrinsic motivation during training. The two field …",
"title": ""
},
{
"docid": "49db1291f3f52a09037d6cfd305e8b5f",
"text": "This paper examines cognitive beliefs and affect influencing ones intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.",
"title": ""
}
] |
[
{
"docid": "193042bd07d5e9672b04ede9160d406c",
"text": "We report on the flip chip packaging of Micro-Electro-Mechanical System (MEMS)-based digital silicon photonic switching device and the characterization results of 12 × 12 switching ports. The challenges in packaging N<sup> 2</sup> electrical and 2N optical interconnections are addressed with single-layer electrical redistribution lines of 25 <italic>μ</italic>m line width and space on aluminum nitride interposer and 13° polished 64-channel lidless fiber array (FA) with a pitch of 127 <italic>μ</italic>m. 50 <italic>μ</italic>m diameter solder spheres are laser-jetted onto the electrical bond pads surrounded by suspended MEMS actuators on the device before fluxless flip-chip bonding. A lidless FA is finally coupled near-vertically onto the device gratings using a 6-degree-of-freedom (6-DOF) alignment system. Fiber-to-grating coupler loss of 4.25 dB/facet, 10<sup>–11 </sup> bit error rate (BER) through the longest optical path, and 0.4 <italic>μ</italic>s switch reconfiguration time have been demonstrated using 10 Gb/s Ethernet data stream.",
"title": ""
},
{
"docid": "db3fc6ae924c0758bb58cd04f395520e",
"text": "Engineering from the University of Michigan, and a Ph.D. in Information Technologies from the MIT Sloan School of Management. His current research interests include IT adoption and diffusion, management of technology and innovation, software development tools and methods, and real options. He has published in Abstract The extent of organizational innovation with IT, an important construct in the IT innovation literature, has been measured in many different ways. Some measures are more narrowly focused while others aggregate innovative behaviors across a set of innovations or across stages in the assimilation lifecycle within organizations. There appear to be some significant tradeoffs involving aggregation. More aggregated measures can be more robust and generalizable and can promote stronger predictive validity, while less aggregated measures allow more context-specific investigations and can preserve clearer theoretical interpretations. This article begins with a conceptual analysis that identifies the circumstances when these tradeoffs are most likely to favor aggregated measures. It is found that aggregation should be favorable when: (1) the researcher's interest is in general innovation or a model that generalizes to a class of innovations, (2) antecedents have effects in the same direction in all assimilation stages, (3) characteristics of organizations can be treated as constant across the innovations in the study, (4) characteristics of innovations can not be treated as constant across organizations in the study, (5) the set of innovations being aggregated includes substitutes or moderate complements, and (6) sources of noise in the measurement of innovation may be present. The article then presents an empirical study using data on the adoption of software process technologies by 608 US based corporations. This study—which had circumstances quite favorable to aggregation—found that aggregating across three innovations within a technology class more than doubled the variance explained compared to single innovation models. Aggregating across assimilation stages had a slight positive effect on predictive validity. Taken together, these results provide initial confirmation of the conclusions from the conceptual analysis regarding the circumstances favoring aggregation.",
"title": ""
},
{
"docid": "ddc37e29f935bd494b54bd4d38abb3e6",
"text": "NAND flash memory-based storage devices are increasingly adopted as one of the main alternatives for magnetic disk drives. The flash translation layer (FTL) is a software/hardware interface inside NAND flash memory, which allows existing disk-based applications to use it without any significant modifications. Since FTL has a critical impact on the performance of NAND flash-based devices, a variety of FTL schemes have been proposed to improve their performance. However, existing FTLs perform well for either a read intensive workload or a write intensive workload, not for both of them due to their fixed and static address mapping schemes. To overcome this limitation, in this paper, we propose a novel FTL addressing scheme named as Convertible Flash Translation Layer (CFTL, for short). CFTL is adaptive to data access patterns so that it can dynamically switch the mapping of a data block to either read-optimized or write-optimized mapping scheme in order to fully exploit the benefits of both schemes. By judiciously taking advantage of both schemes, CFTL resolves the intrinsic problems of the existing FTLs. In addition to this convertible scheme, we propose an efficient caching strategy so as to considerably improve the CFTL performance further with only a simple hint. Consequently, both of the convertible feature and caching strategy empower CFTL to achieve good read performance as well as good write performance. Our experimental evaluation with a variety of realistic workloads demonstrates that the proposed CFTL scheme outperforms other FTL schemes.",
"title": ""
},
{
"docid": "775e0205ef85aa5d04af38748e63aded",
"text": "Monads are a de facto standard for the type-based analysis of impure aspects of programs, such as runtime cost [9, 5]. Recently, the logical dual of a monad, the comonad, has also been used for the cost analysis of programs, in conjunction with a linear type system [6, 8]. The logical duality of monads and comonads extends to cost analysis: In monadic type systems, costs are (side) effects, whereas in comonadic type systems, costs are coeffects. However, it is not clear whether these two methods of cost analysis are related and, if so, how. Are they equally expressive? Are they equally well-suited for cost analysis with all reduction strategies? Are there translations from type systems with effects to type systems with coeffects and viceversa? The goal of this work-in-progress paper is to explore some of these questions in a simple context — the simply typed lambda-calculus (STLC). As we show, even this simple context is already quite interesting technically and it suffices to bring out several key points.",
"title": ""
},
{
"docid": "0e068a4e7388ed456de4239326eb9b08",
"text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.",
"title": ""
},
{
"docid": "070a1c6b47a0a5c217e747cd7e0e0d0b",
"text": "In this paper we develop a computational model of visual adaptation for realistic image synthesis based on psychophysical experiments. The model captures the changes in threshold visibility, color appearance, visual acuity, and sensitivity over time that are caused by the visual system’s adaptation mechanisms. We use the model to display the results of global illumination simulations illuminated at intensities ranging from daylight down to starlight. The resulting images better capture the visual characteristics of scenes viewed over a wide range of illumination levels. Because the model is based on psychophysical data it can be used to predict the visibility and appearance of scene features. This allows the model to be used as the basis of perceptually-based error metrics for limiting the precision of global illumination computations. CR",
"title": ""
},
{
"docid": "0be5ab2533511ce002d87ff6a12f7b08",
"text": "This paper deals with the solar photovoltaic (SPV) array fed water-pumping system using a Luo converter as an intermediate DC-DC converter and a permanent magnet brushless DC (BLDC) motor to drive a centrifugal water pump. Among the different types of DC-DC converters, an elementary Luo converter is selected in order to extract the maximum power available from the SPV array and for safe starting of BLDC motor. The elementary Luo converter with reduced components and single semiconductor switch has inherent features of reducing the ripples in its output current and possessing a boundless region for maximum power point tracking (MPPT). The electronically commutated BLDC motor is used with a voltage source inverter (VSI) operated at fundamental frequency switching thus avoiding the high frequency switching losses resulting in a high efficiency of the system. The SPV array is designed such that the power at rated DC voltage is supplied to the BLDC motor-pump under standard test condition and maximum switch utilization of Luo converter is achieved which results in efficiency improvement of the converter. Performances at various operating conditions such as starting, dynamic and steady state behavior are analyzed and suitability of the proposed system is demonstrated using MATLAB/Simulink based simulation results.",
"title": ""
},
{
"docid": "5e42cdbe42b9fafb53b8bbd82ec96d5a",
"text": "Fifty years ago, the author published a paper in Operations Research with the title, “A proof for the queuing formula: L = W ” [Little, J. D. C. 1961. A proof for the queuing formula: L = W . Oper. Res. 9(3) 383–387]. Over the years, L = W has become widely known as “Little’s Law.” Basically, it is a theorem in queuing theory. It has become well known because of its theoretical and practical importance. We report key developments in both areas with the emphasis on practice. In the latter, we collect new material and search for insights on the use of Little’s Law within the fields of operations management and computer architecture.",
"title": ""
},
{
"docid": "b13c9597f8de229fb7fec3e23c0694d1",
"text": "Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.",
"title": ""
},
{
"docid": "7f6f26ac42f8f637415a45afc94daa0f",
"text": "We draw a formal connection between using synthetic training data to optimize neural network parameters and approximate, Bayesian, model-based reasoning. In particular, training a neural network using synthetic data can be viewed as learning a proposal distribution generator for approximate inference in the synthetic-data generative model. We demonstrate this connection in a recognition task where we develop a novel Captcha-breaking architecture and train it using synthetic data, demonstrating both state-of-the-art performance and a way of computing task-specific posterior uncertainty. Using a neural network trained this way, we also demonstrate successful breaking of real-world Captchas currently used by Facebook and Wikipedia. Reasoning from these empirical results and drawing connections with Bayesian modeling, we discuss the robustness of synthetic data results and suggest important considerations for ensuring good neural network generalization when training with synthetic data.",
"title": ""
},
{
"docid": "7a12529d179d9ca6b94dbac57c54059f",
"text": "A novel design of a hand functions task training robotic system was developed for the stroke rehabilitation. It detects the intention of hand opening or hand closing from the stroke person using the electromyography (EMG) signals measured from the hemiplegic side. This training system consists of an embedded controller and a robotic hand module. Each hand robot has 5 individual finger assemblies capable to drive 2 degrees of freedom (DOFs) of each finger at the same time. Powered by the linear actuator, the finger assembly achieves 55 degree range of motion (ROM) at the metacarpophalangeal (MCP) joint and 65 degree range of motion (ROM) at the proximal interphalangeal (PIP) joint. Each finger assembly can also be adjusted to fit for different finger length. With this task training system, stroke subject can open and close their impaired hand using their own intention to carry out some of the daily living tasks.",
"title": ""
},
{
"docid": "430609545d1ce22e341d3682c27629fb",
"text": "In order to meet the increasing environmental and economic requirements, commercial aircraft industries have been challenged to reduce fuel consumption, noise and emissions. As a result, more electrical aircraft (MEA), on which engine driven electrical power replaces other primary powers, is being investigated. However, with the increasing demands of electrical power capacity on MEA, the engines have to be redesigned to supply enough power and space for bigger generators. In order to avoid this problem, fuel cell systems could partially/entirely replace the engine driven generators to supply electric power on board. Compared to the traditional electrical power system which is driven by main engines/auxiliary power unit (APU) on MEA, fuel cell based systems would be more efficient and more environmental friendly. Also, fuel cells could work continuously during the entire flight envelope. This paper introduces fuel cell system concepts on MEA. Characters of solid oxide fuel cell (SOFC) and polymer electrolyte membrane fuel cell (PEMFC) are compared. An SOFC APU application on MEA is introduced. Finally, challenges of fell cells application on MEA are discussed.",
"title": ""
},
{
"docid": "4dc38ae50a2c806321020de4a140ed5f",
"text": "Transcranial direct current stimulation (tDCS) is a promising technology to enhance cognitive and physical performance. One of the major areas of interest is the enhancement of memory function in healthy individuals. The early arrival of tDCS on the market for lifestyle uses and cognitive enhancement purposes lead to the voicing of some important ethical concerns, especially because, to date, there are no official guidelines or evaluation procedures to tackle these issues. The aim of this article is to review ethical issues related to uses of tDCS for memory enhancement found in the ethics and neuroscience literature and to evaluate how realistic and scientifically well-founded these concerns are? In order to evaluate how plausible or speculative each issue is, we applied the methodological framework described by Racine et al. (2014) for \"informed and reflective\" speculation in bioethics. This framework could be succinctly presented as requiring: (1) the explicit acknowledgment of factual assumptions and identification of the value attributed to them; (2) the validation of these assumptions with interdisciplinary literature; and (3) the adoption of a broad perspective to support more comprehensive reflection on normative issues. We identified four major considerations associated with the development of tDCS for memory enhancement: safety, autonomy, justice and authenticity. In order to assess the seriousness and likelihood of harm related to each of these concerns, we analyzed the assumptions underlying the ethical issues, and the level of evidence for each of them. We identified seven distinct assumptions: prevalence, social acceptance, efficacy, ideological stance (bioconservative vs. libertarian), potential for misuse, long term side effects, and the delivery of complete and clear information. We conclude that ethical discussion about memory enhancement via tDCS sometimes involves undue speculation, and closer attention to scientific and social facts would bring a more nuanced analysis. At this time, the most realistic concerns are related to safety and violation of users' autonomy by a breach of informed consent, as potential immediate and long-term health risks to private users remain unknown or not well defined. Clear and complete information about these risks must be provided to research participants and consumers of tDCS products or related services. Broader public education initiatives and warnings would also be worthwhile to reach those who are constructing their own tDCS devices.",
"title": ""
},
{
"docid": "5fe45b44d4e113e1f9b1867ac7244074",
"text": "Wireless sensor networks (WSNs) will play a key role in the extension of the smart grid towards residential premises, and enable various demand and energy management applications. Efficient demand-supply balance and reducing electricity expenses and carbon emissions will be the immediate benefits of these applications. In this paper, we evaluate the performance of an in-home energy management (iHEM) application. The performance of iHEM is compared with an optimization-based residential energy management (OREM) scheme whose objective is to minimize the energy expenses of the consumers. We show that iHEM decreases energy expenses, reduces the contribution of the consumers to the peak load, reduces the carbon emissions of the household, and its savings are close to OREM. On the other hand, iHEM application is more flexible as it allows communication between the controller and the consumer utilizing the wireless sensor home area network (WSHAN). We evaluate the performance of iHEM under the presence of local energy generation capability, prioritized appliances, and for real-time pricing. We show that iHEM reduces the expenses of the consumers for each case. Furthermore, we show that packet delivery ratio, delay, and jitter of the WSHAN improve as the packet size of the monitoring applications, that also utilize the WSHAN, decreases.",
"title": ""
},
{
"docid": "192663cdecdcfda1f86605adbc3c6a56",
"text": "With the introduction of IT to conduct business we accepted the loss of a human control step. For this reason, the introduction of new IT systems was accompanied by the development of the authorization concept. But since, in reality, there is no such thing as 100 per cent security; auditors are commissioned to examine all transactions for misconduct. Since the data exists in digital form already, it makes sense to use computer-based processes to analyse it. Such processes allow the auditor to carry out extensive checks within an acceptable timeframe and with reasonable effort. Once the algorithm has been defined, it only takes sufficient computing power to evaluate larger quantities of data. This contribution presents the state of the art for IT-based data analysis processes that can be used to identify fraudulent activities.",
"title": ""
},
{
"docid": "80de9b0ba596c19bfc8a99fd46201a99",
"text": "We integrate the recently proposed spatial transformer network (SPN) ( Jaderberg & Simonyan , 2015) into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNNSPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single digit error of 1.5% compared to 2.9% for a convolutional networks and 2.0% for convolutional networks with SPN layers. The SPN outputs a zoomed, rotated and skewed version of the input image. We investigate different down-sampling factors (ratio of pixel in input and output) for the SPN and show that the RNN-SPN model is able to down-sample the input images without deteriorating performance. The down-sampling in RNN-SPN can be thought of as adaptive downsampling that minimizes the information loss in the regions of interest. We attribute the superior performance of the RNN-SPN to the fact that it can attend to a sequence of regions of interest.",
"title": ""
},
{
"docid": "0a7db914781aacb79a7139f3da41efbb",
"text": "This work studies the reliability behaviour of gate oxides grown by in situ steam generation technology. A comparison with standard steam oxides is performed, investigating interface and bulk properties. A reduced conduction at low fields and an improved reliability is found for ISSG oxide. The initial lower bulk trapping, but with similar degradation rate with respect to standard oxides, explains the improved reliability results. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f4bf4be69ea3f3afceca056e2b5b8102",
"text": "In this paper we present a conversational dialogue system, Ch2R (Chinese Chatter Robot) for online shopping guide, which allows users to inquire about information of mobile phone in Chinese. The purpose of this paper is to describe our development effort in terms of the underlying human language technologies (HLTs) as well as other system issues. We focus on a mixed-initiative conversation mechanism for interactive shopping guide combining initiative guiding and question understanding. We also present some evaluation on the system in mobile phone shopping guide domain. Evaluation results demonstrate the efficiency of our approach.",
"title": ""
},
{
"docid": "9b702c679d7bbbba2ac29b3a0c2f6d3b",
"text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.",
"title": ""
},
{
"docid": "c83d034e052926520677d0c5880f8800",
"text": "Sperm vitality is a reflection of the proportion of live, membrane-intact spermatozoa determined by either dye exclusion or osmoregulatory capacity under hypo-osmotic conditions. In this chapter we address the two most common methods of sperm vitality assessment: eosin-nigrosin staining and the hypo-osmotic swelling test, both utilized in clinical Andrology laboratories.",
"title": ""
}
] |
scidocsrr
|
1248a2ef2907eae2afb2a8d073912018
|
Simultaneous localization and mapping with infinite planes
|
[
{
"docid": "3b9ad8509b9b59e4673d1f6e375ab722",
"text": "This paper describes a system for performing multisession visual mapping in large-scale environments. Multi-session mapping considers the problem of combining the results of multiple Simultaneous Localisation and Mapping (SLAM) missions performed repeatedly over time in the same environment. The goal is to robustly combine multiple maps in a common metrical coordinate system, with consistent estimates of uncertainty. Our work employs incremental Smoothing and Mapping (iSAM) as the underlying SLAM state estimator and uses an improved appearance-based method for detecting loop closures within single mapping sessions and across multiple sessions. To stitch together pose graph maps from multiple visual mapping sessions, we employ spatial separator variables, called anchor nodes, to link together multiple relative pose graphs. We provide experimental results for multi-session visual mapping in the MIT Stata Center, demonstrating key capabilities that will serve as a foundation for future work in large-scale persistent visual mapping.",
"title": ""
}
] |
[
{
"docid": "be18a6729dc170fc03b61436c99c843d",
"text": "Hepatitis C virus (HCV) is a major cause of liver disease worldwide and a potential cause of substantial morbidity and mortality in the future. The complexity and uncertainty related to the geographic distribution of HCV infection and chronic hepatitis C, determination of its associated risk factors, and evaluation of cofactors that accelerate its progression, underscore the difficulties in global prevention and control of HCV. Because there is no vaccine and no post-exposure prophylaxis for HCV, the focus of primary prevention efforts should be safer blood supply in the developing world, safe injection practices in health care and other settings, and decreasing the number of people who initiate injection drug use.",
"title": ""
},
{
"docid": "5c11d9004e57395641a63cd50f8baefa",
"text": "Current digital painting tools are primarily targeted at professionals and are often overwhelmingly complex for use by novices. At the same time, simpler tools may not invoke the user creatively, or are limited to plain styles that lack visual sophistication. There are many people who are not art professionals, yet would like to partake in digital creative expression. Challenges and rewards for novices differ greatly from those for professionals. In this paper, we leverage existing works in Creativity and Creativity Support Tools (CST) to formulate design goals specifically for digital art creation tools for novices. We implemented these goals within a digital painting system, called Painting with Bob. We evaluate the efficacy of the design and our prototype with a user study, and we find that users are highly satisfied with the user experience, as well as the paintings created with our system.",
"title": ""
},
{
"docid": "9b10757ca3ca84784033c20f064078b7",
"text": "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaSbased applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system.",
"title": ""
},
{
"docid": "4859363a5f64977336d107794251a203",
"text": "The paper treats a modular program in which transfers of control between modules follow a semi-Markov process. Each module is failure-prone, and the different failure processes are assumed to be Poisson. The transfers of control between modules (interfaces) are themselves subject to failure. The overall failure process of the program is described, and an asymptotic Poisson process approximation is given for the case when the individual modules and interfaces are very reliable. A simple formula gives the failure rate of the overall program (and hence mean time between failures) under this limiting condition. The remainder of the paper treats the consequences of failures. Each failure results in a cost, represented by a random variable with a distribution typical of the type of failure. The quantity of interest is the total cost of running the program for a time t, and a simple approximating distribution is given for large t. The parameters of this limiting distribution are functions only of the means and variances of the underlying distributions, and are thus readily estimable. A calculation of program availability is given as an example of the cost process. There follows a brief discussion of methods of estimating the parameters of the model, with suggestions of areas in which it might be used.",
"title": ""
},
{
"docid": "995376c324ff12a0be273e34f44056df",
"text": "Conventional Gabor representation and its extracted features often yield a fairly poor performance in retrieving the rotated and scaled versions of the texture image under query. To address this issue, existing methods exploit multiple stages of transformations for making rotation and/or scaling being invariant at the expense of high computational complexity and degraded retrieval performance. The latter is mainly due to the lost of image details after multiple transformations. In this paper, a rotation-invariant and a scale-invariant Gabor representations are proposed, where each representation only requires few summations on the conventional Gabor filter impulse responses. The optimum setting of the orientation parameter and scale parameter is experimentally determined over the Brodatz and MPEG-7 texture databases. Features are then extracted from these new representations for conducting rotation-invariant or scale-invariant texture image retrieval. Since the dimension of the new feature space is much reduced, this leads to a much smaller metadata storage space and faster on-line computation on the similarity measurement. Simulation results clearly show that our proposed invariant Gabor representations and their extracted invariant features significantly outperform the conventional Gabor representation approach for rotation-invariant and scale-invariant texture image retrieval. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "39fcc45d79680c7e231643d6c75aee18",
"text": "This paper presents a Kernel Entity Salience Model (KESM) that improves text understanding and retrieval by better estimating entity salience (importance) in documents. KESM represents entities by knowledge enriched distributed representations, models the interactions between entities and words by kernels, and combines the kernel scores to estimate entity salience. The whole model is learned end-to-end using entity salience labels. The salience model also improves ad hoc search accuracy, providing effective ranking features by modeling the salience of query entities in candidate documents. Our experiments on two entity salience corpora and two TREC ad hoc search datasets demonstrate the effectiveness of KESM over frequency-based and feature-based methods. We also provide examples showing how KESM conveys its text understanding ability learned from entity salience to search.",
"title": ""
},
{
"docid": "6e4d4e1fa86a0566c24cb045616fd4b7",
"text": "Hardcore, jungle, and drum and bass (HJDB) are fastpaced electronic dance music genres that often employ resequenced breakbeats or drum samples from jazz and funk percussionist solos. We present a style-specific method for downbeat detection specifically designed for HJDB. The presented method combines three forms of metrical information in the prediction of downbeats: lowlevel onset event information; periodicity information from beat tracking; and high-level information from a regression model trained with classic breakbeats. In an evaluation using 206 HJDB pieces, we demonstrate superior accuracy of our style specific method over four general downbeat detection algorithms. We present this result to motivate the need for style-specific knowledge and techniques for improved downbeat detection.",
"title": ""
},
{
"docid": "38570075c31812866646d47d25667a49",
"text": "Mercator is a program that uses hop-limited probes—the same primitive used in traceroute—to infer an Internet map. It uses informed random address probing to carefully exploring the IP address space when determining router adjacencies, uses source-route ca p ble routers wherever possible to enhance the fidelity of the resulting ma p, and employs novel mechanisms for resolvingaliases(interfaces belonging to the same router). This paper describes the design of these heuri stics and our experiences with Mercator, and presents some preliminary a nalysis of the resulting Internet map.",
"title": ""
},
{
"docid": "70fd543752f17237386b3f8e99954230",
"text": "Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency",
"title": ""
},
{
"docid": "0ad68f20acf338f4051a93ba5e273187",
"text": "FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor that can enable a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.",
"title": ""
},
{
"docid": "0c79db142f913564654f53b6519f2927",
"text": "For software process improvement -SPIthere are few small organizations using models that guide the management and deployment of their improvement initiatives. This is largely because a lot of these models do not consider the special characteristics of small businesses, nor the appropriate strategies for deploying an SPI initiative in this type of organization. It should also be noted that the models which direct improvement implementation for small settings do not present an explicit process with which to organize and guide the internal work of the employees involved in the implementation of the improvement opportunities. In this paper we propose a lightweight process, which takes into account appropriate strategies for this type of organization. Our proposal, known as a “Lightweight process to incorporate improvements” uses the philosophy of the Scrum agile",
"title": ""
},
{
"docid": "6d26e03468a9d9c5b9952a5c07743db3",
"text": "Graphs are a powerful tool to model structured objects, but it is nontrivial to measure the similarity between two graphs. In this paper, we construct a two-graph model to represent human actions by recording the spatial and temporal relationships among local features. We also propose a novel family of context-dependent graph kernels (CGKs) to measure similarity between graphs. First, local features are used as the vertices of the two-graph model and the relationships among local features in the intra-frames and inter-frames are characterized by the edges. Then, the proposed CGKs are applied to measure the similarity between actions represented by the two-graph model. Graphs can be decomposed into numbers of primary walk groups with different walk lengths and our CGKs are based on the context-dependent primary walk group matching. Taking advantage of the context information makes the correctly matched primary walk groups dominate in the CGKs and improves the performance of similarity measurement between graphs. Finally, a generalized multiple kernel learning algorithm with a proposed l12-norm regularization is applied to combine these CGKs optimally together and simultaneously train a set of action classifiers. We conduct a series of experiments on several public action datasets. Our approach achieves a comparable performance to the state-of-the-art approaches, which demonstrates the effectiveness of the two-graph model and the CGKs in recognizing human actions.",
"title": ""
},
{
"docid": "e9402a771cc761e7e6484c2be6bc2cce",
"text": "In this work, we present the Text Conditioned Auxiliary Classifier Generative Adversarial Network, (TAC-GAN) a text to image Generative Adversarial Network (GAN) for synthesizing images from their text descriptions. Former approaches have tried to condition the generative process on the textual data; but allying it to the usage of class information, known to diversify the generated samples and improve their structural coherence, has not been explored. We trained the presented TAC-GAN model on the Oxford102 dataset of flowers, and evaluated the discriminability of the generated images with Inception-Score, as well as their diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our approach outperforms the stateof-the-art models, i.e., its inception score is 3.45, corresponding to a relative increase of 7.8% compared to the recently introduced StackGan. A comparison of the mean MS-SSIM scores of the training and generated samples per class shows that our approach is able to generate highly diverse images with an average MS-SSIM of 0.14 over all generated classes.",
"title": ""
},
{
"docid": "1fa056e87c10811b38277d161c81c2ac",
"text": "In this study, six kinds of the drivetrain systems of electric motor drives for EVs are discussed. Furthermore, the requirements of EVs on electric motor drives are presented. The comparative investigation on the efficiency, weight, cost, cooling, maximum speed, and fault-tolerance, safety, and reliability is carried out for switched reluctance motor, induction motor, permanent magnet blushless DC motor, and brushed DC motor drives, in order to find most appropriate electric motor drives for electric vehicle applications. The study shows that switched reluctance motor drives are the prior choice for electric vehicles.",
"title": ""
},
{
"docid": "c422ef1c225f2dc2483c5a4093333b57",
"text": "The rapid advancement in the electronic commerce technology makes electronic transaction an indispensable part of our daily life. While, this way of transaction has always been facing security problems. Researchers persevere in looking for fraud transaction detection methodologies. A promising paradigm is to devise dedicated detectors for the typical patterns of fraudulent transactions. Unfortunately, this paradigm is really constrained by the lack of real electronic transaction data, especially real fraudulent samples. In this paper, by analyzing real B2C electronic transaction data provided by an Asian bank, from the perspective of transaction sequence, we discover a typical pattern of fraud transactions: Most of the fraud transactions are fast and repeated transactions between the same customer and the same vendor, and all the transaction amounts are nearly the same. We name this pattern Replay Attack. We prove the prominent existence of Replay Attack by comprehensive statistics, and we propose a novel fraud transaction detector, Replay Attack Killer (RAK). By experiment, we show that RAK can catch up to 92% fraud transactions in real time but only disturb less than 0.06% normal transactions.",
"title": ""
},
{
"docid": "c5147ed058b546048bd72dde768976dd",
"text": "51 Abstract— Cryptarithmetic puzzles are quite old and their inventor is not known. An example in The American Agriculturist of 1864 disproves the popular notion that it was invented by Sam Loyd. The name cryptarithmetic was coined by puzzlist Minos (pseudonym of Maurice Vatriquant) in the May 1931 issue of Sphinx, a Belgian magazine of recreational mathematics. In the 1955, J. A. H. Hunter introduced the word \"alphabetic\" to designate cryptarithms, such as Dudeney's, whose letters from meaningful words or phrases. Solving a cryptarithm by hand usually involves a mix of deductions and exhaustive tests of possibilities. Cryptarithmetic is a puzzle consisting of an arithmetic problem in which the digits have been replaced by letters of the alphabet. The goal is to decipher the letters (i.e. Map them back onto the digits) using the constraints provided by arithmetic and the additional constraint that no two letters can have the same numerical value. Cryptarithmetic is a class of constraint satisfaction problems which includes making mathematical relations between meaningful words using simple arithmetic operators like 'plus' in a way that the result is conceptually true, and assigning digits to the letters of these words and generating numbers in order to make correct arithmetic operations as well",
"title": ""
},
{
"docid": "b174bbcb91d35184674532b6ab22dcdf",
"text": "Many studies have confirmed the benefit of gamification on learners’ motivation. However, gamification may also demotivate some learners, or learners may focus on the gamification elements instead of the learning content. Some researchers have recommended building learner models that can be used to adapt gamification elements based on learners’ personalities. Building such a model requires a strong understanding of the relationship between gamification and personality. Existing empirical work has focused on measuring knowledge gain and learner preference. These findings may not be reliable because the analyses are based on learners who complete the study and because they rely on self-report from learners. This preliminary study explores a different approach by allowing learners to drop out at any time and then uses the number of students left as a proxy for motivation and engagement. Survival analysis is used to analyse the data. The results confirm the benefits of gamification and provide some pointers to how this varies with personality.",
"title": ""
},
{
"docid": "f9d333d7d8aa3f7fb834b202a3b10a3b",
"text": "Human skin is the largest organ in our body which provides protection against heat, light, infections and injury. It also stores water, fat, and vitamin. Cancer is the leading cause of death in economically developed countries and the second leading cause of death in developing countries. Skin cancer is the most commonly diagnosed type of cancer among men and women. Exposure to UV rays, modernize diets, smoking, alcohol and nicotine are the main cause. Cancer is increasingly recognized as a critical public health problem in Ethiopia. There are three type of skin cancer and they are recognized based on their own properties. In view of this, a digital image processing technique is proposed to recognize and predict the different types of skin cancers using digital image processing techniques. Sample skin cancer image were taken from American cancer society research center and DERMOFIT which are popular and widely focuses on skin cancer research. The classification system was supervised corresponding to the predefined classes of the type of skin cancer. Combining Self organizing map (SOM) and radial basis function (RBF) for recognition and diagnosis of skin cancer is by far better than KNN, Naïve Bayes and ANN classifier. It was also showed that the discrimination power of morphology and color features was better than texture features but when morphology, texture and color features were used together the classification accuracy was increased. The best classification accuracy (88%, 96.15% and 95.45% for Basal cell carcinoma, Melanoma and Squamous cell carcinoma respectively) were obtained using combining SOM and RBF. The overall classification accuracy was 93.15%.",
"title": ""
},
{
"docid": "1c46fbf6a21aa1c80cec9382bb3d45fa",
"text": "BACKGROUND\nNusinersen is an antisense oligonucleotide drug that modulates pre-messenger RNA splicing of the survival motor neuron 2 ( SMN2) gene. It has been developed for the treatment of spinal muscular atrophy (SMA).\n\n\nMETHODS\nWe conducted a multicenter, double-blind, sham-controlled, phase 3 trial of nusinersen in 126 children with SMA who had symptom onset after 6 months of age. The children were randomly assigned, in a 2:1 ratio, to undergo intrathecal administration of nusinersen at a dose of 12 mg (nusinersen group) or a sham procedure (control group) on days 1, 29, 85, and 274. The primary end point was the least-squares mean change from baseline in the Hammersmith Functional Motor Scale-Expanded (HFMSE) score at 15 months of treatment; HFMSE scores range from 0 to 66, with higher scores indicating better motor function. Secondary end points included the percentage of children with a clinically meaningful increase from baseline in the HFMSE score (≥3 points), an outcome that indicates improvement in at least two motor skills.\n\n\nRESULTS\nIn the prespecified interim analysis, there was a least-squares mean increase from baseline to month 15 in the HFMSE score in the nusinersen group (by 4.0 points) and a least-squares mean decrease in the control group (by -1.9 points), with a significant between-group difference favoring nusinersen (least-squares mean difference in change, 5.9 points; 95% confidence interval, 3.7 to 8.1; P<0.001). This result prompted early termination of the trial. Results of the final analysis were consistent with results of the interim analysis. In the final analysis, 57% of the children in the nusinersen group as compared with 26% in the control group had an increase from baseline to month 15 in the HFMSE score of at least 3 points (P<0.001), and the overall incidence of adverse events was similar in the nusinersen group and the control group (93% and 100%, respectively).\n\n\nCONCLUSIONS\nAmong children with later-onset SMA, those who received nusinersen had significant and clinically meaningful improvement in motor function as compared with those in the control group. (Funded by Biogen and Ionis Pharmaceuticals; CHERISH ClinicalTrials.gov number, NCT02292537 .).",
"title": ""
},
{
"docid": "d0c8a1faccfa3f0469e6590cc26097c8",
"text": "This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles.",
"title": ""
}
] |
scidocsrr
|
a85c0f790bf8313452e9ea38d4c94096
|
A mobile health application for falls detection and biofeedback monitoring
|
[
{
"docid": "f8cc65321723e9bd54b5aea4052542fc",
"text": "Falls in elderly is a major health problem and a cost burden to social services. Thus automatic fall detectors are needed to support the independence and security of the elderly. The goal of this research is to develop a real-time portable wireless fall detection system, which is capable of automatically discriminating between falls and Activities of Daily Life (ADL). The fall detection system contains a portable fall-detection terminal and a monitoring centre, both of which communicate with ZigBee protocol. To extract the features of falls, falls data and ADL data obtained from young subjects are analyzed. Based on the characteristics of falls, an effective fall detection algorithm using tri-axis accelerometers is introduced, and the results show that falls can be distinguished from ADL with a sensitivity over 95% and a specificity of 100%, for a total set of 270 movements.",
"title": ""
}
] |
[
{
"docid": "4aec21b8c4bf0cd71130f6dccd251376",
"text": "Access to capital in the form of credit through money lending requires that the lender to be able to measure the risk of repayment for a given return. In ancient times money lending needed to occur between known parties or required collateral to secure the loan. In the modern era of banking institutions provide loans to individuals who meet a qualification test. Grameen Bank in Bangladesh has demonstrated that small poor communities benefited from the \"microcredit\" financial innovation, which allowed a priori non-bankable entrepreneurs to engage in self-employment projects. Online P2P (Peer to Peer) lending is considered an evolution of the microcredit concept, and reflects the application of its principles into internet communities. Internet ventures like Prosper.com, Zopa or Lendingclub.com, provide the means for lenders and borrowers to meet, interact and define relationships as part of social groups. This paper measures the influence of social interactions in the risk evaluation of a money request; with special focus on the impact of one-to-one and one-to-many relationships. The results showed that fostering social features increases the chances of getting a loan fully funded, when financial features are not enough to construct a differentiating successful credit request. For this task, a model-based clustering method was applied on actual P2P Lending data provided by Prosper.com.",
"title": ""
},
{
"docid": "ab01efad4c65bbed9e4a499844683326",
"text": "To achieve good generalization in supervised learning, the training and testing examples are usually required to be drawn from the same source distribution. In this paper we propose a method to relax this requirement in the context of logistic regression. Assuming <i>D<sup>p</sup></i> and <i>D<sup>a</sup></i> are two sets of examples drawn from two mismatched distributions, where <i>D<sup>a</sup></i> are fully labeled and <i>D<sup>p</sup></i> partially labeled, our objective is to complete the labels of <i>D<sup>p</sup>.</i> We introduce an auxiliary variable μ for each example in <i>D<sup>a</sup></i> to reflect its mismatch with <i>D<sup>p</sup>.</i> Under an appropriate constraint the μ's are estimated as a byproduct, along with the classifier. We also present an active learning approach for selecting the labeled examples in <i>D<sup>p</sup>.</i> The proposed algorithm, called \"Migratory-Logit\" or M-Logit, is demonstrated successfully on simulated as well as real data sets.",
"title": ""
},
{
"docid": "bfd946e8b668377295a1672a7bb915a3",
"text": "Code-Mixing is a frequently observed phenomenon in social media content generated by multi-lingual users. The processing of such data for linguistic analysis as well as computational modelling is challenging due to the linguistic complexity resulting from the nature of the mixing as well as the presence of non-standard variations in spellings and grammar, and transliteration. Our analysis shows the extent of Code-Mixing in English-Hindi data. The classification of Code-Mixed words based on frequency and linguistic typology underline the fact that while there are easily identifiable cases of borrowing and mixing at the two ends, a large majority of the words form a continuum in the middle, emphasizing the need to handle these at different levels for automatic processing of the data.",
"title": ""
},
{
"docid": "f04eb852a050249ba5e6d38ee4a7d54c",
"text": "The project Legal Semantic WebA Recommendation System makes use of the Semantic Web and it is used for the proactive legal decision making. With the help of web semantics, a lawyer handling a new case can filter out similar cases from the court case repository implemented using RDF (Resource Description Framework), and from here he can extract the judgments done on those similar cases. In this way he can better prepare himself with similar judgments in his hands which will guide him to an improved argumentation. The role of web semantics here is that it introduces intelligent matching of the court case details. The search is not only thorough but also accurate and precise to the maximum level of attainment with the use of ontology designed exclusively for this purpose.",
"title": ""
},
{
"docid": "b8d41b4b440641d769f58189db8eaf91",
"text": "Differential diagnosis of trichotillomania is often difficult in clinical practice. Trichoscopy (hair and scalp dermoscopy) effectively supports differential diagnosis of various hair and scalp diseases. The aim of this study was to assess the usefulness of trichoscopy in diagnosing trichotillomania. The study included 370 patients (44 with trichotillomania, 314 with alopecia areata and 12 with tinea capitis). Statistical analysis revealed that the main and most characteristic trichoscopic findings of trichotillomania are: irregularly broken hairs (44/44; 100% of patients), v-sign (24/44; 57%), flame hairs (11/44; 25%), hair powder (7/44; 16%) and coiled hairs (17/44; 39%). Flame hairs, v-sign, tulip hairs, and hair powder were newly identified in this study. In conclusion, we describe here specific trichoscopy features, which may be applied in quick, non-invasive, in-office differential diagnosis of trichotillomania.",
"title": ""
},
{
"docid": "82708e65107a0877a052ce81294f535c",
"text": "Abstract—Cyber exercises used to assess the preparedness of a community against cyber crises, technology failures and Critical Information Infrastructure (CII) incidents. The cyber exercises also called cyber crisis exercise or cyber drill, involved partnerships or collaboration of public and private agencies from several sectors. This study investigates Organisation Cyber Resilience (OCR) of participation sectors in cyber exercise called X Maya in Malaysia. This study used a principal based cyber resilience survey called CSuite Executive checklist developed by World Economic Forum in 2012. To ensure suitability of the survey to investigate the OCR, the reliability test was conducted on C-Suite Executive checklist items. The research further investigates the differences of OCR in ten Critical National Infrastructure Information (CNII) sectors participated in the cyber exercise. The One Way ANOVA test result showed a statistically significant difference of OCR among ten CNII sectors participated in the cyber exercise.",
"title": ""
},
{
"docid": "9d9665a21e5126ba98add5a832521cd1",
"text": "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and to score, and less prone to overfitting.",
"title": ""
},
{
"docid": "3323feaddbdf0937cef4ecf7dcedc263",
"text": "Cloud storage services have become increasingly popular. Because of the importance of privacy, many cloud storage encryption schemes have been proposed to protect data from those who do not have access. All such schemes assumed that cloud storage providers are safe and cannot be hacked; however, in practice, some authorities (i.e., coercers) may force cloud storage providers to reveal user secrets or confidential data on the cloud, thus altogether circumventing storage encryption schemes. In this paper, we present our design for a new cloud storage encryption scheme that enables cloud storage providers to create convincing fake user secrets to protect user privacy. Since coercers cannot tell if obtained secrets are true or not, the cloud storage providers ensure that user privacy is still securely protected.",
"title": ""
},
{
"docid": "6d44c4244064634deda30a5059acd87e",
"text": "Currently, gene sequence genealogies of the Oligotrichea Bütschli, 1889 comprise only few species. Therefore, a cladistic approach, especially to the Oligotrichida, was made, applying Hennig's method and computer programs. Twenty-three characters were selected and discussed, i.e., the morphology of the oral apparatus (five characters), the somatic ciliature (eight characters), special organelles (four characters), and ontogenetic particulars (six characters). Nine of these characters developed convergently twice. Although several new features were included into the analyses, the cladograms match other morphological trees in the monophyly of the Oligotrichea, Halteriia, Oligotrichia, Oligotrichida, and Choreotrichida. The main synapomorphies of the Oligotrichea are the enantiotropic division mode and the de novo-origin of the undulating membranes. Although the sister group relationship of the Halteriia and the Oligotrichia contradicts results obtained by gene sequence analyses, no morphologic, ontogenetic or ultrastructural features were found, which support a branching of Halteria grandinella within the Stichotrichida. The cladistic approaches suggest paraphyly of the family Strombidiidae probably due to the scarce knowledge. A revised classification of the Oligotrichea is suggested, including all sufficiently known families and genera.",
"title": ""
},
{
"docid": "ab989f39a5dd2ba3c98c0ffddd5c85cb",
"text": "This paper proposes a revision of the multichannel concept as it has been applied in previous studies on multichannel commerce. Digitalization and technological innovations have blurred the line between physical and electronic channels. A structured literature review on multichannel consumer and firm behaviour is conducted to reveal the established view on multichannel phenomena. By providing empirical evidence on market offerings and consumer perceptions, we expose a significant mismatch between the dominant conceptualization of multichannel commerce applied in research and today’s market realities. This tension highlights the necessity for a changed view on multichannel commerce to study and understand phenomena in converging sales channels. Therefore, an extended conceptualization of multichannel commerce, named the multichannel continuum, is proposed. This is the first study that considers the broad complexity of integrated multichannel decisions. It aims at contributing to the literature on information systems and channel choice by developing a reference frame for studies on how technological advancements that allow the integration of different channels shape consumer and firm decision making in multichannel commerce. Accordingly, a brief research agenda contrasts established findings with unanswered questions, challenges and opportunities that arise in this more complex multichannel market environment.",
"title": ""
},
{
"docid": "e267d6bd0aa5f260095993525b790018",
"text": "Strassen's matrix multiplication (MM) has benefits with respect to any (highly tuned) implementations of MM because Strassen's reduces the total number of operations. Strassen achieved this operation reduction by replacing computationally expensive MMs with matrix additions (MAs). For architectures with simple memory hierarchies, having fewer operations directly translates into an efficient utilization of the CPU and, thus, faster execution. However, for modern architectures with complex memory hierarchies, the operations introduced by the MAs have a limited in-cache data reuse and thus poor memory-hierarchy utilization, thereby overshadowing the (improved) CPU utilization, and making Strassen's algorithm (largely) useless on its own.\n In this paper, we investigate the interaction between Strassen's effective performance and the memory-hierarchy organization. We show how to exploit Strassen's full potential across different architectures. We present an easy-to-use adaptive algorithm that combines a novel implementation of Strassen's idea with the MM from automatically tuned linear algebra software (ATLAS) or GotoBLAS. An additional advantage of our algorithm is that it applies to any size and shape matrices and works equally well with row or column major layout. Our implementation consists of introducing a final step in the ATLAS/GotoBLAS-installation process that estimates whether or not we can achieve any additional speedup using our Strassen's adaptation algorithm. Then we install our codes, validate our estimates, and determine the specific performance.\n We show that, by the right combination of Strassen's with ATLAS/GotoBLAS, our approach achieves up to 30%/22% speed-up versus ATLAS/GotoBLAS alone on modern high-performance single processors. We consider and present the complexity and the numerical analysis of our algorithm, and, finally, we show performance for 17 (uniprocessor) systems.",
"title": ""
},
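To make the operation-reduction idea in the preceding abstract concrete, here is a minimal sketch, not the paper's adaptive ATLAS/GotoBLAS integration: a one-level-per-call Strassen recursion in Python/NumPy that falls back to NumPy's tuned kernel below a cutoff. The cutoff value, the evenly divisible size assumption, and the function name are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def strassen_matmul(A, B, cutoff=256):
    """One-level-per-call Strassen recursion with a fallback to the tuned
    kernel behind np.dot below `cutoff`. Assumes square matrices whose
    size stays even until the cutoff is reached; padding is omitted."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B  # delegate small products to the tuned BLAS kernel
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products replace the classical eight; the saved
    # multiplication is paid for with extra matrix additions (MAs).
    M1 = strassen_matmul(A11 + A22, B11 + B22, cutoff)
    M2 = strassen_matmul(A21 + A22, B11, cutoff)
    M3 = strassen_matmul(A11, B12 - B22, cutoff)
    M4 = strassen_matmul(A22, B21 - B11, cutoff)
    M5 = strassen_matmul(A11 + A12, B22, cutoff)
    M6 = strassen_matmul(A21 - A11, B11 + B12, cutoff)
    M7 = strassen_matmul(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n), dtype=np.result_type(A, B))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

For square, repeatedly even-sized inputs, `np.allclose(strassen_matmul(A, B), A @ B)` should hold up to floating-point round-off; choosing the cutoff empirically is analogous to the install-time estimation step the abstract describes.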
{
"docid": "9c8204510362de8a5362400fc4d26e24",
"text": "We focus on predicting sleep stages from radio measurements without any attached sensors on subjects. We introduce a new predictive model that combines convolutional and recurrent neural networks to extract sleep-specific subjectinvariant features from RF signals and capture the temporal progression of sleep. A key innovation underlying our approach is a modified adversarial training regime that discards extraneous information specific to individuals or measurement conditions, while retaining all information relevant to the predictive task. We analyze our game theoretic setup and empirically demonstrate that our model achieves significant improvements over state-of-the-art solutions.",
"title": ""
},
{
"docid": "750abc9e51aed62305187d7103e3f267",
"text": "This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set ofguidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. Theseare demonstrated in an applications context through interactive software prototypes. The guidelines are derived from cartographicliterature and in liaison with EDINA who provide digital mapping services for UK tertiary education. They enhance approaches tolegend design that have evolved for static media with visualization by considering: selection, layout, symbols, position, dynamismand design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphicand The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specificneeds, rethink their nature and role and contribute to a wider re-evaluation of maps as artifacts of usage rather than statements offact. EDINA has acquired funding to enhance their clients with visualization legends that use these concepts as a consequence ofthis work. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization.",
"title": ""
},
{
"docid": "e16d89d3a6b3d38b5823fae977087156",
"text": "The payoff of abarrier option depends on whether or not a specified asset price, index, or rate reaches a specified level during the life of the option. Most models for pricing barrier options assume continuous monitoring of the barrier; under this assumption, the option can often be priced in closed form. Many (if not most) real contracts with barrier provisions specify discrete monitoring instants; there are essentially no formulas for pricing these options, and even numerical pricing is difficult. We show, however, that discrete barrier options can be priced with remarkable accuracy using continuous barrier formulas by applying a simple continuity correction to the barrier. The correction shifts the barrier away from the underlying by a factor of exp (βσ √ 1t), whereβ ≈ 0.5826,σ is the underlying volatility, and1t is the time between monitoring instants. The correction is justified both theoretically and experimentally.",
"title": ""
},
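The continuity correction quoted in the preceding abstract can be applied in a few lines. The sketch below is illustrative only: the continuous down-and-out call formula used here is the standard textbook reflection result (assuming no dividends and a barrier below both spot and strike), not the paper's derivation, and the input numbers are placeholders rather than values from the paper.

```python
from math import exp, log, sqrt
from statistics import NormalDist

BETA = 0.5826  # continuity-correction constant quoted in the abstract

def _bs_call(S, K, r, sigma, T):
    """Plain Black-Scholes call price (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

def down_and_out_call_continuous(S, K, H, r, sigma, T):
    """Closed form for a continuously monitored down-and-out call,
    valid for H <= min(S, K) and no dividends (standard reflection result)."""
    k = 2.0 * r / sigma**2
    return (_bs_call(S, K, r, sigma, T)
            - (H / S) ** (k - 1.0) * _bs_call(H * H / S, K, r, sigma, T))

def down_and_out_call_discrete(S, K, H, r, sigma, T, m):
    """Approximate price under m equally spaced monitoring dates: shift the
    barrier away from the underlying, then reuse the continuous formula."""
    dt = T / m
    H_adj = H * exp(-BETA * sigma * sqrt(dt))  # down barrier moves down
    return down_and_out_call_continuous(S, K, H_adj, r, sigma, T)

# Illustrative numbers only (not taken from the paper):
print(down_and_out_call_discrete(S=100, K=100, H=95, r=0.05, sigma=0.2, T=1.0, m=50))
```

For a down barrier the shift "away from the underlying" lowers the barrier, hence the minus sign in the exponent; an up barrier would use a plus sign instead.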
{
"docid": "e4b6dbd8238160457f14aacb8f9717ff",
"text": "Abs t r ac t . The PKZIP program is one of the more widely used archive/ compression programs on personM, computers. It also has many compatible variants on other computers~ and is used by most BBS's and ftp sites to compress their archives. PKZIP provides a stream cipher which allows users to scramble files with variable length keys (passwords). In this paper we describe a known pla.intext attack on this cipher, which can find the internal representation of the key within a few hours on a personal computer using a few hundred bytes of known plaintext. In many cases, the actual user keys can also be found from the internal representation. We conclude that the PKZIP cipher is weak, and should not be used to protect valuable data.",
"title": ""
},
{
"docid": "f7b911eca27efc3b0535f8b48222f993",
"text": "Numerous entity linking systems are addressing the entity recognition problem by using off-the-shelf NER systems. It is, however, a difficult task to select which specific model to use for these systems, since it requires to judge the level of similarity between the datasets which have been used to train models and the dataset at hand to be processed in which we aim to properly recognize entities. In this paper, we present the newest version of ADEL, our adaptive entity recognition and linking framework, where we experiment with an hybrid approach mixing a model combination method to improve the recognition level and to increase the efficiency of the linking step by applying a filter over the types. We obtain promising results when performing a 4-fold cross validation experiment on the OKE 2016 challenge training dataset. We also demonstrate that we achieve better results that in our previous participation on the OKE 2015 test set. We finally report the results of ADEL on the OKE 2016 test set and we present an error analysis highlighting the main difficulties of this challenge.",
"title": ""
},
{
"docid": "6cbce08be2401cac8a2d04159222aa3a",
"text": "Optimal treatment of symptomatic accessory navicular bones, generally asymptomatic ‘extra’ ossicles in the front interior ankle, remains debated. Incidence and type of accessory navicular bones in Chinese patients were examined as a basis for improving diagnostic and treatment standards. Accessory navicular bones were retrospectively examined in 1,625 (790 men and 835 women) patients with trauma-induced or progressive symptomatic ankle pain grouped by gender and age from August 2011 to May 2012. Anterior–posterior/oblique X-ray images; presence; type; affected side; modified Coughlin’s classification types 1, 2A, 2B, and 3; and subgroups a–c were recorded. Accessory navicular bones were found in 329 (20.2 %) patients (143 men and 186 women; mean age, 47.24 ± 18.34, ranging 14–96 years). Patients aged 51–60 exhibited most accessory navicular bones (29.7 %), with risk slightly higher in women and generally increasing from minimal 10.9 % at ages 11–20 to age 51 and thereafter declining to 0.4 % by age 90. The incidence was 41.6 % for Type 1 (Type 1a: 9.1 %, Type 1b: 15.5 %, and Type 1c: 19.4 %), 36.8 % for Type 2 (Type 2Aa: 2.1 %, Type 2Ab: 13.7 %, Type 2Ac: 5.1 %, Type 2Ba: 2.1 %, 2Bb: 2.1 %, and 2Bc: 11.6 %), and 21.6 % for Type 3 (Type 3a: 4.5 %, Type 3b: 14 %, and Type 3c: 3.0 %). Approximately one-fifth (20.3 %) of ankle pain patients exhibited accessory navicular bones, with Type 2 most common and middle-aged patients most commonly affected. Thus, accessory navicular bones may be less rare than previously thought, underlying treatable symptomatic conditions of foot pain and deformity.",
"title": ""
},
{
"docid": "5285b2b579c8a0f0915e76e41d66330c",
"text": "Not all bugs lead to program crashes, and not always is there a formal specification to check the correctness of a software test's outcome. A common scenario in software testing is therefore that test data are generated, and a tester manually adds test oracles. As this is a difficult task, it is important to produce small yet representative test sets, and this representativeness is typically measured using code coverage. There is, however, a fundamental problem with the common approach of targeting one coverage goal at a time: Coverage goals are not independent, not equally difficult, and sometimes infeasible-the result of test generation is therefore dependent on the order of coverage goals and how many of them are feasible. To overcome this problem, we propose a novel paradigm in which whole test suites are evolved with the aim of covering all coverage goals at the same time while keeping the total size as small as possible. This approach has several advantages, as for example, its effectiveness is not affected by the number of infeasible targets in the code. We have implemented this novel approach in the EvoSuite tool, and compared it to the common approach of addressing one goal at a time. Evaluated on open source libraries and an industrial case study for a total of 1,741 classes, we show that EvoSuite achieved up to 188 times the branch coverage of a traditional approach targeting single branches, with up to 62 percent smaller test suites.",
"title": ""
},
{
"docid": "2a63710d79eab2e4bd59a610f874e4ab",
"text": "To a client, one of the simplest services provided by a distributed system is a time service. A client simply requests the time from any set of servers, and uses any reply. The simplicity in this interaction, however, misrepresents the complexity of implementing such a service. An algorithm is needed that will keep a set of clocks synchronized, reasonably correct and accurate with rcspcct to a standard, and able to withstand errors such as communication failures and inaccurate clocks. This paper presents a partial solution to the problem by describing two algorithms which will keep clocks both correct and synchronized.",
"title": ""
},
{
"docid": "9cb2f99aa1c745346999179132df3854",
"text": "As a complementary and alternative medicine in medical field, traditional Chinese medicine (TCM) has drawn great attention in the domestic field and overseas. In practice, TCM provides a quite distinct methodology to patient diagnosis and treatment compared to western medicine (WM). Syndrome (ZHENG or pattern) is differentiated by a set of symptoms and signs examined from an individual by four main diagnostic methods: inspection, auscultation and olfaction, interrogation, and palpation which reflects the pathological and physiological changes of disease occurrence and development. Patient classification is to divide patients into several classes based on different criteria. In this paper, from the machine learning perspective, a survey on patient classification issue will be summarized on three major aspects of TCM: sign classification, syndrome differentiation, and disease classification. With the consideration of different diagnostic data analyzed by different computational methods, we present the overview for four subfields of TCM diagnosis, respectively. For each subfield, we design a rectangular reference list with applications in the horizontal direction and machine learning algorithms in the longitudinal direction. According to the current development of objective TCM diagnosis for patient classification, a discussion of the research issues around machine learning techniques with applications to TCM diagnosis is given to facilitate the further research for TCM patient classification.",
"title": ""
}
] |
scidocsrr
|
4aa62373dedcfacbf87e08d983f0c72b
|
Global Normalization of Convolutional Neural Networks for Joint Entity and Relation Classification
|
[
{
"docid": "d18181640e98086732e5f32682e12127",
"text": "This paper proposes a novel context-aware joint entity and word-level relation extraction approach through semantic composition of words, introducing a Table Filling Multi-Task Recurrent Neural Network (TF-MTRNN) model that reduces the entity recognition and relation classification tasks to a table-filling problem and models their interdependencies. The proposed neural network architecture is capable of modeling multiple relation instances without knowing the corresponding relation arguments in a sentence. The experimental results show that a simple approach of piggybacking candidate entities to model the label dependencies from relations to entities improves performance. We present state-of-the-art results with improvements of 2.0% and 2.7% for entity recognition and relation classification, respectively on CoNLL04 dataset.",
"title": ""
}
] |
[
{
"docid": "a8ca07bf7784d7ac1d09f84ac76be339",
"text": "AbstructEstimation of 3-D information from 2-D image coordinates is a fundamental problem both in machine vision and computer vision. Circular features are the most common quadratic-curved features that have been addressed for 3-D location estimation. In this paper, a closed-form analytical solution to the problem of 3-D location estimation of circular features is presented. Two different cases are considered: 1) 3-D orientation and 3-D position estimation of a circular feature when its radius is known, and 2) 3-D orientation and 3-D position estimation of a circular feature when its radius is not known. As well, extension of the developed method to 3-D quadratic features is addressed. Specifically, a closed-form analytical solution is derived for 3-D position estimation of spherical features. For experimentation purposes, simulated as well as real setups were employed. Simulated experimental results obtained for all three cases mentioned above verified the analytical method developed in this paper. In the case of real experiments, a set of circles located on a calibration plate, whose locations were known with respect to a reference frame, were used for camera calibration as well as for the application of the developed method. Since various distortion factors had to be compensated in order to obtain accurate estimates of the parameters of the imaged circle-an ellipse-with respect to the camera's image frame, a sequential compensation procedure was applied to the input grey-level image. The experimental results obtained once more showed the validity of the total process involved in the 3-D location estimation of circular features in general and the applicability of the analytical method developed in this paper in particular.",
"title": ""
},
{
"docid": "d698f181eb7682d9bf98b3bc103abaac",
"text": "Current database research identified the use of computational power of GPUs as a way to increase the performance of database systems. As GPU algorithms are not necessarily faster than their CPU counterparts, it is important to use the GPU only if it will be beneficial for query processing. In a general database context, only few research projects address hybrid query processing, i.e., using a mix of CPUand GPU-based processing to achieve optimal performance. In this paper, we extend our CPU/GPU scheduling framework to support hybrid query processing in database systems. We point out fundamental problems and propose an algorithm to create a hybrid query plan for a query using our scheduling framework. Additionally, we provide cost metrics, which consider the possible overlapping of data transfers and computation on the GPU. Furthermore, we present algorithms to create hybrid query plans for query sequences and query trees.",
"title": ""
},
{
"docid": "de48faf1dc4d276460b8369c9d8f36a8",
"text": "Momentum is primarily driven by firms’ performance 12 to seven months prior to portfolio formation, not by a tendency of rising and falling stocks to keep rising and falling. Strategies based on recent past performance generate positive returns but are less profitable than those based on intermediate horizon past performance, especially among the largest, most liquid stocks. These facts are not particular to the momentum observed in the cross section of US equities. Similar results hold for momentum strategies trading international equity indices, commodities, and currencies.",
"title": ""
},
{
"docid": "aa9b9c05bf09e3c6cceeb664e218a753",
"text": "Software development is an inherently team-based activity, and many software-engineering courses are structured around team projects, in order to provide students with an authentic learning experience. The collaborative-development tools through which student developers define, share and manage their tasks generate a detailed record in the process. Albeit not designed for this purpose, this record can provide the instructor with insights into the students' work, the team's progress over time, and the individual team-member's contributions. In this paper, we describe an analysis and visualization toolkit that enables instructors to interactively explore the trace of the team's collaborative work, to better understand the team dynamics, and the tasks of the individual team developers. We also discuss our grounded-theory analysis of one team's work, based on their email exchanges, questionnaires and interviews. Our analyses suggest that the inferences supported by our toolkit are congruent with the developers' feedback, while there are some discrepancies with the reflections of the team as a whole.",
"title": ""
},
{
"docid": "a7373d69f5ff9d894a630cc240350818",
"text": "The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute, and the ISO 9000 series of standards, developed by the International Standards Organization, share a common concern with quality and process management. The two are driven by similar concerns and intuitively correlated. The purpose of this report is to contrast the CMM and ISO 9001, showing both their differences and their similarities. The results of the analysis indicate that, although an ISO 9001-compliant organization would not necessarily satisfy all of the level 2 key process areas, it would satisfy most of the level 2 goals and many of the level 3 goals. Because there are practices in the CMM that are not addressed in ISO 9000, it is possible for a level 1 organization to receive ISO 9001 registration; similarly, there are areas addressed by ISO 9001 that are not addressed in the CMM. A level 3 organization would have little difficulty in obtaining ISO 9001 certification, and a level 2 organization would have significant advantages in obtaining certification.",
"title": ""
},
{
"docid": "f2d27b79f1ac3809f7ea605203136760",
"text": "The Internet of Things (IoT) is a fast-growing movement turning devices into always-connected smart devices through the use of communication technologies. This facilitates the creation of smart strategies allowing monitoring and optimization as well as many other new use cases for various sectors. Low Power Wide Area Networks (LPWANs) have enormous potential as they are suited for various IoT applications and each LPWAN technology has certain features, capabilities and limitations. One of these technologies, namely LoRa/LoRaWAN has several promising features and private and public LoRaWANs are increasing worldwide. Similarly, researchers are also starting to study the potential of LoRa and LoRaWANs. This paper examines the work that has already been done and identifies flaws and strengths by performing a comparison of created testbeds. Limitations of LoRaWANs are also identified.",
"title": ""
},
{
"docid": "50eaa44f8e89870750e279118a219d7a",
"text": "Fitbit fitness trackers record sensitive personal information, including daily step counts, heart rate profiles, and locations visited. By design, these devices gather and upload activity data to a cloud service, which provides aggregate statistics to mobile app users. The same principles govern numerous other Internet-of-Things (IoT) services that target different applications. As a market leader, Fitbit has developed perhaps the most secure wearables architecture that guards communication with end-to-end encryption. In this article, we analyze the complete Fitbit ecosystem and, despite the brand's continuous efforts to harden its products, we demonstrate a series of vulnerabilities with potentially severe implications to user privacy and device security. We employ a range of techniques, such as protocol analysis, software decompiling, and both static and dynamic embedded code analysis, to reverse engineer previously undocumented communication semantics, the official smartphone app, and the tracker firmware. Through this interplay and in-depth analysis, we reveal how attackers can exploit the Fitbit protocol to extract private information from victims without leaving a trace, and wirelessly flash malware without user consent. We demonstrate that users can tamper with both the app and firmware to selfishly manipulate records or circumvent Fitbit's walled garden business model, making the case for an independent, user-controlled, and more secure ecosystem. Finally, based on the insights gained, we make specific design recommendations that can not only mitigate the identified vulnerabilities, but are also broadly applicable to securing future wearable system architectures.",
"title": ""
},
{
"docid": "ae9bdb80a60dd6820c1c9d9557a73ffc",
"text": "We propose a novel method for predicting image labels by fusing image content descriptors with the social media context of each image. An image uploaded to a social media site such as Flickr often has meaningful, associated information, such as comments and other images the user has uploaded, that is complementary to pixel content and helpful in predicting labels. Prediction challenges such as ImageNet [6]and MSCOCO [19] use only pixels, while other methods make predictions purely from social media context [21]. Our method is based on a novel fully connected Conditional Random Field (CRF) framework, where each node is an image, and consists of two deep Convolutional Neural Networks (CNN) and one Recurrent Neural Network (RNN) that model both textual and visual node/image information. The edge weights of the CRF graph represent textual similarity and link-based metadata such as user sets and image groups. We model the CRF as an RNN for both learning and inference, and incorporate the weighted ranking loss and cross entropy loss into the CRF parameter optimization to handle the training data imbalance issue. Our proposed approach is evaluated on the MIR-9K dataset and experimentally outperforms current state-of-the-art approaches.",
"title": ""
},
{
"docid": "244dbf0d36d3d221e12b1844d440ecb2",
"text": "A typical scene contains many different objects that compete for neural representation due to the limited processing capacity of the visual system. At the neural level, competition among multiple stimuli is evidenced by the mutual suppression of their visually evoked responses and occurs most strongly at the level of the receptive field. The competition among multiple objects can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that biasing signals due to selective attention can modulate neural activity in visual cortex not only in the presence but also in the absence of visual stimulation. Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals likely derives from a distributed network of areas in frontal and parietal cortex. Competition suggests that once attentional resources are depleted, no further processing is possible. Yet, existing data suggest that emotional stimuli activate brain regions \"automatically,\" largely immune from attentional control. We tested the alternative possibility, namely, that the neural processing of stimuli with emotional content is not automatic and instead requires some degree of attention. Our results revealed that, contrary to the prevailing view, all brain regions responding differentially to emotional faces, including the amygdala, did so only when sufficient attentional resources were available to process the faces. Thus, similar to the processing of other stimulus categories, the processing of facial expression is under top-down control.",
"title": ""
},
{
"docid": "511d631ab0d28039e2b8eeca87b825ac",
"text": "Compressive sensing (CS) is a new technique for the efficient acquisition of signals, images and other data that have a sparse representation in some basis, frame, or dictionary. By sparse we mean that the N-dimensional basis representation has just K <;<; N significant coefficients; in this case, the CS theory maintains that just M = O( K log N) random linear signal measurements will both preserve all of the signal information and enable robust signal reconstruction in polynomial time. In this paper, we extend the CS theory to pulse stream data, which correspond to S -sparse signals/images that are convolved with an unknown F-sparse pulse shape. Ignoring their convolutional structure, a pulse stream signal is K = SF sparse. Such signals figure prominently in a number of applications, from neuroscience to astronomy. Our specific contributions are threefold. First, we propose a pulse stream signal model and show that it is equivalent to an infinite union of subspaces. Second, we derive a lower bound on the number of measurements M required to preserve the essential information present in pulse streams. The bound is linear in the total number of degrees of freedom S + F, which is significantly smaller than the naïve bound based on the total signal sparsity K = SF. Third, we develop an efficient signal recovery algorithm that infers both the shape of the impulse response as well as the locations and amplitudes of the pulses. The algorithm alternatively estimates the pulse locations and the pulse shape in a manner reminiscent of classical deconvolution algorithms. Numerical experiments on synthetic and real data demonstrate the advantages of our approach over standard CS.",
"title": ""
},
{
"docid": "c2d0e11e37c8f0252ce77445bf583173",
"text": "This paper describes a method to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline to infer 3D model shapes including clothed people with 4.5mm reconstruction accuracy. At the core of our approach is the transformation of dynamic body pose into a canonical frame of reference. Our main contribution is a method to transform the silhouette cones corresponding to dynamic human silhouettes to obtain a visual hull in a common reference frame. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. Results on 4 different datasets demonstrate the effectiveness of our approach to produce accurate 3D models. Requiring only an RGB camera, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.",
"title": ""
},
{
"docid": "b2a43491283732082c65f88c9b03016f",
"text": "BACKGROUND\nExpressing breast milk has become increasingly prevalent, particularly in some developed countries. Concurrently, breast pumps have evolved to be more sophisticated and aesthetically appealing, adapted for domestic use, and have become more readily available. In the past, expressed breast milk feeding was predominantly for those infants who were premature, small or unwell; however it has become increasingly common for healthy term infants. The aim of this paper is to systematically explore the literature related to breast milk expressing by women who have healthy term infants, including the prevalence of breast milk expressing, reported reasons for, methods of, and outcomes related to, expressing.\n\n\nMETHODS\nDatabases (Medline, CINAHL, JSTOR, ProQuest Central, PsycINFO, PubMed and the Cochrane library) were searched using the keywords milk expression, breast milk expression, breast milk pumping, prevalence, outcomes, statistics and data, with no limit on year of publication. Reference lists of identified papers were also examined. A hand-search was conducted at the Australian Breastfeeding Association Lactation Resource Centre. Only English language papers were included. All papers about expressing breast milk for healthy term infants were considered for inclusion, with a focus on the prevalence, methods, reasons for and outcomes of breast milk expression.\n\n\nRESULTS\nA total of twenty two papers were relevant to breast milk expression, but only seven papers reported the prevalence and/or outcomes of expressing amongst mothers of well term infants; all of the identified papers were published between 1999 and 2012. Many were descriptive rather than analytical and some were commentaries which included calls for more research, more dialogue and clearer definitions of breastfeeding. While some studies found an association between expressing and the success and duration of breastfeeding, others found the opposite. In some cases these inconsistencies were compounded by imprecise definitions of breastfeeding and breast milk feeding.\n\n\nCONCLUSIONS\nThere is limited evidence about the prevalence and outcomes of expressing breast milk amongst mothers of healthy term infants. The practice of expressing breast milk has increased along with the commercial availability of a range of infant feeding equipment. The reasons for expressing have become more complex while the outcomes, when they have been examined, are contradictory.",
"title": ""
},
{
"docid": "5b5e69bd93f6b809c29596a54c1565fc",
"text": "Variety and veracity are two distinct characteristics of large-scale and heterogeneous data. It has been a great challenge to efficiently represent and process big data with a unified scheme. In this paper, a unified tensor model is proposed to represent the unstructured, semistructured, and structured data. With tensor extension operator, various types of data are represented as subtensors and then are merged to a unified tensor. In order to extract the core tensor which is small but contains valuable information, an incremental high order singular value decomposition (IHOSVD) method is presented. By recursively applying the incremental matrix decomposition algorithm, IHOSVD is able to update the orthogonal bases and compute the new core tensor. Analyzes in terms of time complexity, memory usage, and approximation accuracy of the proposed method are provided in this paper. A case study illustrates that approximate data reconstructed from the core set containing 18% elements can guarantee 93% accuracy in general. Theoretical analyzes and experimental results demonstrate that the proposed unified tensor model and IHOSVD method are efficient for big data representation and dimensionality reduction.",
"title": ""
},
{
"docid": "6cb43a0f16b69cad9a7e5c5a528e23f5",
"text": "New substation technology, such as nonconventional instrument transformers, and a need to reduce design and construction costs are driving the adoption of Ethernet-based digital process bus networks for high-voltage substations. Protection and control applications can share a process bus, making more efficient use of the network infrastructure. This paper classifies and defines performance requirements for the protocols used in a process bus on the basis of application. These include Generic Object Oriented Substation Event, Simple Network Management Protocol, and Sampled Values (SVs). A method, based on the Multiple Spanning Tree Protocol (MSTP) and virtual local area networks, is presented that separates management and monitoring traffic from the rest of the process bus. A quantitative investigation of the interaction between various protocols used in a process bus is described. These tests also validate the effectiveness of the MSTP-based traffic segregation method. While this paper focuses on a substation automation network, the results are applicable to other real-time industrial networks that implement multiple protocols. High-volume SV data and time-critical circuit breaker tripping commands do not interact on a full-duplex switched Ethernet network, even under very high network load conditions. This enables an efficient digital network to replace a large number of conventional analog connections between control rooms and high-voltage switchyards.",
"title": ""
},
{
"docid": "296f18277958621763646519a7224193",
"text": "This chapter examines health promotion and disease prevention from the perspective of social cognitive theory. This theory posits a multifaceted causal structure in which self-efficacy beliefs operate in concert with cognized goals, outcome expectations, and perceived environmental impediments and facilitators in the regulation of human motivation, action, and well-being. Perceived self-efficacy is a key factor in the causal structure because it operates on motivation and action both directly and through its impact on the other determinants. The areas of overlap of sociocognitive determinants with some of the most widely applied psychosocial models of health are identified. Social cognitive theory addresses the sociostructural determinants of health as well as the personal determinants. A comprehensive approach to health promotion requires changing the practices of social systems that have widespread detrimental effects on health rather than solely changing the habits of individuals. Further progress in this field requires building new structures for health promotion, new systems for risk reduction and greater emphasis on health policy initiatives. People's beliefs in their collective efficacy to accomplish social change, therefore, play a key role in the policy and public health perspective to health promotion and disease prevention. Bandura, A. (1998). Health promotion from the perspective of social cognitive theory. Psychology and Health, 13, 623-649.",
"title": ""
},
{
"docid": "1aba7883ca8a1651d951ef55d8f4bbc5",
"text": "This paper presents an improvement of the J-linkage algorithm for fitting multiple instances of a model to noisy data corrupted by outliers. The binary preference analysis implemented by J-linkage is replaced by a continuous (soft, or fuzzy) generalization that proves to perform better than J-linkage on simulated data, and compares favorably with state of the art methods on public domain real datasets.",
"title": ""
},
{
"docid": "813e41234aad749022a4d655af987ad6",
"text": "Three- and four-element eyepiece designs are presented each with a different type of radial gradient-index distribution. Both quadratic and modified quadratic index profiles are shown to provide effective control of the field aberrations. In particular, the three-element design with a quadratic index profile demonstrates that the inhomogeneous power contribution can make significant contributions to the overall system performance, especially the astigmatism correction. Using gradient-index components has allowed for increased eye relief and field of view making these designs comparable with five- and six-element ones.",
"title": ""
},
{
"docid": "da2bc0813d4108606efef507e50100e3",
"text": "Prediction is one of the most attractive aspects in data mining. Link prediction has recently attracted the attention of many researchers as an effective technique to be used in graph based models in general and in particular for social network analysis due to the recent popularity of the field. Link prediction helps to understand associations between nodes in social communities. Existing link prediction-related approaches described in the literature are limited to predict links that are anticipated to exist in the future. To the best of our knowledge, none of the previous works in this area has explored the prediction of links that could disappear in the future. We argue that the latter set of links are important to know about; they are at least equally important as and do complement the positive link prediction process in order to plan better for the future. In this paper, we propose a link prediction model which is capable of predicting both links that might exist and links that may disappear in the future. The model has been successfully applied in two different though very related domains, namely health care and gene expression networks. The former application concentrates on physicians and their interactions while the second application covers genes and their interactions. We have tested our model using different classifiers and the reported results are encouraging. Finally, we compare our approach with the internal links approach and we reached the conclusion that our approach performs very well in both bipartite and non-bipartite graphs.",
"title": ""
},
{
"docid": "d7582552589626891258f52b0d750915",
"text": "Social Live Stream Services (SLSS) exploit a new level of social interaction. One of the main challenges in these services is how to detect and prevent deviant behaviors that violate community guidelines. In this work, we focus on adult content production and consumption in two widely used SLSS, namely Live.me and Loops Live, which have millions of users producing massive amounts of video content on a daily basis. We use a pre-trained deep learning model to identify broadcasters of adult content. Our results indicate that moderation systems in place are highly ineffective in suspending the accounts of such users. We create two large datasets by crawling the social graphs of these platforms, which we analyze to identify characterizing traits of adult content producers and consumers, and discover interesting patterns of relationships among them, evident in both networks.",
"title": ""
},
{
"docid": "3675229608c949f883b7e400a19b66bb",
"text": "SQL injection is one of the most prominent vulnerabilities for web-based applications. Exploitation of SQL injection vulnerabilities (SQLIV) through successful attacks might result in severe consequences such as authentication bypassing, leaking of private information etc. Therefore, testing an application for SQLIV is an important step for ensuring its quality. However, it is challenging as the sources of SQLIV vary widely, which include the lack of effective input filters in applications, insecure coding by programmers, inappropriate usage of APIs for manipulating databases etc. Moreover, existing testing approaches do not address the issue of generating adequate test data sets that can detect SQLIV. In this work, we present a mutation-based testing approach for SQLIV testing. We propose nine mutation operators that inject SQLIV in application source code. The operators result in mutants, which can be killed only with test data containing SQL injection attacks. By this approach, we force the generation of an adequate test data set containing effective test cases capable of revealing SQLIV. We implement a MUtation-based SQL Injection vulnerabilities Checking (testing) tool (MUSIC) that automatically generates mutants for the applications written in Java Server Pages (JSP) and performs mutation analysis. We validate the proposed operators with five open source web-based applications written in JSP. We show that the proposed operators are effective for testing SQLIV.",
"title": ""
}
] |
scidocsrr
|
df7c0407671ad437eaf331cf30b7f958
|
KNN-CF Approach: Incorporating Certainty Factor to kNN Classification
|
[
{
"docid": "be369e7935f5a56b0c5ac671c7ec315b",
"text": "Memory-based classification algorithms such as radial basis functions or K-nearest neighbors typically rely on simple distances (Euclidean, dot product ... ), which are not particularly meaningful on pattern vectors. More complex, better suited distance measures are often expensive and rather ad-hoc (elastic matching, deformable templates). We propose a new distance measure which (a) can be made locally invariant to any set of transformations of the input and (b) can be computed efficiently. We tested the method on large handwritten character databases provided by the Post Office and the NIST. Using invariances with respect to translation, rotation, scaling, shearing and line thickness, the method consistently outperformed all other systems tested on the same databases.",
"title": ""
}
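As a rough illustration of the locally invariant metric described in the preceding passage, the sketch below computes a one-sided tangent distance and uses it inside a k-NN vote. The tangent vectors are assumed to be precomputed (e.g., by finite-differencing small translations, rotations, or thickness changes of each prototype); the one-sided form, the helper names, and the least-squares solve are simplifications for illustration rather than the authors' exact formulation.

```python
import numpy as np

def one_sided_tangent_distance(x, y, T):
    """Distance from pattern y to the linear manifold {x + T.T @ a}.

    x, y : 1-D arrays of length d (flattened image patterns)
    T    : (k, d) array whose rows span the tangent directions of the
           invariance transformations at x (translation, rotation, ...).
    """
    diff = y - x
    # Least-squares coefficients of the tangent expansion that best
    # explain the difference between the two patterns.
    a, *_ = np.linalg.lstsq(T.T, diff, rcond=None)
    residual = diff - T.T @ a
    return float(np.linalg.norm(residual))

def knn_predict(query, prototypes, labels, tangents, k=3):
    """k-NN vote that swaps the Euclidean metric for the tangent distance."""
    labels = np.asarray(labels)
    d = [one_sided_tangent_distance(p, query, Tp)
         for p, Tp in zip(prototypes, tangents)]
    nearest = np.argsort(d)[:k]
    values, counts = np.unique(labels[nearest], return_counts=True)
    return values[np.argmax(counts)]
```

A two-sided variant additionally expands the query pattern in its own tangent space, tightening the invariance at modest extra cost.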
] |
[
{
"docid": "b1383088b26636e6ac13331a2419f794",
"text": "This paper investigates the problem of blurring caused by motion during image capture of text documents. Motion blurring prevents proper optical character recognition of the document text contents. One area of such applications is to deblur name card images obtained from handheld cameras. In this paper, a complete motion deblurring procedure for document images has been proposed. The method handles both uniform linear motion blur and uniform acceleration motion blur. Experiments on synthetic and real-life blurred images prove the feasibility and reliability of this algorithm provided that the motion is not too irregular. The restoration procedure consumes only small amount of computation time.",
"title": ""
},
{
"docid": "6080612b8858d633c3f63a3d019aef58",
"text": "Color images provide large information for human visual perception compared to grayscale images. Color image enhancement methods enhance the visual data to increase the clarity of the color image. It increases human perception of information. Different color image contrast enhancement methods are used to increase the contrast of the color images. The Retinex algorithms enhance the color images similar to the scene perceived by the human eye. Multiscale retinex with color restoration (MSRCR) is a type of retinex algorithm. The MSRCR algorithm results in graying out and halo artifacts at the edges of the images. So here the focus is on improving the MSRCR algorithm by combining it with contrast limited adaptive histogram equalization (CLAHE) using image.",
"title": ""
},
{
"docid": "356361bf2ca0e821250e4a32d299d498",
"text": "DRAM has been a de facto standard for main memory, and advances in process technology have led to a rapid increase in its capacity and bandwidth. In contrast, its random access latency has remained relatively stagnant, as it is still around 100 CPU clock cycles. Modern computer systems rely on caches or other latency tolerance techniques to lower the average access latency. However, not all applications have ample parallelism or locality that would help hide or reduce the latency. Moreover, applications' demands for memory space continue to grow, while the capacity gap between last-level caches and main memory is unlikely to shrink. Consequently, reducing the main-memory latency is important for application performance. Unfortunately, previous proposals have not adequately addressed this problem, as they have focused only on improving the bandwidth and capacity or reduced the latency at the cost of significant area overhead.\n We propose asymmetric DRAM bank organizations to reduce the average main-memory access latency. We first analyze the access and cycle times of a modern DRAM device to identify key delay components for latency reduction. Then we reorganize a subset of DRAM banks to reduce their access and cycle times by half with low area overhead. By synergistically combining these reorganized DRAM banks with support for non-uniform bank accesses, we introduce a novel DRAM bank organization with center high-aspect-ratio mats called CHARM. Experiments on a simulated chip-multiprocessor system show that CHARM improves both the instructions per cycle and system-wide energy-delay product up to 21% and 32%, respectively, with only a 3% increase in die area.",
"title": ""
},
{
"docid": "06f562ff86d8a2834616726a1d4b6e15",
"text": "This paper reports about interest operators, region detectors and region descriptors for photogrammetric applications. Features are the primary input for many applications like registration, 3D reconstruction, motion tracking, robot navigation, etc. Nowadays many detectors and descriptors algorithms are available, providing corners, edges and regions of interest together with n-dimensional vectors useful in matching procedures. The main algorithms are here described and analyzed, together with their proprieties. Experiments concerning the repeatability, localization accuracy and quantitative analysis are performed and reported. Details on how improve to location accuracy of region detectors are also reported.",
"title": ""
},
{
"docid": "af4055df4a60a241f43d453f34189d86",
"text": "We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.",
"title": ""
},
{
"docid": "4930fa19f6374774a5f4575b56159e50",
"text": "We present a study of the correlation between the extent to which the cluster hypothesis holds, as measured by various tests, and the relative effectiveness of cluster-based retrieval with respect to document-based retrieval. We show that the correlation can be affected by several factors, such as the size of the result list of the most highly ranked documents that is analyzed. We further show that some cluster hypothesis tests are often negatively correlated with one another. Moreover, in several settings, some of the tests are also negatively correlated with the relative effectiveness of cluster-based retrieval.",
"title": ""
},
{
"docid": "852578afdb63985d93b1d2d0ee8fc3e8",
"text": "This paper builds on the recent ASPIC formalism, to develop a general framework for argumentation with preferences. We motivate a revised definition of conflict free sets of arguments, adapt ASPIC to accommodate a broader range of instantiating logics, and show that under some assumptions, the resulting framework satisfies key properties and rationality postulates. We then show that the generalised framework accommodates Tarskian logic instantiations extended with preferences, and then study instantiations of the framework by classical logic approaches to argumentation. We conclude by arguing that ASPIC’s modelling of defeasible inference rules further testifies to the generality of the framework, and then examine and counter recent critiques of Dung’s framework and its extensions to accommodate preferences.",
"title": ""
},
{
"docid": "97aab319e3d38d755860b141c5a4fa38",
"text": "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.",
"title": ""
},
{
"docid": "8de4182b607888e6c7cbe6d6ae8ee122",
"text": "In this article, we focus on isolated gesture recognition and explore different modalities by involving RGB stream, depth stream, and saliency stream for inspection. Our goal is to push the boundary of this realm even further by proposing a unified framework that exploits the advantages of multi-modality fusion. Specifically, a spatial-temporal network architecture based on consensus-voting has been proposed to explicitly model the long-term structure of the video sequence and to reduce estimation variance when confronted with comprehensive inter-class variations. In addition, a three-dimensional depth-saliency convolutional network is aggregated in parallel to capture subtle motion characteristics. Extensive experiments are done to analyze the performance of each component and our proposed approach achieves the best results on two public benchmarks, ChaLearn IsoGD and RGBD-HuDaAct, outperforming the closest competitor by a margin of over 10% and 15%, respectively. Our project and codes will be released at https://davidsonic.github.io/index/acm_tomm_2017.html.",
"title": ""
},
{
"docid": "b6c62936aef87ab2cce565f6142424bf",
"text": "Concerns have been raised about the performance of PC-based virtual routers as they do packet processing in software. Furthermore, it becomes challenging to maintain isolation among virtual routers due to resource contention in a shared environment. Hardware vendors recognize this issue and PC hardware with virtualization support (SR-IOV and Intel-VTd) has been introduced in recent years. In this paper, we investigate how such hardware features can be integrated with two different virtualization technologies (LXC and KVM) to enhance performance and isolation of virtual routers on shared environments. We compare LXC and KVM and our results indicate that KVM in combination with hardware support can provide better trade-offs between performance and isolation. We notice that KVM has slightly lower throughput, but has superior isolation properties by providing more explicit control of CPU resources. We demonstrate that KVM allows defining a CPU share for a virtual router, something that is difficult to achieve in LXC, where packet forwarding is done in a kernel shared by all virtual routers.",
"title": ""
},
{
"docid": "7c1b301e45da5af0f5248f04dbf33f75",
"text": "[1] We invert 115 differential interferograms derived from 47 synthetic aperture radar (SAR) scenes for a time-dependent deformation signal in the Santa Clara valley, California. The time-dependent deformation is calculated by performing a linear inversion that solves for the incremental range change between SAR scene acquisitions. A nonlinear range change signal is extracted from the ERS InSAR data without imposing a model of the expected deformation. In the Santa Clara valley, cumulative land uplift is observed during the period from 1992 to 2000 with a maximum uplift of 41 ± 18 mm centered north of Sunnyvale. Uplift is also observed east of San Jose. Seasonal uplift and subsidence dominate west of the Silver Creek fault near San Jose with a maximum peak-to-trough amplitude of 35 mm. The pattern of seasonal versus long-term uplift provides constraints on the spatial and temporal characteristics of water-bearing units within the aquifer. The Silver Creek fault partitions the uplift behavior of the basin, suggesting that it acts as a hydrologic barrier to groundwater flow. While no tectonic creep is observed along the fault, the development of a low-permeability barrier that bisects the alluvium suggests that the fault has been active since the deposition of Quaternary units.",
"title": ""
},
{
"docid": "08aa9d795464d444095bbb73c067c2a9",
"text": "Next-generation sequencing (NGS) is a rapidly evolving set of technologies that can be used to determine the sequence of an individual's genome 1 by calling genetic variants present in an individual using billions of short, errorful sequence reads 2 . Despite more than a decade of effort and thousands of dedicated researchers, the hand-crafted and parameterized statistical models used for variant calling still produce thousands of errors and missed variants in each genome 3,4 . Here we show that a deep convolutional neural network 5 can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships (likelihoods) between images of read pileups around putative variant sites and ground-truth genotype calls. This approach, called DeepVariant, outperforms existing tools, even winning the \"highest performance\" award for SNPs in a FDA-administered variant calling challenge. The learned model generalizes across genome builds and even to other species, allowing non-human sequencing projects to benefit from the wealth of human ground truth data. We further show that, unlike existing tools which perform well on only a specific technology, DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, from deep whole genomes from 10X Genomics to Ion Ampliseq exomes. DeepVariant represents a significant step from expert-driven statistical modeling towards more automatic deep learning approaches for developing software to interpret biological instrumentation data. Main Text Calling genetic variants from NGS data has proven challenging because NGS reads are not only errorful (with rates from ~0.1-10%) but result from a complex error process that depends on properties of the instrument, preceding data processing tools, and the genome sequence itself. State-of-the-art variant callers use a variety of statistical techniques to model these error processes and thereby accurately identify differences between the reads and the reference genome caused by real genetic variants and those arising from errors in the reads. For example, the widely-used GATK uses logistic regression to model base errors, hidden Markov models to compute read likelihoods, and naive Bayes classification to identify variants, which are then filtered to remove likely false positives using a Gaussian mixture model peer-reviewed) is the author/funder. All rights reserved. No reuse allowed without permission. The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/092890 doi: bioRxiv preprint first posted online Dec. 14, 2016; Poplin et al. Creating a universal SNP and small indel variant caller with deep neural networks. with hand-crafted features capturing common error modes 6 . These techniques allow the GATK to achieve high but still imperfect accuracy on the Illumina sequencing platform . Generalizing these models to other sequencing technologies has proven difficult due to the need for manual retuning or extending these statistical models (see e.g. Ion Torrent 8,9 ), a major problem in an area with such rapid technological progress 1 . Here we describe a variant caller for NGS data that replaces the assortment of statistical modeling components with a single, deep learning model. Deep learning is a revolutionary machine learning technique applicable to a variety of domains, including image classification 10 , translation , gaming , and the life sciences 14–17 . 
This toolchain, which we call DeepVariant (Figure 1), begins by finding candidate SNPs and indels in reads aligned to the reference genome with high sensitivity but low specificity. The deep learning model, using the Inception-v2 architecture, emits probabilities for each of the three diploid genotypes at a locus using a pileup image of the reference and read data around each candidate variant (Figure 1). The model is trained using labeled true genotypes, after which it is frozen and can then be applied to novel sites or samples. Throughout the following experiments, DeepVariant was trained on a set of samples or variants independent of those being evaluated. This deep learning model has no specialized knowledge about genomics or next-generation sequencing, and yet can learn to call genetic variants more accurately than state-of-the-art methods. When applied to the Platinum Genomes Project NA12878 data 18 , DeepVariant produces a callset with better performance than the GATK when evaluated on the held-out chromosomes of the Genome in a Bottle ground truth set (Figure 2A). For further validation, we sequenced 35 replicates of NA12878 using a standard whole-genome sequencing protocol and called variants on 27 replicates using a GATK best-practices pipeline and DeepVariant using a model trained on the other eight replicates (see methods). Not only does DeepVariant produce more accurate results but it does so with greater consistency across a variety of quality metrics (Figure 2B). To further confirm the performance of DeepVariant, we submitted variant calls for a blinded sample, NA24385, to the Food and Drug Administration-sponsored variant calling Truth Challenge in May 2016 and won the \"highest performance\" award for SNPs by an independent team using a different evaluation methodology. Like many variant calling algorithms, GATK relies on a model that assumes read errors are independent . Though long-recognized as an invalid assumption 2 , the true likelihood function that models multiple reads simultaneously is unknown 6,19,20 . Because DeepVariant presents an image of all of the reads relevant for a putative variant together, the convolutional neural network (CNN) is able to account for the complex dependence among the reads by virtue of being a universal approximator 21 . This manifests itself as a tight concordance between the estimated probability of error from the likelihood function and the observed error rate, as seen in Figure 2C where DeepVariant's CNN is well calibrated, strikingly more so than the GATK. That the CNN has approximated this true, but unknown, inter-dependent likelihood function is the essential technical advance enabling us to replace the hand-crafted statistical models in other approaches with a single deep learning model and still achieve such high performance in variant calling. We further explored how well DeepVariant’s CNN generalizes beyond its training data. 
First, a model trained with read data aligned to human genome build GRCh37 and applied to reads aligned to GRCh38 has similar performance (overall F1 = 99.45%) to one trained on GRCh38 and then applied to GRCh38 (overall F1 = 99.53%), thereby demonstrating that a model learned from one version of the human genome reference can be applied to other versions with effectively no loss in accuracy (Table S1). Second, models learned using human reads and ground truth data achieve high accuracy when applied to a mouse dataset 22 (F1 = 98.29%), out-performing training on the mouse data itself (F1 = 97.84%, Table S4). This last experiment is especially demanding as not only do the species differ but nearly all of the sequencing parameters do as well: 50x 2x148bp from an Illumina TruSeq prep sequenced on a HiSeq 2500 for the human sample and 27x 2x100bp reads from a custom sequencing preparation run on an Illumina Genome Analyzer II for mouse . Thus, DeepVariant is robust to changes in sequencing depth, preparation protocol, instrument type, genome build, and even species. The practical benefits of this capability are substantial, as DeepVariant enables resequencing projects in non-human species, which often have no ground truth data to guide their efforts , to leverage the large and growing ground truth data in humans. To further assess its capabilities, we trained DeepVariant to call variants in eight datasets from Genome in a Bottle 24 that span a variety of sequencing instruments and protocols, including whole genome and exome sequencing technologies, with read lengths from fifty to many thousands of basepairs (Table 1 and S6). We used the already processed BAM files to introduce additional variability as these BAMs differ in their alignment and cleaning steps. The results of this experiment all exhibit a characteristic pattern: the candidate variants have the highest sensitivity but a low PPV (mean 57.6%), which varies significantly by dataset. After retraining, all of the callsets achieve high PPVs (mean of 99.3%) while largely preserving the candidate callset sensitivity (mean loss of 2.3%). The high PPVs and low loss of sensitivity indicate that DeepVariant can learn a model that captures the technology-specific error processes in sufficient detail to separate real variation from false positives with high fidelity for many different sequencing technologies. Having already shown above that DeepVariant performs well on Illumina WGS data, we analyze here the behavior of DeepVariant on two non-Illumina WGS datasets and two exome datasets from Illumina and Ion Torrent. The SOLID and Pacific Biosciences (PacBio) WGS datasets have high error rates in the candidate callsets. SOLID (13.9% PPV for SNPs, 96.2% for indels, and 14.3% overall) has many SNP artifacts from mapping the short, color-space reads. The PacBio dataset is the opposite, with many false indels (79.8% PPV for SNPs, 1.4% for indels, and 22.1% overall) due to this technology's high indel error rate. Training DeepVariant to call variants in an exome is likely to be particularly challenging. Exomes have far fewer variants (~20-30k) than found in a whole-genome (~4-5M) 26 . T",
"title": ""
},
{
"docid": "df3ef3feeaf787315188db2689dc6fb9",
"text": "Multi-class weather classification from single images is a fundamental operation in many outdoor computer vision applications. However, it remains difficult and the limited work is carried out for addressing the difficulty. Moreover, existing method is based on the fixed scene. In this paper we present a method for any scenario multi-class weather classification based on multiple weather features and multiple kernel learning. Our approach extracts multiple weather features and takes properly processing. By combining these features into high dimensional vectors, we utilize multiple kernel learning to learn an adaptive classifier. We collect an outdoor image set that contains 20K images called MWI (Multi-class Weather Image) set. Experimental results show that the proposed method can efficiently recognize weather on MWI dataset.",
"title": ""
},
{
"docid": "b8dbc4c33e51350109bf1fec5ef852ce",
"text": "Stack Overflow is one of the most popular question-and-answer sites for programmers. However, there are a great number of duplicate questions that are expected to be detected automatically in a short time. In this paper, we introduce two approaches to improve the detection accuracy: splitting body into different types of data and using word-embedding to treat word ambiguities that are not contained in the general corpuses. The evaluation shows that these approaches improve the accuracy compared with the traditional method.",
"title": ""
},
{
"docid": "8b5d7965ac154da1266874027f0b10a0",
"text": "Matching pedestrians across disjoint camera views, known as person re-identification (re-id), is a challenging problem that is of importance to visual recognition and surveillance. Most existing methods exploit local regions within spatial manipulation to perform matching in local correspondence. However, they essentially extract fixed representations from pre-divided regions for each image and perform matching based on the extracted representation subsequently. For models in this pipeline, local finer patterns that are crucial to distinguish positive pairs from negative ones cannot be captured, and thus making them underperformed. In this paper, we propose a novel deep multiplicative integration gating function, which answers the question of what-and-where to match for effective person re-id. To address what to match, our deep network emphasizes common local patterns by learning joint representations in a multiplicative way. The network comprises two Convolutional Neural Networks (CNNs) to extract convolutional activations, and generates relevant descriptors for pedestrian matching. This thus, leads to flexible representations for pair-wise images. To address where to match, we combat the spatial misalignment by performing spatially recurrent pooling via a four-directional recurrent neural network to impose spatial depenEmail addresses: lin.wu@uq.edu.au (Lin Wu ), wangy@cse.unsw.edu.au (Yang Wang), xueli@itee.uq.edu.au (Xue Li), junbin.gao@sydney.edu.au (Junbin Gao) Preprint submitted to Elsevier 25·7·2017 ar X iv :1 70 7. 07 07 4v 1 [ cs .C V ] 2 1 Ju l 2 01 7 dency over all positions with respect to the entire image. The proposed network is designed to be end-to-end trainable to characterize local pairwise feature interactions in a spatially aligned manner. To demonstrate the superiority of our method, extensive experiments are conducted over three benchmark data sets: VIPeR, CUHK03 and Market-1501.",
"title": ""
},
{
"docid": "79bfb0820e43af3d7012b61f677ed206",
"text": "We derive generalizations of AdaBoost and related gradient-based coordinate descent methods that incorporate sparsity-promoting penalties for the norm of the predictor that is being learned. The end result is a family of coordinate descent algorithms that integrate forward feature induction and back-pruning through regularization and give an automatic stopping criterion for feature induction. We study penalties based on the l1, l2, and l∞ norms of the predictor and introduce mixed-norm penalties that build upon the initial penalties. The mixed-norm regularizers facilitate structural sparsity in parameter space, which is a useful property in multiclass prediction and other related tasks. We report empirical results that demonstrate the power of our approach in building accurate and structurally sparse models.",
"title": ""
},
{
"docid": "7a9a7b888b9e3c2b82e6c089d05e2803",
"text": "Background:\nBullous pemphigoid (BP) is a chronic, autoimmune blistering skin disease that affects patients' daily life and psychosocial well-being.\n\n\nObjective:\nThe aim of the study was to evaluate the quality of life, anxiety, depression and loneliness in BP patients.\n\n\nMethods:\nFifty-seven BP patients and fifty-seven healthy controls were recruited for the study. The quality of life of each patient was assessed using the Dermatology Life Quality Index (DLQI) scale. Moreover, they were evaluated for anxiety and depression according to the Hospital Anxiety Depression Scale (HADS-scale), while loneliness was measured through the Loneliness Scale-Version 3 (UCLA) scale.\n\n\nResults:\nThe mean DLQI score was 9.45±3.34. Statistically significant differences on the HADS total scale and in HADS-depression subscale (p=0.015 and p=0.002, respectively) were documented. No statistically significant difference was found between the two groups on the HADS-anxiety subscale. Furthermore, significantly higher scores were recorded on the UCLA Scale compared with healthy volunteers (p=0.003).\n\n\nConclusion:\nBP had a significant impact on quality of life and the psychological status of patients, probably due to the appearance of unattractive lesions on the skin, functional problems and disease chronicity.",
"title": ""
},
{
"docid": "43db0f06e3de405657996b46047fa369",
"text": "Given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. In the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. Some rules for defining a smooth least-distorting warp function are given. To reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. The distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. The method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. The advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. The desired correspondence is defined by an animator in terms of a relatively small number of anchor points",
"title": ""
},
{
"docid": "07e713880604e82559ccfeece0149228",
"text": "The modern research has found a variety of applications and systems with vastly varying requirements and characteristics in Wireless Sensor Networks (WSNs). The research has led to materialization of many application specific routing protocols which must be energy-efficient. As a consequence, it is becoming increasingly difficult to discuss the design issues requirements regarding hardware and software support. Implementation of efficient system in a multidisciplinary research such as WSNs is becoming very difficult. In this paper we discuss the design issues in routing protocols for WSNs by considering its various dimensions and metrics such as QoS requirement, path redundancy etc. The paper concludes by presenting",
"title": ""
},
{
"docid": "7a202dfa59cb8c50a6999fe8a50895a9",
"text": "The process for transferring knowledge of multiple reinforcement learning policies into a single multi-task policy via distillation technique is known as policy distillation. When policy distillation is under a deep reinforcement learning setting, due to the giant parameter size and the huge state space for each task domain, it requires extensive computational efforts to train the multi-task policy network. In this paper, we propose a new policy distillation architecture for deep reinforcement learning, where we assume that each task uses its taskspecific high-level convolutional features as the inputs to the multi-task policy network. Furthermore, we propose a new sampling framework termed hierarchical prioritized experience replay to selectively choose experiences from the replay memories of each task domain to perform learning on the network. With the above two attempts, we aim to accelerate the learning of the multi-task policy network while guaranteeing a good performance. We use Atari 2600 games as testing environment to demonstrate the efficiency and effectiveness of our proposed solution for policy distillation.",
"title": ""
}
] |
scidocsrr
|
00a92d6f3afd28c97c9b0a6b70372fe3
|
ML-KNN: A lazy learning approach to multi-label learning
|
[
{
"docid": "8b498cfaa07f0b2858e417e0e0d5adb4",
"text": "In classic pattern recognition problems, classes are mutually exclusive by de\"nition. Classi\"cation errors occur when the classes overlap in the feature space. We examine a di5erent situation, occurring when the classes are, by de\"nition, not mutually exclusive. Such problems arise in semantic scene and document classi\"cation and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classi\"cation, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a \"eld scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a di5erent treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classi\"cation; furthermore, our work appears to generalize to other classi\"cation problems of the same nature. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "5a03bfd124df29ed5607a13fe546e661",
"text": "Employees and/or functional managers increasingly adopt and use IT systems and services that the IS management of the organization does neither provide nor approve. To effectively counteract such shadow IT in organizations, the understanding of employees’ motivations and drivers is necessary. However, the scant literature on this topic primarily focused on various governance approaches at firm level. With the objective to open the black box of shadow IT usage at the individual unit of analysis, we develop a research model and propose a laboratory experiment to examine users’ justifications for violating implicit and explicit IT usage restrictions based on neutralization theory. To be precise, in this research-in-progress, we posit positive associations between shadow IT usage and human tendencies to downplay such kind of rule-breaking behaviors due to necessity, no injury, and injustice. We expect a lower impact of these neutralization effects in the presence of behavioral IT guidelines that explicitly prohibit users to employ exactly those shadow IT systems.",
"title": ""
},
{
"docid": "57d8f78ac76925f17b28b78992b7a7b9",
"text": "The effects of long-term aerobic exercise on endothelial function in patients with essential hypertension remain unclear. To determine whether endothelial function relating to forearm hemodynamics in these patients differs from normotensive subjects and whether endothelial function can be modified by continued physical exercise, we randomized patients with essential hypertension into a group that engaged in 30 minutes of brisk walking 5 to 7 times weekly for 12 weeks (n=20) or a group that underwent no activity modifications (control group, n=7). Forearm blood flow was measured using strain-gauge plethysmography during reactive hyperemia to test for endothelium-dependent vasodilation and after sublingual nitroglycerin administration to test endothelium-independent vasodilation. Forearm blood flow in hypertensive patients during reactive hyperemia was significantly less than that in normotensive subjects (n=17). Increases in forearm blood flow after nitroglycerin were similar between hypertensive and normotensive subjects. Exercise lowered mean blood pressure from 115.7+/-5.3 to 110.2+/-5.1 mm Hg (P<0.01) and forearm vascular resistance from 25.6+/-3.2 to 23. 2+/-2.8 mm Hg/mL per minute per 100 mL tissue (P<0.01); no change occurred in controls. Basal forearm blood flow, body weight, and heart rate did not differ with exercise. After 12 weeks of exercise, maximal forearm blood flow response during reactive hyperemia increased significantly, from 38.4+/-4.6 to 47.1+/-4.9 mL/min per 100 mL tissue (P<0.05); this increase was not seen in controls. Changes in forearm blood flow after sublingual nitroglycerin administration were similar before and after 12 weeks of exercise. Intra-arterial infusion of the nitric oxide synthase inhibitor NG-monomethyl-L-arginine abolished the enhancement of reactive hyperemia induced by 12 weeks of exercise. These findings suggest that through increased release of nitric oxide, continued physical exercise alleviates impairment of reactive hyperemia in patients with essential hypertension.",
"title": ""
},
{
"docid": "4d6c21ed39ef5d9d7e9b616338cc2dfa",
"text": "Due to the increasing threat from malicious software (malware), monitoring of vulnerable systems is becoming increasingly important. The need to log and analyze activity encompasses networks, individual computers, as well as mobile devices. While there are various automatic approaches and techniques available to detect, identify, or capture malware, the actual analysis of the ever-increasing number of suspicious samples is a time-consuming process for malware analysts. The use of visualization and highly interactive visual analytics systems can help to support this analysis process with respect to investigation, comparison, and summarization of malware samples. Currently, there is no survey available that reviews available visualization systems supporting this important and emerging field. We provide a systematic overview and categorization of malware visualization systems from the perspective of visual analytics. Additionally, we identify and evaluate data providers and commercial tools that produce meaningful input data for the reviewed malware visualization systems. This helps to reveal data types that are currently underrepresented, enabling new research opportunities in the visualization community.",
"title": ""
},
{
"docid": "1d8765a407f2b9f8728982f54ddb6ae1",
"text": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. Materials and Methods: The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. Results: We evaluate similar medical concepts across diagnosis, medication and procedure. The results show xx% relevancy between similar pairs of medical concepts. Our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. Conclusion: We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance.",
"title": ""
},
{
"docid": "bdd9760446a6412195e0742b5f1c7035",
"text": "Cyanobacteria are found globally due to their adaptation to various environments. The occurrence of cyanobacterial blooms is not a new phenomenon. The bloom-forming and toxin-producing species have been a persistent nuisance all over the world over the last decades. Evidence suggests that this trend might be attributed to a complex interplay of direct and indirect anthropogenic influences. To control cyanobacterial blooms, various strategies, including physical, chemical, and biological methods have been proposed. Nevertheless, the use of those strategies is usually not effective. The isolation of natural compounds from many aquatic and terrestrial plants and seaweeds has become an alternative approach for controlling harmful algae in aquatic systems. Seaweeds have received attention from scientists because of their bioactive compounds with antibacterial, antifungal, anti-microalgae, and antioxidant properties. The undesirable effects of cyanobacteria proliferations and potential control methods are here reviewed, focusing on the use of potent bioactive compounds, isolated from seaweeds, against microalgae and cyanobacteria growth.",
"title": ""
},
{
"docid": "2088be2c5623d7491c5692b6ebd4f698",
"text": "Machine learning (ML) is now widespread. Traditional software engineering can be applied to the development ML applications. However, we have to consider specific problems with ML applications in therms of their quality. In this paper, we present a survey of software quality for ML applications to consider the quality of ML applications as an emerging discussion. From this survey, we raised problems with ML applications and discovered software engineering approaches and software testing research areas to solve these problems. We classified survey targets into Academic Conferences, Magazines, and Communities. We targeted 16 academic conferences on artificial intelligence and software engineering, including 78 papers. We targeted 5 Magazines, including 22 papers. The results indicated key areas, such as deep learning, fault localization, and prediction, to be researched with software engineering and testing.",
"title": ""
},
{
"docid": "8057b33aa53c8017fd4050b9db401c2f",
"text": "Recent work in computer vision has yielded impressive results in automatically describing images with natural language. Most of these systems generate captions in a single language, requiring multiple language-specific models to build a multilingual captioning system. We propose a very simple technique to build a single unified model across languages, using artificial tokens to control the language, making the captioning system more compact. We evaluate our approach on generating English and Japanese captions, and show that a typical neural captioning architecture is capable of learning a single model that can switch between two different languages.",
"title": ""
},
{
"docid": "0851caf6599f97bbeaf68b57e49b4da5",
"text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model's predictions.",
"title": ""
},
{
"docid": "102e1718e03b3a4e96ee8c2212738792",
"text": "This paper introduces a new method for the rapid development of complex rule bases involving cue phrases for the purpose of classifying text segments. The method is based on Ripple-Down Rules, a knowledge acquisition method that proved very successful in practice for building medical expert systems and does not require a knowledge engineer. We implemented our system KAFTAN and demonstrate the applicability of our method to the task of classifying scientific citations. Building cue phrase rules in KAFTAN is easy and efficient. We demonstrate the effectiveness of our approach by presenting experimental results where our resulting classifier clearly outperforms previously built classifiers in the recent literature.",
"title": ""
},
{
"docid": "2720f2aa50ddfc9150d6c2718f4433d3",
"text": "This paper describes InP/InGaAs double heterojunction bipolar transistor (HBT) technology that uses SiN/SiO2 sidewall spacers. This technology enables the formation of ledge passivation and narrow base metals by i-line lithography. With this process, HBTs with various emitter sizes and emitter-base (EB) spacings can be fabricated on the same wafer. The impact of the emitter size and EB spacing on the current gain and high-frequency characteristics is investigated. The reduction of the current gain is <;5% even though the emitter width decreases from 0.5 to 0.25 μm. A high current gain of over 40 is maintained even for a 0.25-μm emitter HBT. The HBTs with emitter widths ranging from 0.25 to 0.5 μm also provide peak ft of over 430 GHz. On the other hand, peak fmax greatly increases from 330 to 464 GHz with decreasing emitter width from 0.5 to 0.25 μm. These results indicate that the 0.25-μm emitter HBT with the ledge passivaiton exhibits balanced high-frequency performance (ft = 452 GHz and fmax = 464 GHz), while maintaining a current gain of over 40.",
"title": ""
},
{
"docid": "954d0ef5a1a648221ce8eb3f217f4071",
"text": "Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, we review alternative architectures that have recently been developed; these learning paradigms include graph attention networks, graph autoencoders, graph generative networks, and graph spatial-temporal networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this",
"title": ""
},
{
"docid": "4cfedb5e516692b12a610c4211e6fdd4",
"text": "Supporters of market-based education reforms argue that school autonomy and between-school competition can raise student achievement. Yet U.S. reforms based in part on these ideas charter schools, school-based management, vouchers and school choice are limited in scope, complicating evaluations of their impact. In contrast, a series of remarkable reforms enacted by the Thatcher Government in Britain in the 1980s provide an ideal testing ground for examining the effects of school autonomy and between-school competition. In this paper I study one reform described by Chubb and Moe (1992) as ‘truly revolutionary’ that allowed public high schools to ‘opt out’ of the local school authority and become quasi-independent, funded directly by central Government. In order to opt out schools had to first win a majority vote of current parents, and I assess the impact of school autonomy via a regression discontinuity design, comparing student achievement levels at schools where the vote barely won to those where it barely lost. To assess the effects of competition I use this same idea to compare student achievement levels at neighbouring schools of barely winners to neighbouring schools of barely losers. My results suggest two conclusions. First, there were large gains to schools that won the vote and opted out, on the order of a onequarter standard deviation improvement on standardised national examinations. Since results improved for those students already enrolled in the school at the time of the vote, this outcome is not likely to be driven by changes in student-body composition (cream-skimming). Second, the gains enjoyed by the opted-out schools appear not to have spilled over to their neighbours I can never reject the hypothesis of no spillovers and can always reject effects bigger than one half of the ‘own-school’ impact. I interpret my results as supportive of education reforms that seek to hand power to schools, with the caveat that I do not know precisely what opted-out schools did to improve. With regards to competition, although I cannot rule out small but economically important competition effects, my results suggest caution as to the likely benefits.",
"title": ""
},
{
"docid": "686e9d38bbbec3b6e6150789e14575a0",
"text": "Automatic License Plate Recognition (ALPR) is an important task with many applications in Intelligent Transportation and Surveillance systems. As in other computer vision tasks, Deep Learning (DL) methods have been recently applied in the context of ALPR, focusing on country-specific plates, such as American or European, Chinese, Indian and Korean. However, either they are not a complete DL-ALPR pipeline, or they are commercial and utilize private datasets and lack detailed information. In this work, we proposed an end-to-end DL-ALPR system for Brazilian license plates based on state-of-the-art Convolutional Neural Network architectures. Using a publicly available dataset with Brazilian plates, the system was able to correctly detect and recognize all seven characters of a license plate in 63.18% of the test set, and 97.39% when considering at least five correct characters (partial match). Considering the segmentation and recognition of each character individually, we are able to segment 99% of the characters, and correctly recognize 93% of them.",
"title": ""
},
{
"docid": "6ae78c5e82030e76c87ef9759ba8a464",
"text": "The European innovation project PERFoRM (Production harmonizEd Reconfiguration of Flexible Robots and Machinery) is aiming for a harmonized integration of research results in the area of flexible and reconfigurable manufacturing systems. Based on the cyber-physical system (CPS) paradigm, existing technologies and concepts are researched and integrated in an architecture which is enabling the application of these new technologies in real industrial environments. To implement such a flexible cyber-physical system, one of the core requirements for each involved component is a harmonized communication, which enables the capability to collaborate with each other in an intelligent way. But especially when integrating multiple already existing production components into such a cyber-physical system, one of the major issues is to deal with the various communication protocols and data representations coming with each individual cyber-physical component. To tackle this issue, the solution foreseen within PERFoRM's architecture is to use an integration platform, the PERFoRM Industrial Manufacturing Middleware, to enable all connected components to interact with each other through the Middleware and without having to implement new interfaces for each. This paper describes the basic requirements of such a Middleware and how it fits into the PERFoRM architecture and gives an overview about the internal design and functionality.",
"title": ""
},
{
"docid": "48019a3106c6d74e4cfcc5ac596d4617",
"text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.",
"title": ""
},
{
"docid": "0b5431e668791d180239849c53faa7f7",
"text": "Crowdfunding is quickly emerging as an alternative to traditional methods of funding new products. In a crowdfunding campaign, a seller solicits financial contributions from a crowd, usually in the form of pre-buying an unrealized product, and commits to producing the product if the total amount pledged is above a certain threshold. We provide a model of crowdfunding in which consumers arrive sequentially and make decisions about whether to pledge or not. Pledging is not costless, and hence consumers would prefer not to pledge if they think the campaign will not succeed. This can lead to cascades where a campaign fails to raise the required amount even though there are enough consumers who want the product. The paper introduces a novel stochastic process --- anticipating random walks --- to analyze this problem. The analysis helps explain why some campaigns fail and some do not, and provides guidelines about how sellers should design their campaigns in order to maximize their chances of success. More broadly, Anticipating Random Walks can also find application in settings where agents make decisions sequentially and these decisions are not just affected by past actions of others, but also by how they will impact the decisions of future actors as well.",
"title": ""
},
{
"docid": "4e253e57dd1dba0ef804017d0ee9a2eb",
"text": "This paper presents an original probabilistic method for the numerical computations of Greeks (i.e. price sensitivities) in finance. Our approach is based on theintegration-by-partsformula, which lies at the core of the theory of variational stochastic calculus, as developed in the Malliavin calculus. The Greeks formulae, both with respect to initial conditions and for smooth perturbations of the local volatility, are provided for general discontinuous path-dependent payoff functionals of multidimensional diffusion processes. We illustrate the results by applying the formula to exotic European options in the framework of the Black and Scholes model. Our method is compared to the Monte Carlo finite difference approach and turns out to be very efficient in the case of discontinuous payoff functionals.",
"title": ""
},
{
"docid": "ac2eee03876d4260390972862ac12452",
"text": "Cross-validation (CV) is often used to select the regularization parameter in high dimensional problems. However, when applied to the sparse modeling method Lasso, CV leads to models that are unstable in high-dimensions, and consequently not suited for reliable interpretation. In this paper, we propose a model-free criterion ESCV based on a new estimation stability (ES) metric and CV . Our proposed ESCV finds a smaller and locally ES -optimal model smaller than the CV choice so that the it fits the data and also enjoys estimation stability property. We demonstrate that ESCV is an effective alternative to CV at a similar easily parallelizable computational cost. In particular, we compare the two approaches with respect to several performance measures when applied to the Lasso on both simulated and real data sets. For dependent predictors common in practice, our main finding is that, ESCV cuts down false positive rates often by a large margin, while sacrificing little of true positive rates. ESCV usually outperforms CV in terms of parameter estimation while giving similar performance as CV in terms of prediction. For the two real data sets from neuroscience and cell biology, the models found by ESCV are less than half of the model sizes by CV , but preserves CV’s predictive performance and corroborates with subject knowledge and independent work. We also discuss some regularization parameter alignment issues that come up in both approaches. Supplementary materials are available online.",
"title": ""
},
{
"docid": "4d8cc4d8a79f3d35ccc800c9f4f3dfdc",
"text": "Many common events in our daily life affect us in positive and negative ways. For example, going on vacation is typically an enjoyable event, while being rushed to the hospital is an undesirable event. In narrative stories and personal conversations, recognizing that some events have a strong affective polarity is essential to understand the discourse and the emotional states of the affected people. However, current NLP systems mainly depend on sentiment analysis tools, which fail to recognize many events that are implicitly affective based on human knowledge about the event itself and cultural norms. Our goal is to automatically acquire knowledge of stereotypically positive and negative events from personal blogs. Our research creates an event context graph from a large collection of blog posts and uses a sentiment classifier and semi-supervised label propagation algorithm to discover affective events. We explore several graph configurations that propagate affective polarity across edges using local context, discourse proximity, and event-event co-occurrence. We then harvest highly affective events from the graph and evaluate the agreement of the polarities with human judgements.",
"title": ""
},
{
"docid": "109838175d109002e022115d84cae0fa",
"text": "We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).",
"title": ""
}
] |
scidocsrr
|
144dece26525a57f4c531eb4f1d3760b
|
Dynamic trees as search trees via Euler tours, applied to the network simplex algorithm
|
[
{
"docid": "5e5780bbd151ccf981fe69d5eb70b067",
"text": "We give efficient algorithms for maintaining a minimum spanning forest of a planar graph subject to on-line modifications. The modifications supported include changes in the edge weights, and insertion and deletion of edges and vertices. To implement the algorithms, we develop a data structure called an edge-or&reck dynumic tree, which is a variant of the dynamic tree data structure of Sleator and Tarjan. Using this data structure, our algorithms run in O(logn) time per operation and O(n) space. The algorithms can be used to maintain the connected components of a dynamic planar graph in O(logn) time per operation. *Computer Science Laboratory, Xerox PARC, 3333 Coyote Hill Rd., Palo Alto, CA 94304. This work was done while the author was at the Department of Computer Science, Columbia University, New York, NY 10027. **Department of Computer Science, Columbia University, New York, NY 10027 and Dipartmento di Informatica e Sistemistica, Universitb di Roma, Rome, Italy. ***Department of Computer Science, Brown University, Box 1910, Providence, RI 02912-1910. #Department of Computer Science, Princeton University, Princeton, NJ 08544, and AT&T Bell Laboratories, Murray Hill, New Jersey 07974. ##Department of Computer Science, Stanford University, Stanford, CA 94305. This work was done while the author was at Department of Computer Science, Princeton University, Princeton, NJ 08544. ###IBM Research Division, T. J. Watson Research Center, Yorktown Heights, NY 10598. + Research supported in part by NSF grant CCR-8X-14977, NSF grant DCR-86-05962, ONR Contract N00014-87-H-0467 and Esprit II Basic Research Actions Program of the European Communities Contract No. 3075.",
"title": ""
}
] |
[
{
"docid": "1f1c4c69a4c366614f0cc9ecc24365ba",
"text": "BACKGROUND\nBurnout is a major issue among medical students. Its general characteristics are loss of interest in study and lack of motivation. A study of the phenomenon must extend beyond the university environment and personality factors to consider whether career choice has a role in the occurrence of burnout.\n\n\nMETHODS\nQuantitative, national survey (n = 733) among medical students, using a 12-item career motivation list compiled from published research results and a pilot study. We measured burnout by the validated Hungarian version of MBI-SS.\n\n\nRESULTS\nThe most significant career choice factor was altruistic motivation, followed by extrinsic motivations: gaining a degree, finding a job, accessing career opportunities. Lack of altruism was found to be a major risk factor, in addition to the traditional risk factors, for cynicism and reduced academic efficacy. Our study confirmed the influence of gender differences on both career choice motivations and burnout.\n\n\nCONCLUSION\nThe structure of career motivation is a major issue in the transformation of the medical profession. Since altruism is a prominent motivation for many women studying medicine, their entry into the profession in increasing numbers may reinforce its traditional character and act against the present trend of deprofessionalization.",
"title": ""
},
{
"docid": "07381e533ec04794a74abc0560d7c8af",
"text": "Many applications in several domains such as telecommunications, network security, large-scale sensor networks, require online processing of continuous data flows. They produce very high loads that requires aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under or overprovisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation, and a thorough evaluation of the scalability and elasticity of the fully implemented system.",
"title": ""
},
{
"docid": "662ae9d792b3889dbd0450a65259253a",
"text": "We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of linearity. Importantly, our parametrization can cope with features over a huge range of depths, even those that are so far from the camera that they present little parallax during motion---maintaining sufficient representative uncertainty that these points retain the opportunity to \"come in'' smoothly from infinity if the camera makes larger movements. Feature initialization is undelayed in the sense that even distant features are immediately used to improve camera motion estimates, acting initially as bearing references but not permanently labeled as such. The inverse depth parametrization remains well behaved for features at all stages of SLAM processing, but has the drawback in computational terms that each point is represented by a 6-D state vector as opposed to the standard three of a Euclidean XYZ representation. We show that once the depth estimate of a feature is sufficiently accurate, its representation can safely be converted to the Euclidean XYZ form, and propose a linearity index that allows automatic detection and conversion to maintain maximum efficiency---only low parallax features need be maintained in inverse depth form for long periods. We present a real-time implementation at 30 Hz, where the parametrization is validated in a fully automatic 3-D SLAM system featuring a handheld single camera with no additional sensing. Experiments show robust operation in challenging indoor and outdoor environments with a very large ranges of scene depth, varied motion, and also real time 360deg loop closing.",
"title": ""
},
{
"docid": "5f21a1348ad836ded2fd3d3264455139",
"text": "To date, brain imaging has largely relied on X-ray computed tomography and magnetic resonance angiography with limited spatial resolution and long scanning times. Fluorescence-based brain imaging in the visible and traditional near-infrared regions (400-900 nm) is an alternative but currently requires craniotomy, cranial windows and skull thinning techniques, and the penetration depth is limited to 1-2 mm due to light scattering. Here, we report through-scalp and through-skull fluorescence imaging of mouse cerebral vasculature without craniotomy utilizing the intrinsic photoluminescence of single-walled carbon nanotubes in the 1.3-1.4 micrometre near-infrared window. Reduced photon scattering in this spectral region allows fluorescence imaging reaching a depth of >2 mm in mouse brain with sub-10 micrometre resolution. An imaging rate of ~5.3 frames/s allows for dynamic recording of blood perfusion in the cerebral vessels with sufficient temporal resolution, providing real-time assessment of blood flow anomaly in a mouse middle cerebral artery occlusion stroke model.",
"title": ""
},
{
"docid": "88530d3d70df372b915556eab919a3fe",
"text": "The airway mucosa is lined by a continuous epithelium comprised of multiple cell phenotypes, several of which are secretory. Secretions produced by these cells mix with a variety of macromolecules, ions and water to form a respiratory tract fluid that protects the more distal airways and alveoli from injury and infection. The present article highlights the structure of the mucosa, particularly its secretory cells, gives a synopsis of the structure of mucus, and provides new information on the localization of mucin (MUC) genes that determine the peptide sequence of the protein backbone of the glycoproteins, which are a major component of mucus. Airway secretory cells comprise the mucous, serous, Clara and dense-core granulated cells of the surface epithelium, and the mucous and serous acinar cells of the submucosal glands. Several transitional phenotypes may be found, especially during irritation or disease. Respiratory tract mucins constitute a heterogeneous group of high molecular weight, polydisperse richly glycosylated molecules: both secreted and membrane-associated forms of mucin are found. Several mucin (MUC) genes encoding the protein core of mucin have been identified. We demonstrate the localization of MUC gene expression to a number of distinct cell types and their upregulation both in response to experimentally administered lipopolysaccharide and cystic fibrosis.",
"title": ""
},
{
"docid": "654b7a674977969237301cd874bda5d1",
"text": "This paper and its successor examine the gap between ecotourism theory as revealed in the literature and ecotourism practice as indicated by its on-site application. A framework is suggested which, if implemented through appropriate management, can help to achieve a balance between conservation and development through the promotion of synergistic relationships between natural areas, local populations and tourism. The framework can also be used to assess the status of ecotourism at particular sites. ( 1999 Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c15618df21bce45cbad6766326de3dbd",
"text": "The birth of intersexed infants, babies born with genitals that are neither clearly male nor clearly female, has been documented throughout recorded time.' In the late twentieth century, medical technology has advanced to allow scientists to determine chromosomal and hormonal gender, which is typically taken to be the real, natural, biological gender, usually referred to as \"sex.\"2 Nevertheless, physicians who handle the cases of intersexed infants consider several factors beside biological ones in determining, assigning, and announcing the gender of a particular infant. Indeed, biological factors are often preempted in their deliberations by such cultural factors as the \"correct\" length of the penis and capacity of the vagina.",
"title": ""
},
{
"docid": "8433f58b63632abf9074eefdf5fa429f",
"text": "We are developing a monopivot centrifugal pump for circulatory assist for a period of more than 2 weeks. The impeller is supported by a pivot bearing at one end and by a passive magnetic bearing at the other. The pivot undergoes concentrated exposure to the phenomena of wear, hemolysis, and thrombus formation. The pivot durability, especially regarding the combination of male/female pivot radii, was examined through rotating wear tests and animal tests. As a result, combinations of similar radii for the male/female pivots were found to provide improved pump durability. In the extreme case, the no-gap combination would result in no thrombus formation.",
"title": ""
},
{
"docid": "74ea9bde4e265dba15cf9911fce51ece",
"text": "We consider a system aimed at improving the resolution of a conventional airborne radar, looking in the forward direction, by forming an end-fire synthetic array along the airplane line of flight. The system is designed to operate even in slant (non-horizontal) flight trajectories, and it allows imaging along the line of flight. By using the array theory, we analyze system geometry and ambiguity problems, and analytically evaluate the achievable resolution and the required pulse repetition frequency. Processing computational burden is also analyzed, and finally some simulation results are provided.",
"title": ""
},
{
"docid": "98889e4861485fdc04cff54640f4d3ab",
"text": "The design, prototype implementation, and demonstration of an ethical governor capable of restricting lethal action of an autonomous system in a manner consistent with the Laws of War and Rules of Engagement is presented.",
"title": ""
},
{
"docid": "c07f30465dc4ed355847d015fee1cadb",
"text": "0747-5632/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.chb.2008.06.002 * Corresponding author. Tel.: +86 13735892489. E-mail addresses: luyb@mail.hust.edu.cn (Y. Lu), zh binwang@utpa.edu (B. Wang). 1 Tel.: +1 956 3813336. Instant messaging (IM) is a popular Internet application around the world. In China, the competition in the IM market is very intense and there are over 10 IM products available. We examine the intrinsic and extrinsic motivations that affect Chinese users’ acceptance of IM based on the theory of planned behavior (TPB), the technology acceptance model (TAM), and the flow theory. Results demonstrate that users’ perceived usefulness and perceived enjoyment significantly influence their attitude towards using IM, which in turn impacts their behavioral intention. Furthermore, perceived usefulness, users’ concentration, and two components of the theory of planned behavior (TPB): subjective norm and perceived behavioral control, also have significant impact on the behavioral intention. Users’ intention determines their actual usage behavior. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1f6637ecfc9415dd0f827ab6d3149af3",
"text": "Impaired renal function due to acute kidney injury (AKI) and/or chronic kidney diseases (CKD) is frequent in cirrhosis. Recurrent episodes of AKI may occur in end-stage cirrhosis. Differential diagnosis between functional (prerenal and hepatorenal syndrome) and acute tubular necrosis (ATN) is crucial. The concept that AKI and CKD represent a continuum rather than distinct entities, is now emerging. Not all patients with AKI have a potential for full recovery. Precise evaluation of kidney function and identification of kidney changes in patients with cirrhosis is central in predicting reversibility. This review examines current biomarkers for assessing renal function and identifying the cause and mechanisms of impaired renal function. When CKD is suspected, clearance of exogenous markers is the reference to assess glomerular filtration rate, as creatinine is inaccurate and cystatin C needs further evaluation. Recent biomarkers may help differentiate ATN from hepatorenal syndrome. Neutrophil gelatinase-associated lipocalin has been the most extensively studied biomarker yet, however, there are no clear-cut values that differentiate each of these conditions. Studies comparing ATN and hepatorenal syndrome in cirrhosis, do not include a gold standard. Combinations of innovative biomarkers are attractive to identify patients justifying simultaneous liver and kidney transplantation. Accurate biomarkers of underlying CKD are lacking and kidney biopsy is often contraindicated in this population. Urinary microRNAs are attractive although not definitely validated. Efforts should be made to develop biomarkers of kidney fibrosis, a common and irreversible feature of CKD, whatever the cause. Biomarkers of maladaptative repair leading to irreversible changes and CKD after AKI are also promising.",
"title": ""
},
{
"docid": "c6645086397ba0825f5f283ba5441cbf",
"text": "Anomalies have broad patterns corresponding to their causes. In industry, anomalies are typically observed as equipment failures. Anomaly detection aims to detect such failures as anomalies. Although this is usually a binary classification task, the potential existence of unseen (unknown) failures makes this task difficult. Conventional supervised approaches are suitable for detecting seen anomalies but not for unseen anomalies. Although, unsupervised neural networks for anomaly detection now detect unseen anomalies well, they cannot utilize anomalous data for detecting seen anomalies even if some data have been made available. Thus, providing an anomaly detector that finds both seen and unseen anomalies well is still a tough problem. In this paper, we introduce a novel probabilistic representation of anomalies to solve this problem. The proposed model defines the normal and anomaly distributions using the analogy between a set and the complementary set. We applied these distributions to an unsupervised variational autoencoder (VAE)-based method and turned it into a supervised VAE-based method. We tested the proposed method with well-known data and real industrial data to show that the proposed method detects seen anomalies better than the conventional unsupervised method without degrading the detection performance for unseen anomalies.",
"title": ""
},
{
"docid": "12cac87e781307224db2c3edf0d217b8",
"text": "Fetal ventriculomegaly (VM) refers to the enlargement of the cerebral ventricles in utero. It is associated with the postnatal diagnosis of hydrocephalus. VM is clinically diagnosed on ultrasound and is defined as an atrial diameter greater than 10 mm. Because of the anatomic detailed seen with advanced imaging, VM is often further characterized by fetal magnetic resonance imaging (MRI). Fetal VM is a heterogeneous condition with various etiologies and a wide range of neurodevelopmental outcomes. These outcomes are heavily dependent on the presence or absence of associated anomalies and the direct cause of the ventriculomegaly rather than on the absolute degree of VM. In this review article, we discuss diagnosis, work-up, counseling, and management strategies as they relate to fetal VM. We then describe imaging-based research efforts aimed at using prenatal data to predict postnatal outcome. Finally, we review the early experience with fetal therapy such as in utero shunting, as well as the advances in prenatal diagnosis and fetal surgery that may begin to address the limitations of previous therapeutic efforts.",
"title": ""
},
{
"docid": "7ec2bb00153e124e76fa7d6ab39c0b77",
"text": "Goal: Sensorimotor-based brain-computer interfaces (BCIs) have achieved successful control of real and virtual devices in up to three dimensions; however, the traditional sensor-based paradigm limits the intuitive use of these systems. Many control signals for state-of-the-art BCIs involve imagining the movement of body parts that have little to do with the output command, revealing a cognitive disconnection between the user's intent and the action of the end effector. Therefore, there is a need to develop techniques that can identify with high spatial resolution the self-modulated neural activity reflective of the actions of a helpful output device. Methods: We extend previous EEG source imaging (ESI) work to decoding natural hand/wrist manipulations by applying a novel technique to classifying four complex motor imaginations of the right hand: flexion, extension, supination, and pronation. Results: We report an increase of up to 18.6% for individual task classification and 12.7% for overall classification using the proposed ESI approach over the traditional sensor-based method. Conclusion: ESI is able to enhance BCI performance of decoding complex right-hand motor imagery tasks. Significance: This study may lead to the development of BCI systems with naturalistic and intuitive motor imaginations, thus facilitating broad use of noninvasive BCIs.",
"title": ""
},
{
"docid": "f2c846f200d9c59362bf285b2b68e2cd",
"text": "A Root Cause Failure Analysis (RCFA) for repeated impeller blade failures in a five stage centrifugal propane compressor is described. The initial failure occurred in June 2007 with a large crack found in one blade on the third impeller and two large pieces released from adjacent blades on the fourth impeller. An RCFA was performed to determine the cause of the failures. The failure mechanism was identified to be high cycle fatigue. Several potential causes related to the design, manufacture, and operation of the compressor were examined. The RCFA concluded that the design and manufacture were sound and there were no conclusive issues with respect to operation. A specific root cause was not identified. In June 2009, a second case of blade cracking occurred with a piece once again released from a single blade on the fourth impeller. Due to the commonality with the previous instance this was identified as a repeat failure. Specifically, both cases had occurred in the same compressor whereas, two compressors operating in identical service in adjacent Liquefied natural Gas (LNG) trains had not encountered the problem. A second RCFA was accordingly launched with the ultimate objective of preventing further repeated failures. Both RCFA teams were established comprising of engineers from the End User (RasGas), the OEM (Elliott Group) and an independent consultancy (Southwest Research Institute). The scope of the current investigation included a detailed metallurgical assessment, impeller modal frequency assessment, steady and unsteady computational fluid dynamics (CFD) assessment, finite element analyses (FEA), fluid structure interaction (FSI) assessment, operating history assessment and a comparison change analysis. By the process of elimination, the most probable causes were found to be associated with: • vane wake excitation of either the impeller blade leading edge modal frequency from severe mistuning and/or unusual response of the 1-diameter cover/blades modal frequency • mist carry over from third side load upstream scrubber • end of curve operation in the compressor rear section INTRODUCTION RasGas currently operates seven LNG trains at Ras Laffan Industrial City, Qatar. Train 3 was commissioned in 2004 with a nameplate LNG production of 4.7 Mtpa which corresponds to a wet sour gas feed of 790 MMscfd (22.37 MMscmd). Trains 4 and 5 were later commissioned in 2005 and 2006 respectively. They were also designed for a production 4.7 Mtpa LNG but have higher wet sour gas feed rates of 850 MMscfd (24.05 MMscmd). Despite these differences, the rated operation of the propane compressor is identical in each train. Figure 1. APCI C3-MR Refrigeration system for Trains 3, 4 and 5 The APCI C3-MR refrigeration cycle (Roberts, et al. 2002), depicted in Figure 1 is common for all three trains. Propane is circulated in a continuous loop between four compressor inlets and a single discharge. The compressed discharge gas is cooled and condensed in three sea water cooled heat exchangers before being routed to the LLP, LP, MP and HP evaporators. Here, the liquid propane is evaporated by the transfer of heat from the warmer feed and MR gas streams. It finally passes through one of the four suction scrubbers before re-entering the compressor as a gas. Although not shown, each section inlet has a dedicated anti-surge control loop from the de-superheater discharge to the suction scrubber inlet. A cross section of the propane compressor casing and rotor is illustrated in Figure 2. 
It is a straight through centrifugal unit with a horizontally split casing. Five impellers are mounted upon the 21.3 ft (6.5 m) long shaft. Three side loads add gas upstream of the suction at impellers 2, 3 & 4. The impellers are of two piece construction, with each piece fabricated from AISI 4340 forgings that were heat treated such that the material has sufficient strength and toughness for operation at temperatures down to -50F (-45.5C). The blades are milled to the hub piece and the cover piece was welded to the blades using a robotic metal inert gas (MIG) welding process. The impellers are mounted to the shaft with an interference fit. The thrust disc is mounted to the shaft with a line on line fit and antirotation key. The return channel and side load inlets are all vaned to align the downstream swirl angle. The impeller diffusers are all vaneless. A summary of the relevant compressor design parameters is given in Table 1. The complete compressor string is also depicted in Figure 1. The propane compressor is coupled directly to the HP MR compressor and driven by a GE Frame 7EA gas turbine and ABB 16086 HP (12 MW) helper motor at 3600 rpm rated shaft speed.",
"title": ""
},
{
"docid": "ccefef1618c7fa637de366e615333c4b",
"text": "Context: Systems development normally takes place in a specific organizational context, including organizational culture. Previous research has identified organizational culture as a factor that potentially affects the deployment systems development methods. Objective: The purpose is to analyze the relationship between organizational culture and the postadoption deployment of agile methods. Method: This study is a theory development exercise. Based on the Competing Values Model of organizational culture, the paper proposes a number of hypotheses about the relationship between organizational culture and the deployment of agile methods. Results: Inspired by the agile methods thirteen new hypotheses are introduced and discussed. They have interesting implications, when contrasted with ad hoc development and with traditional systems devel-",
"title": ""
},
{
"docid": "1b2991f84433c96c6f0d61378baebbea",
"text": "This article analyzes the topic of leadership from an evolutionary perspective and proposes three conclusions that are not part of mainstream theory. First, leading and following are strategies that evolved for solving social coordination problems in ancestral environments, including in particular the problems of group movement, intragroup peacekeeping, and intergroup competition. Second, the relationship between leaders and followers is inherently ambivalent because of the potential for exploitation of followers by leaders. Third, modern organizational structures are sometimes inconsistent with aspects of our evolved leadership psychology, which might explain the alienation and frustration of many citizens and employees. The authors draw several implications of this evolutionary analysis for leadership theory, research, and practice.",
"title": ""
},
{
"docid": "e3d1b0383d0f8b2382586be15961a765",
"text": "The critical study of political discourse has up until very recently rested solely within the domain of the social sciences. Working within a linguistics framework, Critical Discourse Analysis (CDA), in particular Fairclough (Fairclough 1989, 1995a, 1995b, 2001; Fairclough and Wodak 1997), has been heavily influenced by Foucault. 2 The linguistic theory that CDA and critical linguistics especially (which CDA subsumes) has traditionally drawn upon is Halliday‟s Systemic-Functional Grammar, which is largely concerned with the function of language in the social structure 3 (Fowler et al. 1979; Fowler 1991; Kress and Hodge 1979).",
"title": ""
},
{
"docid": "832c48916e04744188ed71bf3ab1f784",
"text": "Internet is commonly accepted as an important aspect in successful tourism promotion as well as destination marketing in this era. The main aim of this study is to explore how online promotion and its influence on destination awareness and loyalty in the tourism industry. This study proposes a structural model of the relationships among online promotion (OP), destination awareness (DA), tourist satisfaction (TS) and destination loyalty (DL). Randomly-selected respondents from the population of international tourists departing from Vietnamese international airports were selected as the questionnaire samples in the study. Initially, the exploratory factor analysis (EFA) was performed to test the validity of constructs, and the confirmatory factor analysis (CFA), using AMOS, was used to test the significance of the proposed hypothesizes model. The results show that the relationships among OP, DA, TS and DL appear significant in this study. The result also indicates that online promotion could improve the destination loyalty. Finally, the academic contribution, implications of the findings for tourism marketers and limitation are also discussed in this study. JEL classification numbers: L11",
"title": ""
}
] |
scidocsrr
|
bf5b515ee871395f23464714e30d64e3
|
Digital Didactical Designs in iPad-Classrooms
|
[
{
"docid": "f1af321a5d7c2e738c181373d5dbfc9a",
"text": "This research examined how motivation (perceived control, intrinsic motivation, and extrinsic motivation), cognitive learning strategies (deep and surface strategies), and intelligence jointly predict long-term growth in students' mathematics achievement over 5 years. Using longitudinal data from six annual waves (Grades 5 through 10; Mage = 11.7 years at baseline; N = 3,530), latent growth curve modeling was employed to analyze growth in achievement. Results showed that the initial level of achievement was strongly related to intelligence, with motivation and cognitive strategies explaining additional variance. In contrast, intelligence had no relation with the growth of achievement over years, whereas motivation and learning strategies were predictors of growth. These findings highlight the importance of motivation and learning strategies in facilitating adolescents' development of mathematical competencies.",
"title": ""
}
] |
[
{
"docid": "e6be28ac4a4c74ca2f8967b6a661b9cf",
"text": "This paper describes the design and simulation of a MEMS-based oscillator using a synchronous amplitude limiter. The proposed solution does not require external control signals to keep the resonator drive amplitude within the desired range. In a MEMS oscillator the oscillation amplitude needs to be limited to avoid over-driving the resonator which could cause unwanted nonlinear behavior [1] or component failure. The interface electronics has been implemented and simulated in 0.35μm HV CMOS process. The resonator was fabricated using a custom rapid-prototyping process involving Focused Ion Beam masking and Cryogenic Deep Reactive Ion Etching.",
"title": ""
},
{
"docid": "cb396e80b143c76a5be5aa4cff169ac2",
"text": "This article describes a quantitative model, which suggests what the underlying mechanisms of cognitive control in a particular task-switching paradigm are, with relevance to task-switching performance in general. It is suggested that participants dynamically control response accuracy by selective attention, in the particular paradigm being used, by controlling stimulus representation. They are less efficient in dynamically controlling response representation. The model fits reasonably well the pattern of reaction time results concerning task switching, congruency, cue-target interval and response-repetition in a mixed task condition, as well as the differences between mixed task and pure task conditions.",
"title": ""
},
{
"docid": "3d9c02413c80913cb32b5094dcf61843",
"text": "There is an explosion of youth subscriptions to original content-media-sharing Web sites such as YouTube. These Web sites combine media production and distribution with social networking features, making them an ideal place to create, connect, collaborate, and circulate. By encouraging youth to become media creators and social networkers, new media platforms such as YouTube offer a participatory culture in which youth can develop, interact, and learn. As youth development researchers, we must be cognizant of this context and critically examine what this platform offers that might be unique to (or redundant of) typical adolescent experiences in other developmental contexts.",
"title": ""
},
{
"docid": "1d4f89bb3e289ed138f45af0f1e3fc39",
"text": "The “covariance” of complex random variables and processes, when defined consistently with the corresponding notion for real random variables, is shown to be determined by the usual (complex) covariance together with a quantity called the pseudo-covariance. A characterization of uncorrelatedness and wide-sense stationarity in terms of covariance and pseudocovariance is given. Complex random variables and processes with a vanishing pseudo-covariance are called proper. It is shown that properness is preserved under affine transformations and that the complex-multivariate Gaussian density assumes a natural form only for proper random variables. The maximum-entropy theorem is generalized to the complex-multivariate case. The differential entropy of a complex random vector with a fixed correlation matrix is shown to be maximum, if and only if the random vector is proper, Gaussian and zero-mean. The notion of circular stutionarity is introduced. For the class of proper complex random processes, a discrete Fourier transform correspondence is derived relating circular stationarity in the time domain to uncorrelatedness in the frequency domain. As an application of the theory, the capacity of a discrete-time channel with complex inputs, proper complex additive white Gaussian noise, and a finite complex unit-sample response is determined. This derivation is considerably simpler than an earlier derivation for the real discrete-time Gaussian channel with intersymbol interference, whose capacity is obtained as a by-product of the results for the complex channel. Znder Terms-Proper complex random processes, circular stationarity, intersymbol interference, capacity.",
"title": ""
},
{
"docid": "f5311de600d7e50d5c9ecff5c49f7167",
"text": "Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader’s background knowledge. One example is the task of interpreting regulations to answer “Can I...?” or “Do I have to...?” questions such as “I am working in Canada. Do I have to carry on paying UK National Insurance?” after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated due to the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as “How long have you been working abroad?” when the answer cannot be directly derived from the question and text. In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed.",
"title": ""
},
{
"docid": "4dc05debbbe6c8103d772d634f91c86c",
"text": "In this paper we shows the experimental results using a microcontroller and hardware integration with the EMC2 software, using the Fuzzy Gain Scheduling PI Controller in a mechatronic prototype. The structure of the fuzzy 157 Research in Computing Science 116 (2016) pp. 157–169; rec. 2016-03-23; acc. 2016-05-11 controller is composed by two-inputs and two-outputs, is a TITO system. The error control feedback and their derivative are the inputs, while the proportional and integral gains are the fuzzy controller outputs. Was defined five Gaussian membership functions for the fuzzy sets by each input, the product fuzzy logic operator (AND connective) and the centroid defuzzifier was used to infer the gains outputs. The structure of fuzzy rule base are type Sugeno, zero-order. The experimental result in closed-loop shows the viability end effectiveness of the position fuzzy controller strategy. To verify the robustness of this controller structure, two different experiments was making: undisturbed and disturbance both in closed-loop. This work presents comparative experimental results, using the Classical tune rule of Ziegler-Nichols and the Fuzzy Gain Scheduling PI Controller, for a mechatronic system widely used in various industries applications.",
"title": ""
},
{
"docid": "6bd9fc02c8e26e64cecb13dab1a93352",
"text": "Kohlberg, who was born in 1927, grew up in Bronxville, New York, and attended the Andover Academy in Massachusetts, a private high school for bright and usually wealthy students. He did not go immediately to college, but instead went to help the Israeli cause, in which he was made the Second Engineer on an old freighter carrying refugees from parts of Europe to Israel. After this, in 1948, he enrolled at the University of Chicago, where he scored so high on admission tests that he had to take only a few courses to earn his bachelor's degree. This he did in one year. He stayed on at Chicago for graduate work in psychology, at first thinking he would become a clinical psychologist. However, he soon became interested in Piaget and began interviewing children and adolescents on moral issues. The result was his doctoral dissertation (1958a), the first rendition of his new stage theory.",
"title": ""
},
{
"docid": "0696f518544589e4f7dbee4b50886685",
"text": "This research was designed to theoretically address and empirically examine research issues related to customer’s satisfaction with social commerce. To investigate these research issues, data were collected using a written survey as part of a free simulation experiment. In this experiment, 136 participants were asked to evaluate two social commerce websites using an instrument designed to measure relationships between s-commerce website quality, customer psychological empowerment and customer satisfaction. A total of 278 usable s-commerce site evaluations were collected and analyzed. The results showed that customer satisfaction with social commerce is correlated with social commerce sites quality and customer psychological empowerment.",
"title": ""
},
{
"docid": "7ce9f8cbba0bf56e68443f1ed759b6d3",
"text": "We present a Connected Learning Analytics (CLA) toolkit, which enables data to be extracted from social media and imported into a Learning Record Store (LRS), as defined by the new xAPI standard. A number of implementation issues are discussed, and a mapping that will enable the consistent storage and then analysis of xAPI verb/object/activity statements across different social media and online environments is introduced. A set of example learning activities are proposed, each facilitated by the Learning Analytics beyond the LMS that the toolkit enables.",
"title": ""
},
{
"docid": "8689f4b13343fc9a09135fca1f259976",
"text": "In this work, we propose a novel framework named Coconditional Autoencoding Adversarial Networks (CocoAAN) for Chinese font learning, which jointly learns a generation network and two encoding networks of different feature domains using an adversarial process. The encoding networks map the glyph images into style and content features respectively via the pairwise substitution optimization strategy, and the generation network maps these two kinds of features to glyph samples. Together with a discriminative network conditioned on the extracted features, our framework succeeds in producing realistic-looking Chinese glyph images flexibly. Unlike previous models relying on the complex segmentation of Chinese components or strokes, our model can “parse” structures in an unsupervised way, through which the content feature representation of each character is captured. Experiments demonstrate our framework has a powerful generalization capacity to other unseen fonts and characters.",
"title": ""
},
{
"docid": "c8d2e69a0f58204a648dd4b18447e11a",
"text": "Today, the common vision of smart components is usually based on the concept of the Internet of Things (IoT). Intelligent infrastructures combine sensor networks, network connectivity and software to oversee and analyze complex systems to identify inefficiencies and inform operational decision-making. Wireless Sensor nodes collect operational information over time and provide real-time data on current conditions such as (volcano activities, disaster parameters in general). The security of wireless sensor networks in the world of the Internet of Things is a big challenge, since there are several types of attacks against different layers of OSI model, in their goal is falsified the values of the detected parameters.",
"title": ""
},
{
"docid": "9b8317646ce6cad433e47e42198be488",
"text": "OBJECTIVE\nDigital mental wellbeing interventions are increasingly being used by the general public as well as within clinical treatment. Among these, mindfulness and meditation programs delivered through mobile device applications are gaining popularity. However, little is known about how people use and experience such applications and what are the enabling factors and barriers to effective use. To address this gap, the study reported here sought to understand how users adopt and experience a popular mobile-based mindfulness intervention.\n\n\nMETHODS\nA qualitative semi-structured interview study was carried out with 16 participants aged 25-38 (M=32.5) using the commercially popular mindfulness application Headspace for 30-40days. All participants were employed and living in a large UK city. The study design and interview schedule were informed by an autoethnography carried out by the first author for thirty days before the main study began. Results were interpreted in terms of the Reasoned Action Approach to understand behaviour change.\n\n\nRESULTS\nThe core concern of users was fitting the application into their busy lives. Use was also influenced by patterns in daily routines, on-going reflections about the consequences of using the app, perceived self-efficacy, emotion and mood states, personal relationships and social norms. Enabling factors for use included positive attitudes towards mindfulness and use of the app, realistic expectations and positive social influences. Barriers to use were found to be busy lifestyles, lack of routine, strong negative emotions and negative perceptions of mindfulness.\n\n\nCONCLUSIONS\nMobile wellbeing interventions should be designed with consideration of people's beliefs, affective states and lifestyles, and should be flexible to meet the needs of different users. Designers should incorporate features in the design of applications that manage expectations about use and that support users to fit app use into a busy lifestyle. The Reasoned Action Approach was found to be a useful theory to inform future research and design of persuasive mental wellbeing technologies.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "784d75662234e45f78426c690356d872",
"text": "Chinese-English parallel corpora are key resources for Chinese-English cross-language information processing, Chinese-English bilingual lexicography, Chinese-English language research and teaching. But so far large-scale Chinese-English corpus is still unavailable yet, given the difficulties and the intensive labours required. In this paper, our work towards building a large-scale Chinese-English parallel corpus is presented. We elaborate on the collection, annotation and mark-up of the parallel Chinese-English texts and the workflow that we used to construct the corpus. In addition, we also present our work toward building tools for constructing and using the corpus easily for different purposes. Among these tools, a parallel concordance tool developed by us is examined in detail. Several applications of the corpus being conducted are also introduced briefly in the paper.",
"title": ""
},
{
"docid": "9533193407869250854157e89d2815eb",
"text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.",
"title": ""
},
{
"docid": "2e11a8170ec8b2547548091443d46cc6",
"text": "This chapter presents the theory of the Core Elements of the Gaming Experience (CEGE). The CEGE are the necessary but not sufficient conditions to provide a positive experience while playing video-games. This theory, formulated using qualitative methods, is presented with the aim of studying the gaming experience objectively. The theory is abstracted using a model and implemented in questionnaire. This chapter discusses the formulation of the theory, introduces the model, and shows the use of the questionnaire in an experiment to differentiate between two different experiences. In loving memory of Samson Cairns 4.1 The Experience of Playing Video-games The experience of playing video-games is usually understood as the subjective relation between the user and the video-game beyond the actual implementation of the game. The implementation is bound by the speed of the microprocessors of the gaming console, the ergonomics of the controllers, and the usability of the interface. Experience is more than that, it is also considered as a personal relationship. Understanding this relationship as personal is problematic under a scientific scope. Personal and subjective knowledge does not allow a theory to be generalised or falsified (Popper 1994). In this chapter, we propose a theory for understanding the experience of playing video-games, or gaming experience, that can be used to assess and compare different experiences. This section introduces the approach taken towards understanding the gaming experience under the aforementioned perspective. It begins by presenting an E.H. Calvillo-Gámez (B) División de Nuevas Tecnologías de la Información, Universidad Politécnica de San Luis Potosí, San Luis Potosí, México e-mail: e.calvillo@upslp.edu.mx 47 R. Bernhaupt (ed.), Evaluating User Experience in Games, Human-Computer Interaction Series, DOI 10.1007/978-1-84882-963-3_4, C © Springer-Verlag London Limited 2010 48 E.H. Calvillo-Gámez et al. overview of video-games and user experience in order to familiarise the reader with such concepts. Last, the objective and overview of the whole chapter are presented. 4.1.",
"title": ""
},
{
"docid": "0adf96e7c34bfb374b81c579d952a839",
"text": "Metric learning has attracted wide attention in face and kinship verification, and a number of such algorithms have been presented over the past few years. However, most existing metric learning methods learn only one Mahalanobis distance metric from a single feature representation for each face image and cannot make use of multiple feature representations directly. In many face-related tasks, we can easily extract multiple features for a face image to extract more complementary information, and it is desirable to learn distance metrics from these multiple features, so that more discriminative information can be exploited than those learned from individual features. To achieve this, we present a large-margin multi-metric learning (LM3L) method for face and kinship verification, which jointly learns multiple global distance metrics under which the correlations of different feature representations of each sample are maximized, and the distance of each positive pair is less than a low threshold and that of each negative pair is greater than a high threshold. To better exploit the local structures of face images, we also propose a local metric learning and local LM3Lmethods to learn a set of local metrics. Experimental results on three face data sets show that the proposed methods achieve very competitive results compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "16488fc65794a318e06777189edc3e4b",
"text": "This work details Sighthoundś fully automated license plate detection and recognition system. The core technology of the system is built using a sequence of deep Convolutional Neural Networks (CNNs) interlaced with accurate and efficient algorithms. The CNNs are trained and fine-tuned so that they are robust under different conditions (e.g. variations in pose, lighting, occlusion, etc.) and can work across a variety of license plate templates (e.g. sizes, backgrounds, fonts, etc). For quantitative analysis, we show that our system outperforms the leading license plate detection and recognition technology i.e. ALPR on several benchmarks. Our system is available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud",
"title": ""
},
{
"docid": "b3068a1b1acb0782d2c2b1dac65042cf",
"text": "Measurement of N (nitrogen), P (phosphorus) and K ( potassium) contents of soil is necessary to decide how much extra contents of these nutrients are to b e added in the soil to increase crop fertility. Thi s improves the quality of the soil which in turn yields a good qua lity crop. In the present work fiber optic based c olor sensor has been developed to determine N, P, and K values in t he soil sample. Here colorimetric measurement of aq ueous solution of soil has been carried out. The color se nsor is based on the principle of absorption of col or by solution. It helps in determining the N, P, K amounts as high, m edium, low, or none. The sensor probe along with p roper signal conditioning circuits is built to detect the defici ent component of the soil. It is useful in dispensi ng only required amount of fertilizers in the soil.",
"title": ""
}
] |
scidocsrr
|
a90986c95d2e4c08094b461909151d99
|
Web-Service Clustering with a Hybrid of Ontology Learning and Information-Retrieval-Based Term Similarity
|
[
{
"docid": "639bbe7b640c514ab405601c7c3cfa01",
"text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.",
"title": ""
}
] |
[
{
"docid": "27488ded8276967b9fd71ec40eec28d8",
"text": "This paper discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared to what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it is a comprehensive comparison of 2D spectral estimation methods for SAR imaging. It provides a synopsis of the algorithms available, discusses their relative merits for SAR imaging, and illustrates their performance on simulated and collected SAR imagery. Some of the algorithms presented or their derivations are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms, minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR) and space variant apodization (SVA) to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric contest, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.",
"title": ""
},
{
"docid": "7c8f38386322d9095b6950c4f31515a0",
"text": "Due to the limited amount of training samples, finetuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features for online applications. We regard a CNN as an ensemble with each channel of the output feature map as an individual base learner. Each base learner is trained using different loss criterions to reduce correlation and avoid over-training. To achieve the best ensemble online, all the base learners are sequentially sampled into the ensemble via important sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serves as a regularization to enforce each base learner to focus on different input features. The proposed online training method is applied to visual tracking problem by transferring deep features trained on massive annotated visual data and is shown to significantly improve tracking performance. Extensive experiments are conducted on two challenging benchmark data set and demonstrate that our tracking algorithm can outperform state-of-the-art methods with a considerable margin.",
"title": ""
},
{
"docid": "23ef781d3230124360f24cc6e38fb15f",
"text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. The best neural network is found with the best forecasting abilities, based on a certain performance measure. A visual graphs on the experiments data set is presented after processing steps, to illustrate that particular results. The out-of-sample results are compared with training ones. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "301fb951bb2720ebc71202ee7be37be2",
"text": "This work incorporates concepts from the behavioral confirmation tradition, self tradition, and interdependence tradition to identify an interpersonal process termed the Michelangelo phenomenon. The Michelangelo phenomenon describes the means by which the self is shaped by a close partner's perceptions and behavior. Specifically, self movement toward the ideal self is described as a product of partner affirmation, or the degree to which a partner's perceptions of the self and behavior toward the self are congruent with the self's ideal. The results of 4 studies revealed strong associations between perceived partner affirmation and self movement toward the ideal self, using a variety of participant populations and measurement methods. In addition, perceived partner affirmation--particularly perceived partner behavioral affirmation--was strongly associated with quality of couple functioning and stability in ongoing relationships.",
"title": ""
},
{
"docid": "b4ac5df370c0df5fdb3150afffd9158b",
"text": "The aggregation of many independent estimates can outperform the most accurate individual judgement 1–3 . This centenarian finding 1,2 , popularly known as the 'wisdom of crowds' 3 , has been applied to problems ranging from the diagnosis of cancer 4 to financial forecasting 5 . It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.",
"title": ""
},
{
"docid": "39e38d7825ff7a74e6bbf9975826ddea",
"text": "Online display advertising has becomes a billion-dollar industry, and it keeps growing. Advertisers attempt to send marketing messages to attract potential customers via graphic banner ads on publishers’ webpages. Advertisers are charged for each view of a page that delivers their display ads. However, recent studies have discovered that more than half of the ads are never shown on users’ screens due to insufficient scrolling. Thus, advertisers waste a great amount of money on these ads that do not bring any return on investment. Given this situation, the Interactive Advertising Bureau calls for a shift toward charging by viewable impression, i.e., charge for ads that are viewed by users. With this new pricing model, it is helpful to predict the viewability of an ad. This paper proposes two probabilistic latent class models (PLC) that predict the viewability of any given scroll depth for a user-page pair. Using a real-life dataset from a large publisher, the experiments demonstrate that our models outperform comparison systems.",
"title": ""
},
{
"docid": "8ed6c9e82c777aa092a78959391a37b2",
"text": "The trie data structure has many properties which make it especially attractive for representing large files of data. These properties include fast retrieval time, quick unsuccessful search determination, and finding the longest match to a given identifier. The main drawback is the space requirement. In this paper the concept of trie compaction is formalized. An exact algorithm for optimal trie compaction and three algorithms for approximate trie compaction are given, and an analysis of the three algorithms is done. The analysis indicate that for actual tries, reductions of around 70 percent in the space required by the uncompacted trie can be expected. The quality of the compaction is shown to be insensitive to the number of nodes, while a more relevant parameter is the alphabet size of the key.",
"title": ""
},
{
"docid": "4e2b0b82a6f7e342f10d1a66795e57f6",
"text": "A fully electrical startup boost converter is presented in this paper. With a three-stage stepping-up architecture, the proposed circuit is capable of performing thermoelectric energy harvesting at an input voltage as low as 50 mV. Due to the zero-current-switching (ZCS) operation of the boost converter and automatic shutdown of the low-voltage starter and the auxiliary converter, conversion efficiency up to 73% is demonstrated. The boost converter does not require bulky transformers or mechanical switches for kick-start, making it very attractive for body area sensor network applications.",
"title": ""
},
{
"docid": "51fbebff61232e46381b243023c35dc5",
"text": "In this paper, mechanical design of a novel spherical wheel shape for a omni-directional mobile robot is presented. The wheel is used in a omnidirectional mobile robot realizing high step-climbing capability with its hemispherical wheel. Conventional Omniwheels can realize omnidirectional motion, however they have a poor step overcoming ability due to the sub-wheel small size. The proposed design solves this drawback by means of a 4 wheeled design. \"Omni-Ball\" is formed by two passive rotational hemispherical wheels and one active rotational axis. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Omnidirectional vehicle with this proposed Omni-Ball mechanism was confirmed. An prototype has been developed to illustrate the concept. Motion experiments, with a test vehicle are also presented.",
"title": ""
},
{
"docid": "c1c177ee96a0da0a4bbc6749364a14e5",
"text": "Knowledge graphs are used to represent relational information in terms of triples. To enable learning about domains, embedding models, such as tensor factorization models, can be used to make predictions of new triples. Often there is background taxonomic information (in terms of subclasses and subproperties) that should also be taken into account. We show that existing fully expressive (a.k.a. universal) models cannot provably respect subclass and subproperty information. We show that minimal modifications to an existing knowledge graph completion method enables injection of taxonomic information. Moreover, we prove that our model is fully expressive, assuming a lower-bound on the size of the embeddings. Experimental results on public knowledge graphs show that despite its simplicity our approach is surprisingly effective. The AI community has long noticed the importance of structure in data. While traditional machine learning techniques have been mostly focused on feature-based representations, the primary form of data in the subfield of Statistical Relational AI (STARAI) (Getoor and Taskar, 2007; Raedt et al., 2016) is in the form of entities and relationships among them. Such entity-relationships are often in the form of (head, relationship, tail) triples, which can also be expressed in the form of a graph, with nodes as entities and labeled directed edges as relationships among entities. Predicting the existence, identity, and attributes of entities and their relationships are among the main goals of StaRAI. Knowledge Graphs (KGs) are graph structured knowledge bases that store facts about the world. A large number of KGs have been created such as NELL (Carlson et al., 2010), FREEBASE (Bollacker et al., 2008), and Google Knowledge Vault (Dong et al., 2014). These KGs have applications in several fields including natural language processing, search, automatic question answering and recommendation systems. Since accessing and storing all the facts in the world is difficult, KGs are incomplete. The goal of link prediction for KGs – a.k.a. KG completion – is to predict the unknown links or relationships in a KG based on the existing ones. This often amounts to infer (the probability of) new triples from the existing triples. Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. A common approach to apply machine learning to symbolic data, such as text, graph and entity-relationships, is through embeddings. Word, sentence and paragraph embeddings (Mikolov et al., 2013; Pennington, Socher, and Manning, 2014), which vectorize words, sentences and paragraphs using context information, are widely used in a variety of natural language processing tasks from syntactic parsing to sentiment analysis. Graph embeddings (Hoff, Raftery, and Handcock, 2002; Grover and Leskovec, 2016; Perozzi, Al-Rfou, and Skiena, 2014) are used in social network analysis for link prediction and community detection. In relational learning, embeddings for entities and relationships are used to generalize from existing data. These embeddings are often formulated in terms of tensor factorization (Nickel, Tresp, and Kriegel, 2012; Bordes et al., 2013; Trouillon et al., 2016; Kazemi and Poole, 2018c). Here, the embeddings are learned such that their interaction through (tensor-)products best predicts the (probability of the) existence of the observed triples; see (Nguyen, 2017; Wang et al., 2017) for details and discussion. 
Tensor factorization methods have been very successful, yet they rely on a large number of annotated triples to learn useful representations. There is often other information in ontologies which specifies the meaning of the symbols used in a knowledge base. One type of ontological information is represented in a hierarchical structure called a taxonomy. For example, a knowledge base might contain information that DJTrump, whose name is “Donald Trump”, is a president, but may not contain information that he is a person, a mammal and an animal, because these are implied by taxonomic knowledge. Being told that mammals are chordates lets us conclude that DJTrump is also a chordate, without needing to have triples specifying this about multiple mammals. We could also have information about subproperties, such as that being president is a subproperty of “managing”, which in turn is a subproperty of “interacts with”. This paper is about incorporating taxonomic information in the form of subclass and subproperty (e.g., managing implies interaction) into relational embedding models. We show that existing factorization models that are fully expressive cannot reflect such constraints for all legal entity embeddings. We propose a model that is provably fully expressive and can represent such taxonomic information, and evaluate its performance on real-world datasets. Factorization and Embedding: Let E represent the set of entities and R represent the set of relations. Let W be the set of triples (h, r, t) that are true in the world, where h, t ∈ E are head and tail, and r ∈ R is the relation in the triple. We use W̄ to represent the triples that are false – i.e., W̄ ≐ {(h, r, t) ∈ E × R × E ∣ (h, r, t) ∉ W}. An example of a triple in W can be (Paris, CapitalCityOfCountry, France) and an example of a triple in W̄ can be (Paris, CapitalCityOfCountry, Germany). A KG K ⊆ W is a subset of all the facts. The problem of KG completion is to infer W from its subset KG. There exist a variety of methods for KG completion. Here, we consider embedding methods and in particular using tensor factorization. For a broader review of existing KG completion methods that can use background information, see Related Work. Embeddings: An embedding is a function from an entity or a relation to a vector (or sometimes higher-order tensors) over a field. We use bold lower-case for vectors – that is, s ∈ R^k is an embedding of an entity and r ∈ R^k is an embedding of a relation. Taxonomies: It is common to have structure over the symbols used in the triples, see (e.g., Shoham, 2016). The Ontology Web Language (OWL) (Hitzler et al., 2012) defines (among many other meta-relations) subproperties and subclasses, where p1 is a subproperty of p2 if ∀x, y ∶ (x, p1, y) → (x, p2, y), that is, whenever p1 is true, p2 is also true. Classes can be defined either as a set, with a class assertion (often called “type”) between an entity and a class, e.g., saying x is in class C using (x, type, C), or in terms of the characteristic function of the class, a function that is true of elements of the class. If c is the characteristic function of class C, then x being in class C is written (x, c, true). For representations that treat entities and properties symmetrically, the two ways to define classes are essentially the same. C1 is a subclass of C2 if every entity in class C1 is in class C2, that is, ∀x ∶ (x, type, C1) → (x, type, C2) or ∀x ∶ (x, c1, true) → (x, c2, true). 
If we treat true as an entity, then subclass can be seen as a special case of subproperty. For the rest of the paper we will refer to subsumption in terms of subproperty (and so also of subclass). A non-trivial subsumption is one which is not symmetric: p1 is a subproperty of p2 and there is some triple that is true of p2 but not true of p1. We want the subsumption to be over all possible entities; those entities that have a legal embedding according to the representation used, not just those we know exist. Let E∗ be the set of all possible entities with a legal embedding according to the representation used. Tensor factorization: For KG completion, a tensor factorization defines a function μ ∶ R^k × R^k × R^k → [0,1] that takes the embeddings h, r and t of a triple (h, r, t) as input, and generates a prediction, e.g., a probability, of the triple being true, i.e., of (h, r, t) ∈ W. In particular, μ is often a nonlinearity applied to a multi-linear function of h, r, t. The family of methods that we study uses the following multilinear form: Let x, y, and z be vectors of length k. Define ⟨x, y, z⟩ to be the sum of their element-wise product, namely ⟨x, y, z⟩ ≐ ∑_{i=1}^{k} x_i y_i z_i.",
"title": ""
},
{
"docid": "7b6c93b9e787ab0ba512cc8aaff185af",
"text": "INTRODUCTION The field of second (or foreign) language teaching has undergone many fluctuations and dramatic shifts over the years. As opposed to physics or chemistry, where progress is more or less steady until a major discovery causes a radical theoretical revision (Kuhn, 1970), language teaching is a field where fads and heroes have come and gone in a manner fairly consistent with the kinds of changes that occur in youth culture. I believe that one reason for the frequent changes that have been taking place until recently is the fact that very few language teachers have even the vaguest sense of history about their profession and are unclear concerning the historical bases of the many methodological options they currently have at their disposal. It is hoped that this brief and necessarily oversimplified survey will encourage many language teachers to learn more about the origins of their profession. Such knowledge will give some healthy perspective in evaluating the socalled innovations or new approaches to methodology that will continue to emerge over time.",
"title": ""
},
{
"docid": "78c477aeb6a27cf5b4de028c0ecd7b43",
"text": "This paper addresses the problem of speaker clustering in telephone conversations. Recently, a new clustering algorithm named affinity propagation (AP) is proposed. It exhibits fast execution speed and finds clusters with low error. However, AP is an unsupervised approach which may make the resulting number of clusters different from the actual one. This deteriorates the speaker purity dramatically. This paper proposes a modified method named supervised affinity propagation (SAP), which automatically reruns the AP procedure to make the final number of clusters converge to the specified number. Experiments are carried out to compare SAP with traditional k-means and agglomerative hierarchical clustering on 4-hour summed channel conversations in the NIST 2004 Speaker Recognition Evaluation. Experiment results show that the SAP method leads to a noticeable speaker purity improvement with slight cluster purity decrease compared with AP.",
"title": ""
},
{
"docid": "c84a0f630b4fb2e547451d904e1c63a5",
"text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.",
"title": ""
},
{
"docid": "8b2d6ce5158c94f2e21ff4ebd54af2b5",
"text": "Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subjectverb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses cooccurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.",
"title": ""
},
{
"docid": "864adf6f82a0d1af98339f92035b15fc",
"text": "Typically in neuroimaging we are looking to extract some pertinent information from imperfect, noisy images of the brain. This might be the inference of percent changes in blood flow in perfusion FMRI data, segmentation of subcortical structures from structural MRI, or inference of the probability of an anatomical connection between an area of cortex and a subthalamic nucleus using diffusion MRI. In this article we will describe how Bayesian techniques have made a significant impact in tackling problems such as these, particularly in regards to the analysis tools in the FMRIB Software Library (FSL). We shall see how Bayes provides a framework within which we can attempt to infer on models of neuroimaging data, while allowing us to incorporate our prior belief about the brain and the neuroimaging equipment in the form of biophysically informed or regularising priors. It allows us to extract probabilistic information from the data, and to probabilistically combine information from multiple modalities. Bayes can also be used to not only compare and select between models of different complexity, but also to infer on data using committees of models. Finally, we mention some analysis scenarios where Bayesian methods are impractical, and briefly discuss some practical approaches that we have taken in these cases.",
"title": ""
},
{
"docid": "205a44a35cc1af14f2b40424cc2654bc",
"text": "This paper focuses on human-pose estimation using a stationary depth sensor. The main challenge concerns reducing the feature ambiguity and modeling human poses in high-dimensional human-pose space because of the curse of dimensionality. We propose a 3-D-point-cloud system that captures the geometric properties (orientation and shape) of the 3-D point cloud of a human to reduce the feature ambiguity, and use the result from action classification to discover low-dimensional manifolds in human-pose space in estimating the underlying probability distribution of human poses. In the proposed system, a 3-D-point-cloud feature called viewpoint and shape feature histogram (VISH) is proposed to extract the 3-D points from a human and arrange them into a tree structure that preserves the global and local properties of the 3-D points. A nonparametric action-mixture model (AMM) is then proposed to model human poses using low-dimensional manifolds based on the concept of distributed representation. Since human poses estimated using the proposed AMM are in discrete space, a kinematic model is added in the last stage of the proposed system to model the spatial relationship of body parts in continuous space to reduce the quantization error in the AMM. The proposed system has been trained and evaluated on a benchmark dataset. Computer-simulation results showed that the overall error and standard deviation of the proposed 3-D-point-cloud system were reduced compared with some existing approaches without action classification.",
"title": ""
},
{
"docid": "63d19f75bc0baee93404488a1d307a32",
"text": "Mitochondria can unfold importing precursor proteins by unraveling them from their N-termini. However, how this unraveling is induced is not known. Two candidates for the unfolding activity are the electrical potential across the inner mitochondrial membrane and mitochondrial Hsp70 in the matrix. Here, we propose that many precursors are unfolded by the electrical potential acting directly on positively charged amino acid side chains in the targeting sequences. Only precursor proteins with targeting sequences that are long enough to reach the matrix at the initial interaction with the import machinery are unfolded by mitochondrial Hsp70, and this unfolding occurs even in the absence of a membrane potential.",
"title": ""
},
{
"docid": "cd55fc3fafe2618f743a845d89c3a796",
"text": "According to the notation proposed by the International Federation for the Theory of Mechanisms and Machines IFToMM (Ionescu, 2003); a parallel manipulator is a mechanism where the motion of the end-effector, namely the moving or movable platform, is controlled by means of at least two kinematic chains. If each kinematic chain, also known popularly as limb or leg, has a single active joint, then the mechanism is called a fully-parallel mechanism, in which clearly the nominal degree of freedom equates the number of limbs. Tire-testing machines (Gough & Whitehall, 1962) and flight simulators (Stewart, 1965), appear to be the first transcendental applications of these complex mechanisms. Parallel manipulators, and in general mechanisms with parallel kinematic architectures, due to benefits --over their serial counterparts-such as higher stiffness and accuracy, have found interesting applications such as walking machines, pointing devices, multi-axis machine tools, micro manipulators, and so on. The pioneering contributions of Gough and Stewart, mainly the theoretical paper of Stewart (1965), influenced strongly the development of parallel manipulators giving birth to an intensive research field. In that way, recently several parallel mechanisms for industrial purposes have been constructed using the, now, classical hexapod as a base mechanism: Octahedral Hexapod HOH-600 (Ingersoll), HEXAPODE CMW 300 (CMW), Cosmo Center PM-600 (Okuma), F-200i (FANUC) and so on. On the other hand one cannot ignore that this kind of parallel kinematic structures have a limited and complex-shaped workspace. Furthermore, their rotation and position capabilities are highly coupled and therefore the control and calibration of them are rather complicated. It is well known that many industrial applications do not require the six degrees of freedom of a parallel manipulator. Thus in order to simplify the kinematics, mechanical assembly and control of parallel manipulators, an interesting trend is the development of the so called defective parallel manipulators, in other words, spatial parallel manipulators with fewer than six degrees of freedom. Special mention deserves the Delta robot, invented by Clavel (1991); which proved that parallel robotic manipulators are an excellent option for industrial applications where the accuracy and stiffness are fundamental characteristics. Consider for instance that the Adept Quattro robot, an application of the Delta robot, developed by Francois Pierrot in collaboration with Fatronik (Int. patent appl. WO/2006/087399), has a",
"title": ""
},
{
"docid": "8ddb7c62f032fb07116e7847e69b51d1",
"text": "Software requirements are the foundations from which quality is measured. Measurement enables to improve the software process; assist in planning, tracking and controlling the software project and assess the quality of the software thus produced. Quality issues such as accuracy, security and performance are often crucial to the success of a software system. Quality should be maintained from starting phase of software development. Requirements management, play an important role in maintaining quality of software. A project can deliver the right solution on time and within budget with proper requirements management. Software quality can be maintained by checking quality attributes in requirements document. Requirements metrics such as volatility, traceability, size and completeness are used to measure requirements engineering phase of software development lifecycle. Manual measurement is expensive, time consuming and prone to error therefore automated tools should be used. Automated requirements tools are helpful in measuring requirements metrics. The aim of this paper is to study, analyze requirements metrics and automated requirements tools, which will help in choosing right metrics to measure software development based on the evaluation of Automated Requirements Tools",
"title": ""
},
{
"docid": "32b96d4d23a03b1828f71496e017193e",
"text": "Camera-based lane detection algorithms are one of the key enablers for many semi-autonomous and fullyautonomous systems, ranging from lane keep assist to level-5 automated vehicles. Positioning a vehicle between lane boundaries is the core navigational aspect of a self-driving car. Even though this should be trivial, given the clarity of lane markings on most standard roadway systems, the process is typically mired with tedious pre-processing and computational effort. We present an approach to estimate lane positions directly using a deep neural network that operates on images from laterally-mounted down-facing cameras. To create a diverse training set, we present a method to generate semi-artificial images. Besides the ability to distinguish whether there is a lane-marker present or not, the network is able to estimate the position of a lane marker with sub-centimeter accuracy at an average of 100 frames/s on an embedded automotive platform, requiring no pre-or post-processing. This system can be used not only to estimate lane position for navigation, but also provide an efficient way to validate the robustness of driver-assist features which depend on lane information.",
"title": ""
}
] |
scidocsrr
|
c159c06516b5e75bd8ea00789a521c43
|
A new posterolateral approach without fibula osteotomy for the treatment of tibial plateau fractures.
|
[
{
"docid": "f91007844639e431b2f332f6f32df33b",
"text": "Moore type II Entire Condyle fractures of the tibia plateau represent a rare and highly unstable fracture pattern that usually results from high impact traumas. Specific recommendations regarding the surgical treatment of these fractures are sparse. We present a series of Moore type II fractures treated by open reduction and internal fixation through a direct dorsal approach. Five patients (3 females, 2 males) with Entire Condyle fractures were retrospectively analyzed after a mean follow-up period of 39 months (range 12–61 months). Patient mean age at the time of operation was 36 years (range 26–43 years). Follow-up included clinical and radiological examination. Furthermore, all patient finished a SF36 and Lysholm knee score questionnaire. Average range of motion was 127/0/1° with all patients reaching full extension at the time of last follow up. Patients reached a mean Lysholm score of 81.2 points (range 61–100 points) and an average SF36 of 82.36 points (range 53.75–98.88 points). One patient sustained deep wound infection after elective implant removal 1 year after the initial surgery. Overall all patients were highly satisfied with the postoperative result. The direct dorsal approach to the tibial plateau represents an adequate method to enable direct fracture exposure, open reduction, and internal fixation in posterior shearing medial Entire Condyle fractures and is especially valuable when also the dorso-lateral plateau is depressed.",
"title": ""
}
] |
[
{
"docid": "ea5a07b07631248a2f5cbee80420924d",
"text": "Coordinating fleets of autonomous, non-holonomic vehicles is paramount to many industrial applications. While there exists solutions to efficiently calculate trajectories for individual vehicles, an effective methodology to coordinate their motions and to avoid deadlocks is still missing. Decoupled approaches, where motions are calculated independently for each vehicle and then centrally coordinated for execution, have the means to identify deadlocks, but not to solve all of them. We present a novel approach that overcomes this limitation and that can be used to complement the deficiencies of decoupled solutions with centralized coordination. Here, we formally define an extension of the framework of lattice-based motion planning to multi-robot systems and we validate it experimentally. Our approach can jointly plan for multiple vehicles and it generates kinematically feasible and deadlock-free motions.",
"title": ""
},
{
"docid": "34667babdde26a81244c7e1c929e7653",
"text": "Noise level estimation is crucial in many image processing applications, such as blind image denoising. In this paper, we propose a novel noise level estimation approach for natural images by jointly exploiting the piecewise stationarity and a regular property of the kurtosis in bandpass domains. We design a $K$ -means-based algorithm to adaptively partition an image into a series of non-overlapping regions, each of whose clean versions is assumed to be associated with a constant, but unknown kurtosis throughout scales. The noise level estimation is then cast into a problem to optimally fit this new kurtosis model. In addition, we develop a rectification scheme to further reduce the estimation bias through noise injection mechanism. Extensive experimental results show that our method can reliably estimate the noise level for a variety of noise types, and outperforms some state-of-the-art techniques, especially for non-Gaussian noises.",
"title": ""
},
{
"docid": "260c12152d9bd38bd0fde005e0394e17",
"text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.",
"title": ""
},
{
"docid": "c8d690eb4dd2831f28106c3cfca4552c",
"text": "While ASCII art is a worldwide popular art form, automatic generating structure-based ASCII art from natural photographs remains challenging. The major challenge lies on extracting the perception-sensitive structure from the natural photographs so that a more concise ASCII art reproduction can be produced based on the structure. However, due to excessive amount of texture in natural photos, extracting perception-sensitive structure is not easy, especially when the structure may be weak and within the texture region. Besides, to fit different target text resolutions, the amount of the extracted structure should also be controllable. To tackle these challenges, we introduce a visual perception mechanism of non-classical receptive field modulation (non-CRF modulation) from physiological findings to this ASCII art application, and propose a new model of non-CRF modulation which can better separate the weak structure from the crowded texture, and also better control the scale of texture suppression. Thanks to our non-CRF model, more sensible ASCII art reproduction can be obtained. In addition, to produce more visually appealing ASCII arts, we propose a novel optimization scheme to obtain the optimal placement of proportional-font characters. We apply our method on a rich variety of images, and visually appealing ASCII art can be obtained in all cases.",
"title": ""
},
{
"docid": "eec886c9c758e90acc4b97df85057b61",
"text": "A full-term male foal born in a farm holidays in Maremma (Tuscany, Italy) was euthanized shortly after birth due to the presence of several malformations. The rostral maxilla and the nasal septum were deviated to the right (wry nose), and a severe cervico-thoracic scoliosis and anus atresia were evident. Necropsy revealed ileum atresia and agenesis of the right kidney. The brain showed an incomplete separation of the hemispheres of the rostral third of the forebrain and the olfactory bulbs and tracts were absent (olfactory aplasia). A diagnosis of semilobar holoprosencephaly (HPE) was achieved. This is the first case of semilobar HPE associated with other organ anomalies in horses.",
"title": ""
},
{
"docid": "83709dc50533c28221d89490bcb3a5aa",
"text": "Hyperspectral image classification has attracted extensive research efforts in the recent decades. The main difficulty lies in the few labeled samples versus high dimensional features. The spectral-spatial classification method using Markov random field (MRF) has been shown to perform well in improving the classification performance. Moreover, active learning (AL), which iteratively selects the most informative unlabeled samples and enlarges the training set, has been widely studied and proven useful in remotely sensed data. In this paper, we focus on the combination of MRF and AL in the classification of hyperspectral images, and a new MRF model-based AL (MRF-AL) framework is proposed. In the proposed framework, the unlabeled samples whose predicted results vary before and after the MRF processing step is considered as uncertain. In this way, subset is firstly extracted from the entire unlabeled set, and AL process is then performed on the samples in the subset. Moreover, hybrid AL methods which combine the MRF-AL framework with either the passive random selection method or the existing AL methods are investigated. To evaluate and compare the proposed AL approaches with other state-of-the-art techniques, experiments were conducted on two hyperspectral data sets. Results demonstrated the effectiveness of the hybrid AL methods, as well as the advantage of the proposed MRF-AL framework.",
"title": ""
},
{
"docid": "a436bdc20d63dcf4f0647005bb3314a7",
"text": "The purpose of this study is to evaluate the feasibility of the integration of concept maps and tablet PCs in anti-phishing education for enhancing students’ learning motivation and achievement. The subjects were 155 students from grades 8 and 9. They were divided into an experimental group (77 students) and a control group (78 students). To begin with, the two groups received identical anti-phishing training: the teacher explained the concept of anti-phishing and asked the students questions; the students then used tablet PCs for polling and answering the teachers’ questions. Afterwards, the two groups performed different group activities: the experimental group was divided into smaller groups, which used tablet PCs to draw concept maps; the control group was also divided into groups which completed worksheets. The study found that the use of concept maps on tablet PCs during the anti-phishing education significantly enhanced the students’ learning motivation when their initial motivation was already high. For learners with low initial motivation or prior knowledge, the use of worksheets could increase their posttest achievement and motivation. This study therefore proposes that motivation and achievement in teaching the anti-phishing concept can be effectively enhanced if the curriculum is designed based on the students’ learning preferences or prior knowledge, in conjunction with the integration of mature and accessible technological media into the learning activities. The findings can also serve as a reference for anti-phishing educators and researchers.",
"title": ""
},
{
"docid": "cc3f821bd9617d31a8b303c4982e605f",
"text": "Body composition in older adults can be assessed using simple, convenient but less precise anthropometric methods to assess (regional) body fat and skeletal muscle, or more elaborate, precise and costly methods such as computed tomography and magnetic resonance imaging. Body weight and body fat percentage generally increase with aging due to an accumulation of body fat and a decline in skeletal muscle mass. Body weight and fatness plateau at age 75–80 years, followed by a gradual decline. However, individual weight patterns may differ and the periods of weight loss and weight (re)gain common in old age may affect body composition. Body fat redistributes with aging, with decreasing subcutaneous and appendicular fat and increasing visceral and ectopic fat. Skeletal muscle mass declines with aging, a process called sarcopenia. Obesity in old age is associated with a higher risk of mobility limitations, disability and mortality. A higher waist circumference and more visceral fat increase these risks, independent of overall body fatness, as do involuntary weight loss and weight cycling. The role of low skeletal muscle mass in the development of mobility limitations and disability remains controversial, but it is much smaller than the role of high body fat. Low muscle mass does not seem to increase mortality risk in older adults.",
"title": ""
},
{
"docid": "b134cf07e01f1568d127880777492770",
"text": "This paper addresses the problem of recovering 3D nonrigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full face and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, Tomasi and Kanades’ factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration and shape. To the best of our knowledge, this is the first model free approach that can recover from single-view video sequences nonrigid shape models. We demonstrate this new algorithm on several video sequences. We were able to recover 3D non-rigid human face and animal models with high accuracy.",
"title": ""
},
{
"docid": "87eb54a981fca96475b73b3dfa99b224",
"text": "Cost-Sensitive Learning is a type of learning in data mining that takes the misclassification costs (and possibly other types of cost) into consideration. The goal of this type of learning is to minimize the total cost. The key difference between cost-sensitive learning and cost-insensitive learning is that cost-sensitive learning treats the different misclassifications differently. Costinsensitive learning does not take the misclassification costs into consideration. The goal of this type of learning is to pursue a high accuracy of classifying examples into a set of known classes.",
"title": ""
},
{
"docid": "7f7e7f7ddcbb4d98270c0ba50a3f7a25",
"text": "Workflow management systems are traditionally centralized, creating a single point of failure and a scalability bottleneck. In collaboration with Cybermation, Inc., we have developed a content-based publish/subscribe platform, called PADRES, which is a distributed middleware platform with features inspired by the requirements of workflow management and business process execution. These features constitute original additions to publish/subscribe systems and include an expressive subscription language, composite subscription processing, a rulebased matching and routing mechanism, historc, query-based data access, and the support for the decentralized execution of business process specified in XML. PADRES constitutes the basis for the next generation of enterprise management systems developed by Cybermation, Inc., including business process automation, monitoring, and execution applications.",
"title": ""
},
{
"docid": "914b38c4a5911a481bf9088f75adef30",
"text": "This paper presents a mixed-integer LP approach to the solution of the long-term transmission expansion planning problem. In general, this problem is large-scale, mixed-integer, nonlinear, and nonconvex. We derive a mixed-integer linear formulation that considers losses and guarantees convergence to optimality using existing optimization software. The proposed model is applied to Garver’s 6-bus system, the IEEE Reliability Test System, and a realistic Brazilian system. Simulation results show the accuracy as well as the efficiency of the proposed solution technique.",
"title": ""
},
{
"docid": "dae2ef494ca779e701288414e1cbf0ef",
"text": "API example code search is an important applicationin software engineering. Traditional approaches to API codesearch are based on information retrieval. Recent advance inWord2Vec has been applied to support the retrieval of APIexamples. In this work, we perform a preliminary study thatcombining traditional IR with Word2Vec achieves better retrievalaccuracy. More experiments need to be done to study differenttypes of combination among two lines of approaches.",
"title": ""
},
{
"docid": "a2253bf241f7e5f60e889258e4c0f40c",
"text": "BACKGROUND-Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE-This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD-The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS-Seven distinct evaluation strategies were identified, wherein the most common one, “Pre-Post Comparison,” was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent), and Schedule (18 percent). Looking at measurement perspectives, “Project” represents the majority with 66 percent. CONCLUSION-The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors, particularly given that “Pre-Post Comparison” was identified as the most common evaluation strategy, and the inaccurate descriptions of the evaluation context. Measurements to assess the short and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.",
"title": ""
},
{
"docid": "e584549afba4c444c32dfe67ee178a84",
"text": "Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given Weld. In this paper, we look at the performance of an expert constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995–1997 – the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; Naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage. For example, we have assumed that the BN prediction is ‘incorrect’ if a BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an ‘each way’ bet on the outcome). Although the expert BN has now long been irrelevant (since it contains variables relating to key players who have retired or left the club) the results here tend to conWrm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data are an obvious bonus in any domain where data are scarce. Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d8fc5a8bc075343b2e70a9b441ecf6e5",
"text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number apps installed in one Android device, and the expanding ratio of mobile apps. As mobile app market serves as the main line of defence against mobile malwares, our evaluation results show that it is practical to use cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. As the future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.",
"title": ""
},
{
"docid": "0056d305c7689d45e7cd9f4b87cac79e",
"text": "A method is presented that uses a vectorial multiscale feature image for wave front propagation between two or more user defined points to retrieve the central axis of tubular objects in digital images. Its implicit scale selection mechanism makes the method more robust to overlap and to the presence of adjacent structures than conventional techniques that propagate a wave front over a scalar image representing the maximum of a range of filters. The method is shown to retain its potential to cope with severe stenoses or imaging artifacts and objects with varying widths in simulated and actual two-dimensional angiographic images.",
"title": ""
},
{
"docid": "844dcf80b2feba89fced99a0f8cbe9bf",
"text": "Communication could potentially be an effective way for multi-agent cooperation. However, information sharing among all agents or in predefined communication architectures that existing methods adopt can be problematic. When there is a large number of agents, agents cannot differentiate valuable information that helps cooperative decision making from globally shared information. Therefore, communication barely helps, and could even impair the learning of multi-agent cooperation. Predefined communication architectures, on the other hand, restrict communication among agents and thus restrain potential cooperation. To tackle these difficulties, in this paper, we propose an attentional communication model that learns when communication is needed and how to integrate shared information for cooperative decision making. Our model leads to efficient and effective communication for large-scale multi-agent cooperation. Empirically, we show the strength of our model in a variety of cooperative scenarios, where agents are able to develop more coordinated and sophisticated strategies than existing methods.",
"title": ""
},
{
"docid": "f4fb632268bbbf76878472183c511b05",
"text": "Mid-way through the 2007 DARPA Urban Challenge, MIT’s autonomous Land Rover LR3 ‘Talos’ and Team Cornell’s autonomous Chevrolet Tahoe ‘Skynet’ collided in a low-speed accident, one of the first well-documented collisions between two full-size autonomous vehicles. This collaborative study between MIT and Cornell examines the root causes of the collision, which are identified in both teams’ system designs. Systems-level descriptions of both autonomous vehicles are given, and additional detail is provided on sub-systems and algorithms implicated in the collision. A brief summary of robot–robot interactions during the race is presented, followed by an in-depth analysis of both robots’ behaviors leading up to and during the Skynet–Talos collision. Data logs from the vehicles are used to show the gulf between autonomous and human-driven vehicle behavior at low speeds and close proximities. Contributing factors are shown to be: (1) difficulties in sensor data association leading to phantom obstacles and an inability to detect slow moving vehicles, (2) failure to anticipate vehicle intent, and (3) an over emphasis on lane constraints versus vehicle proximity in motion planning. Eye contact between human road users is a crucial communications channel for slow-moving close encounters between vehicles. Inter-vehicle communication may play a similar role for autonomous vehicles; however, there are availability and denial-of-service issues to be addressed.",
"title": ""
},
{
"docid": "6ed4d5ae29eef70f5aae76ebed76b8ca",
"text": "Web services that thrive on mining user interaction data such as search engines can currently track clicks and mouse cursor activity on their Web pages. Cursor interaction mining has been shown to assist in user modeling and search result relevance, and is becoming another source of rich information that data scientists and search engineers can tap into. Due to the growing popularity of touch-enabled mobile devices, search systems may turn to tracking touch interactions in place of cursor interactions. However, unlike cursor interactions, touch interactions are difficult to record reliably and their coordinates have not been shown to relate to regions of user interest. A better approach may be to track the viewport coordinates instead, which the user must manipulate to view the content on a mobile device. These recorded viewport coordinates can potentially reveal what regions of the page interest users and to what degree. Using this information, search system can then improve the design of their pages or use this information in click models or learning to rank systems. In this position paper, we discuss some of the challenges faced in mining interaction data for new modes of interaction, and future research directions in this field.",
"title": ""
}
] |
scidocsrr
|
2f40ac55162bde7a6b103798fdcdb1ac
|
Robust Top-k Multiclass SVM for Visual Category Recognition
|
[
{
"docid": "b5347e195b44d5ae6d4674c685398fa3",
"text": "The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N £ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position an$ image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory.",
"title": ""
}
] |
[
{
"docid": "28d8ef2f63b0b4f55c60ae06484365d1",
"text": "Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data.\n We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks.\n In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method.",
"title": ""
},
{
"docid": "eab052e8172c62fec9b532400fe5eeb6",
"text": "An overview on state of the art automotive radar usage is presented and the changing requirements from detection and ranging towards radar based environmental understanding for highly automated and autonomous driving deduced. The traditional segmentation in driving, manoeuvering and parking tasks vanishes at the driver less stage. Situation assessment and trajectory/manoeuver planning need to operate in a more thorough way. Hence, fast situational up-date, motion prediction of all kind of dynamic objects, object dimension, ego-motion estimation, (self)-localisation and more semantic/classification information, which allows to put static and dynamic world into correlation/context with each other is mandatory. All these are new areas for radar signal processing and needs revolutionary new solutions. The article outlines the benefits that make radar essential for autonomous driving and presents recent approaches in radar based environmental perception.",
"title": ""
},
{
"docid": "244df843f56a59f20a2fc1d2293a7b53",
"text": "We propose a new time-release protocol based on the bitcoin protocol and witness encryption. We derive a “public key” from the bitcoin block chain for encryption. The decryption key are the unpredictable information in the future blocks (e.g., transactions, nonces) that will be computed by the bitcoin network. We build this protocol by witness encryption and encrypt with the bitcoin proof-of-work constraints. The novelty of our protocol is that the decryption key will be automatically and publicly available in the bitcoin block chain when the time is due. Witness encryption was originally proposed by Garg, Gentry, Sahai and Waters. It provides a means to encrypt to an instance, x, of an NP language and to decrypt by a witness w that x is in the language. Encoding CNF-SAT in the existing witness encryption schemes generate poly(n · k) group elements in the ciphertext where n is the number of variables and k is the number of clauses of the CNF formula. We design a new witness encryption for CNF-SAT which achieves ciphertext size of 2n + 2k group elements. Our witness encryption is based on an intuitive reduction from SAT to Subset-Sum problem. Our scheme uses the framework of multilinear maps, but it is independent of the implementation details of multilinear maps.",
"title": ""
},
{
"docid": "62d23e00d13903246cc7128fe45adf12",
"text": "The uncomputable parts of thinking (if there are any) can be studied in much the same spirit that Turing (1950) suggested for the study of its computable parts. We can develop precise accounts of cognitive processes that, although they involve more than computing, can still be modelled on the machines we call ‘computers’. In this paper, I want to suggest some ways that this might be done, using ideas from the mathematical theory of uncomputability (or Recursion Theory). And I want to suggest some uses to which the resulting models might be put. (The reader more interested in the models and their uses than the mathematics and its theorems, might want to skim or skip the mathematical parts.)",
"title": ""
},
{
"docid": "5e6990d8f1f81799e2e7fdfe29d14e4d",
"text": "Underwater wireless communications refer to data transmission in unguided water environment through wireless carriers, i.e., radio-frequency (RF) wave, acoustic wave, and optical wave. In comparison to RF and acoustic counterparts, underwater optical wireless communication (UOWC) can provide a much higher transmission bandwidth and much higher data rate. Therefore, we focus, in this paper, on the UOWC that employs optical wave as the transmission carrier. In recent years, many potential applications of UOWC systems have been proposed for environmental monitoring, offshore exploration, disaster precaution, and military operations. However, UOWC systems also suffer from severe absorption and scattering introduced by underwater channels. In order to overcome these technical barriers, several new system design approaches, which are different from the conventional terrestrial free-space optical communication, have been explored in recent years. We provide a comprehensive and exhaustive survey of the state-of-the-art UOWC research in three aspects: 1) channel characterization; 2) modulation; and 3) coding techniques, together with the practical implementations of UOWC.",
"title": ""
},
{
"docid": "f61ea212d71eebf43fd677016ce9770a",
"text": "Learning to drive faithfully in highly stochastic urban settings remains an open problem. To that end, we propose a Multi-task Learning from Demonstration (MTLfD) framework which uses supervised auxiliary task prediction to guide the main task of predicting the driving commands. Our framework involves an end-to-end trainable network for imitating the expert demonstrator’s driving commands. The network intermediately predicts visual affordances and action primitives through direct supervision which provide the aforementioned auxiliary supervised guidance. We demonstrate that such joint learning and supervised guidance facilitates hierarchical task decomposition, assisting the agent to learn faster, achieve better driving performance and increases transparency of the otherwise black-box end-to-end network. We run our experiments to validate the MT-LfD framework in CARLA, an open-source urban driving simulator. We introduce multiple non-player agents in CARLA and induce temporal noise in them for realistic stochasticity.",
"title": ""
},
{
"docid": "1ac0b1971ee476d3343c8746c5f3dc1f",
"text": "OBJECTIVE\nThis work describes the experimental validation of a cardiac simulator for three heart rates (60, 80 and 100 beats per minute), under physiological conditions, as a suitable environment for prosthetic heart valves testing in the mitral or aortic position.\n\n\nMETHODS\nIn the experiment, an aortic bileaflet mechanical valve and a mitral bioprosthesis were employed in the left ventricular model. A test fluid of 47.6% by volume of glycerin solution in water at 36.5ºC was used as blood analogue fluid. A supervisory control and data acquisition system implemented previously in LabVIEW was applied to induce the ventricular operation and to acquire the ventricular signals. The parameters of the left ventricular model operation were based on in vivo and in vitro data. The waves of ventricular and systemic pressures, aortic flow, stroke volume, among others, were acquired while manual adjustments in the arterial impedance model were also established.\n\n\nRESULTS\nThe acquired waves showed good results concerning some in vivo data and requirements from the ISO 5840 standard.\n\n\nCONCLUSION\nThe experimental validation was performed, allowing, in future studies, characterizing the hydrodynamic performance of prosthetic heart valves.",
"title": ""
},
{
"docid": "097414fbbbf19f7b244d4726d5d27f96",
"text": "Touch is both the first sense to develop and a critical means of information acquisition and environmental manipulation. Physical touch experiences may create an ontological scaffold for the development of intrapersonal and interpersonal conceptual and metaphorical knowledge, as well as a springboard for the application of this knowledge. In six experiments, holding heavy or light clipboards, solving rough or smooth puzzles, and touching hard or soft objects nonconsciously influenced impressions and decisions formed about unrelated people and situations. Among other effects, heavy objects made job candidates appear more important, rough objects made social interactions appear more difficult, and hard objects increased rigidity in negotiations. Basic tactile sensations are thus shown to influence higher social cognitive processing in dimension-specific and metaphor-specific ways.",
"title": ""
},
{
"docid": "bd3cc8370fd8669768f62d465f2c5531",
"text": "Cognitive radio technology has been proposed to improve spectrum efficiency by having the cognitive radios act as secondary users to opportunistically access under-utilized frequency bands. Spectrum sensing, as a key enabling functionality in cognitive radio networks, needs to reliably detect signals from licensed primary radios to avoid harmful interference. However, due to the effects of channel fading/shadowing, individual cognitive radios may not be able to reliably detect the existence of a primary radio. In this paper, we propose an optimal linear cooperation framework for spectrum sensing in order to accurately detect the weak primary signal. Within this framework, spectrum sensing is based on the linear combination of local statistics from individual cognitive radios. Our objective is to minimize the interference to the primary radio while meeting the requirement of opportunistic spectrum utilization. We formulate the sensing problem as a nonlinear optimization problem. By exploiting the inherent structures in the problem formulation, we develop efficient algorithms to solve for the optimal solutions. To further reduce the computational complexity and obtain solutions for more general cases, we finally propose a heuristic approach, where we instead optimize a modified deflection coefficient that characterizes the probability distribution function of the global test statistics at the fusion center. Simulation results illustrate significant cooperative gain achieved by the proposed strategies. The insights obtained in this paper are useful for the design of optimal spectrum sensing in cognitive radio networks.",
"title": ""
},
{
"docid": "655a95191700e24c6dcd49b827de4165",
"text": "With the increasing demand for express delivery, a courier needs to deliver many tasks in one day and it's necessary to deliver punctually as the customers expect. At the same time, they want to schedule the delivery tasks to minimize the total time of a courier's one-day delivery, considering the total travel time. However, most of scheduling researches on express delivery focus on inter-city transportation, and they are not suitable for the express delivery to customers in the “last mile”. To solve the issue above, this paper proposes a personalized service for scheduling express delivery, which not only satisfies all the customers' appointment time but also makes the total time minimized. In this service, personalized and accurate travel time estimation is important to guarantee delivery punctuality when delivering shipments. Therefore, the personalized scheduling service is designed to consist of two basic services: (1) personalized travel time estimation service for any path in express delivery using courier trajectories, (2) an express delivery scheduling service considering multiple factors, including customers' appointments, one-day delivery costs, etc., which is based on the accurate travel time estimation provided by the first service. We evaluate our proposed service based on extensive experiments, using GPS trajectories generated by more than 1000 couriers over a period of two months in Beijing. The results demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "fb812ad6355e10dafff43c3d4487f6a7",
"text": "Image priors are of great importance in image restoration tasks. These problems can be addressed by decomposing the degraded image into overlapping patches, treating the patches individually and averaging them back together. Recently, the Expected Patch Log Likelihood (EPLL) method has been introduced, arguing that the chosen model should be enforced on the final reconstructed image patches. In the context of a Gaussian Mixture Model (GMM), this idea has been shown to lead to state-of-the-art results in image denoising and debluring. In this paper we combine the EPLL with a sparse-representation prior. Our derivation leads to a close yet extended variant of the popular K-SVD image denoising algorithm, where in order to effectively maximize the EPLL the denoising process should be iterated. This concept lies at the core of the K-SVD formulation, but has not been addressed before due the need to set different denoising thresholds in the successive sparse coding stages. We present a method that intrinsically determines these thresholds in order to improve the image estimate. Our results show a notable improvement over K-SVD in image denoising and inpainting, achieving comparable performance to that of EPLL with GMM in denoising.",
"title": ""
},
{
"docid": "e91ace8f6eaf2fc2101bd715c7a43f1d",
"text": "We demonstrated the in vivo feasibility of using focused ultrasound (FUS) to transiently modulate (through either stimulation or suppression) the function of regional brain tissue in rabbits. FUS was delivered in a train of pulses at low acoustic energy, far below the cavitation threshold, to the animal's somatomotor and visual areas, as guided by anatomical and functional information from magnetic resonance imaging (MRI). The temporary alterations in the brain function affected by the sonication were characterized by both electrophysiological recordings and functional brain mapping achieved through the use of functional MRI (fMRI). The modulatory effects were bimodal, whereby the brain activity could either be stimulated or selectively suppressed. Histological analysis of the excised brain tissue after the sonication demonstrated that the FUS did not elicit any tissue damages. Unlike transcranial magnetic stimulation, FUS can be applied to deep structures in the brain with greater spatial precision. Transient modulation of brain function using image-guided and anatomically-targeted FUS would enable the investigation of functional connectivity between brain regions and will eventually lead to a better understanding of localized brain functions. It is anticipated that the use of this technology will have an impact on brain research and may offer novel therapeutic interventions in various neurological conditions and psychiatric disorders.",
"title": ""
},
{
"docid": "ee240969f586cb9f8ef51a192daa0526",
"text": "The location of mobile terminals has received considerable attention in the recent years. The performance of mobile location systems is limited by errors primarily caused by nonline-of-sight (NLOS) propagation conditions. We investigate the NLOS error identification and correction techniques for mobile user location in wireless cellular systems. Based on how much a priori knowledge of the NLOS error is available, two NLOS mitigation algorithms are proposed. Simulation results demonstrate that with the prior information database, the location estimate can be obtained with good accuracy even in severe NLOS propagation conditions.",
"title": ""
},
{
"docid": "62353069a6c29c4f8bccce46b257e19e",
"text": "Abstract -This paper presents the overall concept of Road Power Generator (RPG) that deals with the mechanism to generate electricity from the wasted kinetic energy of vehicles. It contains a flip-plate, gear mechanism, flywheel, and finally a generator is coupled at the end so that the rotational motion of the flywheel is used to rotate the shaft of the generator, thus producing electricity. RPG does not require any piezoelectric material. It is novel concept based on flip-plate mechanism. The project can be installed at highways where a huge number of vehicles pass daily, thus resulting in more amount of electricity generated. This generated electricity can be utilized for different types of applications and mainly for street lighting, on road battery charging units and many domestic applications like air conditioning, lighting, heating, etc.",
"title": ""
},
{
"docid": "422d13161686a051be201fb17bece304",
"text": "Due to the growing demand on electricity, how to improve the efficiency of equipment in a thermal power plant has become one of the critical issues. Reports indicate that efficiency and availability are heavily dependant upon high reliability and maintainability. Recently, the concept of e-maintenance has been introduced to reduce the cost of maintenance. In e-maintenance systems, the intelligent fault detection system plays a crucial role for identifying failures. Data mining techniques are at the core of such intelligent systems and can greatly influence their performance. Applying these techniques to fault detection makes it possible to shorten shutdown maintenance and thus increase the capacity utilization rates of equipment. Therefore, this work proposes a support vector machines (SVM) based model which integrates a dimension reduction scheme to analyze the failures of turbines in thermal power facilities. Finally, a real case from a thermal power plant is provided to evaluate the effectiveness of the proposed SVM based model. Experimental results show that SVM outperforms linear discriminant analysis (LDA) and back-propagation neural networks (BPN) in classification performance. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "95c634481e8c4468483ef447676098b6",
"text": "The success of cancer immunotherapy has generated tremendous interest in identifying new immunotherapeutic targets. To date, the majority of therapies have focussed on stimulating the adaptive immune system to attack cancer, including agents targeting CTLA-4 and the PD-1/PD-L1 axis. However, macrophages and other myeloid immune cells offer much promise as effectors of cancer immunotherapy. The CD47/signal regulatory protein alpha (SIRPα) axis is a critical regulator of myeloid cell activation and serves a broader role as a myeloid-specific immune checkpoint. CD47 is highly expressed on many different types of cancer, and it transduces inhibitory signals through SIRPα on macrophages and other myeloid cells. In a diverse range of preclinical models, therapies that block the CD47/SIRPα axis stimulate phagocytosis of cancer cells in vitro and anti-tumour immune responses in vivo. A number of therapeutics that target the CD47/SIRPα axis are under preclinical and clinical investigation. These include anti-CD47 antibodies, engineered receptor decoys, anti-SIRPα antibodies and bispecific agents. These therapeutics differ in their pharmacodynamic, pharmacokinetic and toxicological properties. Clinical trials are underway for both solid and haematologic malignancies using anti-CD47 antibodies and recombinant SIRPα proteins. Since the CD47/SIRPα axis also limits the efficacy of tumour-opsonising antibodies, additional trials will examine their potential synergy with agents such as rituximab, cetuximab and trastuzumab. Phagocytosis in response to CD47/SIRPα-blocking agents results in antigen uptake and presentation, thereby linking the innate and adaptive immune systems. CD47/SIRPα blocking therapies may therefore synergise with immune checkpoint inhibitors that target the adaptive immune system. As a critical regulator of macrophage phagocytosis and activation, the potential applications of CD47/SIRPα blocking therapies extend beyond human cancer. They may be useful for the treatment of infectious disease, conditioning for stem cell transplant, and many other clinical indications.",
"title": ""
},
{
"docid": "dc48b68a202974f62ae63d1d14002adf",
"text": "In the speed sensorless vector control system, the amended method of estimating the rotor speed about model reference adaptive system (MRAS) based on radial basis function neural network (RBFN) for PMSM sensorless vector control system was presented. Based on the PI regulator, the radial basis function neural network which is more prominent learning efficiency and performance is combined with MRAS. The reference model and the adjust model are the PMSM itself and the PMSM current, respectively. The proposed scheme only needs the error signal between q axis estimated current and q axis actual current. Then estimated speed is gained by using RBFN regulator which adjusted error signal. Comparing study of simulation and experimental results between this novel sensorless scheme and the scheme in reference literature, the results show that this novel method is capable of precise estimating the rotor position and speed under the condition of high or low speed. It also possesses good performance of static and dynamic.",
"title": ""
},
{
"docid": "fe116849575dd91759a6c1ef7ed239f3",
"text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
"title": ""
},
{
"docid": "243391e804c06f8a53af906b31d4b99a",
"text": "As key decisions are often made based on information contained in a database, it is important for the database to be as complete and correct as possible. For this reason, many data cleaning tools have been developed to automatically resolve inconsistencies in databases. However, data cleaning tools provide only best-effort results and usually cannot eradicate all errors that may exist in a database. Even more importantly, existing data cleaning tools do not typically address the problem of determining what information is missing from a database.\n To overcome the limitations of existing data cleaning techniques, we present QOCO, a novel query-oriented system for cleaning data with oracles. Under this framework, incorrect (resp. missing) tuples are removed from (added to) the result of a query through edits that are applied to the underlying database, where the edits are derived by interacting with domain experts which we model as oracle crowds. We show that the problem of determining minimal interactions with oracle crowds to derive database edits for removing (adding) incorrect (missing) tuples to the result of a query is NP-hard in general and present heuristic algorithms that interact with oracle crowds. Finally, we implement our algorithms in our prototype system QOCO and show that it is effective and efficient through a comprehensive suite of experiments.",
"title": ""
},
{
"docid": "da088acea8b1d2dc68b238e671649f4f",
"text": "Water is a naturally circulating resource that is constantly recharged. Therefore, even though the stocks of water in natural and artificial reservoirs are helpful to increase the available water resources for human society, the flow of water should be the main focus in water resources assessments. The climate system puts an upper limit on the circulation rate of available renewable freshwater resources (RFWR). Although current global withdrawals are well below the upper limit, more than two billion people live in highly water-stressed areas because of the uneven distribution of RFWR in time and space. Climate change is expected to accelerate water cycles and thereby increase the available RFWR. This would slow down the increase of people living under water stress; however, changes in seasonal patterns and increasing probability of extreme events may offset this effect. Reducing current vulnerability will be the first step to prepare for such anticipated changes.",
"title": ""
}
] |
scidocsrr
|
55e2dc25b7119ad55fec5cb1fee9e87f
|
Co-analysis of RAS Log and Job Log on Blue Gene/P
|
[
{
"docid": "f910996af5983cf121b7912080c927d6",
"text": "In large-scale networked computing systems, component failures become norms instead of exceptions. Failure prediction is a crucial technique for self-managing resource burdens. Failure events in coalition systems exhibit strong correlations in time and space domain. In this paper, we develop a spherical covariance model with an adjustable timescale parameter to quantify the temporal correlation and a stochastic model to describe spatial correlation. We further utilize the information of application allocation to discover more correlations among failure instances. We cluster failure events based on their correlations and predict their future occurrences. We implemented a failure prediction framework, called PREdictor of Failure Events Correlated Temporal-Spatially (hPREFECTs), which explores correlations among failures and forecasts the time-between-failure of future instances. We evaluate the performance of hPREFECTs in both offline prediction of failure by using the Los Alamos HPC traces and online prediction in an institute-wide clusters coalition environment. Experimental results show the system achieves more than 76% accuracy in offline prediction and more than 70% accuracy in online prediction during the time from May 2006 to April 2007.",
"title": ""
}
] |
[
{
"docid": "4ce6063786afa258d8ae982c7f17a8b1",
"text": "This paper proposes a hybrid phase-shift-controlled three-level (TL) and LLC dc-dc converter. The TL dc-dc converter and LLC dc-dc converter have their own transformers. Compared with conventional half-bridge TL dc-dc converters, the proposed one has no additional switch at the primary side of the transformer, where the TL converter shares the lagging switches with the LLC converter. At the secondary side of the transformers, the TL and LLC converters are connected by an active switch. With the aid of the LLC converter, the zero voltage switching (ZVS) of the lagging switches can be achieved easily even under light load conditions. Wide ZVS range for all the switches can be ensured. Both the circulating current at the primary side and the output filter inductance are reduced. Furthermore, the efficiency of the converter is improved dramatically. The features of the proposed converter are analyzed, and the design guidelines are given in the paper. Finally, the performance of the converter is verified by a 1-kW experimental prototype.",
"title": ""
},
{
"docid": "2274f3d3dc25bec4b86988615d421f10",
"text": "Sepsis is a dangerous condition that is a leading cause of patient mortality. Treating sepsis is highly challenging, because individual patients respond very differently to medical interventions and there is no universally agreed-upon treatment for sepsis. In this work, we explore the use of continuous state-space model-based reinforcement learning (RL) to discover high-quality treatment policies for sepsis patients. Our quantitative evaluation reveals that by blending the treatment strategy discovered with RL with what clinicians follow, we can obtain improved policies, potentially allowing for better medical treatment for sepsis.",
"title": ""
},
{
"docid": "688bacdee25152e1de6bcc5005b75d9a",
"text": "Data Mining provides powerful techniques for various fields including education. The research in the educational field is rapidly increasing due to the massive amount of students’ data which can be used to discover valuable pattern pertaining students’ learning behaviour. This paper proposes a framework for predicting students’ academic performance of first year bachelor students in Computer Science course. The data were collected from 8 year period intakes from July 2006/2007 until July 2013/2014 that contains the students’ demographics, previous academic records, and family background information. Decision Tree, Naïve Bayes, and Rule Based classification techniques are applied to the students’ data in order to produce the best students’ academic performance prediction model. The experiment result shows the Rule Based is a best model among the other techniques by receiving the highest accuracy value of 71.3%. The extracted knowledge from prediction model will be used to identify and profile the student to determine the students’ level of success in the first semester.",
"title": ""
},
{
"docid": "8c0f20061bd09b328748d256d5ece7cc",
"text": "Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.",
"title": ""
},
{
"docid": "d7d66f89e5f5f2d6507e0939933b3a17",
"text": "The discarded clam shell waste, fossil and edible oil as biolubricant feedstocks create environmental impacts and food chain dilemma, thus this work aims to circumvent these issues by using activated saltwater clam shell waste (SCSW) as solid catalyst for conversion of Jatropha curcas oil as non-edible sources to ester biolubricant. The characterization of solid catalyst was done by Differential Thermal Analysis-Thermo Gravimetric Analysis (DTATGA), X-Ray Fluorescence (XRF), X-Ray Diffraction (XRD), Brunauer-Emmett-Teller (BET), Field Emission Scanning Electron Microscopy (FESEM) and Fourier Transformed Infrared Spectroscopy (FTIR) analysis. The calcined catalyst was used in the transesterification of Jatropha oil to methyl ester as the first step, and the second stage was involved the reaction of Jatropha methyl ester (JME) with trimethylolpropane (TMP) based on the various process parameters. The formated biolubricant was analyzed using the capillary column (DB-5HT) equipped Gas Chromatography (GC). The conversion results of Jatropha oil to ester biolubricant can be found nearly 96.66%, and the maximum distribution composition mainly contains 72.3% of triester (TE). Keywords—Conversion, ester biolubricant, Jatropha curcas oil, solid catalyst.",
"title": ""
},
{
"docid": "00e5c92435378e4fdcee5f9fa58271b5",
"text": "Because the position transducers commonly used (optical encoders and electromagnetic resolvers) do not inherently produce a true, instantaneous velocity measurement, some signal processing techniques are generally used to estimate the velocity at each sample instant. This estimated signal is then used as the velocity feedback signal for the velocity loop control. An analysis is presented of the limitations of such approaches, and a technique which optimally estimates the velocity at each sample instant is presented. The method is shown to offer a significant improvement in command-driven systems and to reduce the effect of quantized angular resolution which limits the ultimate performance of all digital servo drives. The noise reduction is especially relevant for AC servo drives due to the high current loop bandwidths required for their correct operation. The method demonstrates improved measurement performance over a classical DC tachometer.<<ETX>>",
"title": ""
},
{
"docid": "b61c9f69a2fffcf2c3753e51a3bbfa14",
"text": "..............................................................................................................ix 1 Interoperability .............................................................................................1 1.",
"title": ""
},
{
"docid": "0660dc780eda869aabc1f856ec3f193f",
"text": "This paper provides a study of the smart grid projects realised in Europe and presents their technological solutions with a focus on smart metering Low Voltage (LV) applications. Special attention is given to the telecommunications technologies used. For this purpose, we present the telecommunication technologies chosen by several European utilities for the accomplishment of their smart meter national roll-outs. Further on, a study is performed based on the European Smart Grid Projects, highlighting their technological options. The range of the projects analysed covers the ones including smart metering implementation as well as those in which smart metering applications play a significant role in the overall project success. The survey reveals that various topics are directly or indirectly linked to smart metering applications, like smart home/building, energy management, grid monitoring and integration of Renewable Energy Sources (RES). Therefore, the technological options that lie behind such projects are pointed out. For reasons of completeness, we also present the main characteristics of the telecommunication technologies that are found to be used in practice for the LV grid.",
"title": ""
},
{
"docid": "e6a97c3365e16d77642a84f0a80863e2",
"text": "The current statuses and future promises of the Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano-Things (IoNT) are extensively reviewed and a summarized survey is presented. The analysis clearly distinguishes between IoT and IoE, which are wrongly considered to be the same by many commentators. After evaluating the current trends of advancement in the fields of IoT, IoE and IoNT, this paper identifies the 21 most significant current and future challenges as well as scenarios for the possible future expansion of their applications. Despite possible negative aspects of these developments, there are grounds for general optimism about the coming technologies. Certainly, many tedious tasks can be taken over by IoT devices. However, the dangers of criminal and other nefarious activities, plus those of hardware and software errors, pose major challenges that are a priority for further research. Major specific priority issues for research are identified.",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
},
{
"docid": "30a6a3df784c2a8cc69a1bd75ad1998b",
"text": "Traditional stock market prediction approaches commonly utilize the historical price-related data of the stocks to forecast their future trends. As the Web information grows, recently some works try to explore financial news to improve the prediction. Effective indicators, e.g., the events related to the stocks and the people’s sentiments towards the market and stocks, have been proved to play important roles in the stocks’ volatility, and are extracted to feed into the prediction models for improving the prediction accuracy. However, a major limitation of previous methods is that the indicators are obtained from only a single source whose reliability might be low, or from several data sources but their interactions and correlations among the multi-sourced data are largely ignored. In this work, we extract the events from Web news and the users’ sentiments from social media, and investigate their joint impacts on the stock price movements via a coupled matrix and tensor factorization framework. Specifically, a tensor is firstly constructed to fuse heterogeneous data and capture the intrinsic ∗Corresponding author Email addresses: zhangx@bupt.edu.cn (Xi Zhang), 2011213120@bupt.edu.cn (Yunjia Zhang), szwang@nuaa.edu.cn (Senzhang Wang), yaoyuntao@bupt.edu.cn (Yuntao Yao), fangbx@bupt.edu.cn (Binxing Fang), psyu@uic.edu (Philip S. Yu) Preprint submitted to Journal of LTEX Templates September 2, 2018 ar X iv :1 80 1. 00 58 8v 1 [ cs .S I] 2 J an 2 01 8 relations among the events and the investors’ sentiments. Due to the sparsity of the tensor, two auxiliary matrices, the stock quantitative feature matrix and the stock correlation matrix, are constructed and incorporated to assist the tensor decomposition. The intuition behind is that stocks that are highly correlated with each other tend to be affected by the same event. Thus, instead of conducting each stock prediction task separately and independently, we predict multiple correlated stocks simultaneously through their commonalities, which are enabled via sharing the collaboratively factorized low rank matrices between matrices and the tensor. Evaluations on the China A-share stock data and the HK stock data in the year 2015 demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "741078742178d09f911ef9633befeb9b",
"text": "We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences which are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the performance of the kernel compared with a standard word feature space kernel [4] is made showing encouraging results.",
"title": ""
},
{
"docid": "caf866341ad9f74b1ac1dc8572f6e95c",
"text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.",
"title": ""
},
{
"docid": "553a86035f5013595ef61c4c19997d7c",
"text": "This paper proposes a novel self-oscillating, boost-derived (SOBD) dc-dc converter with load regulation. This proposed topology utilizes saturable cores (SCs) to offer self-oscillating and output regulation capabilities. Conventionally, the self-oscillating dc transformer (SODT) type of scheme can be implemented in a very cost-effective manner. The ideal dc transformer provides both input and output currents as pure, ripple-free dc quantities. However, the structure of an SODT-type converter will not provide regulation, and its oscillating frequency will change in accordance with the load. The proposed converter with SCs will allow output-voltage regulation to be accomplished by varying only the control current between the transformers, as occurs in a pulse-width modulation (PWM) converter. A control network that combines PWM schemes with a regenerative function is used for this converter. The optimum duty cycle is implemented to achieve low levels of input- and output-current ripples, which are characteristic of an ideal dc transformer. The oscillating frequency will spontaneously be kept near-constant, regardless of the load, without adding any auxiliary or compensation circuits. The typical voltage waveforms of the transistors are found to be close to quasisquare. The switching surges are well suppressed, and the voltage stress of the component is well clamped. The turn-on/turn-off of the switch is zero-voltage switching (ZVS), and its resonant transition can occur over a wide range of load current levels. A prototype circuit of an SOBD converter shows 86% efficiency at 48-V input, with 12-V, 100-W output, and presents an operating frequency of 100 kHz.",
"title": ""
},
{
"docid": "563183ff51d1a218bf54db6400e25365",
"text": "In this paper wireless communication using white, high brightness LEDs (light emitting diodes) is considered. In particular, the use of OFDM (orthogonal frequency division multiplexing) for intensity modulation is investigated. The high peak-to-average ratio (PAR) in OFDM is usually considered a disadvantage in radio frequency transmission systems due to non-linearities of the power amplifier. It is demonstrated theoretically and by means of an experimental system that the high PAR in OFDM can be exploited constructively in visible light communication to intensity modulate LEDs. It is shown that the theoretical and the experimental results match very closely, and that it is possible to cover a distance of up to one meter using a single LED",
"title": ""
},
{
"docid": "c3bfe9b5231c5f9b4499ad38b6e8eac6",
"text": "As the World Wide Web has increasingly become a necessity in daily life, the acute need to safeguard user privacy and security has become manifestly apparent. After users realized that browser cookies could allow websites to track their actions without permission or notification, many have chosen to reject cookies in order to protect their privacy. However, more recently, methods of fingerprinting a web browser have become an increasingly common practice. In this paper, we classify web browser fingerprinting into four main categories: (1) Browser Specific, (2) Canvas, (3) JavaScript Engine, and (4) Cross-browser. We then summarize the privacy and security implications, discuss commercial fingerprinting techniques, and finally present some detection and prevention methods.",
"title": ""
},
{
"docid": "ff6b4840787027df75873f38fbb311b4",
"text": "Electronic healthcare (eHealth) systems have replaced paper-based medical systems due to the attractive features such as universal accessibility, high accuracy, and low cost. As a major component of eHealth systems, mobile healthcare (mHealth) applies mobile devices, such as smartphones and tablets, to enable patient-to-physician and patient-to-patient communications for better healthcare and quality of life (QoL). Unfortunately, patients' concerns on potential leakage of personal health records (PHRs) is the biggest stumbling block. In current eHealth/mHealth networks, patients' medical records are usually associated with a set of attributes like existing symptoms and undergoing treatments based on the information collected from portable devices. To guarantee the authenticity of those attributes, PHRs should be verifiable. However, due to the linkability between identities and PHRs, existing mHealth systems fail to preserve patient identity privacy while providing medical services. To solve this problem, we propose a decentralized system that leverages users' verifiable attributes to authenticate each other while preserving attribute and identity privacy. Moreover, we design authentication strategies with progressive privacy requirements in different interactions among participating entities. Finally, we have thoroughly evaluated the security and computational overheads for our proposed schemes via extensive simulations and experiments.",
"title": ""
},
{
"docid": "8cbe0ff905a58e575f2d84e4e663a857",
"text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. is survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specically, we list and review the dierent protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-ings (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.",
"title": ""
}
] |
scidocsrr
|
c8855abd771a62b93c7112efeece4ecd
|
Extracting sclera features for cancelable identity verification
|
[
{
"docid": "9fc7f8ef20cf9c15f9d2d2ce5661c865",
"text": "This paper presents a new iris database that contains images with noise. This is in contrast with the existing databases, that are noise free. UBIRIS is a tool for the development of robust iris recognition algorithms for biometric proposes. We present a detailed description of the many characteristics of UBIRIS and a comparison of several image segmentation approaches used in the current iris segmentation methods where it is evident their small tolerance to noisy images.",
"title": ""
}
] |
[
{
"docid": "befc5dbf4da526963f8aa180e1fda522",
"text": "Charities publicize the donations they receive, generally according to dollar categories rather than the exact amount. Donors in turn tend to give the minimum amount necessary to get into a category. These facts suggest that donors have a taste for having their donations made public. This paper models the effects of such a taste for ‘‘prestige’’ on the behavior of donors and charities. I show how a taste for prestige means that charities can increase donations by using categories. The paper also discusses the effect of a taste for prestige on competition between charities. 1998 Elsevier Science S.A.",
"title": ""
},
{
"docid": "8ad20ab4523e4cc617142a2de299dd4a",
"text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.",
"title": ""
},
{
"docid": "ea982e20cc739fc88ed6724feba3d896",
"text": "We report new evidence on the emotional, demographic, and situational correlates of boredom from a rich experience sample capturing 1.1 million emotional and time-use reports from 3,867 U.S. adults. Subjects report boredom in 2.8% of the 30-min sampling periods, and 63% of participants report experiencing boredom at least once across the 10-day sampling period. We find that boredom is more likely to co-occur with negative, rather than positive, emotions, and is particularly predictive of loneliness, anger, sadness, and worry. Boredom is more prevalent among men, youths, the unmarried, and those of lower income. We find that differences in how such demographic groups spend their time account for up to one third of the observed differences in overall boredom. The importance of situations in predicting boredom is additionally underscored by the high prevalence of boredom in specific situations involving monotonous or difficult tasks (e.g., working, studying) or contexts where one's autonomy might be constrained (e.g., time with coworkers, afternoons, at school). Overall, our findings are consistent with cognitive accounts that cast boredom as emerging from situations in which engagement is difficult, and are less consistent with accounts that exclusively associate boredom with low arousal or with situations lacking in meaning. (PsycINFO Database Record",
"title": ""
},
{
"docid": "85221954ced857c449acab8ee5cf801e",
"text": "IMSI Catchers are used in mobile networks to identify and eavesdrop on phones. When, the number of vendors increased and prices dropped, the device became available to much larger audiences. Self-made devices based on open source software are available for about US$ 1,500.\n In this paper, we identify and describe multiple methods of detecting artifacts in the mobile network produced by such devices. We present two independent novel implementations of an IMSI Catcher Catcher (ICC) to detect this threat against everyone's privacy. The first one employs a network of stationary (sICC) measurement units installed in a geographical area and constantly scanning all frequency bands for cell announcements and fingerprinting the cell network parameters. These rooftop-mounted devices can cover large areas. The second implementation is an app for standard consumer grade mobile phones (mICC), without the need to root or jailbreak them. Its core principle is based upon geographical network topology correlation, facilitating the ubiquitous built-in GPS receiver in today's phones and a network cell capabilities fingerprinting technique. The latter works for the vicinity of the phone by first learning the cell landscape and than matching it against the learned data. We implemented and evaluated both solutions for digital self-defense and deployed several of the stationary units for a long term field-test. Finally, we describe how to detect recently published denial of service attacks.",
"title": ""
},
{
"docid": "d3e35963e85ade6e3e517ace58cb3911",
"text": "In this paper, we present the design and evaluation of PeerDB, a peer-to-peer (P2P) distributed data sharing system. PeerDB distinguishes itself from existing P2P systems in several ways. First, it is a full-fledge data management system that supports fine-grain content-based searching. Second, it facilitates sharing of data without shared schema. Third, it combines the power of mobile agents into P2P systems to perform operations at peers’ sites. Fourth, PeerDB network is self-configurable, i.e., a node can dynamically optimize the set of peers that it can communicate directly with based on some optimization criterion. By keeping peers that provide most information or services in close proximity (i.e, direct communication), the network bandwidth can be better utilized and system performance can be optimized. We implemented and evaluated PeerDB on a cluster of 32 Pentium II PCs. Our experimental results show that PeerDB can effectively exploit P2P technologies for distributed data sharing.",
"title": ""
},
{
"docid": "619af7dc39e21690c1d164772711d7ed",
"text": "The prevalence of smart mobile devices has promoted the popularity of mobile applications (a.k.a. apps). Supporting mobility has become a promising trend in software engineering research. This article presents an empirical study of behavioral service profiles collected from millions of users whose devices are deployed with Wandoujia, a leading Android app-store service in China. The dataset of Wandoujia service profiles consists of two kinds of user behavioral data from using 0.28 million free Android apps, including (1) app management activities (i.e., downloading, updating, and uninstalling apps) from over 17 million unique users and (2) app network usage from over 6 million unique users. We explore multiple aspects of such behavioral data and present patterns of app usage. Based on the findings as well as derived knowledge, we also suggest some new open opportunities and challenges that can be explored by the research community, including app development, deployment, delivery, revenue, etc.",
"title": ""
},
{
"docid": "7f47434e413230faf04849cf43a845fa",
"text": "Although surgical resection remains the gold standard for treatment of liver cancer, there is a growing need for alternative therapies. Microwave ablation (MWA) is an experimental procedure that has shown great promise for the treatment of unresectable tumors and exhibits many advantages over other alternatives to resection, such as radiofrequency ablation and cryoablation. However, the antennas used to deliver microwave power largely govern the effectiveness of MWA. Research has focused on coaxial-based interstitial antennas that can be classified as one of three types (dipole, slot, or monopole). Choked versions of these antennas have also been developed, which can produce localized power deposition in tissue and are ideal for the treatment of deepseated hepatic tumors.",
"title": ""
},
{
"docid": "8e8905e6ae4c4d6cd07afa157b253da9",
"text": "Blockchain technology enables the execution of collaborative business processes involving untrusted parties without requiring a central authority. Specifically, a process model comprising tasks performed by multiple parties can be coordinated via smart contracts operating on the blockchain. The consensus mechanism governing the blockchain thereby guarantees that the process model is followed by each party. However, the cost required for blockchain use is highly dependent on the volume of data recorded and the frequency of data updates by smart contracts. This paper proposes an optimized method for executing business processes on top of commodity blockchain technology. The paper presents a method for compiling a process model into a smart contract that encodes the preconditions for executing each task in the process using a space-optimized data structure. The method is empirically compared to a previously proposed baseline by replaying execution logs, including one from a real-life business process, and measuring resource consumption.",
"title": ""
},
{
"docid": "41820b51dbea5801281e6ca86defed2e",
"text": "This paper is an exploration in the semantics and pragmatics of linguistic feedback, i.e., linguistic mechanisms which enable the participants in spoken interaction to exchange information about basic communicative functions, such as contact, perception, understanding, and attitudinal reactions to the communicated content. Special attention is given to the type of reaction conveyed by feedback utterances, the communicative status of the information conveyed (i. e., the level of awareness and intentionality of the communicating sender), and the context sensitivity of feedback expressions. With regard to context sensitivity, which is one of the most characteristic features of feedback expressions, the discussion focuses on the way in which the type of speech act (mood), the factual polarity and the information status of the preceding utterance influence the interpretation of feedback utterances. The different content dimensions are exemplified by data from recorded dialogues and by data given through linguistic intuition. Finally, two different ways of formalizing the analysis are examined, one using attribute-value matrices and one based on the theory of situation semantics. ___________________________________________________________________ Authors' address: Jens Allwood, Joakim Nivre, and Elisabeth Ahlsén Department of Lingustics University of Göteborg Box 200 S-405 30 Göteborg Sweden",
"title": ""
},
{
"docid": "099dbf8d4c0b401cd3389583eb4495f3",
"text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.",
"title": ""
},
{
"docid": "777243cb514414dd225a9d5f41dc49b7",
"text": "We have built and tested a decision tool which will help organisations properly select one business process maturity model (BPMM) over another. This prototype consists of a novel questionnaire with decision criteria for BPMM selection, linked to a unique data set of 69 BPMMs. Fourteen criteria (questions) were elicited from an international Delphi study, and weighed by the analytical hierarchy process. Case studies have shown (non-)profit and academic applications. Our purpose was to describe criteria that enable an informed BPMM choice (conform to decision-making theories, rather than ad hoc). Moreover, we propose a design process for building BPMM decision tools. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "90d0d75ca8413dad8ffe42b6d064905b",
"text": "BACKGROUND\nDebate continues about the consequences of adolescent cannabis use. Existing data are limited in statistical power to examine rarer outcomes and less common, heavier patterns of cannabis use than those already investigated; furthermore, evidence has a piecemeal approach to reporting of young adult sequelae. We aimed to provide a broad picture of the psychosocial sequelae of adolescent cannabis use.\n\n\nMETHODS\nWe integrated participant-level data from three large, long-running longitudinal studies from Australia and New Zealand: the Australian Temperament Project, the Christchurch Health and Development Study, and the Victorian Adolescent Health Cohort Study. We investigated the association between the maximum frequency of cannabis use before age 17 years (never, less than monthly, monthly or more, weekly or more, or daily) and seven developmental outcomes assessed up to age 30 years (high-school completion, attainment of university degree, cannabis dependence, use of other illicit drugs, suicide attempt, depression, and welfare dependence). The number of participants varied by outcome (N=2537 to N=3765).\n\n\nFINDINGS\nWe recorded clear and consistent associations and dose-response relations between the frequency of adolescent cannabis use and all adverse young adult outcomes. After covariate adjustment, compared with individuals who had never used cannabis, those who were daily users before age 17 years had clear reductions in the odds of high-school completion (adjusted odds ratio 0·37, 95% CI 0·20-0·66) and degree attainment (0·38, 0·22-0·66), and substantially increased odds of later cannabis dependence (17·95, 9·44-34·12), use of other illicit drugs (7·80, 4·46-13·63), and suicide attempt (6·83, 2·04-22·90).\n\n\nINTERPRETATION\nAdverse sequelae of adolescent cannabis use are wide ranging and extend into young adulthood. Prevention or delay of cannabis use in adolescence is likely to have broad health and social benefits. Efforts to reform cannabis legislation should be carefully assessed to ensure they reduce adolescent cannabis use and prevent potentially adverse developmental effects.\n\n\nFUNDING\nAustralian Government National Health and Medical Research Council.",
"title": ""
},
{
"docid": "066d3a381ffdb2492230bee14be56710",
"text": "The third generation partnership project released its first 5G security specifications in March 2018. This paper reviews the proposed security architecture and its main requirements and procedures and evaluates them in the context of known and new protocol exploits. Although security has been improved from previous generations, our analysis identifies potentially unrealistic 5G system assumptions and protocol edge cases that can render 5G communication systems vulnerable to adversarial attacks. For example, null encryption and null authentication are still supported and can be used in valid system configurations. With no clear proposal to tackle pre-authentication message-based exploits, mobile devices continue to implicitly trust any serving network, which may or may not enforce a number of optional security features, or which may not be legitimate. Moreover, several critical security and key management functions are considered beyond the scope of the specifications. The comparison with known 4G long-term evolution protocol exploits reveals that the 5G security specifications, as of Release 15, Version 1.0.0, do not fully address the user privacy and network availability challenges.",
"title": ""
},
{
"docid": "d64b3b68f094ade7881f2bb0f2572990",
"text": "Large-scale transactional systems still suffer from not viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears as interesting from this perspective. A semantic layer built upon a basic blockchain infrastructure would join the benefits of flexible resource/service discovery and validation by consensus. This paper proposes a novel Service-oriented Architecture (SOA) based on a semantic blockchain. Registration, discovery, selection and payment operations are implemented as smart contracts, allowing decentralized execution and trust. Potential applications include material and immaterial resource marketplaces and trustless collaboration among autonomous entities, spanning many areas of interest for smart cities and communities.",
"title": ""
},
{
"docid": "5038df440c0db19e1588cc69b10cc3c4",
"text": "Electronic document management (EDM) technology has the potential to enhance the information management in construction projects considerably, without radical changes to current practice. Over the past fifteen years this topic has been overshadowed by building product modelling in the construction IT research world, but at present EDM is quickly being introduced in practice, in particular in bigger projects. Often this is done in the form of third party services available over the World Wide Web. In the paper, a typology of research questions and methods is presented, which can be used to position the individual research efforts which are surveyed in the paper. Questions dealt with include: What features should EMD systems have? How much are they used? Are there benefits from use and how should these be measured? What are the barriers to wide-spread adoption? Which technical questions need to be solved? Is there scope for standardisation? How will the market for such systems evolve?",
"title": ""
},
{
"docid": "14d5c8ed0b48d5625287fecaf5f72691",
"text": "In this paper we attempt to demonstrate the strengths of Hierarchical Hidden Markov Models (HHMMs) in the representation and modelling of musical structures. We show how relatively simple HHMMs, containing a minimum of expert knowledge, use their advantage of having multiple layers to perform well on tasks where flat Hidden Markov Models (HMMs) struggle. The examples in this paper show a HHMM’s performance at extracting higherlevel musical properties through the construction of simple pitch sequences, correctly representing the data set on which it was trained.",
"title": ""
},
{
"docid": "5320d7790348cc0e48dcf76428811d7b",
"text": "central and, in some ways, most familiar concepts in AI, the most fundamental question about it—What is it?—has rarely been answered directly. Numerous papers have lobbied for one or another variety of representation, other papers have argued for various properties a representation should have, and still others have focused on properties that are important to the notion of representation in general. In this article, we go back to basics to address the question directly. We believe that the answer can best be understood in terms of five important and distinctly different roles that a representation plays, each of which places different and, at times, conflicting demands on the properties a representation should have. We argue that keeping in mind all five of these roles provides a usefully broad perspective that sheds light on some long-standing disputes and can invigorate both research and practice in the field.",
"title": ""
},
{
"docid": "185f209e92314fdf15bbbe3238f1c616",
"text": "This paper studies the opportunistic routing (OR) in unmanned aerial vehicle (UAV) assisted wireless sensor networks (WSNs). We consider the scenario where a UAV collects data from randomly deployed mobile sensors that are moving with different velocities along a predefined route. Due to the dynamic topology, mobile sensors have different opportunities to communicate with the UAV. This paper proposes the All Neighbors Opportunistic Routing (ANOR) and Highest Velocity Opportunistic Routing (HVOR) protocols. In essence, ANOR forwards packets to all neighbors and HVOR forwards them to one neighbor with highest velocity. HVOR is a new OR protocol which dynamically selects route on a pre-transmission basis in multi-hop network. HVOR helps the sensor which has little opportunity to communicate with the UAV to determine which sensor, among all the sensors that are within its range, is the forwarder. The selected node forwards the packet. As a result, in each hop, the packet moves to the sensor that has higher opportunity to communicate with the UAV. In addition, we focus on various performance metrics, including Packets Delivery Ratio (PDR), Routing Overhead Ratio (ROR), Average Latency (AL) and Average Hop Count (AHC), to evaluate the proposed algorithms and compare them with a Direct Communication (DC) protocol. Through extensive simulations, we have shown that both HVOR and ANOR algorithms work better than DC. Moreover, the HVOR algorithm outperforms the other two algorithms in terms of the average overhead.",
"title": ""
},
{
"docid": "3dcb6a88aafb7a9c917ccdd306768f51",
"text": "Protein quality describes characteristics of a protein in relation to its ability to achieve defined metabolic actions. Traditionally, this has been discussed solely in the context of a protein's ability to provide specific patterns of amino acids to satisfy the demands for synthesis of protein as measured by animal growth or, in humans, nitrogen balance. As understanding of protein's actions expands beyond its role in maintaining body protein mass, the concept of protein quality must expand to incorporate these newly emerging actions of protein into the protein quality concept. New research reveals increasingly complex roles for protein and amino acids in regulation of body composition and bone health, gastrointestinal function and bacterial flora, glucose homeostasis, cell signaling, and satiety. The evidence available to date suggests that quality is important not only at the minimum Recommended Dietary Allowance level but also at higher intakes. Currently accepted methods for measuring protein quality do not consider the diverse roles of indispensable amino acids beyond the first limiting amino acid for growth or nitrogen balance. As research continues to evolve in assessing protein's role in optimal health at higher intakes, there is also need to continue to explore implications for protein quality assessment.",
"title": ""
},
{
"docid": "20a2390dede15514cd6a70e9b56f5432",
"text": "The ability to record and replay program executions with low overhead enables many applications, such as reverse-execution debugging, debugging of hard-toreproduce test failures, and “black box” forensic analysis of failures in deployed systems. Existing record-andreplay approaches limit deployability by recording an entire virtual machine (heavyweight), modifying the OS kernel (adding deployment and maintenance costs), requiring pervasive code instrumentation (imposing significant performance and complexity overhead), or modifying compilers and runtime systems (limiting generality). We investigated whether it is possible to build a practical record-and-replay system avoiding all these issues. The answer turns out to be yes — if the CPU and operating system meet certain non-obvious constraints. Fortunately modern Intel CPUs, Linux kernels and user-space frameworks do meet these constraints, although this has only become true recently. With some novel optimizations, our system RR records and replays real-world lowparallelism workloads with low overhead, with an entirely user-space implementation, using stock hardware, compilers, runtimes and operating systems. RR forms the basis of an open-source reverse-execution debugger seeing significant use in practice. We present the design and implementation of RR, describe its performance on a variety of workloads, and identify constraints on hardware and operating system design required to support our approach.",
"title": ""
}
] |
scidocsrr
|
d1b743428f24a649b697cff3b7c15ca3
|
Towards Accurate Distant Supervision for Relational Facts Extraction
|
[
{
"docid": "904db9e8b0deb5027d67bffbd345b05f",
"text": "Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. Unfortunately, most ER systems are restricted to produce labels from to a small set of entity classes, e.g., person, organization, location or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set of 112 tags, formulates the tagging problem as multi-class, multi-label classification, describes an unsupervised method for collecting training data, and presents the FIGER implementation. Experiments show that the system accurately predicts the tags for entities. Moreover, it provides useful information for a relation extraction system, increasing the F1 score by 93%. We make FIGER and its data available as a resource for future work.",
"title": ""
},
{
"docid": "3f2312e385fc1c9aafc6f9f08e2e2d4f",
"text": "Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.",
"title": ""
},
{
"docid": "9c44aba7a9802f1fe95fbeb712c23759",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
}
] |
[
{
"docid": "7974d8e70775f1b7ef4d8c9aefae870e",
"text": "Low-rank decomposition plays a central role in accelerating convolutional neural network (CNN), and the rank of decomposed kernel-tensor is a key parameter that determines the complexity and accuracy of a neural network. In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy. Combinatorial optimization is not feasible due to search space limitations. To restrict the search space and obtain the optimal rank, we define the space constraint parameters with a boundary condition. We also propose a linearly-approximated accuracy function to predict the fine-tuned accuracy of the optimized CNN model during the cost reduction. Experimental results on AlexNet and VGG-16 show that the proposed rank selection algorithm satisfies the accuracy constraint. Our method combined with truncated-SVD outperforms state-of-the-art methods in terms of inference and training time at almost the same accuracy.",
"title": ""
},
{
"docid": "15b8b0f3682e2eb7c1b1a62be65d6327",
"text": "Data augmentation is widely used to train deep neural networks for image classification tasks. Simply flipping images can help learning by increasing the number of training images by a factor of two. However, data augmentation in natural language processing is much less studied. Here, we describe two methods for data augmentation for Visual Question Answering (VQA). The first uses existing semantic annotations to generate new questions. The second method is a generative approach using recurrent neural networks. Experiments show the proposed schemes improve performance of baseline and state-of-the-art VQA algorithms.",
"title": ""
},
{
"docid": "500e8ab316398313c90a0ea374f28ee8",
"text": "Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth’s climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.",
"title": ""
},
{
"docid": "6c018b35bf2172f239b2620abab8fd2f",
"text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with cur- rent commodity hardware, NoHype is a significant advance in the security of cloud computing.",
"title": ""
},
{
"docid": "5e858796f025a9e2b91109835d827c68",
"text": "Several divergent application protocols have been proposed for Internet of Things (IoT) solutions including CoAP, REST, XMPP, AMQP, MQTT, DDS, and others. Each protocol focuses on a specific aspect of IoT communications. The lack of a protocol that can handle the vertical market requirements of IoT applications including machine-to-machine, machine-to-server, and server-to-server communications has resulted in a fragmented market between many protocols. In turn, this fragmentation is a main hindrance in the development of new services that require the integration of multiple IoT services to unlock new capabilities and provide horizontal integration among services. In this work, after articulating the major shortcomings of the current IoT protocols, we outline a rule-based intelligent gateway that bridges the gap between existing IoT protocols to enable the efficient integration of horizontal IoT services. While this intelligent gateway enhances the gloomy picture of protocol fragmentation in the context of IoT, it does not address the root cause of this fragmentation, which lies in the inability of the current protocols to offer a wide range of QoS guarantees. To offer a solution that stems the root cause of this protocol fragmentation issue, we propose a generic IoT protocol that is flexible enough to address the IoT vertical market requirements. In this regard, we enhance the baseline MQTT protocol by allowing it to support rich QoS features by exploiting a mix of IP multicasting, intelligent broker queuing management, and traffic analytics techniques. Our initial evaluation of the lightweight enhanced MQTT protocol reveals significant improvement over the baseline protocol in terms of the delay performance.",
"title": ""
},
{
"docid": "9888a7723089d2f1218e6e1a186a5e91",
"text": "This classic text offers you the key to understanding short circuits, open conductors and other problems relating to electric power systems that are subject to unbalanced conditions. Using the method of symmetrical components, acknowledged expert Paul M. Anderson provides comprehensive guidance for both finding solutions for faulted power systems and maintaining protective system applications. You'll learn to solve advanced problems, while gaining a thorough background in elementary configurations. Features you'll put to immediate use: Numerous examples and problems Clear, concise notation Analytical simplifications Matrix methods applicable to digital computer technology Extensive appendices",
"title": ""
},
{
"docid": "0ba8b4a1dc59e9f1fe68fbb1e491aa2b",
"text": "Capparis spinosa contained many biologically active chemical groups including, alkaloids, glycosides, tannins, phenolics, flavonoids, triterpenoids steroids, carbohydrates, saponins and a wide range of minerals and trace elements. It exerted many pharmacological effects including antimicrobial, cytotoxic, antidiabetic, anti-inflammatory, antioxidant, cardiovascular, bronchorelaxant and many other effects. The present review will designed to highlight the chemical constituents and the pharmacological effects of Capparis spinosa.",
"title": ""
},
{
"docid": "8abcf3e56e272c06da26a40d66afcfb0",
"text": "As internet use becomes increasingly integral to modern life, the hazards of excessive use are also becoming apparent. Prior research suggests that socially anxious individuals are particularly susceptible to problematic internet use. This vulnerability may relate to the perception of online communication as a safer means of interacting, due to greater control over self-presentation, decreased risk of negative evaluation, and improved relationship quality. To investigate these hypotheses, a general sample of 338 completed an online survey. Social anxiety was confirmed as a significant predictor of problematic internet use when controlling for depression and general anxiety. Social anxiety was associated with perceptions of greater control and decreased risk of negative evaluation when communicating online, however perceived relationship quality did not differ. Negative expectations during face-to-face interactions partially accounted for the relationship between social anxiety and problematic internet use. There was also preliminary evidence that preference for online communication exacerbates face-to-face avoidance.",
"title": ""
},
{
"docid": "46d5ecaeb529341dedcd724cfb3696bb",
"text": "Big Data stellt heute ein zentrales Thema der Informatik dar: Insbesondere durch die zunehmende Datafizierung unserer Umwelt entstehen neue und umfangreiche Datenquellen, während sich gleichzeitig die Verarbeitungsgeschwindigkeit von Daten wesentlich erhöht und diese Quellen somit immer häufiger in nahezu Echtzeit analysiert werden können. Neben der Bedeutung in der Informatik nimmt jedoch auch die Relevanz von Daten im täglichen Leben zu: Immer mehr Informationen sind das Ergebnis von Datenanalysen und immer häufiger werden Entscheidungen basierend auf Analyseergebnissen getroffen. Trotz der Relevanz von Daten und Datenverarbeitung im Alltag werden moderne Formen der Datenanalyse im Informatikunterricht bisher jedoch allenfalls am Rand betrachtet, sodass die Schülerinnen und Schüler weder die Möglichkeiten noch die Gefahren dieser Methoden erfahren können. In diesem Beitrag stellen wir daher ein prototypisches Unterrichtskonzept zum Thema Datenanalyse im Kontext von Big Data vor, in dem die Schülerinnen und Schüler wesentliche Grundlagen von Datenanalysen kennenlernen und nachvollziehen können. Um diese komplexen Systeme für den Informatikunterricht möglichst einfach zugänglich zu machen und mit realen Daten arbeiten zu können, wird dabei ein selbst implementiertes Datenstromsystem zur Verarbeitung des Datenstroms von Twitter eingesetzt.",
"title": ""
},
{
"docid": "4c2a41869b0ae000473a8623bd51b4c8",
"text": "This paper presents a novel voltage-angle-based field-weakening control scheme appropriate for the operation of permanent-magnet synchronous machines over a wide range of speed. At high rotational speed, the stator voltage is limited by the inverter dc bus voltage. To control the machine torque above the base speed, the proposed method controls the angle of the limited stator voltage by the integration of gain-scheduled q-axis current error. The stability of the drive is increased by a feedback loop, which compensates dynamic disturbances and smoothes the transition into field weakening. The proposed method can fully utilize the available dc bus voltage. Due to its simplicity, it is robust to the variation of machine parameters. Excellent performance of the proposed method is demonstrated through the experiments performed with and without speed and position sensors.",
"title": ""
},
{
"docid": "049c6062613d0829cf39cbfe4aedca7a",
"text": "Deep neural networks (DNN) are widely used in many applications. However, their deployment on edge devices has been difficult because they are resource hungry. Binary neural networks (BNN) help to alleviate the prohibitive resource requirements of DNN, where both activations and weights are limited to 1-bit. We propose an improved binary training method (BNN+), by introducing a regularization function that encourages training weights around binary values. In addition to this, to enhance model performance we add trainable scaling factors to our regularization functions. Furthermore, we use an improved approximation of the derivative of the sign activation function in the backward computation. These additions are based on linear operations that are easily implementable into the binary training framework. We show experimental results on CIFAR-10 obtaining an accuracy of 86.5%, on AlexNet and 91.3% with VGG network. On ImageNet, our method also outperforms the traditional BNN method and XNOR-net, using AlexNet by a margin of 4% and 2% top-1 accuracy respectively.",
"title": ""
},
{
"docid": "dd130195f82c005d1168608a0388e42d",
"text": "CONTEXT\nThe educational environment makes an important contribution to student learning. The DREEM questionnaire is a validated tool assessing the environment.\n\n\nOBJECTIVES\nTo translate and validate the DREEM into Greek.\n\n\nMETHODS\nForward translations from English were produced by three independent Greek translators and then back translations by five independent bilingual translators. The Greek DREEM.v0 that was produced was administered to 831 undergraduate students from six Greek medical schools. Cronbach's alpha and test-retest correlation were used to evaluate reliability and factor analysis was used to assess validity. Questions that increased alpha if deleted and/or sorted unexpectedly in factor analysis were further checked through two focus groups.\n\n\nFINDINGS\nQuestionnaires were returned by 487 respondents (59%), who were representative of all surveyed students by gender but not by year of study or medical school. The instrument's overall alpha was 0.90, and for the learning, teachers, academic, atmosphere and social subscales the alphas were 0.79 (expected 0.69), 0.78 (0.67), 0.69 (0.60), 0.68 (0.69), 0.48 (0.57), respectively. In a subset of the whole sample, test and retest alphas were both 0.90, and mean item scores highly correlated (p<0.001). Factor analysis produced meaningful subscales but not always matching the original ones. Focus group evaluation revealed possible misunderstanding for questions 17, 25, 29 and 38, which were revised in the DREEM.Gr.v1. The group mean overall scale score was 107.7 (SD 20.2), with significant differences across medical schools (p<0.001).\n\n\nCONCLUSION\nAlphas and test-retest correlation suggest the Greek translated and validated DREEM scale is a reliable tool for assessing the medical education environment and for informing policy. Factor analysis and focus group input suggest it is a valid tool. Reasonable school differences suggest the instrument's sensitivity.",
"title": ""
},
{
"docid": "54dd5e40748b13dafc672e143d20c3bc",
"text": "Reinforcement learning is a promising new approach for automatically developing effective policies for real-time self-management. RL can achieve superior performance to traditional methods, while requiring less built-in domain knowledge. Several case studies from real and simulated systems management applications demonstrate RL's promises and challenges. These studies show that standard online RL can learn effective policies in feasible training times. Moreover, a Hybrid RL approach can profit from any knowledge contained in an existing policy by training on the policy's observable behavior, without needing to interface directly to such knowledge",
"title": ""
},
{
"docid": "c1f095252c6c64af9ceeb33e78318b82",
"text": "Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through headmounted displays. We first introduce a method for calibrating monocular optical seethrough displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-offreedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.",
"title": ""
},
{
"docid": "003d004f57d613ff78bf39a35e788bf9",
"text": "Breast cancer is one of the most common cancer in women worldwide. It is typically diagnosed via histopathological microscopy imaging, for which image analysis can aid physicians for more effective diagnosis. Given a large variability in tissue appearance, to better capture discriminative traits, images can be acquired at different optical magnifications. In this paper, we propose an approach which utilizes joint colour-texture features and a classifier ensemble for classifying breast histopathology images. While we demonstrate the effectiveness of the proposed framework, an important objective of this work is to study the image classification across different optical magnification levels. We provide interesting experimental results and related discussions, demonstrating a visible classification invariance with cross-magnification training-testing. Along with magnification-specific model, we also evaluate the magnification independent model, and compare the two to gain some insights.",
"title": ""
},
{
"docid": "715fda02bad1633be9097cc0a0e68c8d",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "a42163e2a6625006d04a9b9f6dddf9ce",
"text": "This paper concludes the theme issue on structural health monitoring (SHM) by discussing the concept of damage prognosis (DP). DP attempts to forecast system performance by assessing the current damage state of the system (i.e. SHM), estimating the future loading environments for that system, and predicting through simulation and past experience the remaining useful life of the system. The successful development of a DP capability will require the further development and integration of many technology areas including both measurement/processing/telemetry hardware and a variety of deterministic and probabilistic predictive modelling capabilities, as well as the ability to quantify the uncertainty in these predictions. The multidisciplinary and challenging nature of the DP problem, its current embryonic state of development, and its tremendous potential for life-safety and economic benefits qualify DP as a 'grand challenge' problem for engineers in the twenty-first century.",
"title": ""
},
{
"docid": "1a0ed30b64fa7f8d39a12acfcadfd763",
"text": "This letter presents a smart shelf configuration for radio frequency identification (RFID) application. The proposed shelf has an embedded leaking microstrip transmission line with extended ground plane. This structure, when connected to an RFID reader, allows detecting tagged objects in close proximity with proper field confinement to avoid undesired reading of neighboring shelves. The working frequency band covers simultaneously the three world assigned RFID subbands at ultrahigh frequency (UHF). The concept is explored by full-wave simulations and it is validated with thorough experimental tests.",
"title": ""
},
{
"docid": "f233f816c84407a4acd694f540bb18a9",
"text": "Link prediction is a key technique in many applications such as recommender systems, where potential links between users and items need to be predicted. A challenge in link prediction is the data sparsity problem. In this paper, we address this problem by jointly considering multiple heterogeneous link prediction tasks such as predicting links between users and different types of items including books, movies and songs, which we refer to as the collective link prediction (CLP) problem. We propose a nonparametric Bayesian framework for solving the CLP problem, which allows knowledge to be adaptively transferred across heterogeneous tasks while taking into account the similarities between tasks. We learn the inter-task similarity automatically. We also introduce link functions for different tasks to correct their biases and skewness of distributions in their link data. We conduct experiments on several real world datasets and demonstrate significant improvements over several existing state-of-the-art methods.",
"title": ""
},
{
"docid": "86fdb9b60508f87c0210623879185c8c",
"text": "This paper proposes a novel Hierarchical Parsing Net (HPN) for semantic scene parsing. Unlike previous methods, which separately classify each object, HPN leverages global scene semantic information and the context among multiple objects to enhance scene parsing. On the one hand, HPN uses the global scene category to constrain the semantic consistency between the scene and each object. On the other hand, the context among all objects is also modeled to avoid incompatible object predictions. Specifically, HPN consists of four steps. In the first step, we extract scene and local appearance features. Based on these appearance features, the second step is to encode a contextual feature for each object, which models both the scene-object context (the context between the scene and each object) and the interobject context (the context among different objects). In the third step, we classify the global scene and then use the scene classification loss and a backpropagation algorithm to constrain the scene feature encoding. In the fourth step, a label map for scene parsing is generated from the local appearance and contextual features. Our model outperforms many state-of-the-art deep scene parsing networks on five scene parsing databases.",
"title": ""
}
] |
scidocsrr
|
0080127afb31502bf9ce634f93bd4a63
|
Augmenting End-to-End Dialogue Systems With Commonsense Knowledge
|
[
{
"docid": "d4a1acf0fedca674145599b4aa546de0",
"text": "Neural network models are capable of generating extremely natural sounding conversational interactions. However, these models have been mostly applied to casual scenarios (e.g., as “chatbots”) and have yet to demonstrate they can serve in more useful conversational applications. This paper presents a novel, fully data-driven, and knowledge-grounded neural conversation model aimed at producing more contentful responses. We generalize the widely-used Sequence-toSequence (SEQ2SEQ) approach by conditioning responses on both conversation history and external “facts”, allowing the model to be versatile and applicable in an open-domain setting. Our approach yields significant improvements over a competitive SEQ2SEQ baseline. Human judges found that our outputs are significantly more informative.",
"title": ""
},
{
"docid": "4410d7ed0d64e49e83111e6126cbc533",
"text": "We consider incorporating topic information as prior knowledge into the sequence to sequence (Seq2Seq) network structure with attention mechanism for response generation in chatbots. To this end, we propose a topic augmented joint attention based Seq2Seq (TAJASeq2Seq) model. In TAJA-Seq2Seq, information from input posts and information from topics related to the posts are simultaneously embedded into vector spaces by a content encoder and a topic encoder respectively. The two kinds of information interact with each other and help calibrate weights of each other in the joint attention mechanism in TAJA2Seq2Seq, and jointly determine the generation of responses in decoding. The model simulates how people behave in conversation and can generate well-focused and informative responses with the help of topic information. Empirical study on large scale human judged generation results show that our model outperforms Seq2Seq with attention on both response quality and diversity.",
"title": ""
}
] |
[
{
"docid": "7fd33ebd4fec434dba53b15d741fdee4",
"text": "We present a data-efficient representation learning approach to learn video representation with small amount of labeled data. We propose a multitask learning model ActionFlowNet to train a single stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. Our model effectively learns video representation from motion information on unlabeled videos. Our model significantly improves action recognition accuracy by a large margin (23.6%) compared to state-of-the-art CNN-based unsupervised representation learning methods trained without external large scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.",
"title": ""
},
{
"docid": "d574b43be735b5f560881a58c17f2acf",
"text": "People seek out situations that \"fit,\" but the concept of fit is not well understood. We introduce State Authenticity as Fit to the Environment (SAFE), a conceptual framework for understanding how social identities motivate the situations that people approach or avoid. Drawing from but expanding the authenticity literature, we first outline three types of person-environment fit: self-concept fit, goal fit, and social fit. Each type of fit, we argue, facilitates cognitive fluency, motivational fluency, and social fluency that promote state authenticity and drive approach or avoidance behaviors. Using this model, we assert that contexts subtly signal social identities in ways that implicate each type of fit, eliciting state authenticity for advantaged groups but state inauthenticity for disadvantaged groups. Given that people strive to be authentic, these processes cascade down to self-segregation among social groups, reinforcing social inequalities. We conclude by mapping out directions for research on relevant mechanisms and boundary conditions.",
"title": ""
},
{
"docid": "722e838f25efde8592c5eb7d8209ef45",
"text": "Machine learning algorithms are generally developed in computer science or adjacent disciplines and find their way into chemical modeling by a process of diffusion. Though particular machine learning methods are popular in chemoinformatics and quantitative structure-activity relationships (QSAR), many others exist in the technical literature. This discussion is methods-based and focused on some algorithms that chemoinformatics researchers frequently use. It makes no claim to be exhaustive. We concentrate on methods for supervised learning, predicting the unknown property values of a test set of instances, usually molecules, based on the known values for a training set. Particularly relevant approaches include Artificial Neural Networks, Random Forest, Support Vector Machine, k-Nearest Neighbors and naïve Bayes classifiers.",
"title": ""
},
{
"docid": "516153ca56874e4836497be9b7631834",
"text": "Shunt active power filter (SAPF) is the preeminent solution against nonlinear loads, current harmonics, and power quality problems. APF topologies for harmonic compensation use numerous high-power rating components and are therefore disadvantageous. Hybrid topologies combining low-power rating APF with passive filters are used to reduce the power rating of voltage source inverter (VSI). Hybrid APF topologies for high-power rating systems use a transformer with large numbers of passive components. In this paper, a novel four-switch two-leg VSI topology for a three-phase SAPF is proposed for reducing the system cost and size. The proposed topology comprises a two-arm bridge structure, four switches, coupling inductors, and sets of LC PFs. The third leg of the three-phase VSI is removed by eliminating the set of power switching devices, thereby directly connecting the phase with the negative terminals of the dc-link capacitor. The proposed topology enhances the harmonic compensation capability and provides complete reactive power compensation compared with conventional APF topologies. The new experimental prototype is tested in the laboratory to verify the results in terms of total harmonic distortion, balanced supply current, and harmonic compensation, following the IEEE-519 standard.",
"title": ""
},
{
"docid": "5d6cb50477423bf9fc1ea6c27ad0f1b9",
"text": "We propose a framework for general probabilistic multi-step time series regression. Specifically, we exploit the expressiveness and temporal nature of Sequence-to-Sequence Neural Networks (e.g. recurrent and convolutional structures), the nonparametric nature of Quantile Regression and the efficiency of Direct Multi-Horizon Forecasting. A new training scheme, forking-sequences, is designed for sequential nets to boost stability and performance. We show that the approach accommodates both temporal and static covariates, learning across multiple related series, shifting seasonality, future planned event spikes and coldstarts in real life large-scale forecasting. The performance of the framework is demonstrated in an application to predict the future demand of items sold on Amazon.com, and in a public probabilistic forecasting competition to predict electricity price and load.",
"title": ""
},
{
"docid": "1151348144ad2915f63f6b437e777452",
"text": "Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, publicly available data sets are few, often contain samples from subjects with too similar characteristics, and very often lack of specific information so that is not possible to select subsets of samples according to specific criteria. In this article, we present a new smartphone accelerometer dataset designed for activity recognition. The dataset includes 11,771 activities performed by 30 subjects of ages ranging from 18 to 60 years. Activities are divided in 17 fine grained classes grouped in two coarse grained classes: 9 types of activities of daily living (ADL) and 8 types of falls. The dataset has been stored to include all the information useful to select samples according to different criteria, such as the type of ADL performed, the age, the gender, and so on. Finally, the dataset has been benchmarked with two different classifiers and with different configurations. The best results are achieved with k-NN classifying ADLs only, considering personalization, and with both windows of 51 and 151 samples.",
"title": ""
},
{
"docid": "0022623017e81ee0a102da0524c83932",
"text": "Calcite is a new Eclipse plugin that helps address the difficulty of understanding and correctly using an API. Calcite finds the most popular ways to instantiate a given class or interface by using code examples. To allow the users to easily add these object instantiations to their code, Calcite adds items to the popup completion menu that will insert the appropriate code into the user’s program. Calcite also uses crowd sourcing to add to the menu instructions in the form of comments that help the user perform functions that people have identified as missing from the API. In a user study, Calcite improved users’ success rate by 40%.",
"title": ""
},
{
"docid": "744d409ba86a8a60fafb5c5602f6d0f0",
"text": "In this paper, we apply a context-sensitive technique for multimodal emotion recognition based on feature-level fusion of acoustic and visual cues. We use bidirectional Long ShortTerm Memory (BLSTM) networks which, unlike most other emotion recognition approaches, exploit long-range contextual information for modeling the evolution of emotion within a conversation. We focus on recognizing dimensional emotional labels, which enables us to classify both prototypical and nonprototypical emotional expressions contained in a large audiovisual database. Subject-independent experiments on various classification tasks reveal that the BLSTM network approach generally prevails over standard classification techniques such as Hidden Markov Models or Support Vector Machines, and achieves F1-measures of the order of 72 %, 65 %, and 55 % for the discrimination of three clusters in emotional space and the distinction between three levels of valence and activation, respectively.",
"title": ""
},
{
"docid": "f4bb9f769659436c79b67765145744ac",
"text": "Sparse Principal Component Analysis (S-PCA) is a novel framework for learning a linear, orthonormal basis representation for structure intrinsic to an ensemble of images. S-PCA is based on the discovery that natural images exhibit structure in a low-dimensional subspace in a sparse, scale-dependent form. The S-PCA basis optimizes an objective function which trades off correlations among output coefficients for sparsity in the description of basis vector elements. This objective function is minimized by a simple, robust and highly scalable adaptation algorithm, consisting of successive planar rotations of pairs of basis vectors. The formulation of S-PCA is novel in that multi-scale representations emerge for a variety of ensembles including face images, images from outdoor scenes and a database of optical flow vectors representing a motion class.",
"title": ""
},
{
"docid": "696069ce14bb37713421a01686555a92",
"text": "We propose a Bayesian trajectory prediction and criticality assessment system that allows to reason about imminent collisions of a vehicle several seconds in advance. We first infer a distribution of high-level, abstract driving maneuvers such as lane changes, turns, road followings, etc. of all vehicles within the driving scene by modeling the domain in a Bayesian network with both causal and diagnostic evidences. This is followed by maneuver-based, long-term trajectory predictions, which themselves contain random components due to the immanent uncertainty of how drivers execute specific maneuvers. Taking all uncertain predictions of all maneuvers of every vehicle into account, the probability of the ego vehicle colliding at least once within a time span is evaluated via Monte-Carlo simulations and given as a function of the prediction horizon. This serves as the basis for calculating a novel criticality measure, the Time-To-Critical-Collision-Probability (TTCCP) - a generalization of the common Time-To-Collision (TTC) in arbitrary, uncertain, multi-object driving environments and valid for longer prediction horizons. The system is applicable from highly-structured to completely non-structured environments and additionally allows the prediction of vehicles not behaving according to a specific maneuver class.",
"title": ""
},
{
"docid": "e83873daee4f8dae40c210987d9158e8",
"text": "Domain ontologies are important information sources for knowledge-based systems. Yet, building domain ontologies from scratch is known to be a very labor-intensive process. In this study, we present our semi-automatic approach to building an ontology for the domain of wind energy which is an important type of renewable energy with a growing share in electricity generation all over the world. Related Wikipedia articles are first processed in an automated manner to determine the basic concepts of the domain together with their properties and next the concepts, properties, and relationships are organized to arrive at the ultimate ontology. We also provide pointers to other engineering ontologies which could be utilized together with the proposed wind energy ontology in addition to its prospective application areas. The current study is significant as, to the best of our knowledge, it proposes the first considerably wide-coverage ontology for the wind energy domain and the ontology is built through a semi-automatic process which makes use of the related Web resources, thereby reducing the overall cost of the ontology building process.",
"title": ""
},
{
"docid": "5d557ecb67df253662e37d6ec030d055",
"text": "Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation.",
"title": ""
},
{
"docid": "1b3b2b8872d3b846120502a7a40e03d0",
"text": "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.",
"title": ""
},
{
"docid": "ed8fef21796713aba1a6375a840c8ba3",
"text": "PURPOSE\nThe novel self-paced maximal-oxygen-uptake (VO2max) test (SPV) may be a more suitable alternative to traditional maximal tests for elite athletes due to the ability to self-regulate pace. This study aimed to examine whether the SPV can be administered on a motorized treadmill.\n\n\nMETHODS\nFourteen highly trained male distance runners performed a standard graded exercise test (GXT), an incline-based SPV (SPVincline), and a speed-based SPV (SPVspeed). The GXT included a plateau-verification stage. Both SPV protocols included 5×2-min stages (and a plateau-verification stage) and allowed for self-pacing based on fixed increments of rating of perceived exertion: 11, 13, 15, 17, and 20. The participants varied their speed and incline on the treadmill by moving between different marked zones in which the tester would then adjust the intensity.\n\n\nRESULTS\nThere was no significant difference (P=.319, ES=0.21) in the VO2max achieved in the SPVspeed (67.6±3.6 mL·kg(-1)·min(-1), 95%CI=65.6-69.7 mL·kg(-1)·min(-1)) compared with that achieved in the GXT (68.6±6.0 mL·kg(-1)·min(-1), 95%CI=65.1-72.1 mL·kg(-1)·min(-1)). Participants achieved a significantly higher VO2max in the SPVincline (70.6±4.3 mL·kg(-1)·min(-1), 95%CI=68.1-73.0 mL·kg(-1)·min(-1)) than in either the GXT (P=.027, ES=0.39) or SPVspeed (P=.001, ES=0.76).\n\n\nCONCLUSIONS\nThe SPVspeed protocol produces VO2max values similar to those obtained in the GXT and may represent a more appropriate and athlete-friendly test that is more oriented toward the variable speed found in competitive sport.",
"title": ""
},
{
"docid": "06d42f15aa724120bd99f3ab3bed6053",
"text": "With today's unprecedented proliferation in smart-devices, the Internet of Things Vision has become more of a reality than ever. With the extreme diversity of applications running on these heterogeneous devices, numerous middle-ware solutions have consequently emerged to address IoT-related challenges. These solutions however, heavily rely on the cloud for better data management, integration, and processing. This might potentially compromise privacy, add latency, and place unbearable traffic load. In this paper, we propose The Hive, an edge-based middleware architecture and protocol, that enables heterogeneous edge devices to dynamically share data and resources for enhanced application performance and privacy. We implement a prototype of the Hive, test it for basic robustness, show its modularity, and evaluate its performance with a real world smart emotion recognition application running on edge devices.",
"title": ""
},
{
"docid": "8b0a09cbac4b1cbf027579ece3dea9ef",
"text": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence.",
"title": ""
},
{
"docid": "9646160d55bf5fe6d883ac62075c7560",
"text": "The authors provide a systematic security analysis on the sharing methods of three major cloud storage and synchronization services: Dropbox, Google Drive, and Microsoft SkyDrive. They show that all three services have security weaknesses that may result in data leakage without users' awareness.",
"title": ""
},
{
"docid": "b17fdc300edc22ab855d4c29588731b2",
"text": "Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.",
"title": ""
},
{
"docid": "a6f11cf1bf479fe72dcb8dabb53176ee",
"text": "This paper focuses on WPA and IEEE 802.11i protocols that represent two important solutions in the wireless environment. Scenarios where it is possible to produce a DoS attack and DoS flooding attacks are outlined. The last phase of the authentication process, represented by the 4-way handshake procedure, is shown to be unsafe from DoS attack. This can produce the undesired effect of memory exhaustion if a flooding DoS attack is conducted. In order to avoid DoS attack without increasing the complexity of wireless mobile devices too much and without changing through some further control fields of the frame structure of wireless security protocols, a solution is found and an extension of WPA and IEEE 802.11 is proposed. A protocol extension with three “static” variants and with a resource-aware dynamic approach is considered. The three enhancements to the standard protocols are achieved through some simple changes on the client side and they are robust against DoS and DoS flooding attack. Advantages introduced by the proposal are validated by simulation campaigns and simulation parameters such as attempted attacks, successful attacks, and CPU load, while the algorithm execution time is evaluated. Simulation results show how the three static solutions avoid memory exhaustion and present a good performance in terms of CPU load and execution time in comparison with the standard WPA and IEEE 802.11i protocols. However, if the mobile device presents different resource availability in terms of CPU and memory or if resource availability significantly changes in time, a dynamic approach that is able to switch among three different modalities could be more suitable.",
"title": ""
}
] |
scidocsrr
|
7370e36cddefd67a8bb8250286d22c20
|
The RowHammer problem and other issues we may face as memory becomes denser
|
[
{
"docid": "c97fe8ccd39a1ad35b5f09377f45aaa2",
"text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. In our research, we experimentally measure, characterize, analyze, and model error patterns in nanoscale flash memories. Based on the understanding developed using real flash memory chips, we design techniques for more efficient and effective error management than traditionally used costly error correction codes.",
"title": ""
},
{
"docid": "73284fdf9bc025672d3b97ca5651084a",
"text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. Understanding, characterizing, and modeling the distribution of the threshold voltages across different cells in a modern multi-level cell (MLC) flash memory can enable the design of more effective and efficient error correction mechanisms to combat this degradation. We show the first published experimental measurement-based characterization of the threshold voltage distribution of flash memory. To accomplish this, we develop a testing infrastructure that uses the read retry feature present in some 2Y-nm (i.e., 20-24nm) flash chips. We devise a model of the threshold voltage distributions taking into account program/erase (P/E) cycle effects, analyze the noise in the distributions, and evaluate the accuracy of our model. A key result is that the threshold voltage distribution can be modeled, with more than 95% accuracy, as a Gaussian distribution with additive white noise, which shifts to the right and widens as P/E cycles increase. The novel characterization and models provided in this paper can enable the design of more effective error tolerance mechanisms for future flash memories.",
"title": ""
},
{
"docid": "3763da6b72ee0a010f3803a901c9eeb2",
"text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.",
"title": ""
}
] |
[
{
"docid": "dc76a4d28841e703b961a1126bd28a39",
"text": "In this work, we study the problem of anomaly detection of the trajectories of objects in a visual scene. For this purpose, we propose a novel representation for trajectories utilizing covariance features. Representing trajectories via co-variance features enables us to calculate the distance between the trajectories of different lengths. After setting this proposed representation and calculation of distances, anomaly detection is achieved by sparse representations on nearest neighbours. Conducted experiments on both synthetic and real datasets show that the proposed method yields results which are outperforming or comparable with state of the art.",
"title": ""
},
{
"docid": "9b45bb1734e9afc34b14fa4bc47d8fba",
"text": "To achieve complex solutions in the rapidly changing world of e-commerce, it is impossible to go it alone. This explains the latest trend in IT outsourcing---global and partner-based alliances. But where do we go from here?",
"title": ""
},
{
"docid": "5772e4bfb9ced97ff65b5fdf279751f4",
"text": "Deep convolutional neural networks excel at sentiment polarity classification, but tend to require substantial amounts of training data, which moreover differs quite significantly between domains. In this work, we present an approach to feed generic cues into the training process of such networks, leading to better generalization abilities given limited training data. We propose to induce sentiment embeddings via supervision on extrinsic data, which are then fed into the model via a dedicated memorybased component. We observe significant gains in effectiveness on a range of different datasets in seven different languages.",
"title": ""
},
{
"docid": "fe89c8a17676b7767cfa40e7822b8d25",
"text": "Previous machine comprehension (MC) datasets are either too small to train endto-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental result on the test set of SQuAD shows that our model achieves a competitive result on the leaderboard.",
"title": ""
},
{
"docid": "4bee6ec901c365f3780257ed62b7c020",
"text": "There is no explicitly known example of a triple (g, a, x), where g ≥ 3 is an integer, a a digit in {0, . . . , g − 1} and x a real algebraic irrational number, for which one can claim that the digit a occurs infinitely often in the g–ary expansion of x. In 1909 and later in 1950, É. Borel considered such questions and suggested that the g–ary expansion of any algebraic irrational number in any base g ≥ 2 satisfies some of the laws that are satisfied by almost all numbers. For instance, the frequency where a given finite sequence of digits occurs should depend only on the base and on the length of the sequence. Hence there is a huge gap between the established theory and the expected state of the art. However, some progress have been made recently, mainly thanks to clever use of the Schmidt’s subspace Theorem. We review some of these results.",
"title": ""
},
{
"docid": "efd3280939a90041f50c4938cf886deb",
"text": "A distributed double integrator discrete time consensus protocol is presented along with stability analysis. The protocol will achieve consensus when the communication topology contains at least a directed spanning tree. Average consensus is achieved when the communication topology is strongly connected and balanced, where average consensus for double integrator systems is discussed. For second order systems average consensus occurs when the information states tend toward the average of the current information states not their initial values. Lastly, perturbation to the consensus protocol is addressed. Using a designed perturbation input, an algorithm is presented that accurately tracks the center of a vehicle formation in a decentralized manner.",
"title": ""
},
{
"docid": "6421979368a138e4b21ab7d9602325ff",
"text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.",
"title": ""
},
{
"docid": "0963b6b27b57575bd34ff8f5bd330536",
"text": "The human ocular surface spans from the conjunctiva to the cornea and plays a critical role in visual perception. Cornea, the anterior portion of the eye, is transparent and provides the eye with two-thirds of its focusing power and protection of ocular integrity. The cornea consists of five main layers, namely, corneal epithelium, Bowman’s layer, corneal stroma, Descemet’s membrane and corneal endothelium. The outermost layer of the cornea, which is exposed to the external environment, is the corneal epithelium. Corneal epithelial integrity and transparency are maintained by somatic stem cells (SC) that reside in the limbus. The limbus, an anatomical structure 1-2 mm wide, circumscribes the peripheral cornea and separates it from the conjunctiva (Cotsarelis et al., 1989, Davanger and Evensen, 1971) (Figure 1). Any damage to the ocular surface by burns, or various infections, can threaten vision. The most insidious of such damaging conditions is limbal stem cell deficiency (LSCD). Clinical signs of LSCD include corneal vascularization, chronic stromal inflammation, ingrowth of conjunctival epithelium onto the corneal surface and persistent epithelial defects (Lavker et al., 2004). Primary limbal stem cell deficiency is associated with aniridia and ectodermal dysplasia. Acquired limbal stem cell deficiency has been associated with inflammatory conditions (Stevens–Johnson syndrome (SJS), ocular cicatricial pemphigoid), ocular trauma (chemical and thermal burns), contact lens wear, corneal infection, neoplasia, peripheral ulcerative corneal disease and neurotrophic keratopathy (Dua et al., 2000, Jeng et al., 2011). Corneal stem cells and/or their niche are known to play important anti-angiogenic and anti-inflamatory roles in maintaining a normal corneal microenvironment, the destruction of which in LSCD, tips the balance toward pro-angiogenic conditions (Lim et al., 2009). For a long time, the primary treatment for LSCD has been transplantation of healthy keratolimbal tissue from autologous, allogenic, or cadaveric sources. In the late 1990s, cultured, autologous, limbal epithelial cell implants were used successfully to improve vision in two patients with chemical injury-induced LSCD (Pellegrini et al., 1997). Since then, transplantation of cultivated epithelial (stem) cells has become a treatment of choice for numerous LSCD patients worldwide. While the outcomes are promising, the variability of methodologies used to expand the cells, points to an underlying need for better standardization of ex vivo cultivation-based therapies and their outcome measures (Sangwan et al., 2005, Ti et al., 2004, Grueterich et al., 2002b, Kolli et al., 2010).",
"title": ""
},
{
"docid": "9a6ce56536585e54d3e15613b2fa1197",
"text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.",
"title": ""
},
{
"docid": "8c4d4567cf772a76e99aa56032f7e99e",
"text": "This paper discusses current perspectives on play and leisure and proposes that if play and leisure are to be accepted as viable occupations, then (a) valid and reliable measures of play must be developed, (b) interventions must be examined for inclusion of the elements of play, and (c) the promotion of play and leisure must be an explicit goal of occupational therapy intervention. Existing tools used by occupational therapists to assess clients' play and leisure are evaluated for the aspects of play and leisure they address and the aspects they fail to address. An argument is presented for the need for an assessment of playfulness, rather than of play or leisure activities. A preliminary model for the development of such an assessment is proposed.",
"title": ""
},
{
"docid": "e0320fc4031a4d1d09c9255012c3d03c",
"text": "We develop a model of premium sharing for firms that offer multiple insurance plans. We assume that firms offer one low quality plan and one high quality plan. Under the assumption of wage rigidities we found that the employee's contribution to each plan is an increasing function of that plan's premium. The effect of the other plan's premium is ambiguous. We test our hypothesis using data from the Employer Health Benefit Survey. Restricting the analysis to firms that offer both HMO and PPO plans, we measure the amount of the premium passed on to employees in response to a change in both premiums. We find evidence of large and positive effects of the increase in the plan's premium on the amount of the premium passed on to employees. The effect of the alternative plan's premium is negative but statistically significant only for the PPO plans.",
"title": ""
},
{
"docid": "dcd116e601c9155d60364c19a1f0dfb7",
"text": "The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure was developed to aid clinicians with a dimensional assessment of psychopathology; however, this measure resembles a screening tool for several symptomatic domains. The objective of the current study was to examine the basic parameters of sensitivity, specificity, positive and negative predictive power of the measure as a screening tool. One hundred and fifty patients in a correctional community center filled out the measure prior to a psychiatric evaluation, including the Mini International Neuropsychiatric Interview screen. The above parameters were calculated for the domains of depression, mania, anxiety, and psychosis. The results showed that the sensitivity and positive predictive power of the studied domains was poor because of a high rate of false positive answers on the measure. However, when the lowest threshold on the Cross-Cutting Symptom Measure was used, the sensitivity of the anxiety and psychosis domains and the negative predictive values for mania, anxiety and psychosis were good. In conclusion, while it is foreseeable that some clinicians may use the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a screening tool, it should not be relied on to identify positive findings. It functioned well in the negative prediction of mania, anxiety and psychosis symptoms.",
"title": ""
},
{
"docid": "5da2747dd2c3fe5263d8bfba6e23de1f",
"text": "We propose to transfer the content of a text written in a certain style to an alternative text written in a different style, while maintaining as much as possible of the original meaning. Our work is inspired by recent progress of applying style transfer to images, as well as attempts to replicate the results to text. Our model is a deep neural network based on Generative Adversarial Networks (GAN). Our novelty is replacing the discrete next-word prediction with prediction in the embedding space, which provides two benefits (1) train the GAN without using gradient approximations and (2) provide semantically related results even for failure cases.",
"title": ""
},
{
"docid": "b059f6d2e9f10e20417f97c05d92c134",
"text": "We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.",
"title": ""
},
{
"docid": "d06c91afbfd79e40d0d6fe326e3be957",
"text": "This meta-analysis included 66 studies (N = 4,176) on parental antecedents of attachment security. The question addressed was whether maternal sensitivity is associated with infant attachment security, and what the strength of this relation is. It was hypothesized that studies more similar to Ainsworth's Baltimore study (Ainsworth, Blehar, Waters, & Wall, 1978) would show stronger associations than studies diverging from this pioneering study. To create conceptually homogeneous sets of studies, experts divided the studies into 9 groups with similar constructs and measures of parenting. For each domain, a meta-analysis was performed to describe the central tendency, variability, and relevant moderators. After correction for attenuation, the 21 studies (N = 1,099) in which the Strange Situation procedure in nonclinical samples was used, as well as preceding or concurrent observational sensitivity measures, showed a combined effect size of r(1,097) = .24. According to Cohen's (1988) conventional criteria, the association is moderately strong. It is concluded that in normal settings sensitivity is an important but not exclusive condition of attachment security. Several other dimensions of parenting are identified as playing an equally important role. In attachment theory, a move to the contextual level is required to interpret the complex transactions between context and sensitivity in less stable and more stressful settings, and to pay more attention to nonshared environmental influences.",
"title": ""
},
{
"docid": "b92484f67bf2d3f71d51aee9fb7abc86",
"text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.",
"title": ""
},
{
"docid": "f1681e1c8eef93f15adb5a4d7313c94c",
"text": "The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach.",
"title": ""
},
{
"docid": "139ecd9ff223facaec69ad6532f650db",
"text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.",
"title": ""
},
{
"docid": "b5aad69e6a0f672cdaa1f81187a48d57",
"text": "In this paper, we propose novel methodologies for the automatic segmentation and recognition of multi-food images. The proposed methods implement the first modules of a carbohydrate counting and insulin advisory system for type 1 diabetic patients. Initially the plate is segmented using pyramidal mean-shift filtering and a region growing algorithm. Then each of the resulted segments is described by both color and texture features and classified by a support vector machine into one of six different major food classes. Finally, a modified version of the Huang and Dom evaluation index was proposed, addressing the particular needs of the food segmentation problem. The experimental results prove the effectiveness of the proposed method achieving a segmentation accuracy of 88.5% and recognition rate equal to 87%.",
"title": ""
}
] |
scidocsrr
|
47b0cae56e5e04ca4fa7e91be1b8c7d1
|
Empathy and Its Modulation in a Virtual Human
|
[
{
"docid": "8efee8d7c3bf229fa5936209c43a7cff",
"text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.",
"title": ""
}
] |
[
{
"docid": "ec8ffeb175dbd392e877d7704705f44e",
"text": "Business Intelligence (BI) solutions commonly aim at assisting decision-making processes by providing a comprehensive view over a company’s core business data and suitable abstractions thereof. Decision-making based on BI solutions therefore builds on the assumption that providing users with targeted, problemspecific fact data enables them to make informed and, hence, better decisions in their everyday businesses. In order to really provide users with all the necessary details to make informed decisions, we however believe that – in addition to conventional reports – it is essential to also provide users with information about the quality, i.e. with quality metadata, regarding the data from which reports are generated. Identifying a lack of support for quality metadata management in conventional BI solutions, in this paper we propose the idea of quality-aware reports and a possible architecture for quality-aware BI, able to involve the users themselves into the quality metadata management process, by explicitly soliciting and exploiting user feedback.",
"title": ""
},
{
"docid": "2d86a717ef4f83ff0299f15ef1df5b1b",
"text": "Proactive interference (PI) refers to the finding that memory for recently studied (target) information can be vastly impaired by the previous study of other (nontarget) information. PI can be reduced in a number of ways, for instance, by directed forgetting of the prior nontarget information, the testing of the prior nontarget information, or an internal context change before study of the target information. Here we report the results of four experiments, in which we demonstrate that all three forms of release from PI are accompanied by a decrease in participants’ response latencies. Because response latency is a sensitive index of the size of participants’ mental search set, the results suggest that release from PI can reflect more focused memory search, with the previously studied nontarget items being largely eliminated from the search process. Our results thus provide direct evidence for a critical role of retrieval processes in PI release. 2012 Elsevier Inc. All rights reserved. Introduction buildup of PI is caused by a failure to distinguish items Proactive interference (PI) refers to the finding that memory for recently studied information can be vastly impaired by the previous study of further information (e.g., Underwood, 1957). In a typical PI experiment, participants study a (target) list of items and are later tested on it. In the PI condition, participants study further (nontarget) lists that precede encoding of the target information, whereas in the no-PI condition participants engage in an unrelated distractor task. Typically, recall of the target list is worse in the PI condition than the no-PI condition, which reflects the PI finding. PI has been extensively studied in the past century, has proven to be a very robust finding, and has been suggested to be one of the major causes of forgetting in everyday life (e.g., Underwood, 1957; for reviews, see Anderson & Neely, 1996; Crowder, 1976). Over the years, a number of theories have been put forward to account for PI, most of them suggesting a critical role of retrieval processes in this form of forgetting. For instance, temporal discrimination theory suggests that . All rights reserved. ie.uni-regensburg.de from the most recent target list from items that appeared on the earlier nontarget lists. Specifically, the theory assumes that at test participants are unable to restrict their memory search to the target list and instead search the entire set of items that have previously been exposed (Baddeley, 1990; Crowder, 1976; Wixted & Rohrer, 1993). Another retrieval account attributes PI to a generation failure. Here, reduced recall levels of the target items are thought to be due to the impaired ability to access the material’s correct memory representation (Dillon & Thomas, 1975). In contrast to these retrieval explanations of PI, some theories also suggested a role of encoding factors in PI, assuming that the prior study of other lists impairs subsequent encoding of the target list. For instance, attentional resources may deteriorate across item lists and cause the target material to be less well processed in the presence than the absence of the preceding lists (e.g., Crowder, 1976).",
"title": ""
},
{
"docid": "b085860a27df6604c6dc38cd9fbd0b75",
"text": "A number of factors are considered during the analysis of automobile transportation with respect to increasing safety. One of the vital factors for night-time travel is temporary blindness due to increase in the headlight intensity. While headlight intensity provides better visual acuity, it simultaneously affects oncoming traffic. This problem is encountered when both drivers are using a higher headlight intensity setting. Also, increased speed of the vehicles due to decreased traffic levels at night increases the severity of accidents. In order to reduce accidents due to temporary driver blindness, a wireless sensor network (WSN) based controller could be developed to transmit sensor data in a faster and an efficient way between cars. Low latency allows faster headlight intensity adjustment between the vehicles to drastically reduce the cause of temporary blindness. An attempt has been made to come up with a system which would sense the intensity of the headlight of the oncoming vehicle and depending on the threshold headlight intensity being set in the system it would automatically reduce the intensity of the headlight of the oncoming vehicle using wireless sensor network thus reducing the condition of temporary blindness caused due to excessive exposure to headlights.",
"title": ""
},
{
"docid": "c68397cdbe538fd22fe88c0ff4e47879",
"text": "With the higher demand of the three dimensional (3D) imaging, a high definition real-time 3D video system based on FPGA is proposed. The system is made up of CMOS image sensors, DDR2 SDRAM, High Definition Multimedia Interface (HDMI) transmitter and Field Programmable Gate Array (FPGA). CMOS image sensor produces digital video streaming. DDR2 SDRAM buffers large amount of video data. FPGA processes the video streaming and realizes 3D data format conversion. HDMI transmitter is utilized to transmit 3D format data. Using the active 3D display device and shutter glasses, the system can achieve the living effect of real-time 3D high definition imaging. The resolution of the system is 720p@60Hz in 3D mode.",
"title": ""
},
{
"docid": "bd4dde3f5b7ec9dcd711a538b973ef1e",
"text": "Evaluation of MT evaluation measures is limited by inconsistent human judgment data. Nonetheless, machine translation can be evaluated using the well-known measures precision, recall, and their average, the F-measure. The unigrambased F-measure has significantly higher correlation with human judgments than recently proposed alternatives. More importantly, this standard measure has an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved. The relevant software is publicly available from http://nlp.cs.nyu.edu/GTM/.",
"title": ""
},
{
"docid": "c9b278eea7f915222cf8e99276fb5af2",
"text": "Pseudorandom generators based on linear feedback shift registers (LFSR) are a traditional building block for cryptographic stream ciphers. In this report, we review the general idea for such generators, as well as the most important techniques of cryptanalysis.",
"title": ""
},
{
"docid": "738f60fbfe177eec52057c8e5ab43e55",
"text": "From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains.",
"title": ""
},
{
"docid": "d353db098a7ca3bd9dc73b803e7369a2",
"text": "DevOps community advocates collaboration between development and operations staff during software deployment. However this collaboration may cause a conceptual deficit. This paper proposes a Unified DevOps Model (UDOM) in order to overcome the conceptual deficit. Firstly, the origin of conceptual deficit is discussed. Secondly, UDOM model is introduced that includes three sub-models: application and data model, workflow execution model and infrastructure model. UDOM model can help to scale down deployment time, mitigate risk, satisfy customer requirements, and improve productivity. Finally, this paper can be a roadmap for standardization DevOps terminologies, concepts, patterns, cultures, and tools.",
"title": ""
},
{
"docid": "1be6aecdc3200ed70ede2d5e96cb43be",
"text": "In this paper we are exploring different models and methods for improving the performance of text independent speaker identification system for mobile devices. The major issues in speaker recognition for mobile devices are (i) presence of varying background environment, (ii) effect of speech coding introduced by the mobile device, and (iii) impairments due to wireless channel. In this paper, we are proposing multi-SNR multi-environment speaker models and speech enhancement (preprocessing) methods for improving the performance of speaker recognition system in mobile environment. For this study, we have simulated five different background environments (Car, Factory, High frequency, pink noise and white Gaussian noise) using NOISEX data. Speaker recognition studies are carried out on TIMIT, cellular, and microphone speech databases. Autoassociative neural network models are explored for developing these multi-SNR multi-environment speaker models. The results indicate that the proposed multi-SNR multi-environment speaker models and speech enhancement preprocessing methods have enhanced the speaker recognition performance in the presence of different noisy environments.",
"title": ""
},
{
"docid": "63339fb80c01c38911994cd326e483a3",
"text": "Older adults are becoming a significant percentage of the world's population. A multitude of factors, from the normal aging process to the progression of chronic disease, influence the nutrition needs of this very diverse group of people. Appropriate micronutrient intake is of particular importance but is often suboptimal. Here we review the available data regarding micronutrient needs and the consequences of deficiencies in the ever growing aged population.",
"title": ""
},
{
"docid": "dfc7a31461a382f0574fadf36a8fd211",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Road Traffic Accident is very serious matter of life. The World Health Organization (WHO) reports that about 1.24 million people of the world die annually on the roads. The Institute for Health Metrics and Evaluation (IHME) estimated about 907,900, 1.3 million and 1.4 million deaths from road traffic injuries in 1990, 2010 and 2013, respectively. Uttar Pradesh in particular one of the state of India, experiences the highest rate of such accidents. Thus, methods to reduce accident severity are of great interest to traffic agencies and the public at large. In this paper, we applied data mining technologies to link recorded road characteristics to accident severity and developed a set of rules that could be used by the Indian Traffic Agency to improve safety and could help to save precious life.",
"title": ""
},
{
"docid": "689f1a8a6e8a1267dd45db32f3b711f6",
"text": "Today, the digitalization strides tremendously on all the sides of the modern society. One of the enablers to keep this process secure is the authentication. It touches many different areas of the connected world including payments, communications, and access right management. This manuscript attempts to shed the light on the authentication systems' evolution towards Multi-factor Authentication (MFA) from Singlefactor Authentication (SFA) and through Two-factor Authentication (2FA). Particularly, MFA is expected to be utilized for the user and vehicle-to-everything (V2X) interaction which is selected as descriptive scenario. The manuscript is focused on already available and potentially integrated sensors (factor providers) to authenticate the occupant from inside the vehicle. The survey on existing vehicular systems suitable for MFA is given. Finally, the MFA system based on reversed Lagrange polynomial, utilized in Shamir's Secret Sharing (SSS), was proposed to enable flexible in-car authentication. The solution was further extended covering the cases of authenticating the user even if some of the factors are mismatched or absent. The framework allows to qualify the missing factor and authenticate the user without providing the sensitive biometric data to the verification entity. The proposed is finally compared to conventional SSS.",
"title": ""
},
{
"docid": "9d3778091b10c6352559fb51faace714",
"text": "Aims to provide an analysis of the introduction of Internet-based skills into small firms. Seeks to contribute to the wider debate on the content and style of training most appropriate for employees and managers of SMEs.",
"title": ""
},
{
"docid": "5867f20ff63506be7eccb6c209ca03cc",
"text": "When creating a virtual environment open to the public a number of challenges have to be addressed. The equipment has to be chosen carefully in order to be be able to withstand hard everyday usage, and the application has not only to be robust and easy to use, but has also to be appealing to the user, etc. The current paper presents findings gathered from the creation of a multi-thematic virtual museum environment to be offered to visitors of real world museums. A number of design and implementation aspects are described along with an experiment designed to evaluate alternative approaches for implementing the navigation in a virtual museum environment. The paper is concluded with insights gained from the development of the virtual museum and portrays future research plans.",
"title": ""
},
{
"docid": "64f4a275dce1963b281cd0143f5eacdc",
"text": "Camera shake during exposure time often results in spatially variant blur effect of the image. The non-uniform blur effect is not only caused by the camera motion, but also the depth variation of the scene. The objects close to the camera sensors are likely to appear more blurry than those at a distance in such cases. However, recent non-uniform deblurring methods do not explicitly consider the depth factor or assume fronto-parallel scenes with constant depth for simplicity. While single image non-uniform deblurring is a challenging problem, the blurry results in fact contain depth information which can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring. We provide a novel layer-based solution using matting to partition the layers and an expectation-maximization scheme to solve this problem. This approach largely reduces the number of unknowns and makes the problem tractable. Experiments on challenging examples demonstrate that both depth and camera shake removal can be well addressed within the unified framework.",
"title": ""
},
{
"docid": "a1b50cf02ef0e37aed3d941ea281b885",
"text": "Collaborative filtering and content-based methods are two main approaches for recommender systems, and hybrid models use advantages of both. In this paper, we made a comparison of a hybrid model, which uses Bayesian Staked Denoising Autoencoders for content learning, and a collaborative filtering method, Bayesian Nonnegative Matrix Factorisation. It is shown that the tightly coupled hybrid model, Collaborative Deep Learning, gave more successful results comparing to collaborative filtering methods.",
"title": ""
},
{
"docid": "d9c4bdd95507ef497db65fc80d3508c5",
"text": "3D content creation is referred to as one of the most fundamental tasks of computer graphics. And many 3D modeling algorithms from 2D images or curves have been developed over the past several decades. Designers are allowed to align some conceptual images or sketch some suggestive curves, from front, side, and top views, and then use them as references in constructing a 3D model automatically or manually. However, to the best of our knowledge, no studies have investigated on 3D human body reconstruction in a similar manner. In this paper, we propose a deep learning based reconstruction of 3D human body shape from 2D orthographic views. A novel CNN-based regression network, with two branches corresponding to frontal and lateral views respectively, is designed for estimating 3D human body shape from 2D mask images. We train our networks separately to decouple the feature descriptors which encode the body parameters from different views, and fuse them to estimate an accurate human body shape. In addition, to overcome the shortage of training data required for this purpose, we propose some significantly data augmentation schemes for 3D human body shapes, which can be used to promote further research on this topic. Extensive experimental results demonstrate that visually realistic and accurate reconstructions can be achieved effectively using our algorithm. Requiring only binary mask images, our method can help users create their own digital avatars quickly, and also make it easy to create digital human body for 3D game, virtual reality, online fashion shopping.",
"title": ""
},
{
"docid": "39ed08e9a08b7d71a4c177afe8f0056a",
"text": "This paper proposes an anticipation model of potential customers’ purchasing behavior. This model is inferred from past purchasing behavior of loyal customers and the web server log files of loyal and potential customers by means of clustering analysis and association rules analysis. Clustering analysis collects key characteristics of loyal customers’ personal information; these are used to locate other potential customers. Association rules analysis extracts knowledge of loyal customers’ purchasing behavior, which is used to detect potential customers’ near-future interest in a star product. Despite using offline analysis to filter out potential customers based on loyal customers’ personal information and generate rules of loyal customers’ click streams based on loyal customers’ web log data, an online analysis which observes potential customers’ web logs and compares it with loyal customers’ click stream rules can more readily target potential customers who may be interested in the star products in the near future. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a40c00b1dc4a8d795072e0a8cec09d7a",
"text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.",
"title": ""
},
{
"docid": "e65d522f6b08eeebb8a488b133439568",
"text": "We propose a bootstrap learning algorithm for salient object detection in which both weak and strong models are exploited. First, a weak saliency map is constructed based on image priors to generate training samples for a strong model. Second, a strong classifier based on samples directly from an input image is learned to detect salient pixels. Results from multiscale saliency maps are integrated to further improve the detection performance. Extensive experiments on six benchmark datasets demonstrate that the proposed bootstrap learning algorithm performs favorably against the state-of-the-art saliency detection methods. Furthermore, we show that the proposed bootstrap learning approach can be easily applied to other bottom-up saliency models for significant improvement.",
"title": ""
}
] |
scidocsrr
|
7f0fd3cae088ad01ca2e50d33b24ec11
|
Insiders and Insider Threats - An Overview of Definitions and Mitigation Techniques
|
[
{
"docid": "27b9350b8ea1032e727867d34c87f1c3",
"text": "A field study and an experimental study examined relationships among organizational variables and various responses of victims to perceived wrongdoing. Both studies showed that procedural justice climate moderates the effect of organizational variables on the victim's revenge, forgiveness, reconciliation, or avoidance behaviors. In Study 1, a field study, absolute hierarchical status enhanced forgiveness and reconciliation, but only when perceptions of procedural justice climate were high; relative hierarchical status increased revenge, but only when perceptions of procedural justice climate were low. In Study 2, a laboratory experiment, victims were less likely to endorse vengeance or avoidance depending on the type of wrongdoing, but only when perceptions of procedural justice climate were high.",
"title": ""
}
] |
[
{
"docid": "e510e80f71d24783414cb5db279b2ec3",
"text": "The purpose of this research is to investigate negativity bias in secondary electronic word-of-mouth (eWOM). Two experiments, one laboratory and one field, were conducted to study actual dissemination behavior. The results demonstrate a strong tendency toward the negative in the dissemination of secondary commercial information. In line with Dynamic Social Impact Theory, our findings show that consumers disseminate online negative content to more recipients, for a longer period of time and in more elaborated and assimilated manner than they do positive information. The research is important from both a theoretical and managerial perspective. In the former, it enriches existing literature on eWOM by providing insight into theoretical dimensions of the negativity theory not examined before (duration, role of valence, elaboration, and assimilation). Findings provide managerial insights into designing more effective WOM and publicity campaigns. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "49ff096deb6621438286942b792d6af3",
"text": "Fast fashion is a business model that offers (the perception of) fashionable clothes at affordable prices. From an operations standpoint, fast fashion requires a highly responsive supply chain that can support a product assortment that is periodically changing. Though the underlying principles are simple, the successful execution of the fast-fashion business model poses numerous challenges. We present a careful examination of this business model and discuss its execution by analyzing the most prominent firms in the industry. We then survey the academic literature for research that is specifically relevant or directly related to fast fashion. Our goal is to expose the main components of fast fashion and to identify untapped research opportunities.",
"title": ""
},
{
"docid": "4d429f5f5d46dc1beb9b681c4578f34a",
"text": "Recently, many digital service providers started to gamify their services to promote continued service usage. Although gamification has drawn attention in both practice and research, it remains unclear how users experience gamified services and how these gameful experiences may increase service usage. This research adopts a user-centered perspective to reveal the underlying gameful experience dimensions during gamified service usage and how they drive continued service usage. Findings from Study 1 – a survey with 148 app-users – reveal four essential gameful experience dimensions (skill development, social comparison, social connectedness, and expressive freedom) and how they relate to game mechanics. Study 2, which is based on a survey among 821 app-users, shows that gameful experiences trigger continued service usage through two different types of motivation, namely autonomous and controlled motivation.",
"title": ""
},
{
"docid": "cc8e52fdb69a9c9f3111287905f02bfc",
"text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.",
"title": ""
},
{
"docid": "bc3924d12ee9d07a752fce80a67bb438",
"text": "Unsupervised semantic segmentation in the time series domain is a much-studied problem due to its potential to detect unexpected regularities and regimes in poorly understood data. However, the current techniques have several shortcomings, which have limited the adoption of time series semantic segmentation beyond academic settings for three primary reasons. First, most methods require setting/learning many parameters and thus may have problems generalizing to novel situations. Second, most methods implicitly assume that all the data is segmentable, and have difficulty when that assumption is unwarranted. Finally, most research efforts have been confined to the batch case, but online segmentation is clearly more useful and actionable. To address these issues, we present an algorithm which is domain agnostic, has only one easily determined parameter, and can handle data streaming at a high rate. In this context, we test our algorithm on the largest and most diverse collection of time series datasets ever considered, and demonstrate our algorithm's superiority over current solutions. Furthermore, we are the first to show that semantic segmentation may be possible at superhuman performance levels.",
"title": ""
},
{
"docid": "d1cf6f36fe964ac9e48f54a1f35e94c3",
"text": "Recognising patterns that correlate multiple events over time becomes increasingly important in applications from urban transportation to surveillance monitoring. In many realworld scenarios, however, timestamps of events may be erroneously recorded and events may be dropped from a stream due to network failures or load shedding policies. In this work, we present SimpMatch, a novel simplex-based algorithm for probabilistic evaluation of event queries using constraints over event orderings in a stream. Our approach avoids learning probability distributions for time-points or occurrence intervals. Instead, we employ the abstraction of segmented intervals and compute the probability of a sequence of such segments using the principle of order statistics. The algorithm runs in linear time to the number of missed timestamps, and shows high accuracy, yielding exact results if event generation is based on a Poisson process and providing a good approximation otherwise. As we demonstrate empirically, SimpMatch enables efficient and effective reasoning over event streams, outperforming state-ofthe-art methods for probabilistic evaluation of event queries by up to two orders of magnitude.",
"title": ""
},
{
"docid": "7931fa9541efa9a006a030655c59c5f4",
"text": "Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets.",
"title": ""
},
{
"docid": "b741698d7e4d15cb7f4e203f2ddbce1d",
"text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.",
"title": ""
},
{
"docid": "867516a6a54105e4759338e407bafa5a",
"text": "At the end of the criminal intelligence analysis process there are relatively well established and understood approaches to explicit externalisation and representation of thought that include theories of argumentation, narrative and hybrid approaches that include both of these. However the focus of this paper is on the little understood area of how to support users in the process of arriving at such representations from an initial starting point where little is given. The work is based on theoretical considerations and some initial studies with end users. In focusing on process we discuss the requirements of fluidity and rigor and how to gain traction in investigations, the processes of thinking involved including abductive, deductive and inductive reasoning, how users may use thematic sorting in early stages of investigation and how tactile reasoning may be used to externalize and facilitate reasoning in a productive way. In the conclusion section we discuss the issues raised in this work and directions for future work.",
"title": ""
},
{
"docid": "a6e84af8b1ba1d120e69c10f76eb7e2a",
"text": "Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which autoencoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model. The underlying principle shows that variational inference can be used a basic tool for learning, but with the intractable likelihood replaced by a synthetic likelihood, and the unknown posterior distribution replaced by an implicit distribution; both synthetic likelihoods and implicit posterior distributions can be learned using discriminators. This allows us to develop a natural fusion of variational auto-encoders and generative adversarial networks, combining the best of both these methods. We describe a unified objective for optimization, discuss the constraints needed to guide learning, connect to the wide range of existing work, and use a battery of tests to systematically and quantitatively assess the performance of our method.",
"title": ""
},
{
"docid": "4a5f05a7aea8a02cf70d6c644e06dda0",
"text": "Sales pipeline win-propensity prediction is fundamental to effective sales management. In contrast to using subjective human rating, we propose a modern machine learning paradigm to estimate the winpropensity of sales leads over time. A profile-specific two-dimensional Hawkes processes model is developed to capture the influence from seller’s activities on their leads to the win outcome, coupled with lead’s personalized profiles. It is motivated by two observations: i) sellers tend to frequently focus their selling activities and efforts on a few leads during a relatively short time. This is evidenced and reflected by their concentrated interactions with the pipeline, including login, browsing and updating the sales leads which are logged by the system; ii) the pending opportunity is prone to reach its win outcome shortly after such temporally concentrated interactions. Our model is deployed and in continual use to a large, global, B2B multinational technology enterprize (Fortune 500) with a case study. Due to the generality and flexibility of the model, it also enjoys the potential applicability to other real-world problems.",
"title": ""
},
{
"docid": "7d1470edd8d8c6bd589ea64a73189705",
"text": "Background modeling plays an important role for video surveillance, object tracking, and object counting. In this paper, we propose a novel deep background modeling approach utilizing fully convolutional network. In the network block constructing the deep background model, three atrous convolution branches with different dilate are used to extract spatial information from different neighborhoods of pixels, which breaks the limitation that extracting spatial information of the pixel from fixed pixel neighborhood. Furthermore, we sample multiple frames from original sequential images with increasing interval, in order to capture more temporal information and reduce the computation. Compared with classical background modeling approaches, our approach outperforms the state-of-art approaches both in indoor and outdoor scenes.",
"title": ""
},
{
"docid": "136fadcc21143fd356b48789de5fb2b0",
"text": "Cost-effective and scalable wireless backhaul solutions are essential for realizing the 5G vision of providing gigabits per second anywhere. Not only is wireless backhaul essential to support network densification based on small cell deployments, but also for supporting very low latency inter-BS communication to deal with intercell interference. Multiplexing backhaul and access on the same frequency band (in-band wireless backhaul) has obvious cost benefits from the hardware and frequency reuse perspective, but poses significant technology challenges. We consider an in-band solution to meet the backhaul and inter-BS coordination challenges that accompany network densification. Here, we present an analysis to persuade the readers of the feasibility of in-band wireless backhaul, discuss realistic deployment and system assumptions, and present a scheduling scheme for inter- BS communications that can be used as a baseline for further improvement. We show that an inband wireless backhaul for data backhauling and inter-BS coordination is feasible without significantly hurting the cell access capacities.",
"title": ""
},
{
"docid": "066eef8e511fac1f842c699f8efccd6b",
"text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "3c2b68ac95f1a9300585b73ca4b83122",
"text": "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3DPRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxelbased generative models while using a significantly reduced parameter space.",
"title": ""
},
{
"docid": "8a22f454a657768a3d5fd6e6ec743f5f",
"text": "In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learningbased search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.",
"title": ""
},
{
"docid": "e9e2887e7aae5315a8661c9d7456aa2e",
"text": "It has been shown that learning distributed word representations is highly useful for Twitter sentiment classification. Most existing models rely on a single distributed representation for each word. This is problematic for sentiment classification because words are often polysemous and each word can contain different sentiment polarities under different topics. We address this issue by learning topic-enriched multi-prototype word embeddings (TMWE). In particular, we develop two neural networks which 1) learn word embeddings that better capture tweet context by incorporating topic information, and 2) learn topic-enriched multiple prototype embeddings for each word. Experiments on Twitter sentiment benchmark datasets in SemEval 2013 show that TMWE outperforms the top system with hand-crafted features, and the current best neural network model.",
"title": ""
},
{
"docid": "bb43c98d05f3844354862d39f6fa1d2d",
"text": "There are always frustrations for drivers in finding parking spaces and being protected from auto theft. In this paper, to minimize the drivers' hassle and inconvenience, we propose a new intelligent secure privacy-preserving parking scheme through vehicular communications. The proposed scheme is characterized by employing parking lot RSUs to surveil and manage the whole parking lot and is enabled by communication between vehicles and the RSUs. Once vehicles that are equipped with wireless communication devices, which are also known as onboard units, enter the parking lot, the RSUs communicate with them and provide the drivers with real-time parking navigation service, secure intelligent antitheft protection, and friendly parking information dissemination. In addition, the drivers' privacy is not violated. Performance analysis through extensive simulations demonstrates the efficiency and practicality of the proposed scheme.",
"title": ""
},
{
"docid": "cf6b553b54ed94b9a6b516c51a4ad571",
"text": "The relationship of food and eating with affective and other clinical disorders is complex and intriguing. Serotoninergic dysfunction in seasonal affective disorder, atypical depression, premenstrual syndrome, anorexia and bulimia nervosa, and binge eating disorder is reviewed. Patients exhibiting a relationship between food and behaviour are found in various diagnostic categories. This points to a need to shift from nosological to functional thinking in psychiatry. It also means application of psychopharmacological treatments across diagnostic boundaries. The use of phototherapy and psychotropic drugs (MAO inhibitors and selective serotonin reuptake inhibitors like fluoxetine) in these disorders is discussed.",
"title": ""
},
{
"docid": "dc445d234bafaf115495ce1838163463",
"text": "In this paper, a novel camera tamper detection algorithm is proposed to detect three types of tamper attacks: covered, moved and defocused. The edge disappearance rate is defined in order to measure the amount of edge pixels that disappear in the current frame from the background frame while excluding edges in the foreground. Tamper attacks are detected if the difference between the edge disappearance rate and its temporal average is larger than an adaptive threshold reflecting the environmental conditions of the cameras. The performance of the proposed algorithm is evaluated for short video sequences with three types of tamper attacks and for 24-h video sequences without tamper attacks; the algorithm is shown to achieve acceptable levels of detection and false alarm rates for all types of tamper attacks in real environments.",
"title": ""
}
] |
scidocsrr
|
fa68821a1f52cc2104bad15f3a5cba67
|
The Work Tasks Motivation Scale for Teachers (WTMST)
|
[
{
"docid": "691fcf418d6073f7681846b30a1753a8",
"text": "Cognitive evaluation theory, which explains the effects of extrinsic motivators on intrinsic motivation, received some initial attention in the organizational literature. However, the simple dichotomy between intrinsic and extrinsic motivation made the theory difficult to apply to work settings. Differentiating extrinsic motivation into types that differ in their degree of autonomy led to self-determination theory, which has received widespread attention in the education, health care, and sport domains. This article describes self-determination theory as a theory of work motivation and shows its relevance to theories of organizational behavior. Copyright # 2005 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "2b2cbdced71e24eb25e20a186ad0af58",
"text": "The job demands-resources (JD-R) model proposes that working conditions can be categorized into 2 broad categories, job demands and job resources. that are differentially related to specific outcomes. A series of LISREL analyses using self-reports as well as observer ratings of the working conditions provided strong evidence for the JD-R model: Job demands are primarily related to the exhaustion component of burnout, whereas (lack of) job resources are primarily related to disengagement. Highly similar patterns were observed in each of 3 occupational groups: human services, industry, and transport (total N = 374). In addition, results confirmed the 2-factor structure (exhaustion and disengagement) of a new burnout instrument--the Oldenburg Burnout Inventory--and suggested that this structure is essentially invariant across occupational groups.",
"title": ""
},
{
"docid": "feafd64c9f81b07f7f616d2e36e15e0c",
"text": "Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job, and is defined by the three dimensions of exhaustion, cynicism, and inefficacy. The past 25 years of research has established the complexity of the construct, and places the individual stress experience within a larger organizational context of people's relation to their work. Recently, the work on burnout has expanded internationally and has led to new conceptual models. The focus on engagement, the positive antithesis of burnout, promises to yield new perspectives on interventions to alleviate burnout. The social focus of burnout, the solid research basis concerning the syndrome, and its specific ties to the work domain make a distinct and valuable contribution to people's health and well-being.",
"title": ""
}
] |
[
{
"docid": "31f67a8751afec0442b8a91b9c8e9aa6",
"text": "Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems-a 'locomotion robophysics'-which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others' research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.",
"title": ""
},
{
"docid": "39b7783e43526e5f825abe3bc8ebe01b",
"text": "The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities are restricted primarily by the huge amount of generated data requiring considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches however do not scale in the face of the \"increasing dimensionality\" problem, where a cluster point is represented by the entire customer consumption time series. To overcome this aspect we first rethink the way cluster points are created and designed, and then devise OPTIC, an efficient online time series clustering technique for demand response (DR), in order to analyze high volume, high dimensional energy consumption time series data at scale, and on the fly. OPTIC is randomized in nature, and provides optimal performance guarantees (Section 2.3.2) in a computationally efficient manner. Unlike prior work we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a 'killer' approach that breaks the \"of dimensionality\" in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles. We demonstrate the efficacy of OPTIC in practice using real-world data obtained from the fully operational USC microgrid.",
"title": ""
},
{
"docid": "72d47983c009c7892155fc3c491c9f52",
"text": "To improve the stability accuracy of stable platform of unmanned aerial vehicle (UAV), a line-of-sight stabilized control system is developed by using an inertial and optical-mechanical (fast steering mirror) combined method in a closed loop with visual feedback. The system is based on Peripheral Component Interconnect (PCI), included an image-deviation-obtained system and a combined controller using a PQ method. The method changes the series-wound structure to the shunt-wound structure of dual-input/single-output (DISO), and decouples the actuator range and frequency of inertial stabilization and fast steering mirror stabilization. Test results show the stability accuracy improves from 20μrad of inertial method to 5μrad of inertial and optical-mechanical combined method, and prove the effectiveness of the combined line-of-sight stabilization control system.",
"title": ""
},
{
"docid": "fca196c6900f43cf6fd711f8748c6768",
"text": "The fatigue fracture of structural details subjected to cyclic loads mostly occurs at a critical cross section with stress concentration. The welded joint is particularly dangerous location because of sinergetic harmful effects of stress concentration, tensile residual stresses, deffects, microstructural heterogeneity. Because of these reasons many methods for improving the fatigue resistance of welded joints are developed. Significant increase in fatigue strength and fatigue life was proved and could be attributed to improving weld toe profile, the material microstructure, removing deffects at the weld toe and modifying the original residual stress field. One of the most useful methods to improve fatigue behaviour of welded joints is TIG dressing. The magnitude of the improvement in fatigue performance depends on base material strength, type of welded joint and type of loading. Improvements of the fatigue behaviour of the welded joints in low-carbon structural steel treated by TIG dressing is considered in this paper.",
"title": ""
},
{
"docid": "9882c528dce5e9bb426d057ee20a520c",
"text": "The use of herbal medicinal products and supplements has increased tremendously over the past three decades with not less than 80% of people worldwide relying on them for some part of primary healthcare. Although therapies involving these agents have shown promising potential with the efficacy of a good number of herbal products clearly established, many of them remain untested and their use are either poorly monitored or not even monitored at all. The consequence of this is an inadequate knowledge of their mode of action, potential adverse reactions, contraindications, and interactions with existing orthodox pharmaceuticals and functional foods to promote both safe and rational use of these agents. Since safety continues to be a major issue with the use of herbal remedies, it becomes imperative, therefore, that relevant regulatory authorities put in place appropriate measures to protect public health by ensuring that all herbal medicines are safe and of suitable quality. This review discusses toxicity-related issues and major safety concerns arising from the use of herbal medicinal products and also highlights some important challenges associated with effective monitoring of their safety.",
"title": ""
},
{
"docid": "9eedeec21ab380c0466ed7edfe7c745d",
"text": "In this paper, we study the effect of using-grams (sequences of words of length n) for text categorization. We use an efficient algorithm for gener ating suchn-gram features in two benchmark domains, the 20 newsgroups data set and 21,578 REU TERS newswire articles. Our results with the rule learning algorithm R IPPER indicate that, after the removal of stop words, word sequences of length 2 or 3 are most useful. Using l o er sequences reduces classification performance.",
"title": ""
},
{
"docid": "1fc965670f71d9870a4eea93d129e285",
"text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bda0ae59319660987e9d2686d98e4b9a",
"text": "Due to the shift from software-as-a-product (SaaP) to software-as-a-service (SaaS), software components that were developed to run in a single address space must increasingly be accessed remotely across the network. Distribution middleware is frequently used to facilitate this transition. Yet a range of middleware platforms exist, and there are few existing guidelines to help the programmer choose an appropriate middleware platform to achieve desired goals for performance, expressiveness, and reliability. To address this limitation, in this paper we describe a case study of transitioning an Open Service Gateway Initiative (OSGi) service from local to remote access. Our case study compares five remote versions of this service, constructed using different distribution middleware platforms. These platforms are implemented by widely-used commercial technologies or have been proposed as improvements on the state of the art. In particular, we implemented a service-oriented version of our own Remote Batch Invocation abstraction. We compare and contrast these implementations in terms of their respective performance, expressiveness, and reliability. Our results can help remote service programmers make informed decisions when choosing middleware platforms for their applications.",
"title": ""
},
{
"docid": "b950d3b1bc2a30730b12e2f0016ecd9c",
"text": "Application distribution platforms - or app stores - such as Google Play or Apple AppStore allow users to submit feedback in form of ratings and reviews to downloaded applications. In the last few years, these platforms have become very popular to both application developers and users. However, their real potential for and impact on requirements engineering processes are not yet well understood. This paper reports on an exploratory study, which analyzes over one million reviews from the Apple AppStore. We investigated how and when users provide feedback, inspected the feedback content, and analyzed its impact on the user community. We found that most of the feedback is provided shortly after new releases, with a quickly decreasing frequency over time. Reviews typically contain multiple topics, such as user experience, bug reports, and feature requests. The quality and constructiveness vary widely, from helpful advices and innovative ideas to insulting offenses. Feedback content has an impact on download numbers: positive messages usually lead to better ratings and vice versa. Negative feedback such as shortcomings is typically destructive and misses context details and user experience. We discuss our findings and their impact on software and requirements engineering teams.",
"title": ""
},
{
"docid": "66467b6181882fade46d331d7a67da59",
"text": "This paper suggests an architectural approach of representing knowledge graph for complex question-answering. There are four kinds of entity relations added to our knowledge graph: syntactic dependencies, semantic role labels, named entities, and coreference links, which can be effectively applied to answer complex questions. As a proof of concept, we demonstrate how our knowledge graph can be used to solve complex questions such as arithmetics. Our experiment shows a promising result on solving arithmetic questions, achieving the 3folds cross-validation score of 71.75%.",
"title": ""
},
{
"docid": "c5a7c8457830fb2989e6087abf9fd252",
"text": "Paper prototyping highlights cost-effective usability testing techniques that produce fast results for improving an interface design. Practitioners and students interested in the design, development, and support of user interfaces will appreciate Snyder’s text for its focus on practical information and application. This book’s best features are the real life examples, anecdotes, and case studies that the author presents to demonstrate the uses of paper prototyping and its many benefits. While the author advocates paper prototyping, she also notes that paper prototyping techniques are one of many usability evaluation methods and that paper prototyping works best only in certain situations. Snyder reminds her readers that paper prototyping does not produce precise usability measurements, but rather it is a “blunt instrument” that rapidly uncovers qualitative information from actual users performing real tasks (p. 185). Hence, this book excludes in-depth theoretical discussions about methods and validity, but its pragmatic discussion on test design prepares the practitioner for dealing with several circumstances and making sound decisions based on testing method considerations.",
"title": ""
},
{
"docid": "918e7434798ebcfdf075fa93cbffba39",
"text": "Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.",
"title": ""
},
{
"docid": "c3f942a915c149a7fc9929e0404c61f2",
"text": "Distributed model training suffers from communication overheads due to frequent gradient updates transmitted between compute nodes. To mitigate these overheads, several studies propose the use of sparsified stochastic gradients. We argue that these are facets of a general sparsification method that can operate on any possible atomic decomposition. Notable examples include elementwise, singular value, and Fourier decompositions. We present Atomo, a general framework for atomic sparsification of stochastic gradients. Given a gradient, an atomic decomposition, and a sparsity budget, Atomo gives a random unbiased sparsification of the atoms minimizing variance. We show that recent methods such as QSGD and TernGrad are special cases of Atomo and that sparsifiying the singular value decomposition of neural networks gradients, rather than their coordinates, can lead to significantly faster distributed training.",
"title": ""
},
{
"docid": "a1774a08ffefd28785fbf3a8f4fc8830",
"text": "Bounds are given for the empirical and expected Rademacher complexity of classes of linear transformations from a Hilbert space H to a
nite dimensional space. The results imply generalization guarantees for graph regularization and multi-task subspace learning. 1 Introduction Rademacher averages have been introduced to learning theory as an e¢ cient complexity measure for function classes, motivated by tight, sample or distribution dependent generalization bounds ([10], [2]). Both the de
nition of Rademacher complexity and the generalization bounds extend easily from realvalued function classes to function classes with values in R, as they are relevant to multi-task learning ([1], [12]). There has been an increasing interest in multi-task learning which has shown to be very e¤ective in experiments ([7], [1]), and there have been some general studies of its generalisation performance ([4], [5]). For a large collection of tasks there are usually more data available than for a single task and these data may be put to a coherent use by some constraint of relatedness. A practically interesting case is linear multi-task learning, extending linear large margin classi
ers to vector valued large-margin classi
ers. Di¤erent types of constraints have been proposed: Evgeniou et al ([8], [9]) propose graph regularization, where the vectors de
ning the classi
ers of related tasks have to be near each other. They also show that their scheme can be implemented in the framework of kernel machines. Ando and Zhang [1] on the other hand require the classi
ers to be members of a common low dimensional subspace. They also give generalization bounds using Rademacher complexity, but these bounds increase with the dimension of the input space. This paper gives dimension free bounds which apply to both approaches. 1.1 Multi-task generalization and Rademacher complexity Suppose we have m classi
cation tasks, represented by m independent random variables X ; Y l taking values in X f 1; 1g, where X l models the random",
"title": ""
},
{
"docid": "38f6aaf5844ddb6e4ed0665559b7f813",
"text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.",
"title": ""
},
{
"docid": "226276adf10b40939e8cbb15addc6ba3",
"text": "The effects of EGb 761 on the CNS underlie one of its major therapeutic indications; i.e., individuals suffering from deteriorating cerebral mechanisms related to age-associated impairments of memory, attention and other cognitive functions. EGb 761 is currently used as symptomatic treatment for cerebral insufficiency that occurs during normal ageing or which may be due to degenerative dementia, vascular dementia or mixed forms of both, and for neurosensory disturbances. Depressive symptoms of patients with Alzheimer's disease (AD) and aged non-Alzheimer patients may also respond to treatment with EGb 761 since this extract has an \"anti-stress\" effect. Basic and clinical studies, conducted both in vitro and in vivo, support these beneficial neuroprotective effects of EGb 761. EGb 761 has several major actions; it enhances cognition, improves blood rheology and tissue metabolism, and opposes the detrimental effects of ischaemia. Several mechanisms of action are useful in explaining how EGb 761 benefits patients with AD and other age-related, neurodegenerative disorders. In animals, EGb 761 possesses antioxidant and free radical-scavenging activities, it reverses age-related losses in brain alpha 1-adrenergic, 5-HT1A and muscarinic receptors, protects against ischaemic neuronal death, preserves the function of the hippocampal mossy fiber system, increases hippocampal high-affinity choline uptake, inhibits the down-regulation of hippocampal glucocorticoid receptors, enhances neuronal plasticity, and counteracts the cognitive deficits that follow stress or traumatic brain injury. Identified chemical constituents of EGb 761 have been associated with certain actions. Both flavonoid and ginkgolide constituents are involved in the free radical-scavenging and antioxidant effects of EGb 761 which decrease tissue levels of reactive oxygen species (ROS) and inhibit membrane lipid peroxidation. Regarding EGb 761-induced regulation of cerebral glucose utilization, bilobalide increases the respiratory control ratio of mitochondria by protecting against uncoupling of oxidative phosphorylation, thereby increasing ATP levels, a result that is supported by the finding that bilobalide increases the expression of the mitochondrial DNA-encoded COX III subunit of cytochrome oxidase. With regard to its \"anti-stress\" effect, EGb 761 acts via its ginkgolide constituents to decrease the expression of the peripheral benzodiazepine receptor (PBR) of the adrenal cortex.",
"title": ""
},
{
"docid": "149c18850040c6073e84ad117b4e4eac",
"text": "Hemangiomas are the most common tumor of infantile period and usually involved sites are head and neck (%50), followed by trunk and extremities. Hemangioma is rarely described in genitals. We report a 17-months-old patient with a hemangioma of the preputium penis. The tumor was completely removed surgically and histological examination revealed an infantile hemangioma.",
"title": ""
},
{
"docid": "325b97e73ea0a50d2413757e95628163",
"text": "Due to the recent advancement in procedural generation techniques, games are presenting players with ever growing cities and terrains to explore. However most sandbox-style games situated in cities, do not allow players to wander into buildings. In past research, space planning techniques have already been utilized to generate suitable layouts for both building floor plans and room layouts. We introduce a novel rule-based layout solving approach, especially suited for use in conjunction with procedural generation methods. We show how this solving approach can be used for procedural generation by providing the solver with a userdefined plan. In this plan, users can specify objects to be placed as instances of classes, which in turn contain rules about how instances should be placed. This approach gives us the opportunity to use our generic solver in different procedural generation scenarios. In this paper, we will illustrate mainly with interior generation examples.",
"title": ""
},
{
"docid": "c36dac0c410570e84bf8634b32a0cac3",
"text": "The design of strategies for branching in Mixed Integer Programming (MIP) is guided by cycles of parameter tuning and offline experimentation on an extremely heterogeneous testbed, using the average performance. Once devised, these strategies (and their parameter settings) are essentially input-agnostic. To address these issues, we propose a machine learning (ML) framework for variable branching in MIP. Our method observes the decisions made by Strong Branching (SB), a time-consuming strategy that produces small search trees, collecting features that characterize the candidate branching variables at each node of the tree. Based on the collected data, we learn an easy-to-evaluate surrogate function that mimics the SB strategy, by means of solving a learning-to-rank problem, common in ML. The learned ranking function is then used for branching. The learning is instance-specific, and is performed on-the-fly while executing a branch-and-bound search to solve the instance. Experiments on benchmark instances indicate that our method produces significantly smaller search trees than existing heuristics, and is competitive with a state-of-the-art commercial solver.",
"title": ""
},
{
"docid": "29d98961d0ecde875bedcd4cfcb72026",
"text": "The claim that we have a moral obligation, where a choice can be made, to bring to birth the 'best' child possible, has been highly controversial for a number of decades. More recently Savulescu has labelled this claim the Principle of Procreative Beneficence. It has been argued that this Principle is problematic in both its reasoning and its implications, most notably in that it places lower moral value on the disabled. Relentless criticism of this proposed moral obligation, however, has been unable, thus far, to discredit this Principle convincingly and as a result its influence shows no sign of abating. I will argue that while criticisms of the implications and detail of the reasoning behind it are well founded, they are unlikely to produce an argument that will ultimately discredit the obligation that the Principle of Procreative Beneficence represents. I believe that what is needed finally and convincingly to reveal the fallacy of this Principle is a critique of its ultimate theoretical foundation, the notion of impersonal harm. In this paper I argue that while the notion of impersonal harm is intuitively very appealing, its plausibility is based entirely on this intuitive appeal and not on sound moral reasoning. I show that there is another plausible explanation for our intuitive response and I believe that this, in conjunction with the other theoretical criticisms that I and others have levelled at this Principle, shows that the Principle of Procreative Beneficence should be rejected.",
"title": ""
}
] |
scidocsrr
|
4067a8bb29b89d8861b311280b95fdf6
|
Smart Wheelchairs - State of the Art in an Emerging Market
|
[
{
"docid": "f5913b9635302192149270b600a15fcd",
"text": "Many people who use wheelchairs are unable to control a powered wheelchair with the standard joystick interface. A robotic wheelchair can provide users with driving assistance, taking over low level navigation to allow its user to travel efficiently and with greater ease. Our robotic wheelchair system, Wheelesley, consists of a standard powered wheelchair with an on-board computer, sensors and a graphical user interface running on a notebook computer. This paper describes the indoor navigation system and a user interface that can be easily customized for",
"title": ""
}
] |
[
{
"docid": "b94687da7db1a718a9a440a575a71a34",
"text": "SOS1 constraints require that at most one of a given set of variables is nonzero. In this article, we investigate a branch-and-cut algorithm to solve linear programs with SOS1 constraints. We focus on the case in which the SOS1 constraints overlap. The corresponding conflict graph can algorithmically be exploited, for instance, for improved branching rules, preprocessing, primal heuristics, and cutting planes. In an extensive computational study, we evaluate the components of our implementation on instances for three different applications. We also demonstrate the effectiveness of this approach by comparing it to the solution of a mixed-integer programming formulation, if the variables appearing in SOS1 constraints are bounded.",
"title": ""
},
{
"docid": "514d626cc44cf453706c0903cbc645fe",
"text": "Peer group analysis is a new tool for monitoring behavior over time in data mining situations. In particular, the tool detects individual objects that begin to behave in a way distinct from objects to which they had previously been similar. Each object is selected as a target object and is compared with all other objects in the database, using either external comparison criteria or internal criteria summarizing earlier behavior patterns of each object. Based on this comparison, a peer group of objects most similar to the target object is chosen. The behavior of the peer group is then summarized at each subsequent time point, and the behavior of the target object compared with the summary of its peer group. Those target objects exhibiting behavior most different from their peer group summary behavior are flagged as meriting closer investigation. The tool is intended to be part of the data mining process, involving cycling between the detection of objects that behave in anomalous ways and the detailed examination of those objects. Several aspects of peer group analysis can be tuned to the particular application, including the size of the peer group, the width of the moving behavior window being used, the way the peer group is summarized, and the measures of difference between the target object and its peer group summary. We apply the tool in various situations and illustrate its use on a set of credit card transaction data.",
"title": ""
},
{
"docid": "313dba70fea244739a45a9df37cdcf71",
"text": "We present KB-UNIFY, a novel approach for integrating the output of different Open Information Extraction systems into a single unified and fully disambiguated knowledge repository. KB-UNIFY consists of three main steps: (1) disambiguation of relation argument pairs via a sensebased vector representation and a large unified sense inventory; (2) ranking of semantic relations according to their degree of specificity; (3) cross-resource relation alignment and merging based on the semantic similarity of domains and ranges. We tested KB-UNIFY on a set of four heterogeneous knowledge bases, obtaining high-quality results. We discuss and provide evaluations at each stage, and release output and evaluation data for the use and scrutiny of the community1.",
"title": ""
},
{
"docid": "263e8b756862ab28d313578e3f6acbb1",
"text": "Goal posts detection is a critical robot soccer ability which is needed to be accurate, robust and efficient. A goal detection method using Hough transform to get the detailed goal features is presented in this paper. In the beginning, the image preprocessing and Hough transform implementation are described in detail. A new modification on the θ parameter range in Hough transform is explained and applied to speed up the detection process. Line processing algorithm is used to classify the line detected, and then the goal feature extraction method, including the line intersection calculation, is done. Finally, the goal distance from the robot body is estimated using triangle similarity. The experiment is performed on our university humanoid robot with the goal dimension of 225 cm in width and 110 cm in height, in yellow color. The result shows that the goal detection method, including the modification in Hough transform, is able to extract the goal features seen by the robot correctly, with the lowest speed of 5 frames per second. Additionally, the goal distance estimation is accomplished with maximum error of 20 centimeters.",
"title": ""
},
{
"docid": "4e97003a5609901f1f18be1ccbf9db46",
"text": "Fog computing is strongly emerging as a relevant and interest-attracting paradigm+technology for both the academic and industrial communities. However, architecture and methodological approaches are still prevalent in the literature, while few research activities have specifically targeted so far the issues of practical feasibility, cost-effectiveness, and efficiency of fog solutions over easily-deployable environments. In this perspective, this paper originally presents i) our fog-oriented framework for Internet-of-Things applications based on innovative scalability extensions of the open-source Kura gateway and ii) its Docker-based containerization over challenging and resource-limited fog nodes, i.e., RaspberryPi devices. Our practical experience and experimental work show the feasibility of using even extremely constrained nodes as fog gateways; the reported results demonstrate that good scalability and limited overhead can be coupled, via proper configuration tuning and implementation optimizations, with the significant advantages of containerization in terms of flexibility and easy deployment, also when working on top of existing, off-the-shelf, and limited-cost gateway nodes.",
"title": ""
},
{
"docid": "729840cdad8954ac58df1e8457a93796",
"text": "Prudent health care policies that encourage public-private participation in health care financing and provisioning have conferred on Singapore the advantage of flexible response as it faces the potentially conflicting challenges of becoming a regional medical hub attracting foreign patients and ensuring domestic access to affordable health care. Both the external and internal health care markets are two sides of the same coin, the competition to be decided on price and quality. For effective regulation, a tripartite model, involving not just the government and providers but empowered consumers, is needed. Government should distance itself from the provider role while providers should compete - and cooperate - to create higher-value health care systems than what others can offer. Health care policies should be better informed by health policy research.",
"title": ""
},
{
"docid": "72453a8b2b70c781e1a561b5cfb9eecb",
"text": "Pair Programming is an innovative collaborative software development methodology. Anecdotal and empirical evidence suggests that this agile development method produces better quality software in reduced time with higher levels of developer satisfaction. To date, little explanation has been offered as to why these improved performance outcomes occur. In this qualitative study, we focus on how individual differences, and specifically task conflict, impact results of the collaborative software development process and related outcomes. We illustrate that low to moderate levels of task conflict actually enhance performance, while high levels mitigate otherwise anticipated positive results.",
"title": ""
},
{
"docid": "4019beb9fa6ec59b4b19c790fe8ff832",
"text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.",
"title": ""
},
{
"docid": "d10ab66c987495aefc34ce55eb89e110",
"text": "Bartter syndrome (BS) type 1, also referred to antenatal BS, is a genetic tubulopathy with hypokalemic metabolic alkalosis and prenatal onset of polyuria leading to polyhydramnios. It has been shown that BS type 1 is caused by mutations in the SLC12A1 gene encoding bumetanide-sensitive Na-K-2Cl– cotransporter (NKCC2). We had the opportunity to care for two unrelated Japanese patients of BS type 1 with typical manifestations including polyhydramnios, prematurity, hypokalemia, alkalosis, and infantile-onset nephrocalcinosis. Analysis of the SLC12A1 gene demonstrated four novel mutations: N117X, G257S, D792fs and N984fs. N117X mutation is expected to abolish most of the NKCC2 protein, whereas G257, which is evolutionary conserved, resides in the third transmemebrane domain. The latter two frameshift mutations reside in the intra-cytoplasmic C-terminal domain, which illustrates the importance of this domain for the NKCC2 function. In conclusion, we found four novel SLC12A1 mutations in two BS type 1 patients. Development of effective therapy for hypercalciuria is mandatory to prevent nephrocalcinosis and resultant renal failure.",
"title": ""
},
{
"docid": "674d347526e5ea2677eec2f2b816935b",
"text": "PATIENT\nMale, 70 • Male, 84.\n\n\nFINAL DIAGNOSIS\nAppendiceal mucocele and pseudomyxoma peritonei.\n\n\nSYMPTOMS\n-.\n\n\nMEDICATION\n-.\n\n\nCLINICAL PROCEDURE\n-.\n\n\nSPECIALTY\nSurgery.\n\n\nOBJECTIVE\nRare disease.\n\n\nBACKGROUND\nMucocele of the appendix is an uncommon cystic lesion characterized by distension of the appendiceal lumen with mucus. Most commonly, it is the result of epithelial proliferation, but it can also be caused by inflammation or obstruction of the appendix. When an underlying mucinous cystadenocarcinoma exists, spontaneous or iatrogenic rupture of the mucocele can lead to mucinous intraperitoneal ascites, a syndrome known as pseudomyxoma peritonei.\n\n\nCASE REPORT\nWe report 2 cases that represent the clinical extremities of this heterogeneous disease; an asymptomatic mucocele of the appendix in a 70-year-old female and a case of pseudomyxoma peritonei in an 84-year-old male. Subsequently, we review the current literature focusing to the optimal management of both conditions.\n\n\nCONCLUSIONS\nMucocele of the appendix is a rare disease, usually diagnosed on histopathologic examination of appendectomized specimens. Due to the existing potential for malignant transformation and pseudomyxoma peritonei caused by rupture of the mucocele, extensive preoperative evaluation and thorough intraoperative gastrointestinal and peritoneal examination is required.",
"title": ""
},
{
"docid": "800befb527094bc6169809c6765d5d15",
"text": "The problem of scheduling a weighted directed acyclic graph (DAG) to a set of homogeneous processors to minimize the completion time has been extensively studied. The NPcompleteness of the problem has instigated researchers to propose a myriad of heuristic algorithms. While these algorithms are individually reported to be efficient, it is not clear how effective they are and how well they compare against each other. A comprehensive performance evaluation and comparison of these algorithms entails addressing a number of difficult issues. One of the issues is that a large number of scheduling algorithms are based upon radically different assumptions, making their comparison on a unified basis a rather intricate task. Another issue is that there is no standard set of benchmarks that can be used to evaluate and compare these algorithms. Furthermore, most algorithms are evaluated using small problem sizes, and it is not clear how their performance scales with the problem size. In this paper, we first provide a taxonomy for classifying various algorithms into different categories according to their assumptions and functionalities. We then propose a set of benchmarks which are of diverse structures without being biased towards a particular scheduling technique and still allow variations in important parameters. We have evaluated 15 scheduling algorithms, and compared them using the proposed benchmarks. Based upon the design philosophies and principles behind these algorithms, we interpret the results and discuss why some algorithms perform better than the others.",
"title": ""
},
{
"docid": "5218f1ddf65b9bc1db335bb98d7e71b4",
"text": "The popular Biometric used to authenticate a person is Fingerprint which is unique and permanent throughout a person’s life. A minutia matching is widely used for fingerprint recognition and can be classified as ridge ending and ridge bifurcation. In this paper we projected Fingerprint Recognition using Minutia Score Matching method (FRMSM). For Fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserves the quality of the image and extract the minutiae from the thinned image. The false matching ratio is better compared to the existing algorithm. Key-words:-Fingerprint Recognition, Binarization, Block Filter Method, Matching score and Minutia.",
"title": ""
},
{
"docid": "1f7f0b82bf5822ee51313edfd1cb1593",
"text": "With the promise of meeting future capacity demands, 3-D massive-MIMO/full dimension multiple-input-multiple-output (FD-MIMO) systems have gained much interest in recent years. Apart from the huge spectral efficiency gain, 3-D massive-MIMO/FD-MIMO systems can also lead to significant reduction of latency, simplified multiple access layer, and robustness to interference. However, in order to completely extract the benefits of the system, accurate channel state information is critical. In this paper, a channel estimation method based on direction of arrival (DoA) estimation is presented for 3-D millimeter wave massive-MIMO orthogonal frequency division multiplexing (OFDM) systems. To be specific, the DoA is estimated using estimation of signal parameter via rotational invariance technique method, and the root mean square error of the DoA estimation is analytically characterized for the corresponding MIMO-OFDM system. An ergodic capacity analysis of the system in the presence of DoA estimation error is also conducted, and an optimum power allocation algorithm is derived. Furthermore, it is shown that the DoA-based channel estimation achieves a better performance than the traditional linear minimum mean squared error estimation in terms of ergodic throughput and minimum chordal distance between the subspaces of the downlink precoders obtained from the underlying channel and the estimated channel.",
"title": ""
},
{
"docid": "fbc41e1582d2d6d3896f89de1568de3c",
"text": "Vehicular ad-hoc NETworks (VANETs) have received considerable attention in recent years, due to its unique characteristics, which are different from mobile ad-hoc NETworks, such as rapid topology change, frequent link failure, and high vehicle mobility. The main drawback of VANETs network is the network instability, which yields to reduce the network efficiency. In this paper, we propose three algorithms: cluster-based life-time routing (CBLTR) protocol, Intersection dynamic VANET routing (IDVR) protocol, and control overhead reduction algorithm (CORA). The CBLTR protocol aims to increase the route stability and average throughput in a bidirectional segment scenario. The cluster heads (CHs) are selected based on maximum lifetime among all vehicles that are located within each cluster. The IDVR protocol aims to increase the route stability and average throughput, and to reduce end-to-end delay in a grid topology. The elected intersection CH receives a set of candidate shortest routes (SCSR) closed to the desired destination from the software defined network. The IDVR protocol selects the optimal route based on its current location, destination location, and the maximum of the minimum average throughput of SCSR. Finally, the CORA algorithm aims to reduce the control overhead messages in the clusters by developing a new mechanism to calculate the optimal numbers of the control overhead messages between the cluster members and the CH. We used SUMO traffic generator simulators and MATLAB to evaluate the performance of our proposed protocols. These protocols significantly outperform many protocols mentioned in the literature, in terms of many parameters.",
"title": ""
},
{
"docid": "b16d8dddf037e60ba9121f85e7d9b45a",
"text": "Bike sharing systems, aiming at providing the missing links in public transportation systems, are becoming popular in urban cities. A key to success for a bike sharing systems is the effectiveness of rebalancing operations, that is, the efforts of restoring the number of bikes in each station to its target value by routing vehicles through pick-up and drop-off operations. There are two major issues for this bike rebalancing problem: the determination of station inventory target level and the large scale multiple capacitated vehicle routing optimization with outlier stations. The key challenges include demand prediction accuracy for inventory target level determination, and an effective optimizer for vehicle routing with hundreds of stations. To this end, in this paper, we develop a Meteorology Similarity Weighted K-Nearest-Neighbor (MSWK) regressor to predict the station pick-up demand based on large-scale historic trip records. Based on further analysis on the station network constructed by station-station connections and the trip duration, we propose an inter station bike transition (ISBT) model to predict the station drop-off demand. Then, we provide a mixed integer nonlinear programming (MINLP) formulation of multiple capacitated bike routing problem with the objective of minimizing total travel distance. To solve it, we propose an Adaptive Capacity Constrained K-centers Clustering (AdaCCKC) algorithm to separate outlier stations (the demands of these stations are very large and make the optimization infeasible) and group the rest stations into clusters within which one vehicle is scheduled to redistribute bikes between stations. In this way, the large scale multiple vehicle routing problem is reduced to inner cluster one vehicle routing problem with guaranteed feasible solutions. Finally, the extensive experimental results on the NYC Citi Bike system show the advantages of our approach for bike demand prediction and large-scale bike rebalancing optimization.",
"title": ""
},
{
"docid": "8389e7702dbd2c54395d871758361b0e",
"text": "Recently, significant advances have been made in ROBOTICS, ARTIFICIAL INTELLIGENCE and other COGNITIVE related fields, allowing tomakemuch sophisticated biomimetic robotics systems. In addition, enormous number of robots have been designed and assembled, explicitly realize biological oriented behaviors. Towards much skill behaviors and adequate grasping abilities (i.e. ARTICULATION and DEXTEROUS MANIPULATION), a new phase of dexterous hands have been developed recently with biomimetically oriented and bio-inspired functionalities. In this respect, this manuscript brings a detailed survey of biomimetic based dexterous robotics multi-fingered hands. The aim of this survey, is to find out the state of the art on dexterous robotics end-effectors, known in literature as (ROBOTIC HANDS) or (DEXTEROUSMULTI-FINGERED) robot hands. Hence, this review finds such biomimetic approaches using a framework that permits for a common description of biological and technical based hand manipulation behavior. In particular, the manuscript focuses on a number of developments that have been taking place over the past two decades, and some recent developments related to this biomimetic field of research. In conclusions, the study found that, there are rich research efforts in terms of KINEMATICS, DYNAMICS, MODELING and CONTROL methodologies. The survey is also indicating that, the topic of biomimetic inspired robotics systems make significant contributions to robotics hand design, in four main directions for future research. First, they provide a genuine world test of models of biologically inspired hand designs and dexterous manipulation behaviors. Second, they provide novel manipulation articulations and mechanisms available for industrial and domestic uses, most notably in the field of human like hand design and real world applications. Third, this survey has also indicated that, there are quite large number of attempts to acquire biologically inspired hands. These attempts were almost successful, where they exposed more novel ideas for further developments. Such inspirations were directed towards a number of topics related (HAND MECHANICS AND DESIGN), (HAND TACTILE SENSING), (HAND FORCE SENSING), (HAND SOFT ACTUATION) and (HANDCONFIGURATIONAND TOPOLOGY). FOURTH, in terms of employing AI related sciences and cognitive thinking, it was also found that, rare and exceptional research attempts were directed towards the employment of biologically inspired thinking, i.e. (AI, BRAIN AND COGNITIVE SCIENCES) for hand upper control and towards much sophisticated dexterous movements. Throughout the study, it has been found there are number of efforts in terms of mechanics and hand designs, tactical sensing, however, for hand soft actuation, it seems this area of research is still far away from having a realistic muscular type fingers and hand movements. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "242b854de904075d04e7044e680dc281",
"text": "Adopting a motivational perspective on adolescent development, these two companion studies examined the longitudinal relations between early adolescents' school motivation (competence beliefs and values), achievement, emotional functioning (depressive symptoms and anger), and middle school perceptions using both variable- and person-centered analytic techniques. Data were collected from 1041 adolescents and their parents at the beginning of seventh and the end of eight grade in middle school. Controlling for demographic factors, regression analyses in Study 1 showed reciprocal relations between school motivation and positive emotional functioning over time. Furthermore, adolescents' perceptions of the middle school learning environment (support for competence and autonomy, quality of relationships with teachers) predicted their eighth grade motivation, achievement, and emotional functioning after accounting for demographic and prior adjustment measures. Cluster analyses in Study 2 revealed several different patterns of school functioning and emotional functioning during seventh grade that were stable over 2 years and that were predictably related to adolescents' reports of their middle school environment. Discussion focuses on the developmental significance of schooling for multiple adjustment outcomes during adolescence.",
"title": ""
},
{
"docid": "54768e28b5980d735fed93096de20f5d",
"text": "................................................................................................... vii Chapter",
"title": ""
},
{
"docid": "9c951a9bf159c073471107bd3c1663ee",
"text": "Collision tumor means the coexistence of two adjacent, but histologically distinct tumors without histologic admixture in the same tissue or organ. Collision tumors involving ovaries are extremely rare. We present a case of 45-year-old parous woman with a left dermoid cyst, with unusual imaging findings, massive ascites and peritoneal carcinomatosis. The patient underwent cytoreductive surgery. The histopathology revealed a collision tumor consisting of an invasive serous cystadenocarcinoma and a dermoid cyst.",
"title": ""
}
] |
scidocsrr
|
b72287732bf3573bd69c5b8e44b71fed
|
Identifying Justifications in Written Dialogs by Classifying Text as Argumentative
|
[
{
"docid": "5cd48ee461748d989c40f8e0f0aa9581",
"text": "Being able to identify which rhetorical relations (e.g., contrast or explanation) hold between spans of text is important for many natural language processing applications. Using machine learning to obtain a classifier which can distinguish between different relations typically depends on the availability of manually labelled training data, which is very time-consuming to create. However, rhetorical relations are sometimes lexically marked, i.e., signalled by discourse markers (e.g., because, but, consequently etc.), and it has been suggested (Marcu and Echihabi, 2002) that the presence of these cues in some examples can be exploited to label them automatically with the corresponding relation. The discourse markers are then removed and the automatically labelled data are used to train a classifier to determine relations even when no discourse marker is present (based on other linguistic cues such as word co-occurrences). In this paper, we investigate empirically how feasible this approach is. In particular, we test whether automatically labelled, lexically marked examples are really suitable training material for classifiers that are then applied to unmarked examples. Our results suggest that training on this type of data may not be such a good strategy, as models trained in this way do not seem to generalise very well to unmarked data. Furthermore, we found some evidence that this behaviour is largely independent of the classifiers used and seems to lie in the data itself (e.g., marked and unmarked examples may be too dissimilar linguistically and removing unambiguous markers in the automatic labelling process may lead to a meaning shift in the examples).",
"title": ""
}
] |
[
{
"docid": "a54bc0f529d047aa273d834c53c15bd3",
"text": "This paper presents an optimized methodology to folded cascode operational transconductance amplifier (OTA) design. The design is done in different regions of operation, weak inversion, strong inversion and moderate inversion using the gm/ID methodology in order to optimize MOS transistor sizing. Using 0.35μm CMOS process, the designed folded cascode OTA achieves a DC gain of 77.5dB and a unity-gain frequency of 430MHz in strong inversion mode. In moderate inversion mode, it has a 92dB DC gain and provides a gain bandwidth product of around 69MHz. The OTA circuit has a DC gain of 75.5dB and unity-gain frequency limited to 19.14MHZ in weak inversion region. Keywords—CMOS IC design, Folded Cascode OTA, gm/ID methodology, optimization.",
"title": ""
},
{
"docid": "c962837c549d0ef45384bb7a67805f63",
"text": "In this study, hypotheses of astrologers about the predominance of specific astrological factors in the birth charts of serial killers are tested. In particular, Mutable signs (Gemini, Virgo, Sagittarius and Pisces), the 12 principles (12 house, Pisces, Neptune) and specific Moon aspects are expected to be frequent among serial killers as compared to the normal population. A sample consisting of two datasets of male serial killers was analysed: one set consisting of birth data with a reliable birth time (N=77) and another set with missing birth times (12:00 AM was used, N=216). The set with known birth times was selected from AstroDatabank and an astrological publication. The set with unknown birth times was selected from three specialised sources on the Internet. Various control groups were obtained by shuffle methods, by time-shifting and by sampling birth data of 6,000 persons from AstroDatabank. Theoretically expected frequencies of astrological factors were derived from the control samples. Probability-density functions were obtained by bootstrap methods and were used to estimate significance levels. It is found that serial killers are frequently born when celestial factors are in Mutable signs (with birth time: p=0.005, effect size=0.31; without birth time: p=0.002, effect size=0.25). The frequency of planets in the 12 house is significantly high (p=0.005, effect size=0.31, for birth times only) and the frequency distribution of Moon aspects deviates from the theoretical distribution in the whole sample (p=0.0005) and in the dataset with known birth time (p=0.001). It is concluded that, based on the two datasets, some of the claims of astrologers cannot be rejected. Introduction This investigation is stimulated by astrological research articles about the birth charts of serial killers (Marks, 2002; Wickenburg, 1994). Unfortunately, the hypotheses by astrologer Liz Greene and others about the natal charts of psychopaths and serial killers (Greene & Sasportas, 1987a,b; Greene, 2003) are not tested in these research articles. I feel the challenge to do that in a more detailed study. Evidence for astrology is largely lacking, though some studies have reported small effect sizes (Ertel & Irving, 1996). It could be reasoned that if some of these astrological effects are genuine, higher effect sizes are to be expected in samples that are more homogeneous with respect to certain behavioural or psychological factors. Serial killers can be considered quite homogeneous with respect to common psychological traits, which manifest at an early age, and with respect to background, which is mostly dysfunctional, involving sexual or physical abuse, drugs or alcoholism (Schechter & Everitt, 1997; Schechter, 2004). If astrology works, then one would say that serial killers should display common factors in their birth charts. Correlation 25(2) 2008 Jan Ruis: Serial Killers 8 8 Specific sorts of behaviour, such as animal torture, fire setting, bed-wetting, frequent daydreaming, social isolation and chronic lying, characterize the childhood of serial killers. As adults they are addicted to their fantasies, have a lack of empathy, a constant urge for stimuli, a lack of external goals in life, a low self-control and a low sense of personal power. The lack of empathy or remorse, the superficial charm and the inflated self-appraisal are features of psychopathy. Serial killers have also been said to have a form of narcissistic personality disorder with a mental addiction to kill (Vaknin, 2003). 
In many psychological profiles of serial killers the central theme is frequent daydreaming, starting in early childhood and associated with a powerful imagination. It leads to the general fantasy world in which the serial killer begins to live as protection against isolation and feelings of inadequacy arising from this isolation (Ressler & Burgess, 1990). Many serial killers enact their crimes because of the detailed and violent fantasies (power, torture and murder) that have developed in their minds as early as the ages of seven and eight. These aggressive daydreams, developed as children, continue to develop and expand through adolescence into maturity, where they are finally released into the real world (Wilson & Seamen, 1992). With each successive victim, they attempt to fine tune the act, striving to make the real life experiences as perfect as the fantasy (Apsche, 1993). Serial killers, of which 90% are males, must be distinguished from the other type of multiple murderers: rampage killers (Schechter, 2004), which include mass and spree killers. The typical serial killer murders a single victim at separate events, while reverting to normal life in between the killings, and may continue with this pattern for years. In contrast, a mass murderer kills many people at a single event that usually ends with actual or provoked suicide, such as the Columbine High School massacre. A spree killer can be seen as a mobile mass murderer, such as Charles Starkweather and Andrew Cunanan. The FBI definition of a serial killer states that they must have committed at least three murders at different locations with a cooling-off period in between the killings. This definition is criticized because it is not specific enough with respect to the nature of the crimes and the number of kills (Schechter, 2004). A person with the mentality of a serial killer, who gets arrested after the second sexually motivated murder, would not be a serial killer in this definition. Therefore, the National Institutes of Justice have formulated another description, which was adopted in the present study: “a series of two or more murders, committed as separate events, usually, but not always, by one offender acting alone. The crimes may occur over a period of time ranging from hours to years. Quite often the motive is psychological, and the offender’s behaviour and the physical evidence observed at the crime scene will often reflect sadistic, sexual overtones.” Five different categories of serial killer are usually distinguished (Newton, 2006; Schechter & Everitt, 1997, Schechter, 2004): 1. Visionary. Is subject to hallucinations or visions that tell him to kill. Examples are Ed Gein and Herbert Mullin. 2. Missionary. Goes on hunting \"missions\" to eradicate a specific group of people (prostitutes, ethnic groups). Missionary killers believe that their acts are justified on the basis that they are getting rid of a certain type of person and thus doing society a favour. Examples are Gary Ridgway and Carroll Cole. 3. Hedonistic, with two subtypes: Correlation 25(2) 2008 Jan Ruis: Serial Killers 9 9 a. Lust-motivated: associates sexual pleasure with murder. Torturing and necrophilia are eroticised experiences. An example is Jeffrey Dahmer. b. Thrill-motivated: gets a thrill from killing; excitement and euphoria at victim's final anguish. An example is Dennis Rader. 4. Powerand control-seeking. 
The primary motive is the urgent need to assert supremacy over a helpless victim, to compensate for their own deep-seated feelings of worthlessness by completely dominating a victim. An example is Ted Bundy. 5. Gain-motivated. Most criminals who commit multiple murders for financial gain (such as bank robbers, hit men from the drug business or the mafia) are not classified as serial killers, because they are motivated by economic gain rather than psychopathological compulsion. Many serial killers may take a trophy from the crime scene, or even some valuables, but financial gain is not a driving motive. Still, there is no clear boundary between profit killers and other kinds of serial killer. For instance, Marcel Petiot liked to watch his victims die through a peephole after having robbed them of their possessions. Here sadism as a psychological motive was clearly involved. Both sadism and greed also motivated Henry Howard Holmes, and sadism was at least a second motive in “bluebeard” killers such as Harry Powers (who murder a series of wives, fiancées or partners for profit). Schechter (2004) argues that all bluebeards, like Henry Landru, George Joseph Smith and John George Haigh, are driven by both greed and sadism. Other investigators, such as Aamodt from Radford University (2008), categorize bluebeards in the group of power-motivated serial killers. Holmes (1996) distinguishes six types of serial killer: visionary, missionary, lust-oriented hedonist, thrill-oriented hedonist, the power/control freak and the comfort-oriented hedonist. In this typology, bluebeards are placed in the comfort type of serial killer group. Other arguments that bluebeards should be included in the present study are that they fit the serial killer definition of the National Institutes of Justice, and that like typical serial killers, they engage in planning activities, target a specific type of (vulnerable) victim, kill out of free will and at their own initiative, avoid being captured, and pretend to be normal citizens while hiding the crimes. Other multiple killers for profit, such as bank robbers and other armed robbers, hit men from the drugs scene, the mafia or other gangs, are generally not considered serial killers. Neither are other types of multiple murderers such as war criminals, mass murderers (including terrorists), spree killers and murderers who kill their partner out of jealousy. These killers are not incorporated in this study. Since definite boundaries between the different types of multiple murderers are hard to draw (Newton, 2006), I used a checklist in order to define serial killers in this study and to distinguish between serial killers and the other types of multiple murderer. This checklist is based on the characteristics of serial and rampage killers (Holmes, 1996; Schechter, 2004) and is included in Appendix A. For reasons of homogeneity, and because females usually have different motives as compared to males and over 90% of serial killers are males, this investigation was restricted to mal",
"title": ""
},
{
"docid": "e15ee429fd04286d7668486af088e1f2",
"text": "This paper reviews the applications of Augmented Reality with an emphasis on aerospace manufacturing processes. A contextual overview of Lean Manufacturing, aerospace industry, Virtual Reality (VR) and Augmented Reality (AR) is provided. Many AR applications are provided to show that AR can be used in different fields of endeavor with different focuses. This paper shows two case studies in aerospace industries, presenting different forms of AR use in aerospace manufacturing processes to demonstrate the benefits and advantages that can be reached. It is concluded showing that gains of labor qualification, training costs reduction, inspection system and productivity of the business can be provided by the use of AR.",
"title": ""
},
{
"docid": "d840814a871a36479e465736077b375a",
"text": "With the popularity of the Internet, online news media are pouring numerous of news reports into the Internet every day. People get lost in the information explosion. Although the existing methods are able to extract news reports according to key words, and aggregate news reports into stories or events, they just list the related reports or events in order. Moreover, they are unable to provide the evolution relationships between events within a topic, thus people hardly capture the events development vein. In order to mine the underlying evolution relationships between events within the topic, we propose a novel event evolution Model in this paper. This model utilizes TFIEF and Temporal Distance Cost factor (TDC) to model the event evolution relationships. we construct event evolution relationships map to show the events development vein. The experimental evaluation on real dataset show that our technique precedes the baseline technique.",
"title": ""
},
{
"docid": "eb6da64fe7dffde7fbc0a2520b435c87",
"text": "In this paper, we present our system addressing Task 1 of CL-SciSumm Shared Task at BIRNDL 2016. Our system makes use of lexical and syntactic dependency cues, and applies rule-based approach to extract text spans in the Reference Paper that accurately reflect the citances. Further, we make use of lexical cues to identify discourse facets of the paper to which cited text belongs. The lexical and syntactic cues are obtained on pre-processed text of the citances, and the reference paper. We report our results obtained for development set using our system for identifying reference scope of citances in this paper.",
"title": ""
},
{
"docid": "b5290a5df838baff03de94f1f18bf9fa",
"text": "Current Web service technology is evolving towards a simpler approach to define Web service APIs that challenges the assumptions made by existing languages for Web service composition. RESTful Web services introduce a new kind of abstraction, the resource, which does not fit well with the message-oriented paradigm of the Web service description language (WSDL). RESTful Web services are thus hard to compose using the Business Process Execution Language (WS-BPEL), due to its tight coupling to WSDL. The goal of the BPEL for REST extensions presented in this paper is twofold. First, we aim to enable the composition of both RESTful Web services and traditional Web services from within the same process-oriented service composition language. Second, we show how to publish a BPEL process as a RESTful Web service, by exposing selected parts of its execution state using the REST interaction primitives. We include a detailed example on how BPEL for REST can be applied to orchestrate a RESTful e-Commerce scenario and discuss how the proposed extensions affect the architecture of a process execution engine. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "09e4a9c086638fe436e90b008a873d22",
"text": "Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact permissions@informs.org. The Publisher does not warrant or guarantee the article’s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service. Copyright © 2015, INFORMS",
"title": ""
},
{
"docid": "767b6a698ee56a4859c21f70f52b2b81",
"text": "This article surveyed the main neuromarketing techniques used in the world and the practical results obtained. Specifically, the objectives are (1) to identify the main existing definitions of neuromarketing; (2) to identify the importance and the potential contributions of neuromarketing; (3) to demonstrate the advantages of neuromarketing as a marketing research tool compared to traditional research methods; (4) to identify the ethical issues involved with neuromarketing research; (5) to present the main neuromarketing techniques that are being used in the development of marketing research; (6) to present studies in which neuromarketing research techniques were used; and (7) to identify the main limitations of neuromarketing. The results obtained allow an understanding of the ways to develop, store, Journal of Management Research ISSN 1941-899X 2014, Vol. 6, No. 2 www.macrothink.org/jmr 202 retrieve and use information about consumers, as well as ways to develop the field of neuromarketing. In addition to offering theoretical support for neuromarketing, this article discusses business cases, implementation and achievements.",
"title": ""
},
{
"docid": "804322502b82ad321a0f97d6f83858ee",
"text": "Cheating is a real problem in the Internet of Things. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. The problem, however, isnt inherent in whether or not to embrace the idea of an open platform and open-source software, but to establish a methodology to verify the trustworthiness and control any access. This paper focuses on building an access control model and system based on trust computing. This is a new field of access control techniques which includes Access Control, Trust Computing, Internet of Things, network attacks, and cheating technologies. Nevertheless, the target access control systems can be very complex to manage. This paper presents an overview of the existing work on trust computing, access control models and systems in IoT. It not only summarizes the latest research progress, but also provides an understanding of the limitations and open issues of the existing work. It is expected to provide useful guidelines for future research. Access Control, Trust Management, Internet of Things Today, our world is characterized by increasing connectivity. Things in this world are increasingly being connected. Smart phones have started an era of global proliferation and rapid consumerization of smart devices. It is predicted that the next disruptive transformation will be the concept of ‘Internet of Things’ [2]. From networked computers to smart devices, and to connected people, we are now moving towards connected ‘things’. Items of daily use are being turned into smart devices as various sensors are embedded in consumer and enterprise equipment, industrial and household appliances and personal devices. Pervasive connectivity mechanisms build bridges between our clothing and vehicles. Interaction among these things/devices can happen with little or no human intervention, thereby conjuring an enormous network, namely the Internet of Things (IoT). One of the primary goals behind IoT is to sense and send data over remote locations to enable detection of significant events, and take relevant actions sooner rather than later [25]. This technological trend is being pursued actively in all areas including the medical and health care fields. IoT provides opportunities to dramatically improve many medical applications, such as glucose level sensing, remote health monitoring (e.g. electrocardiogram, blood pressure, body temperature, and oxygen saturation monitoring, etc), rehabilitation systems, medication management, and ambient assisted living systems. The connectivity offered by IoT extends from humanto-machine to machine-to-machine communications. The interconnected devices collect all kinds of data about patients. Intelligent and ubiquitous services can then be built upon the useful information extracted from the data. During the data aggregation, fusion, and analysis processes, user ar X iv :1 61 0. 01 06 5v 1 [ cs .C R ] 4 O ct 2 01 6 2 Z. Yunpeng and X. Wu privacy and information security become major concerns for IoT services and applications. Security breaches will seriously compromise user acceptance and consumption on IoT applications in the medical and health care areas. The large scale of integration of heterogeneous devices in IoT poses a great challenge for the provision of standard security services. 
Many IoT devices are vulnerable to attacks since no high-level intelligence can be enabled on these passive devices [10], and security vulnerabilities in products uncovered by researchers have spread from cars [13] to garage doors [9] and to skateboards [35]. Technological utopianism surrounding IoT was very real until the emergence of the Volkswagen emissions scandal [4]. The German conglomerate admitted installing software in its diesel cars that recognizes and identifies patterns when vehicles are being tested for nitrogen oxide emissions and cuts them so that they fall within the limits prescribed by US regulators (004 g/km). Once the test is over, the car returns to its normal state: emitting nitrogen oxides (nitric oxide and nitrogen dioxide) at up to 35 times the US legal limit. The focus of IoT is not the thing itself, but the data generated by the devices and the value therein. What Volkswagen has brought to light goes far beyond protecting data and privacy, preventing intrusion, and keeping the integrity of the data. It casts doubts on the credibility of the IoT industry and its ability to secure data, reach agreement on standards, or indeed guarantee that consumer privacy rights are upheld. All in all, IoT holds tremendous potential to improve our health, make our environment safer, boost productivity and efficiency, and conserve both water and energy. IoT needs to improve its trustworthiness, however, before it can be used to solve challenging economic and environmental problems tied to our social lives. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. If a node of IoT cheats, how does a system identify the cheating node and prevent a malicious attack from misbehaving nodes? This paper focuses on an access control mechanism that will only grant network access permission to trustworthy nodes. Embedding trust management into access control will improve the systems ability to discover untrustworthy participating nodes and prevent discriminatory attacks. There has been substantial research in this domain, most of which has been related to attacks like self-promotion and ballot stuffing where a node falsely promotes its importance and boosts the reputation of a malicious node (by providing good recommendations) to engage in a collusion-style attack. The traditional trust computation model is inefficient in differentiating a participant object in IoT, which is designed to win trust by cheating. In particular, the trust computation model will fail when a malicious node intelligently adjusts its behavior to hide its defect and obtain a higher trust value for its own gain. 1 Access Control Model and System IoT comprises the following three Access Control types Access Control in Internet of Things: A Survey 3 – Role-based access control (RBAC) – Credential-based access control (CBAC) — in order to access some resources and data, users require certain certificate information that falls into the following two types: 1. Attribute-Based access control (ABAC) : If a user has some special attributes, it is possible to access a particular resource or piece of data. 2. Capability-Based access control (Cap-BAC): A capability is a communicable, unforgeable rights markup, which corresponds to a value that uniquely specifies certain access rights to objects owned by subjects. – Trust-based access control (TBAC) In addition, there are also combinations of the aforementioned three methods. 
In order to improve the security of the system, some of the access control methods include encryption and key management mechanisms.",
"title": ""
},
{
"docid": "492b99428b8c0b4a5921c78518fece50",
"text": "Over the past few decades, significant progress has been made in clustering high-dimensional data sets distributed around a collection of linear and affine subspaces. This article presented a review of such progress, which included a number of existing subspace clustering algorithms together with an experimental evaluation on the motion segmentation and face clustering problems in computer vision.",
"title": ""
},
{
"docid": "eda40814ecaecbe5d15ccba49f8a0d43",
"text": "The problem of achieving COnlUnCtlve goals has been central to domain-independent planning research, the nonhnear constraint-posting approach has been most successful Previous planners of this type have been comphcated, heurtstw, and ill-defined 1 have combmed and dtstdled the state of the art into a simple, precise, Implemented algorithm (TWEAK) which I have proved correct and complete 1 analyze previous work on domam-mdependent conlunctwe plannmg; tn retrospect tt becomes clear that all conluncttve planners, hnear and nonhnear, work the same way The efficiency and correctness of these planners depends on the traditional add/ delete-hst representation for actions, which drastically limits their usefulness I present theorems that suggest that efficient general purpose planning with more expressive action representations ts impossible, and suggest ways to avoid this problem",
"title": ""
},
{
"docid": "9ea9b364e2123d8917d4a2f25e69e084",
"text": "Movement observation and imagery are increasingly propagandized for motor rehabilitation. Both observation and imagery are thought to improve motor function through repeated activation of mental motor representations. However, it is unknown what stimulation parameters or imagery conditions are optimal for rehabilitation purposes. A better understanding of the mechanisms underlying movement observation and imagery is essential for the optimization of functional outcome using these training conditions. This study systematically assessed the corticospinal excitability during rest, observation, imagery and execution of a simple and a complex finger-tapping sequence in healthy controls using transcranial magnetic stimulation (TMS). Observation was conducted passively (without prior instructions) as well as actively (in order to imitate). Imagery was performed visually and kinesthetically. A larger increase in corticospinal excitability was found during active observation in comparison with passive observation and visual or kinesthetic imagery. No significant difference between kinesthetic and visual imagery was found. Overall, the complex task led to a higher corticospinal excitability in comparison with the simple task. In conclusion, the corticospinal excitability was modulated during both movement observation and imagery. Specifically, active observation of a complex motor task resulted in increased corticospinal excitability. Active observation may be more effective than imagery for motor rehabilitation purposes. In addition, the activation of mental motor representations may be optimized by varying task-complexity.",
"title": ""
},
{
"docid": "8bf63451cf6b83f3da4d4378de7bfd7f",
"text": "This paper presents a high-efficiency and smoothtransition buck-boost (BB) converter to extend the battery life of portable devices. Owing to the usage of four switches, the BB control topology needs to minimize the switching and conduction losses at the same time. Therefore, over a wide input voltage range, the proposed BB converter consumes minimum switching loss like the basic operation of buck or boost converter. Besides, the conduction loss is reduced by means of the reduction of the inductor current level. Especially, the proposed BB converter offers good line/load regulation and thus provides a smooth and stable output voltage when the battery voltage decreases. Simulation results show that the output voltage drops is very small during the whole battery life time and the output transition is very smooth during the mode transition by the proposed BB control scheme.",
"title": ""
},
{
"docid": "0c7ba527445c6d8fc39d942f78901259",
"text": "Physically Unclonable Functions (PUFs) are impacted by environmental variations and aging which can reduce their acceptance in identification and authentication applications. Prior approaches to improve PUF reliability include bit analysis across environmental conditions, better design, and post-processing error correction, but these are of high cost in terms of test time and design overheads, making them unsuitable for high volume production. In this paper, we aim to address this issue for SRAM PUFs with novel bit analysis and bit selection algorithms. Our analysis of real SRAM PUFs reveals (i) critical conditions on which to select stable SRAM cells for PUF at low-cost (ii) unexplored spatial correlation between stable bits, i.e., cells that are the most stable tend to be surrounded by stable cells determined during enrollment. We develop a bit selection procedure around these observations that produces very stable bits for the PUF generated ID/key. Experimental data from real SRAM PUFs show that our approaches can effectively reduce number of errors in PUF IDs/keys with fewer enrollment steps.",
"title": ""
},
{
"docid": "3500278940baaf6f510ad47463cbf5ed",
"text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). According to the experimental results on STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) as well as does not require pretrained word embeddings to have the same dimension.",
"title": ""
},
{
"docid": "47432aed7a46f1591597208dd25e8425",
"text": "Successful breastfeeding is dependent upon an infant's ability to correctly latch onto a mother's breast. If an infant is born with oral soft tissue abnormalities such as tongue-tie or lip-tie, breastfeeding may become challenging or impossible. During the oral evaluation of an infant presenting with breastfeeding problems, one area that is often overlooked and undiagnosed and, thus, untreated is the attachment of the upper lip to the maxillary gingival tissue. Historically, this tissue has been described as the superior labial frenum, median labial frenum, or maxillary labial frenum. These terms all refer to a segment of the mucous membrane in the midline of the upper lip containing loose connective tissue that inserts into the maxillary arch's loose, unattached gingival or tight, attached gingival tissue. There is no muscle contained within this tissue. In severe instances, this tissue may extend into the area behind the upper central incisors and incisive papilla. The author has defined and identified the restrictions of mobility of this tissue as a lip-tie, which reflects the clinical attachment of the upper lip to the maxillary arch. This article discusses the diagnosis and classifications of the lip-tie, as it affects an infant's latch onto the mother's breast. As more and more women choose to breastfeed, lip-ties must be considered as an impediment to breastfeeding, recognizing that they can affect a successful, painless latch and milk transfer.",
"title": ""
},
{
"docid": "3860b1d259317da9ac6fe2c2ab161ce3",
"text": "In recent years, state-of-the-art methods in computer vision have utilized increasingly deep convolutional neural network architectures (CNNs), with some of the most successful models employing hundreds or even thousands of layers. A variety of pathologies such as vanishing/exploding gradients make training such deep networks challenging. While residual connections and batch normalization do enable training at these depths, it has remained unclear whether such specialized architecture designs are truly necessary to train deep CNNs. In this work, we demonstrate that it is possible to train vanilla CNNs with ten thousand layers or more simply by using an appropriate initialization scheme. We derive this initialization scheme theoretically by developing a mean field theory for signal propagation and by characterizing the conditions for dynamical isometry, the equilibration of singular values of the input-output Jacobian matrix. These conditions require that the convolution operator be an orthogonal transformation in the sense that it is norm-preserving. We present an algorithm for generating such random initial orthogonal convolution kernels and demonstrate empirically that they enable efficient training of extremely deep architectures.",
"title": ""
},
{
"docid": "00f9290840ba201e23d0ea6149f344e4",
"text": "Despite the plethora of security advice and online education materials offered to end-users, there exists no standard measurement tool for end-user security behaviors. We present the creation of such a tool. We surveyed the most common computer security advice that experts offer to end-users in order to construct a set of Likert scale questions to probe the extent to which respondents claim to follow this advice. Using these questions, we iteratively surveyed a pool of 3,619 computer users to refine our question set such that each question was applicable to a large percentage of the population, exhibited adequate variance between respondents, and had high reliability (i.e., desirable psychometric properties). After performing both exploratory and confirmatory factor analysis, we identified a 16-item scale consisting of four sub-scales that measures attitudes towards choosing passwords, device securement, staying up-to-date, and proactive awareness.",
"title": ""
},
{
"docid": "02dfbd00fcff9601a8f70a334e3da9ba",
"text": "Visual sentiment analysis framework can predict the sentiment of an image by analyzing the image contents. Nowadays, people are uploading millions of images in social networks such as Twitter, Facebook, Google Plus, and Flickr. These images play a crucial part in expressing emotions of users in online social networks. As a result, image sentiment analysis has become important in the area of online multimedia big data research. Several research works are focusing on analyzing the sentiment of the textual contents. However, little investigation has been done to develop models that can predict sentiment of visual content. In this paper, we propose a novel visual sentiment analysis framework using transfer learning approach to predict sentiment. We use hyper-parameters learned from a very deep convolutional neural network to initialize our network model to prevent overfitting. We conduct extensive experiments on a Twitter image dataset and prove that our model achieves better performance than the current state-of-the-art.",
"title": ""
},
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
}
] |
scidocsrr
|
6ea8102cc982f2bec5f454d7772f7c77
|
A Humanized Version of Foxp2 Affects Cortico-Basal Ganglia Circuits in Mice
|
[
{
"docid": "54a8620e5f7ea945eabd0ed5420cefb3",
"text": "The cellular heterogeneity of the brain confounds efforts to elucidate the biological properties of distinct neuronal populations. Using bacterial artificial chromosome (BAC) transgenic mice that express EGFP-tagged ribosomal protein L10a in defined cell populations, we have developed a methodology for affinity purification of polysomal mRNAs from genetically defined cell populations in the brain. The utility of this approach is illustrated by the comparative analysis of four types of neurons, revealing hundreds of genes that distinguish these four cell populations. We find that even two morphologically indistinguishable, intermixed subclasses of medium spiny neurons display vastly different translational profiles and present examples of the physiological significance of such differences. This genetically targeted translating ribosome affinity purification (TRAP) methodology is a generalizable method useful for the identification of molecular changes in any genetically defined cell type in response to genetic alterations, disease, or pharmacological perturbations.",
"title": ""
}
] |
[
{
"docid": "7e1f0cd43cdc9685474e19b7fd65791b",
"text": "Understanding human actions is a key problem in computer vision. However, recognizing actions is only the first step of understanding what a person is doing. In this paper, we introduce the problem of predicting why a person has performed an action in images. This problem has many applications in human activity understanding, such as anticipating or explaining an action. To study this problem, we introduce a new dataset of people performing actions annotated with likely motivations. However, the information in an image alone may not be sufficient to automatically solve this task. Since humans can rely on their lifetime of experiences to infer motivation, we propose to give computer vision systems access to some of these experiences by using recently developed natural language models to mine knowledge stored in massive amounts of text. While we are still far away from fully understanding motivation, our results suggest that transferring knowledge from language into vision can help machines understand why people in images might be performing an action.",
"title": ""
},
{
"docid": "0e218dd5654ae9125d40bdd5c0a326d6",
"text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.",
"title": ""
},
{
"docid": "a1bef11b10bc94f84914d103311a5941",
"text": "Class imbalance and class overlap are two of the major problems in data mining and machine learning. Several studies have shown that these data complexities may affect the performance or behavior of artificial neural networks. Strategies proposed to face with both challenges have been separately applied. In this paper, we introduce a hybrid method for handling both class imbalance and class overlap simultaneously in multi-class learning problems. Experimental results on five remote sensing data show that the combined approach is a promising method. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "522a9deb3926d067686d4c26354a78f7",
"text": "The golden age of cannabis pharmacology began in the 1960s as Raphael Mechoulam and his colleagues in Israel isolated and synthesized cannabidiol, tetrahydrocannabinol, and other phytocannabinoids. Initially, THC garnered most research interest with sporadic attention to cannabidiol, which has only rekindled in the last 15 years through a demonstration of its remarkably versatile pharmacology and synergy with THC. Gradually a cognizance of the potential of other phytocannabinoids has developed. Contemporaneous assessment of cannabis pharmacology must be even far more inclusive. Medical and recreational consumers alike have long believed in unique attributes of certain cannabis chemovars despite their similarity in cannabinoid profiles. This has focused additional research on the pharmacological contributions of mono- and sesquiterpenoids to the effects of cannabis flower preparations. Investigation reveals these aromatic compounds to contribute modulatory and therapeutic roles in the cannabis entourage far beyond expectations considering their modest concentrations in the plant. Synergistic relationships of the terpenoids to cannabinoids will be highlighted and include many complementary roles to boost therapeutic efficacy in treatment of pain, psychiatric disorders, cancer, and numerous other areas. Additional parts of the cannabis plant provide a wide and distinct variety of other compounds of pharmacological interest, including the triterpenoid friedelin from the roots, canniprene from the fan leaves, cannabisin from seed coats, and cannflavin A from seed sprouts. This chapter will explore the unique attributes of these agents and demonstrate how cannabis may yet fulfil its potential as Mechoulam's professed \"pharmacological treasure trove.\"",
"title": ""
},
{
"docid": "5bdf4585df04c00ebcf00ce94a86ab38",
"text": "High-voltage pulse-generators can be used effectively for bacterial decontamination in water treatment applications. Applying a pulsed electric field to the infected water sample guarantees killing of harmful germs and bacteria. In this paper, a modular high-voltage pulse-generator with sequential charging is proposed for water treatment via underwater pulsed streamer corona discharge. The proposed generator consists of series-connected modules similar to an arm of a modular multilevel converter. The modules' capacitors are charged sequentially from a relatively low-voltage dc supply, then they are connected in series and discharged into the load. Two configurations are proposed in this paper, one for low repetitive pulse rate applications, and the other for high repetitive pulse rates. In the first topology, the equivalent resistance of the infected water sample is used as a charging resistance for the generator's capacitors during the charging process. While in the second topology, the water resistance is bypassed during the charging process, and an external charging resistance with proper value is used instead. In this paper, detailed designs for the proposed pulse-generators are presented and validated by simulation results using MATLAB. A scaled down experimental setup has been built to show the viability of the proposed concept.",
"title": ""
},
{
"docid": "11ce5bca8989b3829683430abe2aee47",
"text": "Android is the most popular smartphone operating system with a market share of 80%, but as a consequence, also the platform most targeted by malware. To deal with the increasing number of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community, the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a fully automated, publicly available and comprehensive analysis system for Android apps. ANDRUBIS combines static analysis with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage. With ANDRUBIS, we collected a dataset of over 1,000,000 Android apps, including 40% malicious apps. This dataset allows us to discuss trends in malware behavior observed from apps dating back as far as 2010, as well as to present insights gained from operating ANDRUBIS as a publicly available service for the past two years.",
"title": ""
},
{
"docid": "a633e3714f730d53c7dd9719a18496de",
"text": "This paper addresses the problem of controlling a robot arm executing a cooperative task with a human who guides the robot through direct physical interaction. This problem is tackled by allowing the end effector to comply according to an impedance control law defined in the Cartesian space. While, in principle, the robot's dynamics can be fully compensated and any impedance behaviour can be imposed by the control, the stability of the coupled human-robot system is not guaranteed for any value of the impedance parameters. Moreover, if the robot is kinematically or functionally redundant, the redundant degrees of freedom play an important role. The idea proposed here is to use redundancy to ensure a decoupled apparent inertia at the end effector. Through an extensive experimental study on a 7-DOF KUKA LWR4 arm, we show that inertial decoupling enables a more flexible choice of the impedance parameters and improves the performance during manual guidance.",
"title": ""
},
{
"docid": "27b5e0594305a81c6fad15567ba1f3b9",
"text": "A novel approach to the design of series-fed antenna arrays has been presented, in which a modified three-way slot power divider is applied. In the proposed coupler, the power division is adjusted by changing the slot inclination with respect to the transmission line, whereas coupled transmission lines are perpendicular. The proposed modification reduces electrical length of the feeding line to <formula formulatype=\"inline\"><tex Notation=\"TeX\">$1 \\lambda$</tex></formula>, hence results in dissipation losses' reduction. The theoretical analysis and measurement results of the 2<formula formulatype=\"inline\"> <tex Notation=\"TeX\">$\\, \\times \\,$</tex></formula>8 microstrip antenna array operating within 10.5-GHz frequency range are shown in the letter, proving the novel inclined-slot power divider's capability to provide appropriate power distribution and its potential application in the large antenna arrays.",
"title": ""
},
{
"docid": "ed33b5fae6bc0af64668b137a3a64202",
"text": "In this study the effect of the Edmodo social learning environment on mobile assisted language learning (MALL) was examined by seeking the opinions of students. Using a quantitative experimental approach, this study was conducted by conducting a questionnaire before and after using the social learning network Edmodo. Students attended lessons with their mobile devices. The course materials were shared in the network via Edmodo group sharing tools. The students exchanged idea and developed projects, and felt as though they were in a real classroom setting. The students were also able to access various multimedia content. The results of the study indicate that Edmodo improves students’ foreign language learning, increases their success, strengthens communication among students, and serves as an entertaining learning environment for them. The educationally suitable sharing structure and the positive user opinions described in this study indicate that Edmodo is also usable in other lessons. Edmodo can be used on various mobile devices, including smartphones, in addition to the web. This advantageous feature contributes to the usefulness of Edmodo as a scaffold for education.",
"title": ""
},
{
"docid": "e0919f53691d17c7cb495c19914683f8",
"text": "Carpooling has long held the promise of reducing gas consumption by decreasing mileage to deliver coriders. Although ad hoc carpools already exist in the real world through private arrangements, little research on the topic has been done. In this article, we present the first systematic work to design, implement, and evaluate a carpool service, called coRide, in a large-scale taxicab network intended to reduce total mileage for less gas consumption. Our coRide system consists of three components, a dispatching cloud server, passenger clients, and an onboard customized device, called TaxiBox. In the coRide design, in response to the delivery requests of passengers, dispatching cloud servers calculate cost-efficient carpool routes for taxicab drivers and thus lower fares for the individual passengers.\n To improve coRide’s efficiency in mileage reduction, we formulate an NP-hard route calculation problem under different practical constraints. We then provide (1) an optimal algorithm using Linear Programming, (2) a 2-approximation algorithm with a polynomial complexity, and (3) its corresponding online version with a linear complexity. To encourage coRide’s adoption, we present a win-win fare model as the incentive mechanism for passengers and drivers to participate. We test the performance of coRide by a comprehensive evaluation with a real-world trial implementation and a data-driven simulation with 14,000 taxi data from the Chinese city Shenzhen. The results show that compared with the ground truth, our service can reduce 33% of total mileage; with our win-win fare model, we can lower passenger fares by 49% and simultaneously increase driver profit by 76%.",
"title": ""
},
{
"docid": "f35db13e8b2afd0f23c421bd8828af35",
"text": "In this paper, we report a novel flexible tactile sensor array for an anthropomorphic artificial hand with the capability of measuring both normal and shear force distributions using quantum tunneling composite as a base material. There are four fan-shaped electrodes in a cell that decompose the contact force into normal and shear components. The sensor has been realized in a 2 × 6 array of unit sensors, and each unit sensor responds to normal and shear stresses in all three axes. By applying separated drops of conductive polymer instead of a full layer, cross-talk between the sensor cells is decreased. Furthermore, the voltage mirror method is used in this circuit to avoid crosstalk effect, which is based on a programmable system-on-chip. The measurement of a single sensor shows that the full-scale range of detectable forces are about 20, 8, and 8 N for the x-, y-, and z-directions, respectively. The sensitivities of a cell measured with a current setup are 0.47, 0.45, and 0.16 mV/mN for the x-, y-, and y-directions, respectively. The sensor showed a high repeatability, low hysteresis, and minimum tactile crosstalk. The proposed flexible three-axial tactile sensor array can be applied in a curved or compliant surface that requires slip detection and flexibility, such as a robotic finger.",
"title": ""
},
{
"docid": "ab474cc2128d488a884602a247b4e7b2",
"text": "Trajectory outlier detection is a fundamental building block for many location-based service (LBS) applications, with a large application base. We dedicate this paper on detecting the outliers from vehicle trajectories efficiently and effectively. In addition, we want our solution to be able to issue an alarm early when an outlier trajectory is only partially observed (i.e., the trajectory has not yet reached the destination). Most existing works study the problem on general Euclidean trajectories and require accesses to the historical trajectory database or computations on the distance metric that are very expensive. Furthermore, few of existing works consider some specific characteristics of vehicles trajectories (e.g., their movements are constrained by the underlying road networks), and majority of them require the input of complete trajectories. Motivated by this, we propose a vehicle outlier detection approach namely DB-TOD which is based on probabilistic model via modeling the driving behavior/preferences from the set of historical trajectories. We design outlier detection algorithms on both complete trajectory and partial one. Our probabilistic model-based approach makes detecting trajectory outlier extremely efficient while preserving the effectiveness, contributed by the relatively accurate model on driving behavior. We conduct comprehensive experiments using real datasets and the results justify both effectiveness and efficiency of our approach.",
"title": ""
},
{
"docid": "e6bb77b8f16e17b674d6baada5ac9b87",
"text": "Art is a uniquely human activity associated fundamentally with symbolic and abstract cognition. Its practice in human societies throughout the world, coupled with seeming non-functionality, has led to three major brain theories of art. (1) The localized brain regions and pathways theory links art to multiple neural regions. (2) The display of art and its aesthetics theory is tied to the biological motivation of courtship signals and mate selection strategies in animals. (3) The evolutionary theory links the symbolic nature of art to critical pivotal brain changes in Homo sapiens supporting increased development of language and hierarchical social grouping. Collectively, these theories point to art as a multi-process cognition dependent on diverse brain regions and on redundancy in art-related functional representation.",
"title": ""
},
{
"docid": "e6cba9e178f568c402be7b25c4f0777f",
"text": "This paper is a tutorial introduction to the Viterbi Algorithm, this is reinforced by an example use of the Viterbi Algorithm in the area of error correction in communications channels. Some extensions to the basic algorithm are also discussed briefly. Some of the many application areas where the Viterbi Algorithm has been used are considered, including it's use in communications, target tracking and pattern recognition problems. A proposal for further research into the use of the Viterbi Algorithm in Signature Verification is then presented, and is the area of present research at the moment.",
"title": ""
},
{
"docid": "9a79a9b2c351873143a8209d37b46f64",
"text": "The authors review research on police effectiveness in reducing crime, disorder, and fear in the context of a typology of innovation in police practices. That typology emphasizes two dimensions: one concerning the diversity of approaches, and the other, the level of focus. The authors find that little evidence supports the standard model of policing—low on both of these dimensions. In contrast, research evidence does support continued investment in police innovations that call for greater focus and tailoring of police efforts, combined with an expansion of the tool box of policing beyond simple law enforcement. The strongest evidence of police effectiveness in reducing crime and disorder is found in the case of geographically focused police practices, such as hot-spots policing. Community policing practices are found to reduce fear of crime, but the authors do not find consistent evidence that community policing (when it is implemented without models of problem-oriented policing) affects either crime or disorder. A developing body of evidence points to the effectiveness of problemoriented policing in reducing crime, disorder, and fear. More generally, the authors find that many policing practices applied broadly throughout the United States either have not been the subject of systematic research or have been examined in the context of research designs that do not allow practitioners or policy makers to draw very strong conclusions.",
"title": ""
},
{
"docid": "a4fb1919a1bf92608a55bc3feedf897d",
"text": "We develop an algebraic framework, Logic Programming Doctrines, for the syntax, proof theory, operational semantics and model theory of Horn Clause logic programming based on indexed premonoidal categories. Our aim is to provide a uniform framework for logic programming and its extensions capable of incorporating constraints, abstract data types, features imported from other programming language paradigms and a mathematical description of the state space in a declarative manner. We define a new way to embed information about data into logic programming derivations by building a sketch-like description of data structures directly into an indexed category of proofs. We give an algebraic axiomatization of bottom-up semantics in this general setting, describing categorical models as fixed points of a continuous operator. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b151d236ce17b4d03b384a29dbb91330",
"text": "To investigate the blood supply to the nipple areola complex (NAC) on thoracic CT angiograms (CTA) to improve breast pedicle design in reduction mammoplasty. In a single centre, CT scans of the thorax were retrospectively reviewed for suitability by a cardiothoracic radiologist. Suitable scans had one or both breasts visible in extended fields, with contrast enhancement of breast vasculature in a female patient. The arterial sources, intercostal space perforated, glandular/subcutaneous course, vessel entry point, and the presence of periareolar anastomoses were recorded for the NAC of each breast. From 69 patients, 132 breasts were suitable for inclusion. The most reproducible arterial contribution to the NAC was perforating branches arising from the internal thoracic artery (ITA) (n = 108, 81.8%), followed by the long thoracic artery (LTA) (n = 31, 23.5%) and anterior intercostal arteries (AI) (n = 21, 15.9%). Blood supply was superficial versus deep in (n = 86, 79.6%) of ITA sources, (n = 28, 90.3%) of LTA sources, and 10 (47.6%) of AI sources. The most vascularly reliable breast pedicle would be asymmetrical in 7.9% as a conservative estimate. We suggest that breast CT angiography can provide valuable information about NAC blood supply to aid customised pedicle design, especially in high-risk, large-volume breast reductions where the risk of vascular-dependent complications is the greatest and asymmetrical dominant vasculature may be present. Superficial ITA perforator supplies are predominant in a majority of women, followed by LTA- and AIA-based sources, respectively.",
"title": ""
},
{
"docid": "26db4ecbc2ad4b8db0805b06b55fe27d",
"text": "The advent of high voltage (HV) wide band-gap power semiconductor devices has enabled the medium voltage (MV) grid tied operation of non-cascaded neutral point clamped (NPC) converters. This results in increased power density, efficiency as well as lesser control complexity. The multi-chip 15 kV/40 A SiC IGBT and 15 kV/20 A SiC MOSFET are two such devices which have gained attention for MV grid interface applications. Such converters based on these devices find application in active power filters, STATCOM or as active front end converters for solid state transformers. This paper presents an experimental comparative evaluation of these two SiC devices for 3-phase grid connected applications using a 3-level NPC converter as reference. The IGBTs are generally used for high power applications due to their lower conduction loss while MOSFETs are used for high frequency applications due to their lower switching loss. The thermal performance of these devices are compared based on device loss characteristics, device heat-run tests, 3-level pole heat-run tests, PLECS thermal simulation based loss comparison and MV experiments on developed hardware prototypes. The impact of switching frequency on the harmonic control of the grid connected converter is also discussed and suitable device is selected for better grid current THD.",
"title": ""
},
{
"docid": "fb7c268419d798587e1675a5a1a37232",
"text": "Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image reranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets.",
"title": ""
},
{
"docid": "f383dd5dd7210105406c2da80cf72f89",
"text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".",
"title": ""
}
] |
scidocsrr
|
fcc403f4319dc81eba63c968aaaf8c51
|
Thin Structures in Image Based Rendering
|
[
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "d82553a7bf94647aaf60eb36748e567f",
"text": "We propose a novel image-based rendering algorithm for handling complex scenes that may include reflective surfaces. Our key contribution lies in treating the problem in the gradient domain. We use a standard technique to estimate scene depth, but assign depths to image gradients rather than pixels. A novel view is obtained by rendering the horizontal and vertical gradients, from which the final result is reconstructed through Poisson integration using an approximate solution as a data term. Our algorithm is able to handle general scenes including reflections and similar effects without explicitly separating the scene into reflective and transmissive parts, as required by previous work. Our prototype renderer is fully implemented on the GPU and runs in real time on commodity hardware.",
"title": ""
}
] |
[
{
"docid": "dd911eff60469b32330c5627c288f19f",
"text": "Routing Algorithms are driving the growth of the data transmission in wireless sensor networks. Contextually, many algorithms considered the data gathering and data aggregation. This paper uses the scenario of clustering and its impact over the SPIN protocol and also finds out the effect over the energy consumption in SPIN after uses of clustering. The proposed scheme is implemented using TCL/C++ programming language and evaluated using Ns2.34 simulator and compare with LEACH. Simulation shows proposed protocol exhibits significant performance gains over the LEACH for lifetime of network and guaranteed data transmission.",
"title": ""
},
{
"docid": "ad5943b20597be07646cca1af9d23660",
"text": "Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) [3] can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely welldefined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired [1]. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.",
"title": ""
},
{
"docid": "5387c752db7b4335a125df91372099b3",
"text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.",
"title": ""
},
{
"docid": "4933f3f3007dab687fc852e9c2b1ab0a",
"text": "This paper presents a topology for bidirectional solid-state transformers with a minimal device count. The topology, referenced as dynamic-current or Dyna-C, has two current-source inverter stages with a high-frequency galvanic isolation, requiring 12 switches for four-quadrant three-phase ac/ac power conversion. The topology has voltage step-up/down capability, and the input and output can have arbitrary power factors and frequencies. Further, the Dyna-C can be configured as isolated power converters for single- or multiterminal dc, and single- or multiphase ac systems. The modular nature of the Dyna-C lends itself to be connected in series and/or parallel for high-voltage high-power applications. The proposed converter topology can find a broad range of applications such as isolated battery chargers, uninterruptible power supplies, renewable energy integration, smart grid, and power conversion for space-critical applications including aviation, locomotives, and ships. This paper outlines various configurations of the Dyna-C, as well as the relative operation and controls. The converter functionality is validated through simulations and experimental measurements of a 50-kVA prototype.",
"title": ""
},
{
"docid": "529b6b658674a52191d4a8fed97e44eb",
"text": "We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to \"focus\" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprised of thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, only requiring the user to specify the face of the person in the video whose speech they want to isolate. Our method shows clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (require training a separate model for each speaker of interest).",
"title": ""
},
{
"docid": "ba2632b7a323e785b57328d32a26bc99",
"text": "Modern malware is designed with mutation characteristics, namely polymorphism and metamorphism, which causes an enormous growth in the number of variants of malware samples. Categorization of malware samples on the basis of their behaviors is essential for the computer security community, because they receive huge number of malware everyday, and the signature extraction process is usually based on malicious parts characterizing malware families. Microsoft released a malware classification challenge in 2015 with a huge dataset of near 0.5 terabytes of data, containing more than 20K malware samples. The analysis of this dataset inspired the development of a novel paradigm that is effective in categorizing malware variants into their actual family groups. This paradigm is presented and discussed in the present paper, where emphasis has been given to the phases related to the extraction, and selection of a set of novel features for the effective representation of malware samples. Features can be grouped according to different characteristics of malware behavior, and their fusion is performed according to a per-class weighting paradigm. The proposed method achieved a very high accuracy ($\\approx$ 0.998) on the Microsoft Malware Challenge dataset.",
"title": ""
},
{
"docid": "c1477b801a49df62eb978b537fd3935e",
"text": "The striatum is thought to play an essential role in the acquisition of a wide range of motor, perceptual, and cognitive skills, but neuroimaging has not yet demonstrated striatal activation during nonmotor skill learning. Functional magnetic resonance imaging was performed while participants learned probabilistic classification, a cognitive task known to rely on procedural memory early in learning and declarative memory later in learning. Multiple brain regions were active during probabilistic classification compared with a perceptual-motor control task, including bilateral frontal cortices, occipital cortex, and the right caudate nucleus in the striatum. The left hippocampus was less active bilaterally during probabilistic classification than during the control task, and the time course of this hippocampal deactivation paralleled the expected involvement of medial temporal structures based on behavioral studies of amnesic patients. Findings provide initial evidence for the role of frontostriatal systems in normal cognitive skill learning.",
"title": ""
},
{
"docid": "f301f87dee3c13d06e34f533bb69cf01",
"text": "Representation of news events as latent feature vectors is essential for several tasks, such as news recommendation, news event linking, etc. However, representations proposed in the past fail to capture the complex network structure of news events. In this paper we propose Event2Vec, a novel way to learn latent feature vectors for news events using a network. We use recently proposed network embedding techniques, which are proven to be very effective for various prediction tasks in networks. As events involve different classes of nodes, such as named entities, temporal information, etc, general purpose network embeddings are agnostic to event semantics. To address this problem, we propose biased random walks that are tailored to capture the neighborhoods of news events in event networks. We then show that these learned embeddings are effective for news event recommendation and news event linking tasks using strong baselines, such as vanilla Node2Vec, and other state-of-the-art graph-based event ranking techniques.",
"title": ""
},
{
"docid": "3e00367b754777a6659578963f006a69",
"text": "This paper presents a study on a three-phase 24-pulse Transformer Rectifier Unit (TRU) for use in aircraft electric power system. Four three-phase systems with 15°, 30°, 45°, and 60° phase shifts are obtained by interconnection of conventional transformers in zig-zag configuration. The system is modeled in details using Simulink (SimPowerSystems). Simulation results are presented and the obtained performance is compared with those of a 12-pulse TRU.",
"title": ""
},
{
"docid": "1df9ac95778bbe7ad750810e9b5a9756",
"text": "To characterize muscle synergy organization underlying multidirectional control of stance posture, electromyographic activity was recorded from 11 lower limb and trunk muscles of 7 healthy subjects while they were subjected to horizontal surface translations in 12 different, randomly presented directions. The latency and amplitude of muscle responses were quantified for each perturbation direction. Tuning curves for each muscle were examined to relate the amplitude of the muscle response to the direction of surface translation. The latencies of responses for the shank and thigh muscles were constant, regardless of perturbation direction. In contrast, the latencies for another thigh [tensor fascia latae (TFL)] and two trunk muscles [rectus abdominis (RAB) and erector spinae (ESP)] were either early or late, depending on the perturbation direction. These three muscles with direction-specific latencies may play different roles in postural control as prime movers or as stabilizers for different translation directions, depending on the timing of recruitment. Most muscle tuning curves were within one quadrant, having one direction of maximal activity, generally in response to diagonal surface translations. Two trunk muscles (RAB and ESP) and two lower limb muscles (semimembranosus and peroneus longus) had bipolar tuning curves, with two different directions of maximal activity, suggesting that these muscle can play different roles as part of different synergies, depending on translation direction. Muscle tuning curves tended to group into one of three regions in response to 12 different directions of perturbations. Two muscles [rectus femoris (RFM) and TFL] were maximally active in response to lateral surface translations. The remaining muscles clustered into one of two diagonal regions. The diagonal regions corresponded to the two primary directions of active horizontal force vector responses. Two muscles (RFM and adductor longus) were maximally active orthogonal to their predicted direction of maximal activity based on anatomic orientation. Some of the muscles in each of the synergic regions were not anatomic synergists, suggesting a complex central organization for recruitment of muscles. The results suggest that neither a simple reflex mechanism nor a fixed muscle synergy organization is adequate to explain the muscle activation patterns observed in this postural control task. Our results are consistent with a centrally mediated pattern of muscle latencies combined with peripheral influence on muscle magnitude. We suggest that a flexible continuum of muscle synergies that are modifiable in a task-dependent manner be used for equilibrium control in stance.",
"title": ""
},
{
"docid": "f25afc147ceb24fb1aca320caa939f10",
"text": "Third party intervention is a typical response to destructive and persistent social conflict and comes in a number of different forms attended by a variety of issues. Mediation is a common form of intervention designed to facilitate a negotiated settlement on substantive issues between conflicting parties. Mediators are usually external to the parties and carry an identity, motives and competencies required to play a useful role in addressing the dispute. While impartiality is generally seen as an important prerequisite for effective intervention, biased mediators also appear to have a role to play. This article lays out the different forms of third-party intervention in a taxonomy of six methods, and proposes a contingency model which matches each type of intervention to the appropriate stage of conflict escalation. Interventions are then sequenced, in order to assist the parties in de-escalating and resolving the conflict. It must be pointed out, however, that the mixing of interventions with different power bases raises a number of ethical and moral questions about the use of reward and coercive power by third parties. The article then discusses several issues around the practice of intervention. It is essential to give these issues careful consideration if third-party methods are to play their proper and useful role in the wider process of conflict transformation. Psychology from the University of Saskatchewan and a Ph.D. in Social Psychology from the University of Michigan. He has provided training and consulting services to various organizations and international institutes in conflict management. His current interests include third party intervention, interactive conflict resolution, and reconciliation in situations of ethnopolitical conflict. A b s t r a c t A b o u t t h e C o n t r i b u t o r",
"title": ""
},
{
"docid": "36fdd31b04f53f7aef27b9d4af5f479f",
"text": "Smart meters have been deployed in many countries across the world since early 2000s. The smart meter as a key element for the smart grid is expected to provide economic, social, and environmental benefits for multiple stakeholders. There has been much debate over the real values of smart meters. One of the key factors that will determine the success of smart meters is smart meter data analytics, which deals with data acquisition, transmission, processing, and interpretation that bring benefits to all stakeholders. This paper presents a comprehensive survey of smart electricity meters and their utilization focusing on key aspects of the metering process, different stakeholder interests, and the technologies used to satisfy stakeholder interests. Furthermore, the paper highlights challenges as well as opportunities arising due to the advent of big data and the increasing popularity of cloud environments.",
"title": ""
},
{
"docid": "75952b1d2c9c2f358c4c2e3401a00245",
"text": "This book is an outstanding contribution to the philosophical study of language and mind, by one of the most influential thinkers of our time. In a series of penetrating essays, Noam Chomsky cuts through the confusion and prejudice which has infected the study of language and mind, bringing new solutions to traditional philosophical puzzles and fresh perspectives on issues of general interest, ranging from the mind–body problem to the unification of science. Using a range of imaginative and deceptively simple linguistic analyses, Chomsky argues that there is no coherent notion of “language” external to the human mind, and that the study of language should take as its focus the mental construct which constitutes our knowledge of language. Human language is therefore a psychological, ultimately a “biological object,” and should be analysed using the methodology of the natural sciences. His examples and analyses come together in this book to give a unique and compelling perspective on language and the mind.",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: luiz.otavio@ufba.br (Luiz Souza), lrebouca@ufba.br (Luciano Oliveira), mauricio@dcc.ufba.br (Mauricio Pamplona), papa@fc.unesp.br (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "b645d8f57b60703e3910e2e5ce60117b",
"text": "We propose a multi-lingual multi-task architecture to develop supervised models with a minimal amount of labeled data for sequence labeling. In this new architecture, we combine various transfer models using two layers of parameter sharing. On the first layer, we construct the basis of the architecture to provide universal word representation and feature extraction capability for all models. On the second level, we adopt different parameter sharing strategies for different transfer schemes. This architecture proves to be particularly effective for low-resource settings, when there are less than 200 training sentences for the target task. Using Name Tagging as a target task, our approach achieved 4.3%-50.5% absolute Fscore gains compared to the mono-lingual single-task baseline model. 1",
"title": ""
},
{
"docid": "aa562b52c51fa6c4563280a6ce82f8c0",
"text": "We propose a framework that learns a representation transferable across different domains and tasks in a label efficient manner. Our approach battles domain shift with a domain adversarial loss, and generalizes the embedding to novel task using a metric learning-based approach. Our model is simultaneously optimized on labeled source data and unlabeled or sparsely labeled data in the target domain. Our method shows compelling results on novel classes within a new domain even when only a few labeled examples per class are available, outperforming the prevalent fine-tuning approach. In addition, we demonstrate the effectiveness of our framework on the transfer learning task from image object recognition to video action recognition.",
"title": ""
},
{
"docid": "e07377cb36e31c8190d5ac96f3891f2a",
"text": "We offer a new metric for big data platforms, COST, or the Configuration that Outperforms a Single Thread. The COST of a given platform for a given problem is the hardware configuration required before the platform outperforms a competent single-threaded implementation. COST weighs a system’s scalability against the overheads introduced by the system, and indicates the actual performance gains of the system, without rewarding systems that bring substantial but parallelizable overheads. We survey measurements of data-parallel systems recently reported in SOSP and OSDI, and find that many systems have either a surprisingly large COST, often hundreds of cores, or simply underperform one thread for all of their reported configurations.",
"title": ""
},
{
"docid": "b4dd76179734fb43e74c9c1daef15bbf",
"text": "Breast cancer represents one of the diseases that make a high number of deaths every year. It is the most common type of all cancers and the main cause of women’s deaths worldwide. Classification and data mining methods are an effective way to classify data. Especially in medical field, where those methods are widely used in diagnosis and analysis to make decisions. In this paper, a performance comparison between different machine learning algorithms: Support Vector Machine (SVM), Decision Tree (C4.5), Naive Bayes (NB) and k Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) datasets is conducted. The main objective is to assess the correctness in classifying data with respect to efficiency and effectiveness of each algorithm in terms of accuracy, precision, sensitivity and specificity. Experimental results show that SVM gives the highest accuracy (97.13%) with lowest error rate. All experiments are executed within a simulation environment and conducted in WEKA data mining tool. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Conference Program Chairs.",
"title": ""
},
{
"docid": "f3f15a37a1d1a2a3a3647dc14f075297",
"text": "Stress is known to inhibit neuronal growth in the hippocampus. In addition to reducing the size and complexity of the dendritic tree, stress and elevated glucocorticoid levels are known to inhibit adult neurogenesis. Despite the negative effects of stress hormones on progenitor cell proliferation in the hippocampus, some experiences which produce robust increases in glucocorticoid levels actually promote neuronal growth. These experiences, including running, mating, enriched environment living, and intracranial self-stimulation, all share in common a strong hedonic component. Taken together, the findings suggest that rewarding experiences buffer progenitor cells in the dentate gyrus from the negative effects of elevated stress hormones. This chapter considers the evidence that stress and glucocorticoids inhibit neuronal growth along with the paradoxical findings of enhanced neuronal growth under rewarding conditions with a view toward understanding the underlying biological mechanisms.",
"title": ""
},
{
"docid": "815a9db2fb8c2aeadc766270a85517fd",
"text": "Resistive-switching random access memory (RRAM) based on the formation and the dissolution of a conductive filament (CF) through insulating materials, e.g., transition metal oxides, may find applications as novel memory and logic devices. Understanding the resistive-switching mechanism is essential for predicting and controlling the scaling and reliability performances of the RRAM. This paper addresses the set/reset characteristics of RRAM devices based on $\\hbox{HfO}_{x}$. The set process is analyzed as a function of the initial high-resistance state and of the current compliance. The reset process is studied as a function of the initial low-resistance state. Finally, the intermediate set states, obtained by set at variable compliance current, and reset states, obtained by reset at variable stopping voltage, are characterized with respect to their reset voltage, allowing for a microscopic interpretation of intermediate states in terms of different filament morphologies.",
"title": ""
}
] |
scidocsrr
|
35ebc67bbdc3701184c6ed579dff44bb
|
ALIZE 3.0 - open source toolkit for state-of-the-art speaker recognition
|
[
{
"docid": "978dd8a7f33df74d4a5cea149be6ebb0",
"text": "A tutorial on the design and development of automatic speakerrecognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or toverify a person’s claimed identity. Speech processing and the basic components of automatic speakerrecognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared.",
"title": ""
}
] |
[
{
"docid": "7c5abed8220171f38e3801298f660bfa",
"text": "Heavy metal remediation of aqueous streams is of special concern due to recalcitrant and persistency of heavy metals in environment. Conventional treatment technologies for the removal of these toxic heavy metals are not economical and further generate huge quantity of toxic chemical sludge. Biosorption is emerging as a potential alternative to the existing conventional technologies for the removal and/or recovery of metal ions from aqueous solutions. The major advantages of biosorption over conventional treatment methods include: low cost, high efficiency, minimization of chemical or biological sludge, regeneration of biosorbents and possibility of metal recovery. Cellulosic agricultural waste materials are an abundant source for significant metal biosorption. The functional groups present in agricultural waste biomass viz. acetamido, alcoholic, carbonyl, phenolic, amido, amino, sulphydryl groups etc. have affinity for heavy metal ions to form metal complexes or chelates. The mechanism of biosorption process includes chemisorption, complexation, adsorption on surface, diffusion through pores and ion exchange etc. The purpose of this review article is to provide the scattered available information on various aspects of utilization of the agricultural waste materials for heavy metal removal. Agricultural waste material being highly efficient, low cost and renewable source of biomass can be exploited for heavy metal remediation. Further these biosorbents can be modified for better efficiency and multiple reuses to enhance their applicability at industrial scale.",
"title": ""
},
{
"docid": "5b31cdfd19e40a2ee5f1094e33366902",
"text": "Much of the early literature on 'cultural competence' focuses on the 'categorical' or 'multicultural' approach, in which providers learn relevant attitudes, values, beliefs, and behaviors of certain cultural groups. In essence, this involves learning key 'dos and don'ts' for each group. Literature and educational materials of this kind focus on broad ethnic, racial, religious, or national groups, such as 'African American', 'Hispanic', or 'Asian'. The problem with this categorical or 'list of traits' approach to clinical cultural competence is that culture is multidimensional and dynamic. Culture comprises multiple variables, affecting all aspects of experience. Cultural processes frequently differ within the same ethnic or social group because of differences in age cohort, gender, political association, class, religion, ethnicity, and even personality. Culture is therefore a very elusive and nebulous concept, like art. The multicultural approach to cultural competence results in stereotypical thinking rather than clinical competence. A newer, cross cultural approach to culturally competent clinical practice focuses on foundational communication skills, awareness of cross-cutting cultural and social issues, and health beliefs that are present in all cultures. We can think of these as universal human beliefs, needs, and traits. This patient centered approach relies on identifying and negotiating different styles of communication, decision-making preferences, roles of family, sexual and gender issues, and issues of mistrust, prejudice, and racism, among other factors. In the current paper, we describe 'cultural' challenges that arise in the care of four patients from disparate cultures, each of whom has advanced colon cancer that is no longer responding to chemotherapy. We then illustrate how to apply principles of patient centered care to these challenges.",
"title": ""
},
{
"docid": "11ffdc076696536cef886a7ba130f049",
"text": "This work is about recognizing human activities occurring in videos at distinct semantic levels, including individual actions, interactions, and group activities. The recognition is realized using a two-level hierarchy of Long Short-Term Memory (LSTM) networks, forming a feed-forward deep architecture, which can be trained end-to-end. In comparison with existing architectures of LSTMs, we make two key contributions giving the name to our approach as Confidence-Energy Recurrent Network – CERN. First, instead of using the common softmax layer for prediction, we specify a novel energy layer (EL) for estimating the energy of our predictions. Second, rather than finding the common minimum-energy class assignment, which may be numerically unstable under uncertainty, we specify that the EL additionally computes the p-values of the solutions, and in this way estimates the most confident energy minimum. The evaluation on the Collective Activity and Volleyball datasets demonstrates: (i) advantages of our two contributions relative to the common softmax and energy-minimization formulations and (ii) a superior performance relative to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "ea544ffc7eeee772388541d0d01812a7",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "e79df31bd411d7c62d625a047dde61ce",
"text": "The depth resolution achieved by a continuous wave time-of-flight (C-ToF) imaging system is determined by the coding (modulation and demodulation) functions that it uses. Almost all current C-ToF systems use sinusoid or square coding functions, resulting in a limited depth resolution. In this article, we present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. Given a fixed total source power and acquisition time, the new Hamiltonian coding scheme can achieve up to an order of magnitude higher resolution as compared to the current state-of-the-art methods, especially in low signal-to-noise ratio (SNR) settings. We also develop a comprehensive physically-motivated simulator for C-ToF cameras that can be used to evaluate various coding schemes prior to a real hardware implementation. Since most off-the-shelf C-ToF sensors use sinusoid or square functions, we develop a hardware prototype that can implement a wide range of coding functions. Using this prototype and our software simulator, we demonstrate the performance advantages of the proposed Hamiltonian coding functions in a wide range of imaging settings.",
"title": ""
},
{
"docid": "a1af04cc0616533bd47bb660f0eff3cd",
"text": "Separating point clouds into ground and non-ground measurements is an essential step to generate digital terrain models (DTMs) from airborne LiDAR (light detection and ranging) data. However, most filtering algorithms need to carefully set up a number of complicated parameters to achieve high accuracy. In this paper, we present a new filtering method which only needs a few easy-to-set integer and Boolean parameters. Within the proposed approach, a LiDAR point cloud is inverted, and then a rigid cloth is used to cover the inverted surface. By analyzing the interactions between the cloth nodes and the corresponding LiDAR points, the locations of the cloth nodes can be determined to generate an approximation of the ground surface. Finally, the ground points can be extracted from the LiDAR point cloud by comparing the original LiDAR points and the generated surface. Benchmark datasets provided by ISPRS (International Society for Photogrammetry and Remote Sensing) working Group III/3 are used to validate the proposed filtering method, and the experimental results yield an average total error of 4.58%, which is comparable with most of the state-of-the-art filtering algorithms. The proposed easy-to-use filtering method may help the users without much experience to use LiDAR data and related technology in their own applications more easily.",
"title": ""
},
{
"docid": "fda9db396d7c35ba64a7a5453aaa80dc",
"text": "A novel dynamic latched comparator with offset voltage compensation is presented. The proposed comparator uses one phase clock signal for its operation and can drive a larger capacitive load with complementary version of the regenerative output latch stage. As it provides a larger voltage gain up to 22 V/V to the regenerative latch, the inputreferred offset voltage of the latch is reduced and metastability is improved. The proposed comparator is designed using 90 nm PTM technology and 1 V power supply voltage. It demonstrates up to 24.6% less offset voltage and 30.0% less sensitivity of delay to decreasing input voltage difference (17 ps/decade) than the conventional double-tail latched comparator at approximately the same area and power consumption. In addition, with a digitally controlled capacitive offset calibration technique, the offset voltage of the proposed comparator is further reduced from 6.03 to 1.10 mV at 1-sigma at the operating clock frequency of 3 GHz, and it consumes 54 lW/GHz after calibration.",
"title": ""
},
{
"docid": "faac043b0c32bad5a44d52b93e468b78",
"text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.",
"title": ""
},
{
"docid": "88d9c077f588e9e02453bd0ea40cfcae",
"text": "This study explored the prevalence of and motivations behind 'drunkorexia' – restricting food intake prior to drinking alcohol. For both male and female university students (N = 3409), intentionally changing eating behaviour prior to drinking alcohol was common practice (46%). Analyses performed on a targeted sample of women (n = 226) revealed that food restriction prior to alcohol use was associated with greater symptomology than eating more food. Those who restrict eating prior to drinking to avoid weight gain scored higher on measures of disordered eating, whereas those who restrict to get intoxicated faster scored higher on measures of alcohol abuse.",
"title": ""
},
{
"docid": "3b1b829e6d017d574562e901f4963bc4",
"text": "Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm— maximum variance unfolding—for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning.",
"title": ""
},
{
"docid": "e56bd360fe21949d0617c6e1ddafefff",
"text": "This study addresses the problem of identifying the meaning of unknown words or entities in a discourse with respect to the word embedding approaches used in neural language models. We proposed a method for on-the-fly construction and exploitation of word embeddings in both the input and output layers of a neural model by tracking contexts. This extends the dynamic entity representation used in Kobayashi et al. (2016) and incorporates a copy mechanism proposed independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we construct a new task and dataset called Anonymized Language Modeling for evaluating the ability to capture word meanings while reading. Experiments conducted using our novel dataset show that the proposed variant of RNN language model outperformed the baseline model. Furthermore, the experiments also demonstrate that dynamic updates of an output layer help a model predict reappearing entities, whereas those of an input layer are effective to predict words following reappearing entities.",
"title": ""
},
{
"docid": "924e10782437c323b8421b156db50584",
"text": "Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The notion of ontology learning that we propose here includes a number of complementary disciplines that feed on different types of unstructured and semi-structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, and refinement, giving the ontology engineer a wealth of coordinated tools for ontology modelling. Besides of the general architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, KAON Text-To-Onto.",
"title": ""
},
{
"docid": "23eb979ec3e17db2b162b659e296a10e",
"text": "The authors would like to thank the Marketing Science Institute for their generous assistance in funding this research. We would also like to thank Claritas for providing us with data. We are indebted to Vincent Bastien, former CEO of Louis Vuitton, for the time he has spent with us critiquing our framework.",
"title": ""
},
{
"docid": "2e0547228597476a28c6b99b6f927299",
"text": "Several virtual reality (VR) applications for the understanding, assessment and treatment of mental health problems have been developed in the last 10 years. The purpose of this review is to outline the current state of virtual reality research in the treatment of mental health problems. PubMed and PsycINFO were searched for all articles containing the words “virtual reality”. In addition a manual search of the references contained in the papers resulting from this search was conducted and relevant periodicals were searched. Studies reporting the results of treatment utilizing VR in the mental health field and involving at least one patient were identified. More than 50 studies using VR were identified, the majority of which were case studies. Seventeen employed a between groups design: 4 involved patients with fear of flying; 3 involved patients with fear of heights; 3 involved patients with social phobia/public speaking anxiety; 2 involved people with spider phobia; 2 involved patients with agoraphobia; 2 involved patients with body image disturbance and 1 involved obese patients. There are both advantages in terms of delivery and disadvantages in terms of side effects to using VR. Although virtual reality based therapy appears to be superior to no treatment the effectiveness of VR therapy over traditional therapeutic approaches is not supported by the research currently available. There is a lack of good quality research on the effectiveness of VR therapy. Before clinicians will be able to make effective use of this emerging technology greater emphasis must be placed on controlled trials with clinically identified populations.",
"title": ""
},
{
"docid": "9a5fd2fa3ec899fad8969de102f55379",
"text": "The development of Machine Translation system for ancient language such as Sanskrit language is much more fascinating and challenging task. Due to lack of linguistic community, there are no wide work accomplish in Sanskrit translation while it is mother language by virtue of its importance in cultural heritage of India. In this paper, we integrate a traditional rule based approach of machine translation with Artificial Neural Network (ANN) model which translates an English sentence (source language sentence) into equivalent Sanskrit sentence (target language sentence). We use feed forward ANN for the selection of Sanskrit word like noun, verb, object, adjective etc from English to Sanskrit User Data Vector (UDV). Due to morphological richness of Sanskrit language, this system makes limited use of syntax and uses only morphological markings to identify Subject, Object, Verb, Preposition, Adjective, Adverb and as well as Conjunctive sentences also. It uses limited parsing for part of speech (POS) tagging, identification of clause, its Subject, Object, Verb etc and Gender-Number-Person (GNP) of noun, adjective and object. This system represents the translation between the SVO and SOV classes of languages. This system gives translation result in GUI form and handles English sentences of different classes.",
"title": ""
},
{
"docid": "f1635d5cf51f0a4d70090f5f672de605",
"text": "Enrichment analysis is a popular method for analyzing gene sets generated by genome-wide experiments. Here we present a significant update to one of the tools in this domain called Enrichr. Enrichr currently contains a large collection of diverse gene set libraries available for analysis and download. In total, Enrichr currently contains 180 184 annotated gene sets from 102 gene set libraries. New features have been added to Enrichr including the ability to submit fuzzy sets, upload BED files, improved application programming interface and visualization of the results as clustergrams. Overall, Enrichr is a comprehensive resource for curated gene sets and a search engine that accumulates biological knowledge for further biological discoveries. Enrichr is freely available at: http://amp.pharm.mssm.edu/Enrichr.",
"title": ""
},
{
"docid": "18acdeb37257f2f7f10a5baa8957a257",
"text": "Time-memory trade-off methods provide means to invert one way functions. Such attacks offer a flexible trade-off between running time and memory cost in accordance to users' computational resources. In particular, they can be applied to hash values of passwords in order to recover the plaintext. They were introduced by Martin Hellman and later improved by Philippe Oechslin with the introduction of rainbow tables. The drawbacks of rainbow tables are that they do not always guarantee a successful inversion. We address this issue in this paper. In the context of passwords, it is pertinent that frequently used passwords are incorporated in the rainbow table. It has been known that up to 4 given passwords can be incorporated into a chain but it is an open problem if more than 4 passwords can be achieved. We solve this problem by showing that it is possible to incorporate more of such passwords along a chain. Furthermore, we prove that this results in faster recovery of such passwords during the online running phase as opposed to assigning them at the beginning of the chains. For large chain lengths, the average improvement translates to 3 times the speed increase during the online recovery time.",
"title": ""
},
{
"docid": "09c27f3f680188637177e7f2913c1ef7",
"text": "The implementation of a monitoring and control system for the induction motor based on programmable logic controller (PLC) technology is described. Also, the implementation of the hardware and software for speed control and protection with the results obtained from tests on induction motor performance is provided. The PLC correlates the operational parameters to the speed requested by the user and monitors the system during normal operation and under trip conditions. Tests of the induction motor system driven by inverter and controlled by PLC prove a higher accuracy in speed regulation as compared to a conventional V/f control system. The efficiency of PLC control is increased at high speeds up to 95% of the synchronous speed. Thus, PLC proves themselves as a very versatile and effective tool in industrial control of electric drives.",
"title": ""
},
{
"docid": "45a8fea3e8d780c65811cee79082237f",
"text": "Pedestrian dead reckoning, especially on smart-phones, is likely to play an increasingly important role in indoor tracking and navigation, due to its low cost and ability to work without any additional infrastructure. A challenge however, is that positioning, both in terms of step detection and heading estimation, must be accurate and reliable, even when the use of the device is so varied in terms of placement (e.g. handheld or in a pocket) or orientation (e.g holding the device in either portrait or landscape mode). Furthermore, the placement can vary over time as a user performs different tasks, such as making a call or carrying the device in a bag. A second challenge is to be able to distinguish between a true step and other periodic motion such as swinging an arm or tapping when the placement and orientation of the device is unknown. If this is not done correctly, then the PDR system typically overestimates the number of steps taken, leading to a significant long term error. We present a fresh approach, robust PDR (R-PDR), based on exploiting how bipedal motion impacts acquired sensor waveforms. Rather than attempting to recognize different placements through sensor data, we instead simply determine whether the motion of one or both legs impact the measurements. In addition, we formulate a set of techniques to accurately estimate the device orientation, which allows us to very accurately (typically over 99%) reject false positives. We demonstrate that regardless of device placement, we are able to detect the number of steps taken with >99.4% accuracy. R-PDR thus addresses the two main limitations facing existing PDR techniques.",
"title": ""
},
{
"docid": "53d1ddf4809ab735aa61f4059a1a38b1",
"text": "In this paper we present a wearable Haptic Feedback Device to convey intuitive motion direction to the user through haptic feedback based on vibrotactile illusions. Vibrotactile illusions occur on the skin when two or more vibrotactile actuators in proximity are actuated in coordinated sequence, causing the user to feel combined sensations, instead of separate ones. By combining these illusions we can produce various sensation patterns that are discernible by the user, thus allowing to convey different information with each pattern. A method to provide information about direction through vibrotactile illusions is introduced on this paper. This method uses a grid of vibrotactile actuators around the arm actuated in coordination. The sensation felt on the skin is consistent with the desired direction of motion, so the desired motion can be intuitively understood. We show that the users can recognize the conveyed direction, and implemented a proof of concept of the proposed method to guide users' elbow flexion/extension motion.",
"title": ""
}
] |
scidocsrr
|
0ea7a1202a3a2df640f7dbf9a0451d2d
|
Exploitation and exploration in a performance based contextual advertising system
|
[
{
"docid": "341b0588f323d199275e89d8c33d6b47",
"text": "We propose novel multi-armed bandit (explore/exploit) schemes to maximize total clicks on a content module published regularly on Yahoo! Intuitively, one can ``explore'' each candidate item by displaying it to a small fraction of user visits to estimate the item's click-through rate (CTR), and then ``exploit'' high CTR items in order to maximize clicks. While bandit methods that seek to find the optimal trade-off between explore and exploit have been studied for decades, existing solutions are not satisfactory for web content publishing applications where dynamic set of items with short lifetimes, delayed feedback and non-stationary reward (CTR) distributions are typical. In this paper, we develop a Bayesian solution and extend several existing schemes to our setting. Through extensive evaluation with nine bandit schemes, we show that our Bayesian solution is uniformly better in several scenarios. We also study the empirical characteristics of our schemes and provide useful insights on the strengths and weaknesses of each. Finally, we validate our results with a ``side-by-side'' comparison of schemes through live experiments conducted on a random sample of real user visits to Yahoo!",
"title": ""
},
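The explore/exploit passage above estimates item click-through rates while still exploiting the best-looking items. As a hedged illustration only, the sketch below uses a generic Beta-Bernoulli Thompson sampling baseline rather than the paper's Bayesian scheme; the item names and true CTRs are invented.

```python
import random

class ThompsonCTR:
    # Beta-Bernoulli Thompson sampling over a set of content items:
    # a generic explore/exploit baseline, not the scheme from the abstract.
    def __init__(self, item_ids):
        self.stats = {i: [1.0, 1.0] for i in item_ids}   # [alpha, beta] per item

    def choose(self):
        # Sample a plausible CTR for every item and display the best draw.
        return max(self.stats, key=lambda i: random.betavariate(*self.stats[i]))

    def update(self, item, clicked):
        a, b = self.stats[item]
        self.stats[item] = [a + clicked, b + (1 - clicked)]

bandit = ThompsonCTR(["story_a", "story_b", "story_c"])
true_ctr = {"story_a": 0.05, "story_b": 0.02, "story_c": 0.08}   # invented ground truth
for _ in range(10000):
    item = bandit.choose()
    bandit.update(item, int(random.random() < true_ctr[item]))
print(bandit.stats)   # story_c should have accumulated the most impressions
```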
{
"docid": "cce513c48e630ab3f072f334d00b67dc",
"text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press",
"title": ""
}
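To make the GD/EG contrast above concrete, the sketch below implements both updates for a linear model: GD subtracts the gradient of the squared error, while EG multiplies each weight by an exponential of the corresponding gradient component and renormalises. The learning rates, dimensionality, and single-relevant-feature target are arbitrary choices for illustration, not the paper's experimental setup.

```python
import numpy as np

def gd_update(w, x, y, eta):
    # Gradient descent on squared loss: w <- w - eta * (w.x - y) * x
    return w - eta * (w @ x - y) * x

def eg_update(w, x, y, eta):
    # Exponentiated gradient: scale each weight by exp(-eta * gradient component),
    # then renormalise so the weights remain a probability vector.
    g = (w @ x - y) * x
    w_new = w * np.exp(-eta * g)
    return w_new / w_new.sum()

rng = np.random.default_rng(0)
d = 20
target = np.zeros(d)
target[0] = 1.0                    # only one input component is relevant
w_gd = np.full(d, 1.0 / d)
w_eg = np.full(d, 1.0 / d)
for _ in range(500):
    x = rng.normal(size=d)
    y = target @ x
    w_gd = gd_update(w_gd, x, y, eta=0.01)
    w_eg = eg_update(w_eg, x, y, eta=0.1)
print("GD weight on the relevant feature:", round(float(w_gd[0]), 3))
print("EG weight on the relevant feature:", round(float(w_eg[0]), 3))
```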
] |
[
{
"docid": "7d08501a0123d773f9fe755f1612e57e",
"text": "Language-music comparative studies have highlighted the potential for shared resources or neural overlap in auditory short-term memory. However, there is a lack of behavioral methodologies for comparing verbal and musical serial recall. We developed a visual grid response that allowed both musicians and nonmusicians to perform serial recall of letter and tone sequences. The new method was used to compare the phonological similarity effect with the impact of an operationalized musical equivalent-pitch proximity. Over the course of three experiments, we found that short-term memory for tones had several similarities to verbal memory, including limited capacity and a significant effect of pitch proximity in nonmusicians. Despite being vulnerable to phonological similarity when recalling letters, however, musicians showed no effect of pitch proximity, a result that we suggest might reflect strategy differences. Overall, the findings support a limited degree of correspondence in the way that verbal and musical sounds are processed in auditory short-term memory.",
"title": ""
},
{
"docid": "5b3ca1cc607d2e8f0394371f30d9e83a",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "d81cadc01ab599fd34d2ccfa8377de51",
"text": "1. The Situation in Cognition The situated cognition movement in the cognitive sciences, like those sciences themselves, is a loose-knit family of approaches to understanding the mind and cognition. While it has both philosophical and psychological antecedents in thought stretching back over the last century (see Gallagher, this volume, Clancey, this volume,), it has developed primarily since the late 1970s as an alternative to, or a modification of, the then predominant paradigms for exploring the mind within the cognitive sciences. For this reason it has been common to characterize situated cognition in terms of what it is not, a cluster of \"anti-isms\". Situated cognition has thus been described as opposed to Platonism, Cartesianism, individualism, representationalism, and even",
"title": ""
},
{
"docid": "aeba4012971d339a9a953a7b86f57eb8",
"text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"title": ""
},
{
"docid": "f4b270b09649ba05dd22d681a2e3e3b7",
"text": "Advanced analytical techniques are gaining popularity in addressing complex classification type decision problems in many fields including healthcare and medicine. In this exemplary study, using digitized signal data, we developed predictive models employing three machine learning methods to diagnose an asthma patient based solely on the sounds acquired from the chest of the patient in a clinical laboratory. Although, the performances varied slightly, ensemble models (i.e., Random Forest and AdaBoost combined with Random Forest) achieved about 90% accuracy on predicting asthma patients, compared to artificial neural networks models that achieved about 80% predictive accuracy. Our results show that noninvasive, computerized lung sound analysis that rely on low-cost microphones and an embedded real-time microprocessor system would help physicians to make faster and better diagnostic decisions, especially in situations where x-ray and CT-scans are not reachable or not available. This study is a testament to the improving capabilities of analytic techniques in support of better decision making, especially in situations constraint by limited resources.",
"title": ""
},
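A minimal sketch of the model comparison described above, using scikit-learn; the synthetic features stand in for the study's digitized lung-sound features, and the hyperparameters are illustrative guesses rather than the authors' settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the digitized lung-sound features used in the study.
X, y = make_classification(n_samples=400, n_features=40, n_informative=10, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "adaboost_rf": AdaBoostClassifier(
        RandomForestClassifier(n_estimators=50, random_state=0),
        n_estimators=10, random_state=0),
    "ann": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```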
{
"docid": "5eb65797b9b5e90d5aa3968d5274ae72",
"text": "Blockchains enable tamper-proof, ordered logging for transactional data in a decentralized manner over open-access, overlay peer-to-peer networks. In this paper, we propose a decentralized framework of proactive caching in a hierarchical wireless network based on blockchains. We employ the blockchain-based smart contracts to construct an autonomous content caching market. In the market, the cache helpers are able to autonomously adapt their caching strategies according to the market statistics obtained from the blockchain, and the truthfulness of trustless nodes are financially enforced by smart contract terms. Further, we propose an incentive-compatible consensus mechanism based on proof-of-stake to financially encourage the cache helpers to stay active in service. We model the interaction between the cache helpers and the content providers as a Chinese restaurant game. Based on the theoretical analysis regarding the Nash equilibrium of the game, we propose a decentralized strategy-searching algorithm using sequential best response. The simulation results demonstrate both the efficiency and reliability of the proposed equilibrium searching algorithm.",
"title": ""
},
{
"docid": "e4914b41b7d38ff04b0e5a9b88cf1dc6",
"text": "In this paper, we investigate the secure nearest neighbor (SNN) problem, in which a client issues an encrypted query point E(q) to a cloud service provider and asks for an encrypted data point in E(D) (the encrypted database) that is closest to the query point, without allowing the server to learn the plaintexts of the data or the query (and its result). We show that efficient attacks exist for existing SNN methods [21], [15], even though they were claimed to be secure in standard security models (such as indistinguishability under chosen plaintext or ciphertext attacks). We also establish a relationship between the SNN problem and the order-preserving encryption (OPE) problem from the cryptography field [6], [5], and we show that SNN is at least as hard as OPE. Since it is impossible to construct secure OPE schemes in standard security models [6], [5], our results imply that one cannot expect to find the exact (encrypted) nearest neighbor based on only E(q) and E(D). Given this hardness result, we design new SNN methods by asking the server, given only E(q) and E(D), to return a relevant (encrypted) partition E(G) from E(D) (i.e., G ⊆ D), such that that E(G) is guaranteed to contain the answer for the SNN query. Our methods provide customizable tradeoff between efficiency and communication cost, and they are as secure as the encryption scheme E used to encrypt the query and the database, where E can be any well-established encryption schemes.",
"title": ""
},
{
"docid": "4a7a4db8497b0d13c8411100dab1b207",
"text": "A novel and simple resolver-to-dc converter is presented. It is shown that by appropriate processing of the sine and cosine resolver signals, the proposed converter may produce an output voltage proportional to the shaft angle. A dedicated compensation method is applied to produce an almost perfectly linear output. This enables determination of the angle with reasonable accuracy without a processor and/or a look-up table. The tests carried out under various operating conditions are satisfactory and in good agreement with theory. This paper gives the theoretical analysis, the computer simulation, the full circuit details, and experimental results of the proposed scheme.",
"title": ""
},
{
"docid": "f9b99ad1fcf9963cca29e7ddfca20428",
"text": "Nested Named Entities (nested NEs), one containing another, are commonly seen in biomedical text, e.g., accounting for 16.7% of all named entities in GENIA corpus. While many works have been done in recognizing non-nested NEs, nested NEs have been largely neglected. In this work, we treat the task as a binary classification problem and solve it using Support Vector Machines. For each token in nested NEs, we use two schemes to set its class label: labeling as the outmost entity or the inner entity. Our preliminary results show that while the outmost labeling tends to work better in recognizing the outmost entities, the inner labeling recognizes the inner NEs better. This result should be useful for recognition of nested NEs.",
"title": ""
},
{
"docid": "90125582272e3f16a34d5d0c885f573a",
"text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.",
"title": ""
},
{
"docid": "a4ddf6920fa7a5c09fa0f62f9b96a2e3",
"text": "In this paper, a class of single-phase Z-source (ZS) ac–ac converters is proposed with high-frequency transformer (HFT) isolation. The proposed HFT isolated (HFTI) ZS ac–ac converters possess all the features of their nonisolated counterparts, such as providing wide range of buck-boost output voltage with reversing or maintaining the phase angle, suppressing the in-rush and harmonic currents, and improved reliability. In addition, the proposed converters incorporate HFT for electrical isolation and safety, and therefore can save an external bulky line frequency transformer, for applications such as dynamic voltage restorers, etc. The proposed HFTI ZS converters are obtained from conventional (nonisolated) ZS ac–ac converters by adding only one extra bidirectional switch, and replacing two inductors with an HFT, thus saving one magnetic core. The switching signals for buck and boost modes are presented with safe-commutation strategy to remove the switch voltage spikes. A quasi-ZS-based HFTI ac–ac is used to discuss the operation principle and circuit analysis of the proposed class of HFTI ZS ac–ac converters. Various ZS-based HFTI proposed ac–ac converters are also presented thereafter. Moreover, a laboratory prototype of the proposed converter is constructed and experiments are conducted to produce output voltage of 110 Vrms / 60 Hz, which verify the operation of the proposed converters.",
"title": ""
},
{
"docid": "7e6573b3e080481949a2b45eb6c68a42",
"text": "We study the problem of minimizing the sum of a smooth convex function and a convex blockseparable regularizer and propose a new randomized coordinate descent method, which we call ALPHA. Our method at every iteration updates a random subset of coordinates, following an arbitrary distribution. No coordinate descent methods capable to handle an arbitrary sampling have been studied in the literature before for this problem. ALPHA is a remarkably flexible algorithm: in special cases, it reduces to deterministic and randomized methods such as gradient descent, coordinate descent, parallel coordinate descent and distributed coordinate descent – both in nonaccelerated and accelerated variants. The variants with arbitrary (or importance) sampling are new. We provide a complexity analysis of ALPHA, from which we deduce as a direct corollary complexity bounds for its many variants, all matching or improving best known bounds.",
"title": ""
},
{
"docid": "d68bf9cd549c6d3fe067f343bd38c439",
"text": "Most multiobjective evolutionary algorithms are based on Pareto dominance for measuring the quality of solutions during their search, among them NSGA-II is well-known. A very few algorithms are based on decomposition and implicitly or explicitly try to optimize aggregations of the objectives. MOEA/D is a very recent such an algorithm. One of the major advantages of MOEA/D is that it is very easy to use well-developed single optimization local search within it. This paper compares the performance of MOEA/D and NSGA-II on the multiobjective travelling salesman problem and studies the effect of local search on the performance of MOEA/D.",
"title": ""
},
{
"docid": "5190176eb4e743b8ac356fa97c06aa7c",
"text": "This paper presents a flexible control technique of active and reactive power for single phase grid-tied photovoltaic inverter, supplied from PV array, based on quarter cycle phase delay methodology to generate the fictitious quadrature signal in order to emulate the PQ theory of three-phase systems. The investigated scheme is characterized by independent control of active and reactive power owing to the independent PQ reference signals that can satisfy the features and new functions of modern grid-tied inverters fed from renewable energy resources. The study is conducted on 10 kW PV array using PSIM program. The obtained results demonstrate the high capability to provide quick and accurate control of the injected active and reactive power to the main grid. The harmonic spectra of power components and the resultant grid current indicate that the single-phase PQ control scheme guarantees and satisfies the power quality requirements and constrains, which permits application of such scheme on a wide scale integrated with other PV inverters where independent PQ reference signals would be generated locally by energy management unit in case of microgrid, or from remote data center in case of smart grid.",
"title": ""
},
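The quarter-cycle delay idea above can be shown numerically: delaying the measured voltage and current by T/4 yields fictitious quadrature (beta) components, from which single-phase instantaneous active and reactive power follow as in three-phase PQ theory. The sketch below is only that computation; the grid frequency, amplitudes, phase lag, and sampling rate are assumed values, and the inverter control loop itself is not modelled.

```python
import numpy as np

F_GRID = 60.0                       # assumed grid frequency, Hz
FS = 12000                          # assumed sampling rate, Hz
QUARTER = int(FS / F_GRID / 4)      # samples in a quarter of a grid cycle

t = np.arange(0, 0.2, 1 / FS)
v = 325 * np.sin(2 * np.pi * F_GRID * t)                # grid voltage (alpha)
i = 20 * np.sin(2 * np.pi * F_GRID * t - np.pi / 6)     # lagging current (alpha)

# Fictitious quadrature (beta) signals: the same waveforms delayed by T/4.
v_b = np.roll(v, QUARTER)
i_b = np.roll(i, QUARTER)

# Single-phase p-q theory built on the alpha/beta pairs.
p = 0.5 * (v * i + v_b * i_b)       # instantaneous active power
q = 0.5 * (v_b * i - v * i_b)       # instantaneous reactive power

# Skip the first quarter cycle, where the rolled samples wrap around.
print("P ~", round(float(p[QUARTER:].mean()), 1), "W")
print("Q ~", round(float(q[QUARTER:].mean()), 1), "var")
```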
{
"docid": "8c9155ce72bc3ba11bd4680d46ad69b5",
"text": "Many theorists assume that the cognitive system is composed of a collection of encapsulated processing components or modules, each dedicated to performing a particular cognitive function. On this view, selective impairments of cognitive tasks following brain damage, as evidenced by double dissociations, are naturally interpreted in terms of the loss of particular processing components. By contrast, the current investigation examines in detail a double dissociation between concrete and abstract work reading after damage to a connectionist network that pronounces words via meaning and yet has no separable components (Plaut & Shallice, 1993). The functional specialization in the network that gives rise to the double dissociation is not transparently related to the network's structure, as modular theories assume. Furthermore, a consideration of the distribution of effects across quantitatively equivalent individual lesions in the network raises specific concerns about the interpretation of single-case studies. The findings underscore the necessity of relating neuropsychological data to cognitive theories in the context of specific computational assumptions about how the cognitive system operates normally and after damage.",
"title": ""
},
{
"docid": "aafaffb28d171e2cddadbd9b65539e21",
"text": "LCD column drivers have traditionally used nonlinear R-string style digital-to-analog converters (DAC). This paper describes an architecture that uses 840 linear charge redistribution 10/12-bit DACs to implement a 420-output column driver. Each DAC performs its conversion in less than 15 /spl mu/s and draws less than 5 /spl mu/A. This architecture allows 10-bit independent color control in a 17 mm/sup 2/ die for the LCD television market.",
"title": ""
},
{
"docid": "480c066863a97bde11b0acc32b427f4e",
"text": "When computer security incidents occur, it's critical that organizations be able to handle them in a timely manner. The speed with which an organization can recognize, analyze, and respond to an incident will affect the damage and lower recovery costs. Organized incident management requires defined, repeatable processes and the ability to learn from incidents that threaten the confidentiality, availability, and integrity of critical systems and data. Some organizations assign responsibility for incident management to a defined group of people or a designated unit, such as a computer security incident response team. This article looks at the development, purpose, and evolution of such specialized teams; the evolving nature of attacks they must deal with; and methods to evaluate the performance of such teams as well as the emergence of information sharing as a core service.",
"title": ""
},
{
"docid": "026a49cd48c7100b5b9f8f7197e71a1f",
"text": "In-wheel motors have tremendous potential to create an advanced all-wheel drive system. In this paper, a novel power assisted steering technology and its torque distribution control system were proposed, due to the independent driving characteristics of four-wheel-independent-drive electric vehicle. The first part of this study deals with the full description of the basic theory of differential drive assisted steering system. After that, 4-wheel-drive (4WD) electric vehicle dynamics model as well as driver model were built. Furthermore, the differential drive assisted steering control system, as well as the drive torque distribution and compensation control system, was also presented. Therein, the proportional–integral (PI) feedback control loop was employed to track the reference steering effort by controlling the drive torque distribution between the two sides wheels of the front axle. After that, the direct yaw moment control subsystem and the traction control subsystem were introduced, which were both employed to make the differential drive assisted steering work as well as wished. Finally, the open-loop and closed-loop simulation for validation were performed. The results verified that, the proposed differential drive torque assisted steering system cannot only reduce the steering efforts significantly, as well as ensure a stiffer steering feel at high vehicle speed and improve the returnability of the vehicle, but also keep the lateral stability of the vehicle. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b05a72a6fa5e381b341ba8c9107a690c",
"text": "Acknowledgments are widely used in scientific articles to express gratitude and credit collaborators. Despite suggestions that indexing acknowledgments automatically will give interesting insights, there is currently, to the best of our knowledge, no such system to track acknowledgments and index them. In this paper we introduce AckSeer, a search engine and a repository for automatically extracted acknowledgments in digital libraries. AckSeer is a fully automated system that scans items in digital libraries including conference papers, journals, and books extracting acknowledgment sections and identifying acknowledged entities mentioned within. We describe the architecture of AckSeer and discuss the extraction algorithms that achieve a F1 measure above 83%. We use multiple Named Entity Recognition (NER) tools and propose a method for merging the outcome from different recognizers. The resulting entities are stored in a database then made searchable by adding them to the AckSeer index along with the metadata of the containing paper/book.\n We build AckSeer on top of the documents in CiteSeerx digital library yielding more than 500,000 acknowledgments and more than 4 million mentioned entities.",
"title": ""
},
{
"docid": "2b0969dd0089bd2a2054957477ea4ce1",
"text": "A self-signaling action is an action chosen partly to secure good news about one’s traits or abilities, even when the action has no causal impact on these traits and abilities. We discuss some of the odd things that happen when self-signaling is introduced into an otherwise rational conception of action. We employ a signaling game perspective in which the diagnostic signals are an endogenous part of the equilibrium choice. We are interested (1) in pure self-signaling, separate from any desire to be regarded well by others, and (2) purely diagnostic motivation, that is, caring about what an action might reveal about a trait even when that action has no causal impact on it. When diagnostic motivation is strong, the person’s actions exhibit a rigidity characteristic of personal rules. Our model also predicts that a boost in self-image positively affects actions even though it leaves true preferences unchanged — we call this a “moral placebo effect.” 1 The chapter draws on (co-authored) Chapter 3 of Bodner’s doctoral dissertation (Bodner, 1995) and an unpublished MIT working paper (Bodner and Prelec, 1997). The authors thank Bodner’s dissertation advisors France Leclerc and Richard Thaler, workshop discussants Thomas Schelling, Russell Winer, and Mathias Dewatripont, and George Ainslie, Michael Bratman, Juan Carillo, Itzakh Gilboa, George Loewenstein, Al Mela, Matthew Rabin, Duncan Simester and Florian Zettelmeyer for comments on these ideas (with the usual disclaimer). We are grateful to Birger Wernerfelt for drawing attention to Bernheim's work on social conformity. Author addresses: Bodner – Director, Learning Innovations, 13\\4 Shimshon St., Jerusalem, 93501, Israel, learning@netvision.net.il; Prelec — E56-320, MIT, Sloan School, 38 Memorial Drive, Cambridge, MA 02139, dprelec@mit.edu. 1 Psychological evidence When we make a choice we reveal something of our inner traits or dispositions, not only to others, but also to ourselves. After the fact, this can be a source of pleasure or pain, depending on whether we were impressed or disappointed by our actions. Before the fact, the anticipation of future pride or remorse can influence what we choose to do. In a previous paper (Bodner and Prelec, 1997), we described how the model of a utility maximizing individual could be expanded to include diagnostic utility as a separate motive for action. We review the basic elements of that proposal here. The inspiration comes directly from signaling games in which actions of one person provide an informative signal to others, which in turn affects esteem (Bernheim, 1994). Here, however, actions provide a signal to ourselves, that is, actions are selfsignaling. For example, a person who takes the daily jog in spite of the rain may see that as a gratifying signal of willpower, dedication, or future well being. For someone uncertain about where he or she stands with respect to these dispositions, each new choice can provide a bit of good or bad \"news.” We incorporate the value of such \"news\" into the person's utility function. The notion that a person may draw inferences from an action he enacted partially in order to gain that inference has been posed as a philosophical paradox (e.g. Campbell and Sawden, 1985; Elster, 1985, 1989). A key problem is the following: Suppose that the disposition in question is altruism, and a person interprets a 25¢ donation to a panhandler as evidence of altruism. 
If the boost in self-esteem makes it worth giving the quarter even when there is no concern for the poor, than clearly, such a donation is not valid evidence of altruism. Logically, giving is valid evidence of high altruism only if a person with low altruism would not have given the quarter. This reasoning motivates our equilibrium approach, in which inferences from actions are an endogenous part of the equilibrium choice. As an empirical matter several studies have demonstrated that diagnostic considerations do indeed affect behavior (Quattrone and Tversky, 1984; Shafir and Tversky, 1992; Bodner, 1995). An elegant experiment by Quattrone and Tversky (1984) both defines the self-signaling phenomenon and demonstrates its existence. Quattrone and Tversky first asked each subject to take a cold pressor pain test in which the subject's arm is submerged in a container of cold water until the subject can no longer tolerate the pain. Subsequently the subject was told that recent medical studies had discovered a certain inborn heart condition, and that people with this condition are “frequently ill, prone to heart-disease, and have shorter-than-average life expectancy.” Subjects were also told that this type could be identified by the effect of exercise on the cold pressor test. Subjects were randomly assigned to one of two conditions in which they were told that the bad type of heart was associated with either increases or with decreases in tolerance to the cold water after exercise. Subjects then repeated the cold pressor test, after riding an Exercycle for one minute. As predicted, the vast majority of subjects showed changes in tolerance on the second cold pressor trial in the direction correlated of “good news”—if told that decreased tolerance is diagnostic of a bad heart they endured the near-freezing water longer (and vice versa). The result shows that people are willing to bear painful consequences for a behavior that is a signal, though not a cause, of a medical diagnosis. An experiment by Shafir and Tversky (1992) on \"Newcomb's paradox\" reinforces the same point. In the philosophical version of the paradox, a person is (hypothetically) presented with two boxes, A and B. Box A contains either nothing or some large amount of money deposited by an \"omniscient being.\" Box B contains a small amount of money for sure. The decision-maker doesn’t know what Box A contains choice, and has to choose whether to take the contents of that box (A) or of both boxes (A+B). What makes the problem a paradox is that the person is asked to believe that the omniscient being has already predicted her choice, and on that basis has already either \"punished\" a greedy choice of (A+B) with no deposit in A or \"rewarded\" a choice of (A) with a large deposit. The dominance principle argues in favor of choosing both boxes, because the deposits are fixed at the moment of choice. This is the philosophical statement of the problem. In the actual experiment, Shafir and Tversky presented a variant of Newcomb’s problem at the end of another, longer experiment, in which subjects repeatedly played a Prisoner’s Dilemma game against (virtual) opponents via computer terminals. After finishing these games, a final “bonus” problem appeared, with the two Newcomb boxes, and subjects had to choose whether to take money from one box or from both boxes. 
The experimental cover story did not mention an omniscient being but instead informed the subjects that \"a program developed at MIT recently was applied during the entire session [of Prisoner’s Dilemma choices] to analyze the pattern of your preference.” Ostensibly, this mighty program could predict choices, one or two boxes, with 85% accuracy, and, of course, if the program predicted a choice of both boxes it would then put nothing in Box A. Although it was evident that the money amounts were already set at the moment of choice, most experimental subjects opted for the single box. It is “as if” they believed that by declining to take the money in Box B, they could change the amount of money already deposited in box A. Although these are relatively recent experiments, their results are consistent with a long stream of psychological research, going back at least to the James-Lange theory of emotions which claimed that people infer their own states from behavior (e.g., they feel afraid if they see themselves running). The notion that people adopt the perspective of an outside observer when interpreting their own actions has been extensively explored in the research on self-perception (Bem, 1972). In a similar vein, there is an extensive literature confirming the existence of “self-handicapping” strategies, where a person might get too little sleep or under-prepare for an examination. In such a case, a successful performance could be attributed to ability while unsuccessful performance could be externalized as due to the lack of proper preparation (e.g. Berglas and Jones, 1978; Berglas and Baumeister, 1993). This broader context of psychological research suggests that we should view the results of Quattrone and Tversky, and Shafir and Tversky not as mere curiosities, applying to only contrived experimental situations, but instead as evidence of a general motivational “short circuit.” Motivation does not require causality, even when the lack of causality is utterly transparent. If anything, these experiments probably underestimate the impact of diagnosticity in realistic decisions, where the absence of causal links between actions and dispositions is less evident. Formally, our model distinguishes between outcome utility — the utility of the anticipated causal consequences of choice — and diagnostic utility — the value of the adjusted estimate of one’s disposition, adjusted in light of the choice. Individuals act so as to maximize some combination of the two sources of utility, and (in one version of the model) make correct inferences about what their choices imply about their dispositions. When diagnostic utility is sufficiently important, the individual chooses the same action independent of disposition. We interpret this as a personal rule. We describe other ways in which the behavior of self-signaling individuals is qualitatively different from that of standard economic agents. First, a self-signaling person will be more likely to reveal discrepancies between resolutions and actions when resolutions pertain to actions that are contingent or delayed. Thus she might honestly commit to do some worthy action if the circumstances requiring t",
"title": ""
}
] |
scidocsrr
|
044423195a1a39eb794ddbb010b857d7
|
Goal-Driven Conceptual Blending: A Computational Approach for Creativity
|
[
{
"docid": "c5f6a559d8361ad509ec10bbb6c3cc9b",
"text": "In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques.",
"title": ""
}
] |
[
{
"docid": "227786365219fe1efab6414bae0d8cdb",
"text": "Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open.\n We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function.\n Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.",
"title": ""
},
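A rough sketch of the walk component described above: edge strengths come from a parameterised function of edge features and bias a random walk with restart from the source node. The logistic strength function, toy graph, edge features, and theta values are assumptions for illustration; the supervised training of theta, which is the core contribution of the paper, is omitted.

```python
import numpy as np

def edge_strength(features, theta):
    # Logistic edge-strength function a_uv = sigma(theta . psi_uv); in the paper
    # theta is learned so that future-link endpoints rank highly in the walk.
    return 1.0 / (1.0 + np.exp(-np.dot(features, theta)))

def strength_biased_walk(adj_features, theta, source, alpha=0.15, iters=100):
    # Random walk with restart whose transition probabilities follow the edge
    # strengths. adj_features[u][v] is the feature vector of edge (u, v).
    nodes = sorted(adj_features)
    idx = {u: k for k, u in enumerate(nodes)}
    P = np.zeros((len(nodes), len(nodes)))
    for u, nbrs in adj_features.items():
        strengths = {v: edge_strength(np.asarray(f, float), theta) for v, f in nbrs.items()}
        total = sum(strengths.values())
        for v, s in strengths.items():
            P[idx[u], idx[v]] = s / total
    restart = np.zeros(len(nodes))
    restart[idx[source]] = 1.0
    p = restart.copy()
    for _ in range(iters):
        p = alpha * restart + (1 - alpha) * (p @ P)
    return dict(zip(nodes, p.round(3)))

# Toy graph; each edge feature is [common friends, interaction count] (invented).
graph = {
    "a": {"b": [3, 5], "c": [1, 0]},
    "b": {"a": [3, 5], "c": [2, 1]},
    "c": {"a": [1, 0], "b": [2, 1]},
}
theta = np.array([0.8, 0.3])        # stand-in for learned parameters
print(strength_biased_walk(graph, theta, source="a"))
```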
{
"docid": "d07f3937b0500c63fea93db8f0ca33e2",
"text": "Style is a familiar category for the analysis of art. It is less so in the history of anatomical illustration. The great Renaissance and Baroque picture books of anatomy illustrated with stylish woodcuts and engravings, such as those by Charles Estienne, Andreas Vesalius and Govard Bidloo, showed figures in dramatic action in keeping with philosophical and theological ideas about human nature. Parallels can be found in paintings of the period, such as those by Titian, Michelangelo and Hans Baldung Grien. The anatomists also claimed to portray the body in an objective manner, and showed themselves as heroes of the discovery of human knowledge. Rembrandt's painting of Dr Nicholas Tulp is the best-known image of the anatomist as hero. The British empirical tradition in the 18th century saw William Cheselden and William Hunter working with techniques of representation that were intended to guarantee detailed realism. The ambition to portray forms life-size led to massive volumes, such as those by Antonio Mascagni. John Bell, the Scottish anatomist, criticized the size and pretensions of the earlier books and argued for a plain style adapted to the needs of teaching and surgery. Henry Gray's famous Anatomy of 1858, illustrated by Henry Vandyke Carter, aspired to a simple descriptive mode of functional representation that avoided stylishness, resulting in a style of its own. Successive editions of Gray progressively saw the replacement of Gray's method and of all his illustrations. The 150th anniversary edition, edited by Susan Standring, radically re-thinks the role of Gray's book within the teaching of medicine.",
"title": ""
},
{
"docid": "e6e3a5499991b2bbbcd5d4c95ae5c111",
"text": "Compelling evidence from many animal taxa indicates that male genitalia are often under postcopulatory sexual selection for characteristics that increase a male's relative fertilization success. There could, however, also be direct precopulatory female mate choice based on male genital traits. Before clothing, the nonretractable human penis would have been conspicuous to potential mates. This observation has generated suggestions that human penis size partly evolved because of female choice. Here we show, based upon female assessment of digitally projected life-size, computer-generated images, that penis size interacts with body shape and height to determine male sexual attractiveness. Positive linear selection was detected for penis size, but the marginal increase in attractiveness eventually declined with greater penis size (i.e., quadratic selection). Penis size had a stronger effect on attractiveness in taller men than in shorter men. There was a similar increase in the positive effect of penis size on attractiveness with a more masculine body shape (i.e., greater shoulder-to-hip ratio). Surprisingly, larger penis size and greater height had almost equivalent positive effects on male attractiveness. Our results support the hypothesis that female mate choice could have driven the evolution of larger penises in humans. More broadly, our results show that precopulatory sexual selection can play a role in the evolution of genital traits.",
"title": ""
},
{
"docid": "009a7247ef27758f6c303cea8108dae1",
"text": "We describe a method for automatic generation of a learning path for education or selfeducation. As a knowledge base, our method uses the semantic structure view from Wikipedia, leveraging on its broad variety of covered concepts. We evaluate our results by comparing them with the learning paths suggested by a group of teachers. Our algorithm is a useful tool for instructional design process.",
"title": ""
},
{
"docid": "cc0687b22e2ba514a2eef5a7aa88963a",
"text": "In this paper, we face the problem of phonetic segmentation under the hierarchical clustering framework. We extend the framework with an unsupervised segmentation algorithm based on a divisive clustering technique and compare both approaches: agglomerative nesting (Bottom-up) against divisive analysis (Top-down). As both approaches require prior knowledge of the number of segments to be estimated, we present a stopping criterion in order to make these algorithms become standalone. This criterion provides an estimation of the underlying number of segments inside the speech acoustic data. The evaluation of both approaches using the stopping criterion reveals good compromise between boundary estimation (Hit rate) and number of segments estimation (over-under segmentation).",
"title": ""
},
{
"docid": "b1e3fe6f24823a9e0dde74f6393d1348",
"text": "The dynamic tree is an abstract data type that allows the maintenance of a collection of trees subject to joining by adding edges (linking) and splitting by deleting edges (cutting), while at the same time allowing reporting of certain combinations of vertex or edge values. For many applications of dynamic trees, values must be combined along paths. For other applications, values must be combined over entire trees. For the latter situation, we show that an idea used originally in parallel graph algorithms, to represent trees by Euler tours, leads to a simple implementation with a time of O(log n) per tree operation, where n is the number of tree vertices. We apply this representation to the implementation of two versions of the network simplex algorithm, resulting in a time of O(log n) per pivot, where n is the number of vertices in the problem network.",
"title": ""
},
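For the Euler-tour idea mentioned above, each tree is represented by the vertex sequence of a traversal that returns to a vertex after visiting each child; link and cut then become splices of these sequences. The sketch below only builds such a tour with plain Python lists, so it illustrates the representation but not the balanced-BST machinery that gives the O(log n) bounds; the example adjacency list is invented.

```python
# Each tree is stored as the Euler tour of a DFS that revisits a vertex after
# each of its children. Kept in a balanced search tree, a "cut" removes a
# contiguous interval of the tour and a "link" splices two tours; the plain
# Python list below only illustrates the representation itself.

def euler_tour(adj, root):
    tour = [root]
    def dfs(u, parent):
        for v in adj.get(u, []):
            if v != parent:
                tour.append(v)
                dfs(v, u)
                tour.append(u)
    dfs(root, None)
    return tour

adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(euler_tour(adj, 1))   # [1, 2, 4, 2, 1, 3, 1]
```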
{
"docid": "44e7e452b9b27d2028d15c88256eff30",
"text": "In social media communication, multilingual speakers often switch between languages, and, in such an environment, automatic language identification becomes both a necessary and challenging task. In this paper, we describe our work in progress on the problem of automatic language identification for the language of social media. We describe a new dataset that we are in the process of creating, which contains Facebook posts and comments that exhibit code mixing between Bengali, English and Hindi. We also present some preliminary word-level language identification experiments using this dataset. Different techniques are employed, including a simple unsupervised dictionary-based approach, supervised word-level classification with and without contextual clues, and sequence labelling using Conditional Random Fields. We find that the dictionary-based approach is surpassed by supervised classification and sequence labelling, and that it is important to take contextual clues into consideration.",
"title": ""
},
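A minimal dictionary-based word-level language identifier in the spirit of the unsupervised baseline above, with a trivial contextual tie-breaker; the word lists are toy stand-ins, not the paper's resources, and the supervised and CRF approaches are not shown.

```python
# Toy dictionary-based word-level language identifier with a simple contextual
# tie-breaker; the word lists are illustrative stand-ins only.
DICTS = {
    "en": {"the", "is", "good", "movie", "very"},
    "hi": {"bahut", "accha", "hai", "nahi"},
    "bn": {"khub", "bhalo", "chilo"},
}

def tag_word(word, prev_lang=None):
    hits = [lang for lang, vocab in DICTS.items() if word.lower() in vocab]
    if len(hits) == 1:
        return hits[0]
    if prev_lang in hits:            # contextual clue: stay in the previous language
        return prev_lang
    return hits[0] if hits else (prev_lang or "unk")

def tag_sentence(sentence):
    tags, prev = [], None
    for word in sentence.split():
        prev = tag_word(word, prev)
        tags.append((word, prev))
    return tags

print(tag_sentence("movie bahut accha hai very good"))
```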
{
"docid": "18f13858b5f9e9a8e123d80b159c4d72",
"text": "Cryptocurrency, and its underlying technologies, has been gaining popularity for transaction management beyond financial transactions. Transaction information is maintained in the blockchain, which can be used to audit the integrity of the transaction. The focus on this paper is the potential availability of block-chain technology of other transactional uses. Block-chain is one of the most stable open ledgers that preserves transaction information, and is difficult to forge. Since the information stored in block-chain is not related to personally identifiable information, it has the characteristics of anonymity. Also, the block-chain allows for transparent transaction verification since all information in the block-chain is open to the public. These characteristics are the same as the requirements for a voting system. That is, strong robustness, anonymity, and transparency. In this paper, we propose an electronic voting system as an application of blockchain, and describe block-chain based voting at a national level through examples.",
"title": ""
},
{
"docid": "ac0b86c5a0e7949c5e77610cee865e2b",
"text": "BACKGROUND\nDegenerative lumbosacral stenosis is a common problem in large breed dogs. For severe degenerative lumbosacral stenosis, conservative treatment is often not effective and surgical intervention remains as the last treatment option. The objective of this retrospective study was to assess the middle to long term outcome of treatment of severe degenerative lumbosacral stenosis with pedicle screw-rod fixation with or without evidence of radiological discospondylitis.\n\n\nRESULTS\nTwelve client-owned dogs with severe degenerative lumbosacral stenosis underwent pedicle screw-rod fixation of the lumbosacral junction. During long term follow-up, dogs were monitored by clinical evaluation, diagnostic imaging, force plate analysis, and by using questionnaires to owners. Clinical evaluation, force plate data, and responses to questionnaires completed by the owners showed resolution (n = 8) or improvement (n = 4) of clinical signs after pedicle screw-rod fixation in 12 dogs. There were no implant failures, however, no interbody vertebral bone fusion of the lumbosacral junction was observed in the follow-up period. Four dogs developed mild recurrent low back pain that could easily be controlled by pain medication and an altered exercise regime.\n\n\nCONCLUSIONS\nPedicle screw-rod fixation offers a surgical treatment option for large breed dogs with severe degenerative lumbosacral stenosis with or without evidence of radiological discospondylitis in which no other treatment is available. Pedicle screw-rod fixation alone does not result in interbody vertebral bone fusion between L7 and S1.",
"title": ""
},
{
"docid": "bf164afc6315bf29a07e6026a3db4a26",
"text": "iBeacons are a new way to interact with hardware. An iBeacon is a Bluetooth Low Energy device that only sends a signal in a specific format. They are like a lighthouse that sends light signals to boats. This paper explains what an iBeacon is, how it works and how it can simplify your daily life, what restriction comes with iBeacon and how to improve this restriction., as well as, how to use Location-based Services to track items. E.g., every time you touchdown at an airport and wait for your suitcase at the luggage reclaim, you have no information when your luggage will arrive at the conveyor belt. With an iBeacon inside your suitcase, it is possible to track the luggage and to receive a push notification about it even before you can see it. This is just one possible solution to use them. iBeacon can create a completely new shopping experience or make your home smarter. This paper demonstrates the luggage tracking use case and evaluates its possibilities and restrictions.",
"title": ""
},
{
"docid": "a094869c9f79d0fccbc6892a345fec8b",
"text": "Recent years have seen an exploration of data volumes from a myriad of IoT devices, such as various sensors and ubiquitous cameras. The deluge of IoT data creates enormous opportunities for us to explore the physical world, especially with the help of deep learning techniques. Traditionally, the Cloud is the option for deploying deep learning based applications. However, the challenges of Cloud-centric IoT systems are increasing due to significant data movement overhead, escalating energy needs, and privacy issues. Rather than constantly moving a tremendous amount of raw data to the Cloud, it would be beneficial to leverage the emerging powerful IoT devices to perform the inference task. Nevertheless, the statically trained model could not efficiently handle the dynamic data in the real in-situ environments, which leads to low accuracy. Moreover, the big raw IoT data challenges the traditional supervised training method in the Cloud. To tackle the above challenges, we propose In-situ AI, the first Autonomous and Incremental computing framework and architecture for deep learning based IoT applications. We equip deep learning based IoT system with autonomous IoT data diagnosis (minimize data movement), and incremental and unsupervised training method (tackle the big raw IoT data generated in ever-changing in-situ environments). To provide efficient architectural support for this new computing paradigm, we first characterize the two In-situ AI tasks (i.e. inference and diagnosis tasks) on two popular IoT devices (i.e. mobile GPU and FPGA) and explore the design space and tradeoffs. Based on the characterization results, we propose two working modes for the In-situ AI tasks, including Single-running and Co-running modes. Moreover, we craft analytical models for these two modes to guide the best configuration selection. We also develop a novel two-level weight shared In-situ AI architecture to efficiently deploy In-situ tasks to IoT node. Compared with traditional IoT systems, our In-situ AI can reduce data movement by 28-71%, which further yields 1.4X-3.3X speedup on model update and contributes to 30-70% energy saving.",
"title": ""
},
{
"docid": "016ba468269a1693cb49005712e00d52",
"text": "In 2011, Google released a one-month production trace with hundreds of thousands of jobs running across over 12,000 heterogeneous hosts. In order to perform in-depth research based on the trace, it is necessary to construct a close-to-practice simulation system. In this paper, we devise a distributed cloud simulator (or toolkit) based on virtual machines, with three important features. (1) The dynamic changing resource amounts (such as CPU rate and memory size) consumed by the reproduced jobs can be emulated as closely as possible to the real values in the trace. (2) Various types of events (e.g., kill/evict event) can be emulated precisely based on the trace. (3) Our simulation toolkit is able to emulate more complex and useful cases beyond the original trace to adapt to various research demands. We evaluate the system on a real cluster environment with 16×8=128 cores and 112 virtual machines (VMs) constructed by XEN hypervisor. To the best of our knowledge, this is the first work to reproduce Google cloud environment with real experimental system setting and real-world large scale production trace. Experiments show that our simulation system could effectively reproduce the real checkpointing/restart events based on Google trace, by leveraging Berkeley Lab Checkpoint/Restart (BLCR) tool. It can simultaneously process up to 1200 emulated Google jobs over the 112 VMs. Such a simulation toolkit has been released as a GNU GPL v3 software for free downloading, and it has been successfully applied to the fundamental research on the optimization of checkpoint intervals for Google tasks. Copyright c ⃝ 2013 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "fae921cbf39b45fd73f8e8e8cb3cc92f",
"text": "Analyses of areal variations in the subsidence and rebound occurring over stressed aquifer systems, in conjunction with measurements of the hydraulic head fluctuations causing these displacements, can yield valuable information about the compressibility and storage properties of the aquifer system. Historically, stress-strain relationships have been derived from paired extensometer/piezometer installations, which provide only point source data. Because of the general unavailability of spatially detailed deformation data, areal stress-strain relations and their variability are not commonly considered in constraining conceptual and numerical models of aquifer systems. Interferometric synthetic aperture radar (InSAR) techniques can map ground displacements at a spatial scale of tens of meters over 100 km wide swaths. InSAR has been used previously to characterize larger magnitude, generally permanent aquifer system compaction and land subsidence at yearly and longer timescales, caused by sustained drawdown of groundwater levels that produces intergranular stresses consistently greater than the maximum historical stress. We present InSAR measurements of the typically small-magnitude, generally recoverable deformations of the Las Vegas Valley aquifer system occurring at seasonal timescales. From these we derive estimates of the elastic storage coefficient for the aquifer system at several locations in Las Vegas Valley. These high-resolution measurements offer great potential for future investigations into the mechanics of aquifer systems and the spatial heterogeneity of aquifer system structure and material properties as well as for monitoring ongoing aquifer system compaction and land subsidence.",
"title": ""
},
{
"docid": "b754b1d245aa68aeeb37cf78cf54682f",
"text": "This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. OPEN ACCESS Entropy 2013, 15 3823",
"title": ""
},
{
"docid": "53821da1274fd420fe0f7eeba024b95d",
"text": "An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks. Performance was equal on simple tasks. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal between training conditions and genders; the only differences found between those passing and failing the benchmark test were academic major and in frequency of library use.",
"title": ""
},
{
"docid": "450f39fb29cc8b9a51e67da5a4d723c5",
"text": "Trends in data mining are increasing over the time. Current world is of internet and everything is available over internet, which leads to criminal and malicious activity. So the identity of available content is now a need. Available content is always in the form of text data. Authorship analysis is the statistical study of linguistic and computational characteristics of the written documents of individuals. This paper describes review of various methods for authorship analysis and identification for a set of provided text. Surely research in authorship analysis and identification will continue and even increase over decades. In this article, we put our vision of future authorship analysis and identification with high performance and solution for behavioral feature extraction from set of text documents.",
"title": ""
},
{
"docid": "b776307764d3946fc4e7f6158b656435",
"text": "Recent development advances have allowed silicon (Si) semiconductor technology to approach the theoretical limits of the Si material; however, power device requirements for many applications are at a point that the present Si-based power devices can not handle. The requirements include higher blocking voltages, switching frequencies, efficiency, and reliability. To overcome these limitations, new semiconductor materials for power device applications are needed. For high power requirements, wide band gap semiconductors like silicon carbide (SiC), gallium nitride (GaN), and diamond with their superior electrical properties are likely candidates to replace Si in the near future. This paper compares all the aforementioned wide bandgap semiconductors with respect to their promise and applicability for power applications and predicts the future of power device semiconductor materials.",
"title": ""
},
{
"docid": "a8edc02eb78637f18fc948d81397fc75",
"text": "When we are investigating an object in a data set, which itself may or may not be an outlier, can we identify unusual (i.e., outlying) aspects of the object? In this paper, we identify the novel problem of mining outlying aspects on numeric data. Given a query object $$o$$ o in a multidimensional numeric data set $$O$$ O , in which subspace is $$o$$ o most outlying? Technically, we use the rank of the probability density of an object in a subspace to measure the outlyingness of the object in the subspace. A minimal subspace where the query object is ranked the best is an outlying aspect. Computing the outlying aspects of a query object is far from trivial. A naïve method has to calculate the probability densities of all objects and rank them in every subspace, which is very costly when the dimensionality is high. We systematically develop a heuristic method that is capable of searching data sets with tens of dimensions efficiently. Our empirical study using both real data and synthetic data demonstrates that our method is effective and efficient.",
"title": ""
},
{
"docid": "d12a485101f9453abcd2437c4cfccb01",
"text": "This report describes a low cost indoor position sensing system utilising a combination of radio frequency and ultrasonics. Using a single rf transmitter and four ceiling mounted ultrasonic transmitters it provides coverage in a typical room in an area greater than 8m by 8m. As well as finding position within a room, it uses data encoded into the rf signal to determine the relevant web server for a building, and which floor and room the user is in. It is intended to be used primarily by wearable/mobile computers, though it has also been extended for use as a tracking system.",
"title": ""
},
{
"docid": "47eef1318d313e2f89bb700f8cd34472",
"text": "This paper sets out to detect controversial news reports using online discussions as a source of information. We define controversy as a public discussion that divides society and demonstrate that a content and stylometric analysis of these debates yields useful signals for extracting disputed news items. Moreover, we argue that a debate-based approach could produce more generic models, since the discussion architectures we exploit to measure controversy occur on many different platforms.",
"title": ""
}
] |
scidocsrr
|
1f3a590a37044a2a27bfe3bdd913f1a3
|
Adaptive Semi-Supervised Learning with Discriminative Least Squares Regression
|
[
{
"docid": "25442f28ef0964869966213df255d3be",
"text": "In this paper, we propose a novel ℓ1-norm graph model to perform unsupervised and semi-supervised learning methods. Instead of minimizing the ℓ2-norm of spectral embedding as traditional graph based learning methods, our new graph learning model minimizes the ℓ1-norm of spectral embedding with well motivation. The sparsity produced by the ℓ1-norm minimization results in the solutions with much clearer cluster structures, which are suitable for both image clustering and classification tasks. We introduce a new efficient iterative algorithm to solve the ℓ1-norm of spectral embedding minimization problem, and prove the convergence of the algorithm. More specifically, our algorithm adaptively re-weight the original weights of graph to discover clearer cluster structure. Experimental results on both toy data and real image data sets show the effectiveness and advantages of our proposed method.",
"title": ""
},
{
"docid": "6228f059be27fa5f909f58fb60b2f063",
"text": "We propose a unified manifold learning framework for semi-supervised and unsupervised dimension reduction by employing a simple but effective linear regression function to map the new data points. For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples X, the linear regression function h(X) and the regression residue F0 = F - h(X) simultaneously. Our new objective function integrates two terms related to label fitness and manifold smoothness as well as a flexible penalty term defined on the residue F0. Our Semi-Supervised learning framework, referred to as flexible manifold embedding (FME), can effectively utilize label information from labeled data as well as a manifold structure from both labeled and unlabeled data. By modeling the mismatch between h(X) and F, we show that FME relaxes the hard linear constraint F = h(X) in manifold regularization (MR), making it better cope with the data sampled from a nonlinear manifold. In addition, we propose a simplified version (referred to as FME/U) for unsupervised dimension reduction. We also show that our proposed framework provides a unified view to explain and understand many semi-supervised, supervised and unsupervised dimension reduction techniques. Comprehensive experiments on several benchmark databases demonstrate the significant improvement over existing dimension reduction algorithms.",
"title": ""
},
{
"docid": "04ba17b4fc6b506ee236ba501d6cb0cf",
"text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.",
"title": ""
}
] |
[
{
"docid": "0ff1ea411bcdd28b6c8bc773176f8e1c",
"text": "The paper presents a generalization of Haskell's IO monad suitable for synchronous concurrent programming. The new monad integrates the deterministic concurrency paradigm of synchronous programming with the powerful abstraction features of functional languages and with full support for imperative programming. For event-driven applications, it offers an alternative to the use of existing, thread-based concurrency extensions of functional languages. The concepts presented have been applied in practice in a framework for programming interactive graphics.",
"title": ""
},
{
"docid": "24a3924f15cb058668e8bcb7ba53ee66",
"text": "This paper presents a latest survey of different technologies used in medical image segmentation using Fuzzy C Means (FCM).The conventional fuzzy c-means algorithm is an efficient clustering algorithm that is used in medical image segmentation. To update the study of image segmentation the survey has performed. The techniques used for this survey are Brain Tumor Detection Using Segmentation Based on Hierarchical Self Organizing Map, Robust Image Segmentation in Low Depth Of Field Images, Fuzzy C-Means Technique with Histogram Based Centroid Initialization for Brain Tissue Segmentation in MRI of Head Scans.",
"title": ""
},
{
"docid": "6c937adbdfe7f86a83948f1a28d67649",
"text": "BACKGROUND\nViral warts are a common skin condition, which can range in severity from a minor nuisance that resolve spontaneously to a troublesome, chronic condition. Many different topical treatments are available.\n\n\nOBJECTIVES\nTo evaluate the efficacy of local treatments for cutaneous non-genital warts in healthy, immunocompetent adults and children.\n\n\nSEARCH METHODS\nWe updated our searches of the following databases to May 2011: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library, MEDLINE (from 2005), EMBASE (from 2010), AMED (from 1985), LILACS (from 1982), and CINAHL (from 1981). We searched reference lists of articles and online trials registries for ongoing trials.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of topical treatments for cutaneous non-genital warts.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors independently selected trials and extracted data; a third author resolved any disagreements.\n\n\nMAIN RESULTS\nWe included 85 trials involving a total of 8815 randomised participants (26 new studies were included in this update). There was a wide range of different treatments and a variety of trial designs. Many of the studies were judged to be at high risk of bias in one or more areas of trial design.Trials of salicylic acid (SA) versus placebo showed that the former significantly increased the chance of clearance of warts at all sites (RR (risk ratio) 1.56, 95% CI (confidence interval) 1.20 to 2.03). Subgroup analysis for different sites, hands (RR 2.67, 95% CI 1.43 to 5.01) and feet (RR 1.29, 95% CI 1.07 to 1.55), suggested it might be more effective for hands than feet.A meta-analysis of cryotherapy versus placebo for warts at all sites favoured neither intervention nor control (RR 1.45, 95% CI 0.65 to 3.23). Subgroup analysis for different sites, hands (RR 2.63, 95% CI 0.43 to 15.94) and feet (RR 0.90, 95% CI 0.26 to 3.07), again suggested better outcomes for hands than feet. One trial showed cryotherapy to be better than both placebo and SA, but only for hand warts.There was no significant difference in cure rates between cryotherapy at 2-, 3-, and 4-weekly intervals.Aggressive cryotherapy appeared more effective than gentle cryotherapy (RR 1.90, 95% CI 1.15 to 3.15), but with increased adverse effects.Meta-analysis did not demonstrate a significant difference in effectiveness between cryotherapy and SA at all sites (RR 1.23, 95% CI 0.88 to 1.71) or in subgroup analyses for hands and feet.Two trials with 328 participants showed that SA and cryotherapy combined appeared more effective than SA alone (RR 1.24, 95% CI 1.07 to 1.43).The benefit of intralesional bleomycin remains uncertain as the evidence was inconsistent. 
The most informative trial with 31 participants showed no significant difference in cure rate between bleomycin and saline injections (RR 1.28, 95% CI 0.92 to 1.78).Dinitrochlorobenzene was more than twice as effective as placebo in 2 trials with 80 participants (RR 2.12, 95% CI 1.38 to 3.26).Two trials of clear duct tape with 193 participants demonstrated no advantage over placebo (RR 1.43, 95% CI 0.51 to 4.05).We could not combine data from trials of the following treatments: intralesional 5-fluorouracil, topical zinc, silver nitrate (which demonstrated possible beneficial effects), topical 5-fluorouracil, pulsed dye laser, photodynamic therapy, 80% phenol, 5% imiquimod cream, intralesional antigen, and topical alpha-lactalbumin-oleic acid (which showed no advantage over placebo).We did not identify any RCTs that evaluated surgery (curettage, excision), formaldehyde, podophyllotoxin, cantharidin, diphencyprone, or squaric acid dibutylester.\n\n\nAUTHORS' CONCLUSIONS\nData from two new trials comparing SA and cryotherapy have allowed a better appraisal of their effectiveness. The evidence remains more consistent for SA, but only shows a modest therapeutic effect. Overall, trials comparing cryotherapy with placebo showed no significant difference in effectiveness, but the same was also true for trials comparing cryotherapy with SA. Only one trial showed cryotherapy to be better than both SA and placebo, and this was only for hand warts. Adverse effects, such as pain, blistering, and scarring, were not consistently reported but are probably more common with cryotherapy.None of the other reviewed treatments appeared safer or more effective than SA and cryotherapy. Two trials of clear duct tape demonstrated no advantage over placebo. Dinitrochlorobenzene (and possibly other similar contact sensitisers) may be useful for the treatment of refractory warts.",
"title": ""
},
{
"docid": "fbcebe9e6b22049918f262dae0dcd099",
"text": "Trust is a fundamental concern in large-scale open distributed systems. It lies at the core of all interactions between the entities that have to operate in such uncertain and constantly changing environments. Given this complexity, these components, and the ensuing system, are increasingly being conceptualised, designed, and built using agent-based techniques and, to this end, this paper examines the specific role of trust in multi-agent systems. In particular, we survey the state of the art and provide an account of the main directions along which research efforts are being focused. In so doing, we critically evaluate the relative strengths and weaknesses of the main models that have been proposed and show how, fundamentally, they all seek to minimise the uncertainty in interactions. Finally, we outline the areas that require further research in order to develop a comprehensive treatment of trust in complex computational settings.",
"title": ""
},
{
"docid": "dded827d0b9c513ad504663547018749",
"text": "In this paper, various key points in the rotor design of a low-cost permanent-magnet-assisted synchronous reluctance motor (PMa-SynRM) are introduced and their effects are studied. Finite-element approach has been utilized to show the effects of these parameters on the developed average electromagnetic torque and total d-q inductances. One of the features considered in the design of this motor is the magnetization of the permanent magnets mounted in the rotor core using the stator windings. This feature will cause a reduction in cost and ease of manufacturing. Effectiveness of the design procedure is validated by presenting simulation and experimental results of a 1.5-kW prototype PMa-SynRM",
"title": ""
},
{
"docid": "a531694dba7fc479b43d0725bc68de15",
"text": "This paper gives an introduction to the essential challenges of software engineering and requirements that software has to fulfill in the domain of automation. Besides, the functional characteristics, specific constraints and circumstances are considered for deriving requirements concerning usability, the technical process, the automation functions, used platform and the well-established models, which are described in detail. On the other hand, challenges result from the circumstances at different points in the single phases of the life cycle of the automated system. The requirements for life-cycle-management, tools and the changeability during runtime are described in detail.",
"title": ""
},
{
"docid": "2747952e921f9e0c2beb524957edf2a0",
"text": "AngloGold Ashanti is an international gold mining company that has recently implemented an information security awareness program worldwide at all of their operations. Following the implementation, there was a normal business need to evaluate and measure the success and effectiveness of the program. A measuring tool that can be applied globally and that addressed AngloGold Ashanti’s unique requirements was developed and applied at the mining sites located in the West Africa region. The objective of this paper is, firstly, to give a brief overview on the measuring tool developed and, secondly to report on the application and results in the West Africa region.",
"title": ""
},
{
"docid": "c43164c1828b7889137fe26afce61f58",
"text": "We describe an artificial ant colony capable of solving the traveling salesman problem (TSP). Ants of the artificial colony are able to generate successively shorter feasible tours by using information accumulated in the form of a pheromone trail deposited on the edges of the TSP graph. Computer simulations demonstrate that the artificial ant colony is capable of generating good solutions to both symmetric and asymmetric instances of the TSP. The method is an example, like simulated annealing, neural networks, and evolutionary computation, of the successful use of a natural metaphor to design an optimization algorithm.",
"title": ""
},
{
"docid": "d846edbd57098464fa2b0f05e0e54942",
"text": "This paper explores recent developments in agile systems engineering. We draw a distinction between agility in the systems engineering process versus agility in the resulting system itself. In the first case the emphasis is on carefully exploring the space of design alternatives and to delay the freeze point as long as possible as new information becomes available during product development. In the second case we are interested in systems that can respond to changed requirements after initial fielding of the system. We provide a list of known and emerging methods in both domains and explore a number of illustrative examples such as the case of the Iridium satellite constellation or recent developments in the automobile industry.",
"title": ""
},
{
"docid": "2fdf6538c561e05741baafe43ec6f145",
"text": "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.",
"title": ""
},
{
"docid": "e79699c7578d30ab42ce173a7c1055f8",
"text": "Cellulose is the major component of plant biomass, which is readily available and does not compete with the food supply. Hydrolysis of cellulose as the entry point of biorefinery schemes is an important process for chemical and biochemical industries based on sugars, especially for fuel ethanol production. However, cellulose not only provides a renewable carbon source, but also offers challenges to researchers due to the structural recalcitrance. 2] Considerable efforts have been devoted to the study of hydrolysis of cellulose by enzymes, acids, and supercritical water. Recently, acid-catalyzed hydrolysis has attracted increasing attention. To overcome resistance to degradation through breaking hydrogen bonds and b-1,4glycosidic bonds, ionic liquids have been employed to form homogeneous solutions of cellulose prior to hydrolysis. Although the homogeneous hydrolysis can be carried out under mild conditions with a high glucose yield, the workup for separation of sugars, dehydrated products, and unreacted cellulose from the ionic liquid is normally difficult. 4c, 5] Hydrolysis of cellulose in diluted acids has been practiced for a long time. However, acid-waste generation and corrosion hazards are significant drawbacks of this process. To move towards more environmentally sustainable approaches, Onda et al. demonstrated that sulfonated activated carbon can convert amorphous ball-milled cellulose into glucose with a yield of 41 %. Almost simultaneously, Hara et al. completely hydrolyzed cellulose to water soluble b-1,4-glucans at 100 8C using a more robust sulfonated activated carbon catalyst. Hydrolysis of cellobiose and cellulose catalyzed by layered niobium molybdate was achieved by Takagaki et al. , athough the yield of glucose from cellulose was low. Fukuoka et al. found that mesoporouscarbon-supported Ru catalysts were also able to catalyze the hydrolysis of cellulose into glucose. More recently, sulfonated silica/carbon cellulose hydrolysis catalysts were synthesized by Jacobs et al. , affording glucose in 50 % yield. Zhang et al. employed sulfonated carbon with mesoporous structure for hydrolysis of cellulose, giving a glucose yield of 75 %, which is the highest recorded yield on a solid acid. Considering the real biomass components and practical process for glucose production, two challenges remain for a catalytic system. Firstly, solid catalysts should be readily separated from the solid residues. Although cellulose can be converted almost completely in some cases, 11] lignin components can not be converted and humins are formed sometimes as solid residues. Secondly, to achieve a high yield of glucose, the reaction was usually conducted at low cellulose/liquid ratio (ca. 1:100). 8–11] However, concentration of the glucose solution, prior to the production of ethanol or other compounds, is energy-consuming. Thus effective treatment of high cellulose loadings is required. In view of the importance of the cellulose/liquid ratio, Jacobs et al. carried out hexitol production from concentrated cellulose with heteropoly acid and Ru/C. We designed and synthesized a magnetic solid acid catalyst for the hydrolysis of cellulose at high cellulose/liquid ratio (1:10 or 1:15; Scheme 1). Sulfonic acid-functionalized mesopo-",
"title": ""
},
{
"docid": "91bf2f458111b34eb752c9e3c88eb10a",
"text": "The scope of this paper is to explore, analyze and develop a universal architecture that supports mobile payments and mobile banking, taking into consideration the third and the emerging fourth generation communication technologies. Interaction and cooperation between payment and banking systems, integration of existing technologies and exploitation of intelligent procedures provide the prospect to develop an open financial services architecture (OFSA), which satisfies requirements of all involved entities. A unified scenario is designed and a prototype is implemented to demonstrate the feasibility of the proposed architecture. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c32db73f8d9ef779b91bfffa9caeb946",
"text": "Sir: Negative symptoms like blunted affect, lack of spontaneity , and emotional and social withdrawal are disabling conditions for many schizophrenic patients. Pharmacologic strategies alone are frequently insufficient in the treatment of negative symptoms. New treatment approaches are therefore required. Bright light therapy is the treatment of choice for seasonal depression , but is now also shown to be efficacious in nonseasonal depression. 1 Until now, no studies of bright light therapy in schizophrenic patients have been published. This is the first study to evaluate the safety and tolerability of bright light therapy in patients diagnosed with the residual subtype of schizophrenia. Method. Ten patients (8 men and 2 women) with a diagnosis of schizophrenia (DSM-IV criteria) were included in the study, which was conducted from January 2001 to October 2003. At study entry, the mean age of all patients was 41.8 years. Inclusion criteria were residual subtype of schizophrenia (295.6) and stable antipsychotic medication treatment for at least 4 weeks. Antidepressants were not allowed, and any medication inducing photosensitivity was an exclusion criterion. All patients signed informed consent statements before they were enrolled in the study, and the study was approved by the local human subjects research committee. Bright light therapy with 10,000 lux (Chronolux CL–100; Samarit; Aachen, Germany) was applied 1 hour daily, 5 days a week, for 4 weeks. All patients were evaluated with the Positive and Negative Syndrome Scale (PANSS), 2 a visual analog scale (VAS) for mood and a VAS for drive (ranging from 0 mm [abso-lute best mood or drive] to 100 mm [absolute worst mood or drive]), the Clinical Global Impressions scale (CGI), 3 and the Hamilton Rating Scale for Depression (17 items). 4 Measurements were conducted by blinded raters at the screening visit and at weeks 1, 2, and 4, and follow-up examinations were conducted at weeks 8 and 12. Statistical analyses were conducted using SPSS, Version 12 (SPSS Inc.; Chicago, Ill.). The effect of light therapy on the time course of the outcome variables listed above was tested with the Friedman test, as the assumption of normality was not met. Post hoc comparisons between individual time points were performed using the Wilcoxon test. During the treatment period (weeks 1–4), patients were analyzed by an intent-to-treat method, replacing missing data by the last-observation-carried-forward method. Results. Nine patients concluded 4 weeks of treatment, and 1 patient discontinued after 2 weeks for personal reasons. …",
"title": ""
},
{
"docid": "ed0736d1f8c35ec8b0c2f5bb9adfb7f9",
"text": "Neff's (2003a, 2003b) notion of self-compassion emphasizes kindness towards one's self, a feeling of connectedness with others, and mindful awareness of distressing experiences. Because exposure to trauma and subsequent posttraumatic stress symptoms (PSS) may be associated with self-criticism and avoidance of internal experiences, the authors examined the relationship between self-compassion and PSS. Out of a sample of 210 university students, 100 endorsed experiencing a Criterion A trauma. Avoidance symptoms significantly correlated with self-compassion, but reexperiencing and hyperarousal did not. Individuals high in self-compassion may engage in less avoidance strategies following trauma exposure, allowing for a natural exposure process.",
"title": ""
},
{
"docid": "d39f806d1a8ecb33fab4b5ebb49b0dd1",
"text": "Texture analysis has been a particularly dynamic field with different computer vision and image processing applications. Most of the existing texture analysis techniques yield to significant results in different applications but fail in difficult situations with high sensitivity to noise. Inspired by previous works on texture analysis by structure layer modeling, this paper deals with representing the texture's structure layer using the structure tensor field. Based on texture pattern size approximation, the proposed algorithm investigates the adaptability of the structure tensor to the local geometry of textures by automatically estimating the sub-optimal structure tensor size. An extension of the algorithm targeting non-structured textures is also proposed. Results show that using the proposed tensor size regularization method, relevant local information can be extracted by eliminating the need of repetitive tensor field computation with different tensor size to reach an acceptable performance.",
"title": ""
},
{
"docid": "3a3a2261e1063770a9ccbd0d594aa561",
"text": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.",
"title": ""
},
{
"docid": "66b7ed8c1d20bceafb0a1a4194cd91e8",
"text": "In this paper a novel watermarking scheme for image authentication and recovery is presented. The algorithm can detect modified regions in images and is able to recover a good approximation of the original content of the tampered regions. For this purpose, two different watermarks have been used: a semi-fragile watermark for image authentication and a robust watermark for image recovery, both embedded in the Discrete Wavelet Transform domain. The proposed method achieves good image quality with mean Peak Signal-to-Noise Ratio values of the watermarked images of 42 dB and identifies image tampering of up to 20% of the original image.",
"title": ""
},
{
"docid": "d9599c4140819670a661bd4955680bb7",
"text": "The paper assesses the demand for rural electricity services and contrasts it with the technology options available for rural electrification. Decentralised Distributed Generation can be economically viable as reflected by case studies reported in literature and analysed in our field study. Project success is driven by economically viable technology choice; however it is largely contingent on organisational leadership and appropriate institutional structures. While individual leadership can compensate for deployment barriers, we argue that a large scale roll out of rural electrification requires an alignment of economic incentives and institutional structures to implement, operate and maintain the scheme. This is demonstrated with the help of seven case studies of projects across north India. 1 Introduction We explore the contribution that decentralised and renewable energy technologies can make to rural electricity supply in India. We take a case study approach, looking at seven sites across northern India where renewable energy technologies have been established to provide electrification for rural communities. We supplement our case studies with stakeholder interviews and household surveys, estimating levels of demand for electricity services from willingness and ability to pay. We also assess the overall viability of Distributed Decentralised Generation (DDG) projects by investigating the costs of implementation as well as institutional and organisational barriers to their operation and replication. Renewable energy technologies represent some of the most promising options available for distributed and decentralised electrification. Demand for reliable electricity services is significant. It represents a key driver behind economic development and raising basic standards of living. This is especially applicable to rural India home to 70% of the nation's population and over 25% of the world's poor. Access to reliable and affordable electricity can help support income-generating activity and allow utilisation of modern appliances and agricultural equipment whilst replacing inefficient and polluting kerosene lighting. Presently only around 55% of households are electrified (MOSPI 2006) leaving over 20 million households without power. The supply of electricity across India currently lacks both quality and quantity with an extensive shortfall in supply, a poor record for outages, high levels of transmission and distribution (T&D) losses and an overall need for extended and improved infrastructure (GoI 2006). The Indian Government recently outlined an ambitious plan for 100% village level electrification by the end of 2007 and total household electrification by 2012. To achieve this, a major programme of grid extension and strengthening of the rural electricity infrastructure has been initiated under …",
"title": ""
},
{
"docid": "13c2c1a1bd4ff886f93d8f89a14e39e2",
"text": "One of the key elements in qualitative data analysis is the systematic coding of text (Strauss and Corbin 1990:57%60; Miles and Huberman 1994:56). Codes are the building blocks for theory or model building and the foundation on which the analyst’s arguments rest. Implicitly or explicitly, they embody the assumptions underlying the analysis. Given the context of the interdisciplinary nature of research at the Centers for Disease Control and Prevention (CDC), we have sought to develop explicit guidelines for all aspects of qualitative data analysis, including codebook development.",
"title": ""
},
{
"docid": "1607be849e72e9fe2ba172b86cf98bd6",
"text": "Phishing is an internet fraud that acquires a user‘s credentials by deceptions. It includes theft of password, credit card number, bank account details, and other confidential information. It is the criminal scheme to steal the user‘s confidential data. There are many anti-phishing techniques used to protect users against phishing attacks. The statistical of APWG trends report for 1 st quarter 2013 says that now a day the maximum phishing attacks are done using URL Obfuscation phishing technique. Due to the different characteristics and methods used in URL Obfuscation, the detection of Obfuscated URL is complex. The current URL Obfuscation anti-phishing technique cannot detect all the counterfeit URLs. In this paper we have reviewed URL Obfuscation phishing technique and the detection of that Obfuscated URLs. Keywords— Anti-phishing, Hyperlink, Internet Security, Phishing, URL Obfuscation",
"title": ""
}
] |
scidocsrr
|
b3a429a245088e0a5defbc505c4091b6
|
Can Computer Playfulness and Cognitive Absorption Lead to Problematic Technology Usage?
|
[
{
"docid": "6dc4e4949d4f37f884a23ac397624922",
"text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.",
"title": ""
},
{
"docid": "2a617a0388cc6653e4d014fc3019e724",
"text": "What kinds of psychological features do people have when they are overly involved in usage of the internet? Internet users in Korea were investigated in terms of internet over-use and related psychological profiles by the level of internet use. We used a modified Young's Internet Addiction Scale, and 13,588 users (7,878 males, 5,710 females), out of 20 million from a major portal site in Korea, participated in this study. Among the sample, 3.5% had been diagnosed as internet addicts (IA), while 18.4% of them were classified as possible internet addicts (PA). The Internet Addiction Scale showed a strong relationship with dysfunctional social behaviors. More IA tried to escape from reality than PA and Non-addicts (NA). When they got stressed out by work or were just depressed, IA showed a high tendency to access the internet. The IA group also reported the highest degree of loneliness, depressed mood, and compulsivity compared to the other groups. The IA group seemed to be more vulnerable to interpersonal dangers than others, showing an unusually close feeling for strangers. Further study is needed to investigate the direct relationship between psychological well-being and internet dependency.",
"title": ""
}
] |
[
{
"docid": "9003a12f984d2bf2fd84984a994770f0",
"text": "Sulfated polysaccharides and their lower molecular weight oligosaccharide derivatives from marine macroalgae have been shown to possess a variety of biological activities. The present paper will review the recent progress in research on the structural chemistry and the bioactivities of these marine algal biomaterials. In particular, it will provide an update on the structural chemistry of the major sulfated polysaccharides synthesized by seaweeds including the galactans (e.g., agarans and carrageenans), ulvans, and fucans. It will then review the recent findings on the anticoagulant/antithrombotic, antiviral, immuno-inflammatory, antilipidemic and antioxidant activities of sulfated polysaccharides and their potential for therapeutic application.",
"title": ""
},
{
"docid": "6d6e21d332a022cc747325439b7cac74",
"text": "We present a computational analysis of the language of drug users when talking about their drug experiences. We introduce a new dataset of over 4,000 descriptions of experiences reported by users of four main drug types, and show that we can predict with an F1-score of up to 88% the drug behind a certain experience. We also perform an analysis of the dominant psycholinguistic processes and dominant emotions associated with each drug type, which sheds light on the characteristics of drug users.",
"title": ""
},
{
"docid": "c00c6539b78ed195224063bcff16fb12",
"text": "Information Retrieval (IR) systems assist users in finding information from the myriad of information resources available on the Web. A traditional characteristic of IR systems is that if different users submit the same query, the system would yield the same list of results, regardless of the user. Personalised Information Retrieval (PIR) systems take a step further to better satisfy the user’s specific information needs by providing search results that are not only of relevance to the query but are also of particular relevance to the user who submitted the query. PIR has thereby attracted increasing research and commercial attention as information portals aim at achieving user loyalty by improving their performance in terms of effectiveness and user satisfaction. In order to provide a personalised service, a PIR system maintains information about the users and the history of their interactions with the system. This information is then used to adapt the users’ queries or the results so that information that is more relevant to the users is retrieved and presented. This survey paper features a critical review of PIR systems, with a focus on personalised search. The survey provides an insight into the stages involved in building and evaluating PIR systems, namely: information gathering, information representation, personalisation execution, and system evaluation. Moreover, the survey provides an analysis of PIR systems with respect to the scope of personalisation addressed. The survey proposes a classification of PIR systems into three scopes: individualised systems, community-based systems, and aggregate-level systems. Based on the conducted survey, the paper concludes by highlighting challenges and future research directions in the field of PIR.",
"title": ""
},
{
"docid": "a3e730ef71a91e1303d4cd92407fed26",
"text": "Purpose – This paper investigates the interplay among the configuration dimensions (network structure, network flow, relationship governance, and service architecture) of LastMile Supply Networks (LMSN) and the underlying mechanisms influencing omnichannel performance. Design/methodology/approach – Based on mixed-method design incorporating a multiple embedded case study, mapping, survey and archival records, this research involved undertaking in-depth withinand cross-case analyses to examine seven LMSNs, employing a configuration approach. Findings – The existing literature in the operations management (OM) field was shown to provide limited understanding of LMSNs within the emerging omnichannel context. Case results suggest that particular configurations have intrinsic capabilities, and that these directly influence omnichannel performance. The study further proposes a taxonomy of LMSNs comprising six forms, with two hybrids, supporting the notion of equifinality in configuration theory. Propositions are developed to further explore interdependencies between configurational attributes, refining the relationships between LMSN types and factors influencing LMSN performance. Practical implications – The findings provide retailers a set of design parameters for the (re)configuration of LMSNs and facilitate performance evaluation using the concept of fit between configurational attributes. The developed model sheds light on the consequential effects when certain configurational attributes are altered, providing design indications. Given the global trend in urbanization, improved LMSN performance would have positive societal impacts in terms of service and resource efficiency. Originality/value – This is one of the first studies in the OM field to critically analyze LMSNs and their behaviors in omnichannel. Additionally, the paper offers several important avenues for future research.",
"title": ""
},
{
"docid": "ea3fd6ece19949b09fd2f5f2de57e519",
"text": "Multiple myeloma is the second most common hematologic malignancy. The treatment of this disease has changed considerably over the last two decades with the introduction to the clinical practice of novel agents such as proteasome inhibitors and immunomodulatory drugs. Basic research efforts towards better understanding of normal and missing immune surveillence in myeloma have led to development of new strategies and therapies that require the engagement of the immune system. Many of these treatments are under clinical development and have already started providing encouraging results. We, for the second time in the last two decades, are about to witness another shift of the paradigm in the management of this ailment. This review will summarize the major approaches in myeloma immunotherapies.",
"title": ""
},
{
"docid": "65ddfd636299f556117e53b5deb7c7e5",
"text": "BACKGROUND\nMobile phone use is near ubiquitous in teenagers. Paralleling the rise in mobile phone use is an equally rapid decline in the amount of time teenagers are spending asleep at night. Prior research indicates that there might be a relationship between daytime sleepiness and nocturnal mobile phone use in teenagers in a variety of countries. As such, the aim of this study was to see if there was an association between mobile phone use, especially at night, and sleepiness in a group of U.S. teenagers.\n\n\nMETHODS\nA questionnaire containing an Epworth Sleepiness Scale (ESS) modified for use in teens and questions about qualitative and quantitative use of the mobile phone was completed by students attending Mountain View High School in Mountain View, California (n = 211).\n\n\nRESULTS\nMultivariate regression analysis indicated that ESS score was significantly associated with being female, feeling a need to be accessible by mobile phone all of the time, and a past attempt to reduce mobile phone use. The number of daily texts or phone calls was not directly associated with ESS. Those individuals who felt they needed to be accessible and those who had attempted to reduce mobile phone use were also ones who stayed up later to use the mobile phone and were awakened more often at night by the mobile phone.\n\n\nCONCLUSIONS\nThe relationship between daytime sleepiness and mobile phone use was not directly related to the volume of texting but may be related to the temporal pattern of mobile phone use.",
"title": ""
},
{
"docid": "66df2a7148d67ffd3aac5fc91e09ee5d",
"text": "Tree boosting, which combines weak learners (typically decision trees) to generate a strong learner, is a highly effective and widely used machine learning method. However, the development of a high performance tree boosting model is a time-consuming process that requires numerous trial-and-error experiments. To tackle this issue, we have developed a visual diagnosis tool, BOOSTVis, to help experts quickly analyze and diagnose the training process of tree boosting. In particular, we have designed a temporal confusion matrix visualization, and combined it with a t-SNE projection and a tree visualization. These visualization components work together to provide a comprehensive overview of a tree boosting model, and enable an effective diagnosis of an unsatisfactory training process. Two case studies that were conducted on the Otto Group Product Classification Challenge dataset demonstrate that BOOSTVis can provide informative feedback and guidance to improve understanding and diagnosis of tree boosting algorithms.",
"title": ""
},
{
"docid": "97cb7718c75b266a086441912e4b22c3",
"text": "Introduction Teacher education finds itself in a critical stage. The pressure towards more school-based programs which is visible in many countries is a sign that not only teachers, but also parents and politicians, are often dissatisfied with the traditional approaches in teacher education In some countries a major part of preservice teacher education has now become the responsibility of the schools, creating a situation in which to a large degree teacher education takes the form of 'training on the job'. The argument for this tendency is that traditional teacher education programs are said to fail in preparing prospective teachers for the realities of the classroom (Goodlad, 1990). Many teacher educators object that a professional teacher should acquire more than just practical tools for managing classroom situations and that it is their job to present student teachers with a broader view on education and to offer them a proper grounding in psychology, sociology, etcetera. This is what Clandinin (1995) calls \" the sacred theory-practice story \" : teacher education conceived as the translation of theory on good teaching into practice. However, many studies have shown that the transfer of theory to practice is meager or even non-existent. Zeichner and Tabachnick (1981), for example, showed that many notions and educational conceptions, developed during preservice teacher education, were \"washed out\" during field experiences. Comparable findings were reported by Cole and Knowles (1993) and Veenman (1984), who also points towards the severe problems teachers experience once they have left preservice teacher education. Lortie (1975) presented us with another early study into the socialization process of teachers, showing the dominant role of practice in shaping teacher development. At Konstanz University in Germany, research has been carried out into the phenomenon of the \"transition shock\" (Müller-Fohrbrodt et al. It showed that, during their induction in the profession, teachers encounter a huge gap between theory and practice. As a consequence, they pass through a quite distinct attitude shift during their first year of teaching, in general creating an adjustment to current practices in the schools and not to recent scientific insights into learning and teaching.",
"title": ""
},
{
"docid": "a73917d842c18ed9c36a13fe9187ea4c",
"text": "Brain Magnetic Resonance Image (MRI) plays a non-substitutive role in clinical diagnosis. The symptom of many diseases corresponds to the structural variants of brain. Automatic structure segmentation in brain MRI is of great importance in modern medical research. Some methods were developed for automatic segmenting of brain MRI but failed to achieve desired accuracy. In this paper, we proposed a new patch-based approach for automatic segmentation of brain MRI using convolutional neural network (CNN). Each brain MRI acquired from a small portion of public dataset is firstly divided into patches. All of these patches are then used for training CNN, which is used for automatic segmentation of brain MRI. Experimental results showed that our approach achieved better segmentation accuracy compared with other deep learning methods.",
"title": ""
},
{
"docid": "ec1120018899c6c9fe16240b8e35efac",
"text": "Redundant collagen deposition at sites of healing dermal wounds results in hypertrophic scars. Adipose-derived stem cells (ADSCs) exhibit promise in a variety of anti-fibrosis applications by attenuating collagen deposition. The objective of this study was to explore the influence of an intralesional injection of ADSCs on hypertrophic scar formation by using an established rabbit ear model. Twelve New Zealand albino rabbits were equally divided into three groups, and six identical punch defects were made on each ear. On postoperative day 14 when all wounds were completely re-epithelialized, the first group received an intralesional injection of ADSCs on their right ears and Dulbecco’s modified Eagle’s medium (DMEM) on their left ears as an internal control. Rabbits in the second group were injected with conditioned medium of the ADSCs (ADSCs-CM) on their right ears and DMEM on their left ears as an internal control. Right ears of the third group remained untreated, and left ears received DMEM. We quantified scar hypertrophy by measuring the scar elevation index (SEI) on postoperative days 14, 21, 28, and 35 with ultrasonography. Wounds were harvested 35 days later for histomorphometric and gene expression analysis. Intralesional injections of ADSCs or ADSCs-CM both led to scars with a far more normal appearance and significantly decreased SEI (44.04 % and 32.48 %, respectively, both P <0.01) in the rabbit ears compared with their internal controls. Furthermore, we confirmed that collagen was organized more regularly and that there was a decreased expression of alpha-smooth muscle actin (α-SMA) and collagen type Ι in the ADSC- and ADSCs-CM-injected scars according to histomorphometric and real-time quantitative polymerase chain reaction analysis. There was no difference between DMEM-injected and untreated scars. An intralesional injection of ADSCs reduces the formation of rabbit ear hypertrophic scars by decreasing the α-SMA and collagen type Ι gene expression and ameliorating collagen deposition and this may result in an effective and innovative anti-scarring therapy.",
"title": ""
},
{
"docid": "8a0cc5438a082ed9afd28ad8ed272034",
"text": "Researchers analyzed 23 blockchain implementation projects, each tracked for design decisions and architectural alignment showing benefits, detriments, or no effects from blockchain use. The results provide the basis for a framework that lets engineers, architects, investors, and project leaders evaluate blockchain technology’s suitability for a given application. This analysis also led to an understanding of why some domains are inherently problematic for blockchains. Blockchains can be used to solve some trust-based problems but aren’t always the best or optimal technology. Some problems that can be solved using them can also be solved using simpler methods that don’t necessitate as big an investment.",
"title": ""
},
{
"docid": "eea86b8c7d332edb903c213c5df89a53",
"text": "We introduce the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks. Syntactic scaffolds avoid expensive syntactic processing at runtime, only making use of a treebank during training, through a multitask objective. We improve over strong baselines on PropBank semantics, frame semantics, and coreference resolution, achieving competitive performance on all three tasks.",
"title": ""
},
{
"docid": "0a1f6c27cd13735858e7a6686fc5c2c9",
"text": "We address the problem of learning hierarchical deep neural network policies for reinforcement learning. In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective. Each layer is also augmented with latent random variables, which are sampled from a prior distribution during the training of that layer. The maximum entropy objective causes these latent variables to be incorporated into the layer’s policy, and the higher level layer can directly control the behavior of the lower layer through this latent space. Furthermore, by constraining the mapping from latent variables to actions to be invertible, higher layers retain full expressivity: neither the higher layers nor the lower layers are constrained in their behavior. Our experimental evaluation demonstrates that we can improve on the performance of single-layer policies on standard benchmark tasks simply by adding additional layers, and that our method can solve more complex sparse-reward tasks by learning higher-level policies on top of high-entropy skills optimized for simple low-level objectives.",
"title": ""
},
{
"docid": "fd4cd4edfd9fa8fe463643f02b90b21a",
"text": "We propose a generic method for iteratively approximating various second-order gradient steps-Newton, Gauss-Newton, Levenberg-Marquardt, and natural gradient-in linear time per iteration, using special curvature matrix-vector products that can be computed in O(n). Two recent acceleration techniques for on-line learning, matrix momentum and stochastic meta-descent (SMD), implement this approach. Since both were originally derived by very different routes, this offers fresh insight into their operation, resulting in further improvements to SMD.",
"title": ""
},
{
"docid": "5a011a87ce3f37dc6b944d2686fa2f73",
"text": "Agents are self-contained objects within a software model that are capable of autonomously interacting with the environment and with other agents. Basing a model around agents (building an agent-based model, or ABM) allows the user to build complex models from the bottom up by specifying agent behaviors and the environment within which they operate. This is often a more natural perspective than the system-level perspective required of other modeling paradigms, and it allows greater flexibility to use agents in novel applications. This flexibility makes them ideal as virtual laboratories and testbeds, particularly in the social sciences where direct experimentation may be infeasible or unethical. ABMs have been applied successfully in a broad variety of areas, including heuristic search methods, social science models, combat modeling, and supply chains. This tutorial provides an introduction to tools and resources for prospective modelers, and illustrates ABM flexibility with a basic war-gaming example.",
"title": ""
},
{
"docid": "39838881287fd15b29c20f18b7e1d1eb",
"text": "In the software industry, a challenge firms often face is how to effectively commercialize innovations. An emerging business model increasingly embraced by entrepreneurs, called freemium, combines “free” and “premium” consumption in association with a product or service. In a nutshell, this model involves giving away for free a certain level or type of consumption while making money on premium consumption. We develop a unifying multi-period microeconomic framework with network externalities embedded into consumer learning in order to capture the essence of conventional for-fee models, several key freemium business models such as feature-limited or time-limited, and uniform market seeding models. Under moderate informativeness of word-of-mouth signals, we fully characterize conditions under which firms prefer freemium models, depending on consumer priors on the value of individual software modules, perceptions of crossmodule synergies, and overall value distribution across modules. Within our framework, we show that uniform seeding is always dominated by either freemium models or conventional for-fee models. We further discuss managerial and policy implications based on our analysis. Interestingly, we show that freemium, in one form or another, is always preferred from the social welfare perspective, and we provide guidance on when the firms need to be incentivized to align their interests with the society’s. Finally, we discuss how relaxing some of the assumptions of our model regarding costs or informativeness and heterogeneity of word of mouth may reduce the profit gap between seeding and the other models, and potentially lead to seeding becoming the preferred approach for the firm.",
"title": ""
},
{
"docid": "81f82ecbc43653566319c7e04f098aeb",
"text": "Social microblogs such as Twitter and Weibo are experiencing an explosive growth with billions of global users sharing their daily observations and thoughts. Beyond public interests (e.g., sports, music), microblogs can provide highly detailed information for those interested in public health, homeland security, and financial analysis. However, the language used in Twitter is heavily informal, ungrammatical, and dynamic. Existing data mining algorithms require extensive manually labeling to build and maintain a supervised system. This paper presents STED, a semi-supervised system that helps users to automatically detect and interactively visualize events of a targeted type from twitter, such as crimes, civil unrests, and disease outbreaks. Our model first applies transfer learning and label propagation to automatically generate labeled data, then learns a customized text classifier based on mini-clustering, and finally applies fast spatial scan statistics to estimate the locations of events. We demonstrate STED’s usage and benefits using twitter data collected from Latin America countries, and show how our system helps to detect and track example events such as civil unrests and crimes.",
"title": ""
},
{
"docid": "fcd0c523e74717c572c288a90c588259",
"text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.",
"title": ""
},
{
"docid": "387e9609e2fe3c6893b8ce0a1613f98a",
"text": "Many fault-tolerant and intrusion-tolerant systems require the ability to execute unsafe programs in a realistic environment without leaving permanent damages. Virtual machine technology meets this requirement perfectly because it provides an execution environment that is both realistic and isolated. In this paper, we introduce an OS level virtual machine architecture for Windows applications called Feather-weight Virtual Machine (FVM), under which virtual machines share as many resources of the host machine as possible while still isolated from one another and from the host machine. The key technique behind FVM is namespace virtualization, which isolates virtual machines by renaming resources at the OS system call interface. Through a copy-on-write scheme, FVM allows multiple virtual machines to physically share resources but logically isolate their resources from each other. A main technical challenge in FVM is how to achieve strong isolation among different virtual machines and the host machine, due to numerous namespaces and interprocess communication mechanisms on Windows. Experimental results demonstrate that FVM is more flexible and scalable, requires less system resource, incurs lower start-up and run-time performance overhead than existing hardware-level virtual machine technologies, and thus makes a compelling building block for security and fault-tolerant applications.",
"title": ""
}
] |
scidocsrr
|
aca37317ed979441b3d09fccf5d0561d
|
Neural Sketch Learning for Conditional Program Generation
|
[
{
"docid": "75a1c22e950ccb135c054353acb8571a",
"text": "We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.",
"title": ""
},
{
"docid": "5bf4a17592eca1881a93cd4930f4187d",
"text": "The problem of automatically generating a computer program from some specification has been studied since the early days of AI. Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation. Here, for the first time, we directly compare both approaches on a large-scale, real-world learning task and we additionally contrast to rule-based program synthesis, which uses hand-crafted semantics to guide the program generation. Our neural models use a modified attention RNN to allow encoding of variable-sized sets of I/O pairs, which achieve 92% accuracy on a real-world test set, compared to the 34% accuracy of the previous best neural synthesis approach. The synthesis model also outperforms a comparable induction model on this task, but we more importantly demonstrate that the strength of each approach is highly dependent on the evaluation metric and end-user application. Finally, we show that we can train our neural models to remain very robust to the type of noise expected in real-world data (e.g., typos), while a highly-engineered rule-based system fails entirely.",
"title": ""
}
] |
[
{
"docid": "2ebb21cb1c6982d2d3839e2616cac839",
"text": "In order to reduce micromouse dashing time in complex maze, and improve micromouse’s stability in high speed dashing, diagonal dashing method was proposed. Considering the actual dashing trajectory of micromouse in diagonal path, the path was decomposed into three different trajectories; Fully consider turning in and turning out of micromouse dashing action in diagonal, leading and passing of the every turning was used to realize micromouse posture adjustment, with the help of accelerometer sensor ADXL202, rotation angle error compensation was done and the micromouse realized its precise position correction; For the diagonal dashing, front sensor S1,S6 and accelerometer sensor ADXL202 were used to ensure micromouse dashing posture. Principle of new diagonal dashing method is verified by micromouse based on STM32F103. Experiments of micromouse dashing show that diagonal dashing method can greatly improve its stability, and also can reduce its dashing time in complex maze.",
"title": ""
},
{
"docid": "ad88d2e2213624270328be0aa019b5cd",
"text": "The traditional decision-making framework for newsvendor models is to assume a distribution of the underlying demand. However, the resulting optimal policy is typically sensitive to the choice of the distribution. A more conservative approach is to assume that the distribution belongs to a set parameterized by a few known moments. An ambiguity-averse newsvendor would choose to maximize the worst-case profit. Most models of this type assume that only the mean and the variance are known, but do not attempt to include asymmetry properties of the distribution. Other recent models address asymmetry by including skewness and kurtosis. However, closed-form expressions on the optimal bounds are difficult to find for such models. In this paper, we propose a framework under which the expectation of a piecewise linear objective function is optimized over a set of distributions with known asymmetry properties. This asymmetry is represented by the first two moments of multiple random variables that result from partitioning the original distribution. In the simplest case, this reduces to semivariance. The optimal bounds can be solved through a second-order cone programming (SOCP) problem. This framework can be applied to the risk-averse and risk-neutral newsvendor problems and option pricing. We provide a closed-form expression for the worst-case newsvendor profit with only mean, variance and semivariance information.",
"title": ""
},
{
"docid": "5a4a6328fc88fbe32a81c904135b05c9",
"text": "Semi-supervised learning plays a significant role in multi-class classification, where a small number of labeled data are more deterministic while substantial unlabeled data might cause large uncertainties and potential threats. In this paper, we distinguish the label fitting of labeled and unlabeled training data through a probabilistic vector with an adaptive parameter, which always ensures the significant importance of labeled data and characterizes the contribution of unlabeled instance according to its uncertainty. Instead of using traditional least squares regression (LSR) for classification, we develop a new discriminative LSR by equipping each label with an adjustment vector. This strategy avoids incorrect penalization on samples that are far away from the boundary and simultaneously facilitates multi-class classification by enlarging the geometrical distance of instances belonging to different classes. An efficient alternative algorithm is exploited to solve the proposed model with closed form solution for each updating rule. We also analyze the convergence and complexity of the proposed algorithm theoretically. Experimental results on several benchmark datasets demonstrate the effectiveness and superiority of the proposed model for multi-class classification tasks.",
"title": ""
},
{
"docid": "48036770f56e84df8b05c198e8a89018",
"text": "Advances in low power VLSI design, along with the potentially low duty cycle of wireless sensor nodes open up the possibility of powering small wireless computing devices from scavenged ambient power. A broad review of potential power scavenging technologies and conventional energy sources is first presented. Low-level vibrations occurring in common household and office environments as a potential power source are studied in depth. The goal of this paper is not to suggest that the conversion of vibrations is the best or most versatile method to scavenge ambient power, but to study its potential as a viable power source for applications where vibrations are present. Different conversion mechanisms are investigated and evaluated leading to specific optimized designs for both capacitive MicroElectroMechancial Systems (MEMS) and piezoelectric converters. Simulations show that the potential power density from piezoelectric conversion is significantly higher. Experiments using an off-the-shelf PZT piezoelectric bimorph verify the accuracy of the models for piezoelectric converters. A power density of 70 mW/cm has been demonstrated with the PZT bimorph. Simulations show that an optimized design would be capable of 250 mW/cm from a vibration source with an acceleration amplitude of 2.5 m/s at 120 Hz. q 2002 Elsevier Science B.V.. All rights reserved.",
"title": ""
},
{
"docid": "4e97003a5609901f1f18be1ccbf9db46",
"text": "Fog computing is strongly emerging as a relevant and interest-attracting paradigm+technology for both the academic and industrial communities. However, architecture and methodological approaches are still prevalent in the literature, while few research activities have specifically targeted so far the issues of practical feasibility, cost-effectiveness, and efficiency of fog solutions over easily-deployable environments. In this perspective, this paper originally presents i) our fog-oriented framework for Internet-of-Things applications based on innovative scalability extensions of the open-source Kura gateway and ii) its Docker-based containerization over challenging and resource-limited fog nodes, i.e., RaspberryPi devices. Our practical experience and experimental work show the feasibility of using even extremely constrained nodes as fog gateways; the reported results demonstrate that good scalability and limited overhead can be coupled, via proper configuration tuning and implementation optimizations, with the significant advantages of containerization in terms of flexibility and easy deployment, also when working on top of existing, off-the-shelf, and limited-cost gateway nodes.",
"title": ""
},
{
"docid": "b91b887b3ec5d5b3100d711e1550f64b",
"text": "In this paper we describe the implementation of a complete ANN training procedure for speech recognition using the block mode back-propagation learning algorithm. We exploit the high performance SIMD architecture of GPU using CUDA and its C-like language interface. We also compare the speed-up obtained implementing the training procedure only taking advantage of the multi-thread capabilities of multi-core processors. Our approach has been tested by training acoustic models for large vocabulary speech recognition tasks, showing a 6 times reduction of the time required to train real-world large size networks with respect to an already optimized implementation using the Intel MKL libraries.",
"title": ""
},
{
"docid": "482063f167e0c2e677c4ca8fbd8228c0",
"text": "In this paper we present a novel method for real-time high quality previsualization and cinematic relighting. The physically based Path Tracing algorithm is used within an Augmented Reality setup to preview high-quality light transport. A novel differential version of progressive path tracing is proposed, which calculates two global light transport solutions that are required for differential rendering. A real-time previsualization framework is presented, which renders the solution with a low number of samples during interaction and allows for progressive quality improvement. If a user requests the high-quality solution of a certain view, the tracking is stopped and the algorithm progressively converges to an accurate solution. The problem of rendering complex light paths is solved by using photon mapping. Specular global illumination effects like caustics can easily be rendered. Our framework utilizes the massive parallel power of modern GPUs to achieve fast rendering with complex global illumination, a depth of field effect, and antialiasing.",
"title": ""
},
{
"docid": "d45c7f39c315bf5e8eab3052e75354bb",
"text": "Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images require the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world videos. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication.",
"title": ""
},
{
"docid": "672fa729e41d20bdd396f9de4ead36b3",
"text": "Data that encompasses relationships is represented by a graph of interconnected nodes. Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. c © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "44fe5a6d0d9c7b12fd88961d82778868",
"text": "Traumatic brain injury (TBI) remains a major cause of death and disability worldwide. Increasing evidence indicates that TBI is an important risk factor for neurodegenerative diseases including Alzheimer's disease, Parkinson's disease, and chronic traumatic encephalopathy. Despite improved supportive and rehabilitative care of TBI patients, unfortunately, all late phase clinical trials in TBI have yet to yield a safe and effective neuroprotective treatment. The disappointing clinical trials may be attributed to variability in treatment approaches and heterogeneity of the population of TBI patients as well as a race against time to prevent or reduce inexorable cell death. TBI is not just an acute event but a chronic disease. Among many mechanisms involved in secondary injury after TBI, emerging preclinical studies indicate that posttraumatic prolonged and progressive neuroinflammation is associated with neurodegeneration which may be treatable long after the initiating brain injury. This review provides an overview of recent understanding of neuroinflammation in TBI and preclinical cell-based therapies that target neuroinflammation and promote functional recovery after TBI.",
"title": ""
},
{
"docid": "0daa6d62dedf410bf782af662639507e",
"text": "The paper presents a novel, ultra-compact, reduced-area implementation of a D-type flip-flop using the GaAs Enhancement-Depletion (ED) PHEMT process of the OMMIC with the gate metal layout modified, at the device process level. The D cell has been developed as the building block of a serial to parallel 13-bit shifter embedded within an integrated core-chip for satellite X band SAR applications, but can be exploited for a wide set of logical GaAs-based applications. The novel D cell design, based on the Enhancement-Depletion Super-Buffer (EDSB) logical family, allows for an area reduction of about 20%, with respect to the conventional design, and simplified interconnections. Design rules have been developed to optimize the cell performances. Measured and simulated NOR transfer characteristics show good agreement. A dedicated layout for RF probing has been developed to test the D-type flip-flop behaviour and performances.",
"title": ""
},
{
"docid": "6f518559d8c99ea1e6368ec8c108cabe",
"text": "This paper introduces an integrated Local Interconnect Network (LIN) transceiver which sets a new performance benchmark in terms of electromagnetic compatibility (EMC). The proposed topology succeeds in an extraordinary high robustness against RF disturbances which are injected into the BUS and in very low electromagnetic emissions (EMEs) radiated by the LIN network without adding any external components for filtering. In order to evaluate the circuits superior EMC performance, it was designed using a HV-BiCMOS technology for automotive applications, the EMC behavior was measured and the results were compared with a state of the art topology.",
"title": ""
},
{
"docid": "8672ab8f10baf109492127ee599effdd",
"text": "In the embryonic and adult brain, neural stem cells proliferate and give rise to neurons and glia through highly regulated processes. Epigenetic mechanisms — including DNA and histone modifications, as well as regulation by non-coding RNAs — have pivotal roles in different stages of neurogenesis. Aberrant epigenetic regulation also contributes to the pathogenesis of various brain disorders. Here, we review recent advances in our understanding of epigenetic regulation in neurogenesis and its dysregulation in brain disorders, including discussion of newly identified DNA cytosine modifications. We also briefly cover the emerging field of epitranscriptomics, which involves modifications of mRNAs and long non-coding RNAs.",
"title": ""
},
{
"docid": "59344cfe759a89a68e7bc4b0a5c971b1",
"text": "A non-linear support vector machine (NLSVM) seizure classification SoC with 8-channel EEG data acquisition and storage for epileptic patients is presented. The proposed SoC is the first work in literature that integrates a feature extraction (FE) engine, patient specific hardware-efficient NLSVM classification engine, 96 KB SRAM for EEG data storage and low-noise, high dynamic range readout circuits. To achieve on-chip integration of the NLSVM classification engine with minimum area and energy consumption, the FE engine utilizes time division multiplexing (TDM)-BPF architecture. The implemented log-linear Gaussian basis function (LL-GBF) NLSVM classifier exploits the linearization to achieve energy consumption of 0.39 μ J/operation and reduces the area by 28.2% compared to conventional GBF implementation. The readout circuits incorporate a chopper-stabilized DC servo loop to minimize the noise level elevation and achieve noise RTI of 0.81 μ Vrms for 0.5-100 Hz bandwidth with an NEF of 4.0. The 5 × 5 mm (2) SoC is implemented in a 0.18 μm 1P6M CMOS process consuming 1.83 μ J/classification for 8-channel operation. SoC verification has been done with the Children's Hospital Boston-MIT EEG database, as well as with a specific rapid eye-blink pattern detection test, which results in an average detection rate, average false alarm rate and latency of 95.1%, 0.94% (0.27 false alarms/hour) and 2 s, respectively.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "6cd5b8ef199d926bccc583b7e058d9ee",
"text": "Over the last three decades, a large number of evolutionary algorithms have been developed for solving multi-objective optimization problems. However, there lacks an upto-date and comprehensive software platform for researchers to properly benchmark existing algorithms and for practitioners to apply selected algorithms to solve their real-world problems. The demand of such a common tool becomes even more urgent, when the source code of many proposed algorithms has not been made publicly available. To address these issues, we have developed a MATLAB platform for evolutionary multi-objective optimization in this paper, called PlatEMO, which includes more than 50 multiobjective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators. With a user-friendly graphical user interface, PlatEMO enables users to easily compare several evolutionary algorithms at one time and collect statistical results in Excel or LaTeX files. More importantly, PlatEMO is completely open source, such that users are able to develop new algorithms on the basis of it. This paper introduces the main features of PlatEMO and illustrates how to use it for performing comparative experiments, embedding new algorithms, creating new test problems, and developing performance indicators. Source code of PlatEMO is now available at: http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html.",
"title": ""
},
{
"docid": "27b2148c05febeb1051c1d1229a397d6",
"text": "Modern database management systems essentially solve the problem of accessing and managing large volumes of related data on a single platform, or on a cluster of tightly-coupled platforms. But many problems remain when two or more databases need to work together. A fundamental problem is raised by semantic heterogeneity the fact that data duplicated across multiple databases is represented differently in the underlying database schemas. This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them. The tutorial considers the following topics: (1) representative architectures for supporting database interoperation; (2) notions for comparing the “information capacity” of database schemas; (3) providing support for read-only integrated views of data, including the .virtual and materialized approaches; (4) providing support for read-write integrated views of data, including the issue of workflows on heterogeneous databases; and (5) research and tools for accessing and effectively using meta-data, e.g., to identify the relationships between schemas of different databases.",
"title": ""
},
{
"docid": "3cc07ea28720245f9c4983b0a4b1a66d",
"text": "A first line of attack in exploratory data analysis is data visualization, i.e., generating a 2-dimensional representation of data that makes clusters of similar points visually identifiable. Standard JohnsonLindenstrauss dimensionality reduction does not produce data visualizations. The t-SNE heuristic of van der Maaten and Hinton, which is based on non-convex optimization, has become the de facto standard for visualization in a wide range of applications. This work gives a formal framework for the problem of data visualization – finding a 2-dimensional embedding of clusterable data that correctly separates individual clusters to make them visually identifiable. We then give a rigorous analysis of the performance of t-SNE under a natural, deterministic condition on the “ground-truth” clusters (similar to conditions assumed in earlier analyses of clustering) in the underlying data. These are the first provable guarantees on t-SNE for constructing good data visualizations. We show that our deterministic condition is satisfied by considerably general probabilistic generative models for clusterable data such as mixtures of well-separated log-concave distributions. Finally, we give theoretical evidence that t-SNE provably succeeds in partially recovering cluster structure even when the above deterministic condition is not met.",
"title": ""
},
{
"docid": "6200d3c4435ae34e912fc8d2f92e904b",
"text": "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter $\\alpha$ is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.",
"title": ""
}
] |
scidocsrr
|
faa5e078449e45aa488e8c0194a567af
|
Alcohol addiction and the attachment system: an empirical study of attachment style, alexithymia, and psychiatric disorders in alcoholic inpatients.
|
[
{
"docid": "e89cf17cf4d336468f75173767af63a5",
"text": "This article explores the possibility that romantic love is an attachment process--a biosocial process by which affectional bonds are formed between adult lovers, just as affectional bonds are formed earlier in life between human infants and their parents. Key components of attachment theory, developed by Bowlby, Ainsworth, and others to explain the development of affectional bonds in infancy, were translated into terms appropriate to adult romantic love. The translation centered on the three major styles of attachment in infancy--secure, avoidant, and anxious/ambivalent--and on the notion that continuity of relationship style is due in part to mental models (Bowlby's \"inner working models\") of self and social life. These models, and hence a person's attachment style, are seen as determined in part by childhood relationships with parents. Two questionnaire studies indicated that relative prevalence of the three attachment styles is roughly the same in adulthood as in infancy, the three kinds of adults differ predictably in the way they experience romantic love, and attachment style is related in theoretically meaningful ways to mental models of self and social relationships and to relationship experiences with parents. Implications for theories of romantic love are discussed, as are measurement problems and other issues related to future tests of the attachment perspective.",
"title": ""
}
] |
[
{
"docid": "cd0c68845416f111307ae7e14bfb7491",
"text": "Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals' activity space. First, a survey was conducted to collect individuals' daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment.",
"title": ""
},
{
"docid": "b43178b53f927eb90473e2850f948cb6",
"text": "We study the problem of learning a navigation policy for a robot to actively search for an object of interest in an indoor environment solely from its visual inputs. While scene-driven visual navigation has been widely studied, prior efforts on learning navigation policies for robots to find objects are limited. The problem is often more challenging than target scene finding as the target objects can be very small in the view and can be in an arbitrary pose. We approach the problem from an active perceiver perspective, and propose a novel framework that integrates a deep neural network based object recognition module and a deep reinforcement learning based action prediction mechanism. To validate our method, we conduct experiments on both a simulation dataset (AI2-THOR)and a real-world environment with a physical robot. We further propose a new decaying reward function to learn the control policy specific to the object searching task. Experimental results validate the efficacy of our method, which outperforms competing methods in both average trajectory length and success rate.",
"title": ""
},
{
"docid": "2f1ad82127aa6fb65b712d395c31f690",
"text": "This paper presents a 100-300-GHz quasi-optical network analyzer using compact transmitter and receiver modules. The transmitter includes a wideband double bow-tie slot antenna and employs a Schottky diode as a frequency harmonic multiplier. The receiver includes a similar antenna, a Schottky diode used as a subharmonic mixer, and an LO/IF diplexer. The 100-300-GHz RF signals are the 5th-11th harmonics generated by the frequency multiplier when an 18-27-GHz LO signal is applied. The measured transmitter conversion gain with Pin = 18$ dBm is from -35 to -59 dB for the 5th-11th harmonic, respectively, and results in a transmitter EIRP from +3 to -20 dBm up to 300 GHz. The measured mixer conversion gain is from -30 to -47 dB at the 5th-11th harmonic, respectively. The system has a dynamic range > 60 dB at 200 GHz in a 100-Hz bandwidth for a transmit and receive system based on 12-mm lenses and spaced 60 cm from each other. Frequency-selective surfaces at 150 and 200 GHz are tested by the proposed design and their measured results agree with simulations. Application areas are low-cost scalar network analyzers for wideband quasi-optical 100 GHz-1 THz measurements.",
"title": ""
},
{
"docid": "e58e294dbacf605e40ff2f59cc4f8a6a",
"text": "There are fundamental similarities between sleep in mammals and quiescence in the arthropod Drosophila melanogaster, suggesting that sleep-like states are evolutionarily ancient. The nematode Caenorhabditis elegans also has a quiescent behavioural state during a period called lethargus, which occurs before each of the four moults. Like sleep, lethargus maintains a constant temporal relationship with the expression of the C. elegans Period homologue LIN-42 (ref. 5). Here we show that quiescence associated with lethargus has the additional sleep-like properties of reversibility, reduced responsiveness and homeostasis. We identify the cGMP-dependent protein kinase (PKG) gene egl-4 as a regulator of sleep-like behaviour, and show that egl-4 functions in sensory neurons to promote the C. elegans sleep-like state. Conserved effects on sleep-like behaviour of homologous genes in C. elegans and Drosophila suggest a common genetic regulation of sleep-like states in arthropods and nematodes. Our results indicate that C. elegans is a suitable model system for the study of sleep regulation. The association of this C. elegans sleep-like state with developmental changes that occur with larval moults suggests that sleep may have evolved to allow for developmental changes.",
"title": ""
},
{
"docid": "3f6f191d3d60cd68238545f4b809d4b4",
"text": "This paper examines the dependence of the healthcare waste (HCW) generation rate on several social-economic and environmental parameters. Correlations were calculated between the quantities of healthcare waste generated (expressed in kg/bed/day) versus economic indices (GDP, healthcare expenditure per capita), social indices (HDI, IHDI, MPI, life expectancy, mean years of schooling, HIV prevalence, deaths due to tuberculosis and malaria, and under five mortality rate), and an environmental sustainability index (total CO2 emissions) from 42 countries worldwide. The statistical analysis included the examination of the normality of the data and the formation of linear multiple regression models to further investigate the correlation between those indices and HCW generation rates. Pearson and Spearman correlation coefficients were also calculated for all pairwise comparisons. Results showed that the life expectancy, the HDI, the mean years of schooling and the CO2 emissions positively affect the HCW generation rates and can be used as statistical predictors of those rates. The resulting best reduced regression model included the life expectancy and the CO2 emissions and explained 85% of the variability of the response.",
"title": ""
},
{
"docid": "7f1ad50ce66c855776aaacd0d53279aa",
"text": "A method to synchronize and control a system of parallel single-phase inverters without communication is presented. Inspired by the phenomenon of synchronization in networks of coupled oscillators, we propose that each inverter be controlled to emulate the dynamics of a nonlinear dead-zone oscillator. As a consequence of the electrical coupling between inverters, they synchronize and share the load in proportion to their ratings. We outline a sufficient condition for global asymptotic synchronization and formulate a methodology for controller design such that the inverter terminal voltages oscillate at the desired frequency, and the load voltage is maintained within prescribed bounds. We also introduce a technique to facilitate the seamless addition of inverters controlled with the proposed approach into an energized system. Experimental results for a system of three inverters demonstrate power sharing in proportion to power ratings for both linear and nonlinear loads.",
"title": ""
},
{
"docid": "097da6ee2d13e0b4b2f84a26752574f4",
"text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.",
"title": ""
},
{
"docid": "159cd44503cb9def6276cb2b9d33c40e",
"text": "In the airline industry, data analysis and data mining are a prerequisite to push customer relationship management (CRM) ahead. Knowledge about data mining methods, marketing strategies and airline business processes has to be combined to successfully implement CRM. This paper is a case study and gives an overview about distinct issues, which have to be taken into account in order to provide a first solution to run CRM processes. We do not focus on each individual task of the project; rather we give a sketch about important steps like data preparation, customer valuation and segmentation and also explain the limitation of the solutions.",
"title": ""
},
{
"docid": "01a5bc92db5ae56c3bae8ddc84a1aa9b",
"text": "Accurate and automatic detection and delineation of cervical cells are two critical precursor steps to automatic Pap smear image analysis and detecting pre-cancerous changes in the uterine cervix. To overcome noise and cell occlusion, many segmentation methods resort to incorporating shape priors, mostly enforcing elliptical shapes (e.g. [1]). However, elliptical shapes do not accurately model cervical cells. In this paper, we propose a new continuous variational segmentation framework with star-shape prior using directional derivatives to segment overlapping cervical cells in Pap smear images. We show that our star-shape constraint better models the underlying problem and outperforms state-of-the-art methods in terms of accuracy and speed.",
"title": ""
},
{
"docid": "84f2072f32d2a29d372eef0f4622ddce",
"text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure",
"title": ""
},
{
"docid": "39fdfa5258c2cb22ed2d7f1f5b2afeaf",
"text": "Calling for research on automatic oversight for artificial intelligence systems.",
"title": ""
},
{
"docid": "5b61b6d96b7a4af62bf30b535a18e14a",
"text": "schooling were as universally endorsed as homework. Educators, parents, and policymakers of all political and pedagogical stripes insisted that homework is good and more is better—a view that was promoted most visibly in A Nation at Risk (National Commission on Excellence in Education, 1983) and What Works (U.S. Department of Education, 1986).1 Indeed, never in the history of American education was there a stronger professional and public consensus in favor of homework (see Gill & Schlossman, 1996; Gill & Schlossman, 2000). Homework has been touted for academic and character-building purposes, and for promoting America’s international competitiveness (see, e.g., Cooper, 2001; Keith, 1986; Maeroff, 1992; Maeroff, 1989; The Economist, 1995). It has been viewed as a key symbol, method, and yardstick of serious commitment to educational re-",
"title": ""
},
{
"docid": "268e434cedbf5439612b2197be73a521",
"text": "We have recently developed a chaotic gas turbine whose rotational motion might simulate turbulent Rayleigh-Bénard convection. The nondimensionalized equations of motion of our turbine are expressed as a star network of N Lorenz subsystems, referred to as augmented Lorenz equations. Here, we propose an application of the augmented Lorenz equations to chaotic cryptography, as a type of symmetric secret-key cryptographic method, wherein message encryption is performed by superimposing the chaotic signal generated from the equations on a plaintext in much the same way as in one-time pad cryptography. The ciphertext is decrypted by unmasking the chaotic signal precisely reproduced with a secret key consisting of 2N-1 (e.g., N=101) real numbers that specify the augmented Lorenz equations. The transmitter and receiver are assumed to be connected via both a quantum communication channel on which the secret key is distributed using a quantum key distribution protocol and a classical data communication channel on which the ciphertext is transmitted. We discuss the security and feasibility of our cryptographic method.",
"title": ""
},
{
"docid": "63046d1ca19a158052a62c8719f5f707",
"text": "Cloud machine learning (CML) techniques offer contemporary machine learning services, with pre-trained models and a service to generate own personalized models. This paper presents a completely unique emotional modeling methodology for incorporating human feeling into intelligent systems. The projected approach includes a technique to elicit emotion factors from users, a replacement illustration of emotions and a framework for predicting and pursuit user’s emotional mechanical phenomenon over time. The neural network based CML service has better training concert and enlarged exactness compare to other large scale deep learning systems. Opinions are important to almost all human activities and cloud based sentiment analysis is concerned with the automatic extraction of sentiment related information from text. With the rising popularity and availability of opinion rich resources such as personal blogs and online appraisal sites, new opportunities and issues arise as people now, actively use information technologies to explore and capture others opinions. In the existing system, a segmentation ranking model is designed to score the usefulness of a segmentation candidate for sentiment classification. A classification model is used for predicting the sentiment polarity of segmentation. The joint framework is trained directly using the sentences annotated with only sentiment polarity, without the use of any syntactic or sentiment annotations in segmentation level. However the existing system still has issue with classification accuracy results. To improve the classification performance, in the proposed system, cloud integrate the support vector machine, naive bayes and neural network algorithms along with joint segmentation approaches has been proposed to classify the very positive, positive, neutral, negative and very negative features more effectively using important feature selection. Also to handle the outliers we apply modified k-means clustering method on the given dataset. It is used to cloud cluster the outliers and hence the label as well as unlabeled features is handled efficiently. From the experimental result, we conclude that the proposed system yields better performance than the existing system.",
"title": ""
},
{
"docid": "d14da110523c56d3c1ab2be9d3fbcf8e",
"text": "Women are generally more risk averse than men. We investigated whether between- and within-gender variation in financial risk aversion was accounted for by variation in salivary concentrations of testosterone and in markers of prenatal testosterone exposure in a sample of >500 MBA students. Higher levels of circulating testosterone were associated with lower risk aversion among women, but not among men. At comparably low concentrations of salivary testosterone, however, the gender difference in risk aversion disappeared, suggesting that testosterone has nonlinear effects on risk aversion regardless of gender. A similar relationship between risk aversion and testosterone was also found using markers of prenatal testosterone exposure. Finally, both testosterone levels and risk aversion predicted career choices after graduation: Individuals high in testosterone and low in risk aversion were more likely to choose risky careers in finance. These results suggest that testosterone has both organizational and activational effects on risk-sensitive financial decisions and long-term career choices.",
"title": ""
},
{
"docid": "f78fcf875104f8bab2fa465c414331c6",
"text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.",
"title": ""
},
{
"docid": "cec10dde2a3988b39d8b2e7655e92a3c",
"text": "As the performance gap between the CPU and main memory continues to grow, techniques to hide memory latency are essential to deliver a high performance computer system. Prefetching can often overlap memory latency with computation for array-based numeric applications. However, prefetching for pointer-intensive applications still remains a challenging problem. Prefetching linked data structures (LDS) is difficult because the address sequence of LDS traversal does not present the same arithmetic regularity as array-based applications and the data dependence of pointer dereferences can serialize the address generation process.\nIn this paper, we propose a cooperative hardware/software mechanism to reduce memory access latencies for linked data structures. Instead of relying on the past address history to predict future accesses, we identify the load instructions that traverse the LDS, and execute them ahead of the actual computation. To overcome the serial nature of the LDS address generation, we attach a prefetch controller to each level of the memory hierarchy and push, rather than pull, data to the CPU. Our simulations, using four pointer-intensive applications, show that the push model can achieve between 4% and 30% larger reductions in execution time compared to the pull model.",
"title": ""
},
{
"docid": "89feab547a2ab97f41ee9ea47a78ebd7",
"text": "Yarrowia lipolytica 3589, a tropical marine yeast, grew aerobically on a broad range of bromoalkanes varying in carbon chain length and differing in degree and position of bromide group. Amongst the bromoalkanes studied, viz. 2-bromopropane (2-BP), 1-bromobutane (1-BB), 1,5-dibromopentane (1,5-DBP) and 1-bromodecane (1-BD), the best utilized was 1-BD, with a maximal growth rate (μ(max) ) of 0.055 h⁻¹ and an affinity ratio (μ(max) /K(s) ) of 0.022. Utilization of these bromoalkanes as growth substrates was associated with a concomitant release of bromide (8202.9 µm) and cell mass (36 × 10⁹ cells/ml), occurring maximally on 1-BD. Adherence of yeast cells to these hydrophobic bromoalkanes was observed microscopically, with an increase in cell size and surface hydrophobicity. The maximal cell diameter was for 1-BD (4.66 µm), resulting in an increase in the calculated cell surface area (68.19 µm²) and sedimentation velocity (1.31 µm/s). Cell surface hydrophobicity values by microbial adhesion to solvents (MATS) analysis for yeasts grown on bromoalkanes and glucose were significantly high, i.e. >80%. Similarly, water contact angles also indicate that the cell surface of yeast cells grown in glucose possess a relatively more hydrophilic cell surface (θ = 49.1°), whereas cells grown in 1-BD possess a more hydrophobic cell surface (θ = 90.7°). No significant change in emulsification activity or surface tension was detected in the cell-free supernatant. Thus adherence to the bromoalkane droplets by an increase in cell size and surface hydrophobicity leading to debromination of the substrate might be the strategy employed in bromoalkane utilization and growth by Y. lipolytica 3589.",
"title": ""
},
{
"docid": "8721382dd1674fac3194d015b9c64f94",
"text": "fines excipients as “substances, other than the active drug substance of finished dosage form, which have been appropriately evaluated for safety and are included in a drug delivery system to either aid the processing of the drug delivery system during its manufacture; protect; support; enhance stability, bioavailability, or patient acceptability; assist in product identification; or enhance any other attributes of the overall safety and effectiveness of the drug delivery system during storage or use” (1). This definition implies that excipients serve a purpose in a formulation and contrasts with the old terminology, inactive excipients, which hints at the property of inertness. With a literal interpretation of this definition, an excipient can include diverse molecules or moieties such as replication incompetent viruses (adenoviral or retroviral vectors), bacterial protein components, monoclonal antibodies, bacteriophages, fusion proteins, and molecular chimera. For example, using gene-directed enzyme prodrug therapy, research indicated that chimera containing a transcriptional regulatory DNA sequence capable of being selectively activated in mammalian cells was linked to a sequence that encodes a -lactamase enzyme and delivered to target cells (2). The expressed enzyme in the targeted cells catalyzes the conversion of a subsequently administered prodrug to a toxic agent. A similar purpose is achieved by using an antibody conjugated to an enzyme followed by the administration of a noncytotoxic substance that is converted in vivo by the enzyme to its toxic form (3). In these examples, the chimera or the enzyme-linked antibody would qualify as excipients. Furthermore, many emerging delivery systems use a drug or gene covalently linked to the molecules, polymers, antibody, or chimera responsible for drug targeting, internalization, or transfection. Conventional wisdom dictates that such an entity be classified as the active substance or prodrug for regulatory purposes and be subject to one set of specifications for the entire molecule. The fact remains, however, that only a discrete part of this prodrug is responsible for the therapeutic effect, and a similar effect may be obtained by physically entrapping the drug as opposed to covalent conjugation. The situation is further complicated when fusion proteins are used as a combination of drug and delivery system or when the excipients themselves",
"title": ""
},
{
"docid": "65e3890edd57a0a6de65b4e38f3cea1c",
"text": "This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an `1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of `1-analysis for such problems.",
"title": ""
}
] |
scidocsrr
|
f45d110ac512a7916525b8f457d0a45c
|
Active Learning for Multivariate Time Series Classification with Positive Unlabeled Data
|
[
{
"docid": "8a5ae40bc5921d7614ca34ddf53cebbc",
"text": "In natural language processing community, sentiment classification based on insufficient labeled data is a well-known challenging problem. In this paper, a novel semi-supervised learning algorithm called active deep network (ADN) is proposed to address this problem. First, we propose the semi-supervised learning framework of ADN. ADN is constructed by restricted Boltzmann machines (RBM) with unsupervised fine-tuned by gradient-descent based supervised learning with an exponential loss function. Second, in the semi-supervised learning framework, we apply active learning to identify reviews that should be labeled as training data, then using the selected labeled reviews and all unlabeled reviews to train ADN architecture. Moreover, we combine the information density with ADN, and propose information ADN (IADN) method, which can apply the information density of all unlabeled reviews in choosing the manual labeled reviews. Experiments on five sentiment classification datasets show that ADN and IADN outperform classical semi-supervised learning algorithms, and deep learning techniques applied for sentiment classification. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "2e6193301f53719e58782bece34cb55a",
"text": "There is an increasing trend in using robots for medical purposes. One specific area is the rehabilitation. There are some commercial exercise machines used for rehabilitation purposes. However, these machines have limited use because of their insufficient motion freedom. In addition, these types of machines are not actively controlled and therefore can not accommodate complicated exercises required during rehabilitation. In this study, a rule based intelligent control methodology is proposed to imitate the faculties of an experienced physiotherapist. These involve interpretation of patient reactions, storing the information received, acting according to the available data, and learning from the previous experiences. Robot manipulator is driven by a servo motor and controlled by a computer using force/torque and position sensor information. Impedance control technique is selected for the force control.",
"title": ""
},
{
"docid": "ef9b5b0fbfd71c8d939bfe947c60292d",
"text": "OBJECTIVE\nSome prolonged and turbulent grief reactions include symptoms that differ from the DSM-IV criteria for major depressive disorder. The authors investigated a new diagnosis that would include these symptoms.\n\n\nMETHOD\nThey developed observer-based definitions of 30 symptoms noted clinically in previous longitudinal interviews of bereaved persons and then designed a plan to investigate whether any combination of these would serve as criteria for a possible new diagnosis of complicated grief disorder. Using a structured diagnostic interview, they assessed 70 subjects whose spouses had died. Latent class model analyses and signal detection procedures were used to calibrate the data against global clinical ratings and self-report measures of grief-specific distress.\n\n\nRESULTS\nComplicated grief disorder was found to be characterized by a smaller set of the assessed symptoms. Subjects elected by an algorithm for these symptoms patterns did not significantly overlap with subjects who received a diagnosis of major depressive disorder.\n\n\nCONCLUSIONS\nA new diagnosis of complicated grief disorder may be indicated. Its criteria would include the current experience (more than a year after a loss) of intense intrusive thoughts, pangs of severe emotion, distressing yearnings, feeling excessively alone and empty, excessively avoiding tasks reminiscent of the deceased, unusual sleep disturbances, and maladaptive levels of loss of interest in personal activities.",
"title": ""
},
{
"docid": "bd1ebfe449a1a95ac37f6c084e3e6dad",
"text": "Within the last years educational games have attracted some attention from the academic community. Multiple enhancements of the learning experience are usually attributed to educational games, although the most cited is their potential to improve students' motivation. In spite of these expected advantages, how to introduce video games in the learning process is an issue that is not completely clear yet, which reduces the potential impact of educational video games. Our goal at the <;e-UCM> research group is to identify the barriers that are limiting the integration of games in the learning process and propose approaches to tackle them. The result of this work is the <;e-Adventure> platform, an educational game authoring tool that aims to make of video games just another educational tool at the disposal of the instructors. In this paper we describe how <;e-Adventure> contributes to the integration of games in the learning process through three main focuses: reduction of the high development costs of educational games, involvement of instructors in the development process to enhance the educational value, and the production of the games using a white-box model. In addition we describe the current research that we are conducting using the platform as a test-bed.",
"title": ""
},
{
"docid": "083d5b88cc1bf5490a0783a4a94e9fb2",
"text": "Taking care and maintenance of a healthy population is the Strategy of each country. Information and communication technologies in the health care system have led to many changes in order to improve the quality of health care services to patients, rational spending time and reduce costs. In the booming field of IT research, the reach of drug delivery, information on grouping of similar drugs has been lacking. The wealth distribution and drug affordability at a certain demographic has been interlinked and proposed in this paper. Looking at the demographic we analyze and group the drugs based on target action and link this to the wealth and the people to medicine ratio, which can be accomplished via data mining and web mining. The data thus mined will be analysed and made available to public and commercial purpose for their further knowledge and benefit.",
"title": ""
},
{
"docid": "238adc0417c167aeb64c23b576f434d0",
"text": "This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS estimates, enabling us to improve the location estimates of Google Street View.",
"title": ""
},
{
"docid": "877d7d467711e8cb0fd03a941c7dc9da",
"text": "Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.",
"title": ""
},
{
"docid": "e0e33d26cc65569e80213069cb5ad857",
"text": "Capsule Networks have great potential to tackle problems in structural biology because of their aention to hierarchical relationships. is paper describes the implementation and application of a Capsule Network architecture to the classication of RAS protein family structures on GPU-based computational resources. e proposed Capsule Network trained on 2D and 3D structural encodings can successfully classify HRAS and KRAS structures. e Capsule Network can also classify a protein-based dataset derived from a PSI-BLAST search on sequences of KRAS and HRAS mutations. Our results show an accuracy improvement compared to traditional convolutional networks, while improving interpretability through visualization of activation vectors.",
"title": ""
},
{
"docid": "10f6ae0e9c254279b0cf0f5e98caa9cd",
"text": "The automatic assessment of photo quality from an aesthetic perspective is a very challenging problem. Most existing research has predominantly focused on the learning of a universal aesthetic model based on hand-crafted visual descriptors . However, this research paradigm can achieve only limited success because (1) such hand-crafted descriptors cannot well preserve abstract aesthetic properties , and (2) such a universal model cannot always capture the full diversity of visual content. To address these challenges, we propose in this paper a novel query-dependent aesthetic model with deep learning for photo quality assessment. In our method, deep aesthetic abstractions are discovered from massive images , whereas the aesthetic assessment model is learned in a query- dependent manner. Our work addresses the first problem by learning mid-level aesthetic feature abstractions via powerful deep convolutional neural networks to automatically capture the underlying aesthetic characteristics of the massive training images . Regarding the second problem, because photographers tend to employ different rules of photography for capturing different images , the aesthetic model should also be query- dependent . Specifically, given an image to be assessed, we first identify which aesthetic model should be applied for this particular image. Then, we build a unique aesthetic model of this type to assess its aesthetic quality. We conducted extensive experiments on two large-scale datasets and demonstrated that the proposed query-dependent model equipped with learned deep aesthetic abstractions significantly and consistently outperforms state-of-the-art hand-crafted feature -based and universal model-based methods.",
"title": ""
},
{
"docid": "0998097311e16ad38e2404435a778dcb",
"text": "Civilian Global Positioning System (GPS) receivers are vulnerable to a number of different attacks such as blocking, jamming, and spoofing. The goal of such attacks is either to prevent a position lock (blocking and jamming), or to feed the receiver false information so that it computes an erroneous time or location (spoofing). GPS receivers are generally aware of when blocking or jamming is occurring because they have a loss of signal. Spoofing, however, is a surreptitious attack. Currently, no countermeasures are in use for detecting spoofing attacks. We believe, however, that it is possible to implement simple, low-cost countermeasures that can be retrofitted onto existing GPS receivers. This would, at the very least, greatly complicate spoofing attacks. Introduction: The civilian Global Positioning System (GPS) is widely used by both government and private industry for many important applications. Some of these applications include public safety services such as police, fire, rescue and ambulance. The cargo industry, buses, taxis, railcars, delivery vehicles, agricultural harvesters, private automobiles, spacecraft, marine and airborne traffic also use GPS systems for navigation. In fact, the Federal Aviation Administration (FAA) is in the process of drafting an instruction requiring that all radio navigation systems aboard aircraft use GPS [1]. Additional uses include hiking and surveying, as well as being used in robotics, cell phones, animal tracking and even GPS wristwatches. Utility companies and telecommunication companies use GPS timing signals to regulate the base frequency of their distribution grids. GPS timing signals are also used by the financial industry, the broadcast industry, mobile telecommunication providers, the international financial industry, banking (for money transfers and time locks), and other distributed computer network applications [2,3]. In short, anyone who wants to know their exact location, velocity, or time might find GPS useful. Unfortunately, the civilian GPS signals are not secure [1]. Only the military GPS signals are encrypted (authenticated), but these are generally unavailable to civilians, foreign governments, and most of the U.S. government, including most of the Department of Defense (DoD). Plans are underway to upgrade the existing GPS system, but they apparently do not include adding encryption or authentication to the civilian GPS signal [4,5]. The GPS signal strength measured at the surface of the Earth is about –160dBw (1x10-16 Watts), which is roughly equivalent to viewing a 25-Watt light bulb from a distance of 10,000 miles. This weak signal can be easily blocked by destroying or shielding the GPS receiver’s antenna. The GPS signal can also be effectively jammed by a signal of a similar frequency, but greater strength. Blocking and jamming, however, are not the greatest security risk, because the GPS receiver will be fully aware it is not receiving the GPS signals needed to determine position and time. A more pernicious attack involves feeding the GPS receiver fake GPS signals so that it believes it is located somewhere in space and time that it is not. This “spoofing” attack is more elegant than jamming because it is surreptitious. The Vulnerability Assessment Team (VAT) at Los Alamos National Laboratory (LANL) has recently demonstrated the ease with which civilian GPS spoofing attacks can be implemented [6]. This spoofing is most easily accomplished by using a GPS satellite simulator. 
Such GPS satellite simulators are uncontrolled, and widely available. To conduct the spoofing attack, an adversary broadcasts a fake GPS signal with a higher signal strength than the true GPS signal. The GPS receiver believes that the fake signal is actually the true GPS signal from space, and ignores the true GPS signal. The receiver then proceeds to calculate erroneous position or time information based on this false signal. How Does GPS work? The GPS is operated by DoD. It consists of a constellation of 27 satellites (24 active and 3 standby) in 6 separate orbits and reached full official operational capability status on July 17, 1995 [7]. GPS users have the ability to obtain a 3-D position, velocity and time fix in all types of weather, 24-hours a day. GPS users can locate their position to within ± 18 ft on average or ± 60-90 ft for a worst case 3-D fix [8]. Each GPS satellite broadcasts two signals, a civilian unencrypted signal and a military encrypted signal. The civilian GPS signal was never intended for critical or security applications, though that is, unfortunately, how it is now often used. The DoD reserves the military encrypted GPS signal for sensitive applications such as smart weapons. This paper will be focusing on the civilian (unencrypted) GPS signal. Any discussion of civilian GPS vulnerabilities are fully unclassified [9]. The carrier wave for the civilian signal is the same frequency (1575.2 MHz) for all of the GPS satellites. The C/A code provides the GPS receiver on the Earth’s surface with a unique identification number (a.k.a. PRN or Pseudo Random Noise code). In this manner, each satellite transmits a unique identification number that allows the GPS receiver to know which satellites it is receiving signals from. The Nav/System data provides the GPS receiver with information about the position of all the satellites in the constellation as well as precise timing data from the atomic clocks aboard the satellites. L1 Carrier 1575.2 MHz",
"title": ""
},
{
"docid": "723cf2a8b6142a7e52a0ff3fb74c3985",
"text": "The Internet of Mobile Things (IoMT) requires support for a data lifecycle process ranging from sorting, cleaning and monitoring data streams to more complex tasks such as querying, aggregation, and analytics. Current solutions for stream data management in IoMT have been focused on partial aspects of a data lifecycle process, with special emphasis on sensor networks. This paper aims to address this problem by developing an offline and real-time data lifecycle process that incorporates a layered, data-flow centric, and an edge/cloud computing approach that is needed for handling heterogeneous, streaming and geographicallydispersed IoMT data streams. We propose an end to end architecture to support an instant intra-layer communication that establishes a stream data flow in real-time to respond to immediate data lifecycle tasks at the edge layer of the system. Our architecture also provides offline functionalities for later analytics and visualization of IoMT data streams at the core layer of the system. Communication and process are thus the defining factors in the design of our stream data management solution for IoMT. We describe and evaluate our prototype implementation using real-time transit data feeds and a commercial edge-based platform. Preliminary results are showing the advantages of running data lifecycle tasks at the edge of the network for reducing the volume of data streams that are redundant and should not be transported to the cloud. Keywords—stream data lifecycle, edge computing, cloud computing, Internet of Mobile Things, end to end architectures",
"title": ""
},
{
"docid": "54cef03846f090678efd5b67d3cb5b17",
"text": "This paper based on the speed control of induction motor (IM) using proportional integral controller (PI controller) and proportional integral derivative controller (PID controller) with the use of vector control technique. The conventional PID controller is compared with the conventional PI controller for full load condition. MATLAB simulation is carried out and results are investigated for speed control of Induction Motor without any controller, with PI controller and with PID controller on full load condition.",
"title": ""
},
{
"docid": "1fcdfd02a6ecb12dec5799d6580c67d4",
"text": "One of the major problems in developing countries is maintenance of roads. Well maintained roads contribute a major portion to the country's economy. Identification of pavement distress such as potholes and humps not only helps drivers to avoid accidents or vehicle damages, but also helps authorities to maintain roads. This paper discusses previous pothole detection methods that have been developed and proposes a cost-effective solution to identify the potholes and humps on roads and provide timely alerts to drivers to avoid accidents or vehicle damages. Ultrasonic sensors are used to identify the potholes and humps and also to measure their depth and height, respectively. The proposed system captures the geographical location coordinates of the potholes and humps using a global positioning system receiver. The sensed-data includes pothole depth, height of hump, and geographic location, which is stored in the database (cloud). This serves as a valuable source of information to the government authorities and vehicle drivers. An android application is used to alert drivers so that precautionary measures can be taken to evade accidents. Alerts are given in the form of a flash messages with an audio beep.",
"title": ""
},
{
"docid": "1d29d30089ffd9748c925a20f8a1216e",
"text": "• Users may freely distribute the URL that is used to identify this publication. • Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. • User may use extracts from the document in line with the concept of ‘fair dealing’ under the Copyright, Designs and Patents Act 1988 (?) • Users may not further distribute the material nor use it for the purposes of commercial gain.",
"title": ""
},
{
"docid": "68b2608c91525f3147f74b41612a9064",
"text": "Protective effects of sweet orange (Citrus sinensis) peel and their bioactive compounds on oxidative stress were investigated. According to HPLC-DAD and HPLC-MS/MS analysis, hesperidin (HD), hesperetin (HT), nobiletin (NT), and tangeretin (TT) were present in water extracts of sweet orange peel (WESP). The cytotoxic effect in 0.2mM t-BHP-induced HepG2 cells was inhibited by WESP and their bioactive compounds. The protective effect of WESP and their bioactive compounds in 0.2mM t-BHP-induced HepG2 cells may be associated with positive regulation of GSH levels and antioxidant enzymes, decrease in ROS formation and TBARS generation, increase in the mitochondria membrane potential and Bcl-2/Bax ratio, as well as decrease in caspase-3 activation. Overall, WESP displayed a significant cytoprotective effect against oxidative stress, which may be most likely because of the phenolics-related bioactive compounds in WESP, leading to maintenance of the normal redox status of cells.",
"title": ""
},
{
"docid": "62f67cf8f628be029ce748121ff52c42",
"text": "This paper reviews interface design of web pages for e-commerce. Different tasks in e-commerce are contrasted. A systems model is used to illustrate the information flow between three subsystems in e-commerce: store environment, customer, and web technology. A customer makes several decisions: to enter the store, to navigate, to purchase, to pay, and to keep the merchandize. This artificial environment must be designed so that it can support customer decision-making. To retain customers it must be pleasing and fun, and create a task with natural flow. Customers have different needs, competence and motivation, which affect decision-making. It may therefore be important to customize the design of the e-store environment. Future ergonomics research will have to investigate perceptual aspects, such as presentation of merchandize, and cognitive issues, such as product search and navigation, as well as decision making while considering various economic parameters. Five theories on e-commerce research are presented.",
"title": ""
},
{
"docid": "0185d09853600b950f5a1af27e0cdd91",
"text": "In this paper, the problem of matching pairs of correlated random graphs with multi-valued edge attributes is considered. Graph matching problems of this nature arise in several settings of practical interest including social network de-anonymization, study of biological data, and web graphs. An achievable region of graph parameters for successful matching is derived by analyzing a new matching algorithm that we refer to as typicality matching. The algorithm operates by investigating the joint typicality of the adjacency matrices of the two correlated graphs. Our main result shows that the achievable region depends on the mutual information between the variables corresponding to the edge probabilities of the two graphs. The result is based on bounds on the typicality of permutations of sequences of random variables that might be of independent interest.",
"title": ""
},
{
"docid": "671952f18fb9041e7335f205666bf1f5",
"text": "This new handbook is an efficient way to keep up with the continuing advances in antenna technology and applications. The handbook is uniformly well written, up-to-date, and filled with a wealth of practical information. This makes it a useful reference for most antenna engineers and graduate students.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph ~ into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efilcient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph ~ and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed rep~esentation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "e22f9516948725be20d8e331d5bafa56",
"text": "Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in automatic surveillance of electrical infrastructure. For an automatic vision-based power line inspection system, detecting power lines from a cluttered background is one of the most important and challenging tasks. In this paper, a novel method is proposed, specifically for power line detection from aerial images. A pulse coupled neural filter is developed to remove background noise and generate an edge map prior to the Hough transform being employed to detect straight lines. An improved Hough transform is used by performing knowledge-based line clustering in Hough space to refine the detection results. The experiment on real image data captured from a UAV platform demonstrates that the proposed approach is effective for automatic power line detection.",
"title": ""
},
{
"docid": "fb66a74a7cb4aa27556b428e378353a8",
"text": "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Abstract—High-resolution radar sensors are able to resolve multiple measurements per object and therefore provide valuable information for vehicle environment perception. For instance, multiple measurements allow to infer the size of an object or to more precisely measure the object’s motion. Yet, the increased amount of data raises the demands on tracking modules: measurement models that are able to process multiple measurements for an object are necessary and measurement-toobject associations become more complex. This paper presents a new variational radar model for vehicles and demonstrates how this model can be incorporated in a Random-Finite-Setbased multi-object tracker. The measurement model is learned from actual data using variational Gaussian mixtures and avoids excessive manual engineering. In combination with the multiobject tracker, the entire process chain from the raw measurements to the resulting tracks is formulated probabilistically. The presented approach is evaluated on experimental data and it is demonstrated that data-driven measurement model outperforms a manually designed model.",
"title": ""
}
] |
scidocsrr
|
ae915c34345204fff23600f7737930a7
|
Treatment planning of the edentulous mandible
|
[
{
"docid": "0ad4432a79ea6b3eefbe940adf55ff7b",
"text": "This study reviews the long-term outcome of prostheses and fixtures (implants) in 759 totally edentulous jaws of 700 patients. A total of 4,636 standard fixtures were placed and followed according to the osseointegration method for a maximum of 24 years by the original team at the University of Göteborg. Standardized annual clinical and radiographic examinations were conducted as far as possible. A lifetable approach was applied for statistical analysis. Sufficient numbers of fixtures and prostheses for a detailed statistical analysis were present for observation times up to 15 years. More than 95% of maxillae had continuous prosthesis stability at 5 and 10 years, and at least 92% at 15 years. The figure for mandibles was 99% at all time intervals. Calculated from the time of fixture placement, the estimated survival rates for individual fixtures in the maxilla were 84%, 89%, and 92% at 5 years; 81% and 82% at 10 years; and 78% at 15 years. In the mandible they were 91%, 98%, and 99% at 5 years; 89% and 98% at 10 years; and 86% at 15 years. (The different percentages at 5 and 10 years refer to results for different routine groups of fixtures with 5 to 10, 10 to 15, and 1 to 5 years of observation time, respectively.) The results of this study concur with multicenter and earlier results for the osseointegration method.",
"title": ""
}
] |
[
{
"docid": "525f9a7321a7b45111a19f458c9b976a",
"text": "This paper provides a literature review on Adaptive Line Enhancer (ALE) methods based on adaptive noise cancellation systems. Such methods have been used in various applications, including communication systems, biomedical engineering, and industrial applications. Developments in ALE in noise cancellation are reviewed, including the principles, adaptive algorithms, and recent modifications on the filter design proposed to increase the convergence rate and reduce the computational complexity for future implementation. The advantages and drawbacks of various adaptive algorithms, such as the Least Mean Square, Recursive Least Square, Affine Projection Algorithm, and their variants, are discussed in this review. Design modifications of filter structures used in ALE are also evaluated. Such filters include Finite Impulse Response, Infinite Impulse Response, lattice, and nonlinear adaptive filters. These structural modifications aim to achieve better adaptive filter performance in ALE systems. Finally, a perspective of future research on ALE systems is presented for further consideration.",
"title": ""
},
{
"docid": "188df015d60168b57f37e39089f3b14e",
"text": "Implementation of a nutrition programme for team sports involves application of scientific research together with the social skills necessary to work with a sports medicine and coaching staff. Both field and court team sports are characterized by intermittent activity requiring a heavy reliance on dietary carbohydrate sources to maintain and replenish glycogen. Energy and substrate demands are high during pre-season training and matches, and moderate during training in the competitive season. Dietary planning must include enough carbohydrate on a moderate energy budget, while also meeting protein needs. Strength and power team sports require muscle-building programmes that must be accompanied by adequate nutrition, and simple anthropometric measurements can help the nutrition practitioner monitor and assess body composition periodically. Use of a body mass scale and a urine specific gravity refractometer can help identify athletes prone to dehydration. Sports beverages and caffeine are the most common supplements, while opinion on the practical effectiveness of creatine is divided. Late-maturing adolescent athletes become concerned about gaining size and muscle, and assessment of maturity status can be carried out with anthropometric procedures. An overriding consideration is that an individual approach is needed to meet each athlete's nutritional needs.",
"title": ""
},
{
"docid": "1d3eb22e6f244fbe05d0cc0f7ee37b84",
"text": "Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantifying neural network uncertainty may not efficiently provide accurate uncertainty estimates when queried with inputs that are very different from their training data. Rather than unconditionally trusting the predictions of a neural network for unpredictable real-world data, we use an autoencoder to recognize when a query is novel, and revert to a safe prior behavior. With this capability, we can deploy an autonomous deep learning system in arbitrary environments, without concern for whether it has received the appropriate training. We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar types of environments, while reverting to the prior behavior in novel environments so that it can safely collect additional training data and continually improve. A video illustrating our approach is available at: http://groups.csail.mit.edu/rrg/videos/safe visual navigation.",
"title": ""
},
{
"docid": "bf08bc98eb9ef7a18163fc310b10bcf6",
"text": "An ultra-low voltage, low power, low line sensitivity MOSFET-only sub-threshold voltage reference with no amplifiers is presented. The low sensitivity is realized by the difference between two complementary currents and second-order compensation improves the temperature stability. The bulk-driven technique is used and most of the transistors work in the sub-threshold region, which allow a remarkable reduction in the minimum supply voltage and power consumption. Moreover, a trimming circuit is adopted to compensate the process-related reference voltage variation while the line sensitivity is not affected. The proposed voltage reference has been fabricated in the 0.18 μm 1.8 V CMOS process. The measurement results show that the reference could operate on a 0.45 V supply voltage. For supply voltages ranging from 0.45 to 1.8 V the power consumption is 15.6 nW, and the average temperature coefficient is 59.4 ppm/°C across a temperature range of -40 to 85 °C and a mean line sensitivity of 0.033%. The power supply rejection ratio measured at 100 Hz is -50.3 dB. In addition, the chip area is 0.013 mm2.",
"title": ""
},
{
"docid": "5f5258cec772f97c18a5ccda25f7a617",
"text": "While most prognostics approaches focus on accurate computation of the degradation rate and the Remaining Useful Life (RUL) of individual components, it is the rate at which the performance of subsystems and systems degrade that is of greater interest to the operators and maintenance personnel of these systems. Accurate and reliable predictions make it possible to plan the future operations of the system, optimize maintenance scheduling activities, and maximize system life. In system-level prognostics, we are interested in determining when the performance of a system will fall below pre-defined levels of acceptable performance. Our focus in this paper is on developing a comprehensive methodology for system-level prognostics under uncertainty that combines the use of an estimation scheme that tracks system state and degradation parameters, along with a prediction scheme that computes the RUL as a stochastic distribution over the life of the system. Two parallel methods have been developed for prediction: (1) methods based on stochastic simulation and (2) optimization methods, such as first order reliability method (FORM). We compare the computational complexity and the accuracy of the two prediction approaches using a case study of a system with several degrading components.",
"title": ""
},
{
"docid": "80fd067dd6cf2fe85ade3c632e82c04c",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.03.046 * Corresponding author. Tel.: +98 09126121921. E-mail address: shahbazi_mo@yahoo.com (M. Sha Recommender systems are powerful tools that allow companies to present personalized offers to their customers and defined as a system which recommends an appropriate product or service after learning the customers’ preferences and desires. Extracting users’ preferences through their buying behavior and history of purchased products is the most important element of such systems. Due to users’ unlimited and unpredictable desires, identifying their preferences is very complicated process. In most researches, less attention has been paid to user’s preferences varieties in different product categories. This may decrease quality of recommended items. In this paper, we introduced a technique of recommendation in the context of online retail store which extracts user preferences in each product category separately and provides more personalized recommendations through employing product taxonomy, attributes of product categories, web usage mining and combination of two well-known filtering methods: collaborative and content-based filtering. Experimental results show that proposed technique improves quality, as compared to similar approaches. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ef1f901e0fb01a99728282f743cc1c65",
"text": "Matching facial sketches to digital face images has widespread application in law enforcement scenarios. Recent advancements in technology have led to the availability of sketch generation tools, minimizing the requirement of a sketch artist. While these sketches have helped in manual authentication, matching composite sketches with digital mugshot photos automatically show high modality gap. This research aims to address the task of matching a composite face sketch image to digital images by proposing a transfer learning based evolutionary algorithm. A new feature descriptor, Histogram of Image Moments, has also been presented for encoding features across modalities. Moreover, IIITD Composite Face Sketch Database of 150 subjects is presented to fill the gap due to limited availability of databases in this problem domain. Experimental evaluation and analysis on the proposed dataset show the effectiveness of the transfer learning approach for performing cross-modality recognition.",
"title": ""
},
{
"docid": "1cbc333cce4870cc0f465bb76b6e4d3c",
"text": "This note attempts to raise awareness within the network research community about the security of the interdomain routing infrastructure. We identify several attack objectives and mechanisms, assuming that one or more BGP routers have been compromised. Then, we review the existing and proposed countermeasures, showing that they are either generally ineffective (route filtering), or probably too heavyweight to deploy (S-BGP). We also review several recent proposals, and conclude by arguing that a significant research effort is urgently needed in the area of routing security.",
"title": ""
},
{
"docid": "04476184ca103b9d8012827615fc84a5",
"text": "In order to investigate the local filtering behavior of the Retinex model, we propose a new implementation in which paths are replaced by 2-D pixel sprays, hence the name \"random spray Retinex.\" A peculiar feature of this implementation is the way its parameters can be controlled to perform spatial investigation. The parameters' tuning is accomplished by an unsupervised method based on quantitative measures. This procedure has been validated via user panel tests. Furthermore, the spray approach has faster performances than the path-wise one. Tests and results are presented and discussed",
"title": ""
},
{
"docid": "760edd83045a80dbb2231c0ffbef2ea7",
"text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.",
"title": ""
},
{
"docid": "8785e51ebe39057012b81c37a6ddc097",
"text": "In this paper, we present a set of distributed algorithms for estimating the electro-mechanical oscillation modes of large power system networks using synchrophasors. With the number of phasor measurement units (PMUs) in the North American grid scaling up to the thousands, system operators are gradually inclining toward distributed cyber-physical architectures for executing wide-area monitoring and control operations. Traditional centralized approaches, in fact, are anticipated to become untenable soon due to various factors such as data volume, security, communication overhead, and failure to adhere to real-time deadlines. To address this challenge, we propose three different communication and computational architectures by which estimators located at the control centers of various utility companies can run local optimization algorithms using local PMU data, and thereafter communicate with other estimators to reach a global solution. Both synchronous and asynchronous communications are considered. Each architecture integrates a centralized Prony-based algorithm with several variants of alternating direction method of multipliers (ADMM). We discuss the relative advantages and bottlenecks of each architecture using simulations of IEEE 68-bus and IEEE 145-bus power system, as well as an Exo-GENI-based software defined network.",
"title": ""
},
{
"docid": "2eac0a94204b24132e496639d759f545",
"text": "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.",
"title": ""
},
{
"docid": "6aebae4d8ed0af23a38a945b85c3b6ff",
"text": "Modern web applications are conglomerations of JavaScript written by multiple authors: application developers routinely incorporate code from third-party libraries, and mashup applications synthesize data and code hosted at different sites. In current browsers, a web application’s developer and user must trust third-party code in libraries not to leak the user’s sensitive information from within applications. Even worse, in the status quo, the only way to implement some mashups is for the user to give her login credentials for one site to the operator of another site. Fundamentally, today’s browser security model trades privacy for flexibility because it lacks a sufficient mechanism for confining untrusted code. We present COWL, a robust JavaScript confinement system for modern web browsers. COWL introduces label-based mandatory access control to browsing contexts in a way that is fully backwardcompatible with legacy web content. We use a series of case-study applications to motivate COWL’s design and demonstrate how COWL allows both the inclusion of untrusted scripts in applications and the building of mashups that combine sensitive information from multiple mutually distrusting origins, all while protecting users’ privacy. Measurements of two COWL implementations, one in Firefox and one in Chromium, demonstrate a virtually imperceptible increase in page-load latency.",
"title": ""
},
{
"docid": "4a9474c0813646708400fc02c344a976",
"text": "Over the years, the Web has shrunk the world, allowing individuals to share viewpoints with many more people than they are able to in real life. At the same time, however, it has also enabled anti-social and toxic behavior to occur at an unprecedented scale. Video sharing platforms like YouTube receive uploads from millions of users, covering a wide variety of topics and allowing others to comment and interact in response. Unfortunately, these communities are periodically plagued with aggression and hate attacks. In particular, recent work has showed how these attacks often take place as a result of “raids,” i.e., organized efforts coordinated by ad-hoc mobs from third-party communities. Despite the increasing relevance of this phenomenon, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de-facto solution is to reactively rely on user reports and human reviews. In this paper, we propose an automated solution to identify videos that are likely to be targeted by coordinated harassers. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of raid victims. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided with high accuracy (AUC up to 94%). Overall, our work paves the way for providing video platforms like YouTube with proactive systems to detect and mitigate coordinated hate attacks.",
"title": ""
},
{
"docid": "5e2c4ebf3c2b4f0e9aabc5eacd2d4b80",
"text": "Manually annotating object bounding boxes is central to building computer vision datasets, and it is very time consuming (annotating ILSVRC [53] took 35s for one high-quality box [62]). It involves clicking on imaginary comers of a tight box around the object. This is difficult as these comers are often outside the actual object and several adjustments are required to obtain a tight box. We propose extreme clicking instead: we ask the annotator to click on four physical points on the object: the top, bottom, left- and right-most points. This task is more natural and these points are easy to find. We crowd-source extreme point annotations for PASCAL VOC 2007 and 2012 and show that (1) annotation time is only 7s per box, 5 × faster than the traditional way of drawing boxes [62]: (2) the quality of the boxes is as good as the original ground-truth drawn the traditional way: (3) detectors trained on our annotations are as accurate as those trained on the original ground-truth. Moreover, our extreme clicking strategy not only yields box coordinates, but also four accurate boundary points. We show (4) how to incorporate them into GrabCut to obtain more accurate segmentations than those delivered when initializing it from bounding boxes: (5) semantic segmentations models trained on these segmentations outperform those trained on segmentations derived from bounding boxes.",
"title": ""
},
{
"docid": "0453d395af40160b4f66787bb9ac8e96",
"text": "Two aspect of programming languages, recursive definitions and type declarations are analyzed in detail. Church's %-calculus is used as a model of a programming language for purposes of the analysis. The main result on recursion is an analogue to Kleene's first recursion theorem: If A = FA for any %-expressions A and F, then A is an extension of YF in the sense that if E[YF], any expression containing YF, has a normal form then E[YF] = E[A]. Y is Curry's paradoxical combinator. The result is shown to be invariant for many different versions of Y. A system of types and type declarations is developed for the %-calculus and its semantic assumptions are identified. The system is shown to be adequate in the sense that it permits a preprocessor to check formulae prior to evaluation to prevent type errors. It is shown that any formula with a valid assignment of types to all its subexpressions must have a normal form. Thesis Supervisor: John M. Wozencraft Title: Professor of Electrical Engineering",
"title": ""
},
{
"docid": "5809c27155986612b0e4a9ef48b3b930",
"text": "Using the same technologies for both work and private life is an intensifying phenomenon. Mostly driven by the availability of consumer IT in the marketplace, individuals—more often than not—are tempted to use privately-owned IT rather than enterprise IT in order to get their job done. However, this dual-use of technologies comes at a price. It intensifies the blurring of the boundaries between work and private life—a development in stark contrast to the widely spread desire of employees to segment more clearly between their two lives. If employees cannot follow their segmentation preference, it is proposed that this misfit will result in work-to-life conflict (WtLC). This paper investigates the relationship between organizational encouragement for dual use and WtLC. Via a quantitative survey, we find a significant relationship between the two concepts. In line with boundary theory, the effect is stronger for people that strive for work-life segmentation.",
"title": ""
},
{
"docid": "5cc07ca331deb81681b3f18355c0e586",
"text": "BACKGROUND\nHyaluronic acid (HA) formulations are used for aesthetic applications. Different cross-linking technologies result in HA dermal fillers with specific characteristic visco-elastic properties.\n\n\nOBJECTIVE\nBio-integration of three CE-marked HA dermal fillers, a cohesive (monophasic) polydensified, a cohesive (monophasic) monodensified and a non-cohesive (biphasic) filler, was analysed with a follow-up of 114 days after injection. Our aim was to study the tolerability and inflammatory response of these fillers, their patterns of distribution in the dermis, and influence on tissue integrity.\n\n\nMETHODS\nThree HA formulations were injected intradermally into the iliac crest region in 15 subjects. Tissue samples were analysed after 8 and 114 days by histology and immunohistochemistry, and visualized using optical and transmission electron microscopy.\n\n\nRESULTS\nHistological results demonstrated that the tested HA fillers showed specific characteristic bio-integration patterns in the reticular dermis. Observations under the optical and electron microscopes revealed morphological conservation of cutaneous structures. Immunohistochemical results confirmed absence of inflammation, immune response and granuloma.\n\n\nCONCLUSION\nThe three tested dermal fillers show an excellent tolerability and preservation of the dermal cells and matrix components. Their tissue integration was dependent on their visco-elastic properties. The cohesive polydensified filler showed the most homogeneous integration with an optimal spreading within the reticular dermis, which is achieved by filling even the smallest spaces between collagen bundles and elastin fibrils, while preserving the structural integrity of the latter. Absence of adverse reactions confirms safety of the tested HA dermal fillers.",
"title": ""
},
{
"docid": "d646a27556108caebd7ee5691c98d642",
"text": "■ Abstract Theory and research on small group performance and decision making is reviewed. Recent trends in group performance research have found that process gains as well as losses are possible, and both are frequently explained by situational and procedural contexts that differentially affect motivation and resource coordination. Research has continued on classic topics (e.g., brainstorming, group goal setting, stress, and group performance) and relatively new areas (e.g., collective induction). Group decision making research has focused on preference combination for continuous response distributions and group information processing. New approaches (e.g., group-level signal detection) and traditional topics (e.g., groupthink) are discussed. New directions, such as nonlinear dynamic systems, evolutionary adaptation, and technological advances, should keep small group research vigorous well into the future.",
"title": ""
},
{
"docid": "9adb3374f58016ee9bec1daf7392a64e",
"text": "To develop a less genotype-dependent maize-transformation procedure, we used 10-month-old Type I callus as target tissue for microprojectile bombardment. Twelve transgenic callus lines were obtained from two of the three anther-culture-derived callus cultures representing different gentic backgrounds. Multiple fertile transgenic plants (T0) were regenerated from each transgenic callus line. Transgenic leaves treated with the herbicide Basta showed no symptoms, indicating that one of the two introduced genes, bar, was functionally expressing. Data from DNA hybridization analysis confirmed that the introduced genes (bar and uidA) were integrated into the plant genome and that all lines derived from independent transformation events. Transmission of the introduced genes and the functional expression of bar in T1 progeny was also confirmed. Germination of T1 immature embryos in the presence of bialaphos was used as a screen for functional expression of bar; however, leaf painting of T1 plants proved a more accurate predictor of bar expression in plants. This study suggests that maize Type I callus can be transformed efficiently through microprojectile bombardment and that fertile transgenic plants can be recovered. This system should facilitate the direct introduction of agronomically important genes in to commercial genotypes.",
"title": ""
}
] |
scidocsrr
|
b30189c05d6d8215f8eaaa093a562443
|
Computer aided clothing pattern design with 3D editing and pattern alteration
|
[
{
"docid": "84e0768338a7c643dc93fb6fbdc16ac4",
"text": "Clothing computer design systems include three integrated parts: garment pattern design in 2D/3D, virtual try-on and realistic clothing simulation. Some important results have been obtained in pattern design and clothing simulation since the 1980s. However, in the area of virtual try-on, only limited methods have been proposed which are applicable to some defined garment styles or under restrictive sewing assumptions. This paper presents a series of new techniques from virtually sewing up complex garment patterns on human models to visualizing design effects through physical-based real-time simulation. We first employ an hierarchy of ellipsoids to approximate human models in which the bounding ellipsoids are optimized recursively.Wealso present a newscheme for including contact friction and resolving collisions. Four types of user interactive operation are introduced to manipulate cloth patterns for pre-positioning, virtual sewing and later obtaining cloth simulation. In the cloth simulation, we propose a simplified cloth dynamic model and an integration scheme to realize a high quality realtime cloth simulation.We demonstrate the robustness of our proposed systems by complex garment style virtual try-on and cloth simulation. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9197a5d92bd19ad29a82679bb2a94285",
"text": "Equation (1.1) expresses v0 as a convex combination of the neighbouring points v1, . . . , vk. In the simplest case k = 3, the weights λ1, λ2, λ3 are uniquely determined by (1.1) and (1.2) alone; they are the barycentric coordinates of v0 with respect to the triangle [v1, v2, v3], and they are positive. This motivates calling any set of non-negative weights satisfying (1.1–1.2) for general k, a set of coordinates for v0 with respect to v1, . . . , vk. There has long been an interest in generalizing barycentric coordinates to k-sided polygons with a view to possible multisided extensions of Bézier surfaces; see for example [8 ]. In this setting, one would normally be free to choose v1, . . . , vk to form a convex polygon but would need to allow v0 to be any point inside the polygon or on the polygon, i.e. on an edge or equal to a vertex. More recently, the need for such coordinates arose in methods for parameterization [2 ] and morphing [5 ], [6 ] of triangulations. Here the points v0, v1, . . . , vk will be vertices of a (planar) triangulation and so the point v0 will never lie on an edge of the polygon formed by v1, . . . , vk. If we require no particular properties of the coordinates, the problem is easily solved. Because v0 lies in the convex hull of v1, . . . , vk, there must exist at least one triangle T = [vi1 , vi2 , vi3 ] which contains v0, and so we can take λi1 , λi2 , λi3 to be the three barycentric coordinates of v0 with respect to T , and make the remaining coordinates zero. However, these coordinates depend randomly on the choice of triangle. An improvement is to take an average of such coordinates over certain covering triangles, as proposed in [2 ]. The resulting coordinates depend continuously on v0, v1, . . . , vk, yet still not smoothly. The",
"title": ""
}
] |
[
{
"docid": "097fd4372f5a17c5de5c6a6a3fdaeaa8",
"text": "Discriminative training in query spelling correction is difficult due to the complex internal structures of the data. Recent work on query spelling correction suggests a two stage approach a noisy channel model that is used to retrieve a number of candidate corrections, followed by discriminatively trained ranker applied to these candidates. The ranker, however, suffers from the fact the low recall of the first, suboptimal, search stage. This paper proposes to directly optimize the search stage with a discriminative model based on latent structural SVM. In this model, we treat query spelling correction as a multiclass classification problem with structured input and output. The latent structural information is used to model the alignment of words in the spelling correction process. Experiment results show that as a standalone speller, our model outperforms all the baseline systems. It also attains a higher recall compared with the noisy channel model, and can therefore serve as a better filtering stage when combined with a ranker.",
"title": ""
},
{
"docid": "322161b4a43b56e4770d239fe4d2c4c0",
"text": "Graph pattern matching has become a routine process in emerging applications such as social networks. In practice a data graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches from scratch via batch algorithms when the graph is updated. With this comes the need for incremental algorithms that compute changes to the matches in response to updates, to minimize unnecessary recomputation. This paper investigates incremental algorithms for graph pattern matching defined in terms of graph simulation, bounded simulation and subgraph isomorphism. (1) For simulation, we provide incremental algorithms for unit updates and certain graph patterns. These algorithms are optimal: in linear time in the size of the changes in the input and output, which characterizes the cost that is inherent to the problem itself. For general patterns we show that the incremental matching problem is unbounded, i.e., its cost is not determined by the size of the changes alone. (2) For bounded simulation, we show that the problem is unbounded even for unit updates and path patterns. (3) For subgraph isomorphism, we show that the problem is intractable and unbounded for unit updates and path patterns. (4) For multiple updates, we develop an incremental algorithm for each of simulation, bounded simulation and subgraph isomorphism. We experimentally verify that these incremental algorithms significantly outperform their batch counterparts in response to small changes, using real-life data and synthetic data.",
"title": ""
},
{
"docid": "6e30387a3706dea2b7d18668c08bb31b",
"text": "The semantic web vision is one in which rich, ontology-based semantic markup will become widely available. The availability of semantic arkup on the web opens the way to novel, sophisticated forms of question answering. AquaLog is a portable question-answering system which akes queries expressed in natural language and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs). We ay that AquaLog is portable because the configuration time required to customize the system for a particular ontology is negligible. AquaLog resents an elegant solution in which different strategies are combined together in a novel way. It makes use of the GATE NLP platform, string etric algorithms, WordNet and a novel ontology-based relation similarity service to make sense of user queries with respect to the target KB. oreover it also includes a learning component, which ensures that the performance of the system improves over the time, in response to the articular community jargon used by end users. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "be82ba26b91658ee90b6075c75c5f7bd",
"text": "In this paper, we propose a content-based recommendation Algorithm which extends and updates the Minkowski distance in order to address the challenge of matching people and jobs. The proposed algorithm FoDRA (Four Dimensions Recommendation Algorithm) quantifies the suitability of a job seeker for a job position in a more flexible way, using a structured form of the job and the candidate's profile, produced from a content analysis of the unstructured form of the job description and the candidate's CV. We conduct an experimental evaluation in order to check the quality and the effectiveness of FoDRA. Our primary study shows that FoDRA produces promising results and creates new prospects in the area of Job Recommender Systems (JRSs).",
"title": ""
},
{
"docid": "53e6fe645eb83bcc0f86638ee7ce5578",
"text": "Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference is limited in providing information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs compared to DAGs. To perform evidence integration on our graphs, we investigate two recent graph neural networks, namely graph convolutional network (GCN) and graph recurrent network (GRN). Experiments on two standard datasets show that richer global information leads to better answers. Our method performs better than all published results on these datasets.",
"title": ""
},
{
"docid": "e507c60b8eb437cbd6ca9692f1bf8727",
"text": "We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.",
"title": ""
},
{
"docid": "9cf5fc6b50010d1489f12d161f302428",
"text": "With the advent of large code repositories and sophisticated search capabilities, code search is increasingly becoming a key software development activity. In this work we shed some light into how developers search for code through a case study performed at Google, using a combination of survey and log-analysis methodologies. Our study provides insights into what developers are doing and trying to learn when per- forming a search, search scope, query properties, and what a search session under different contexts usually entails. Our results indicate that programmers search for code very frequently, conducting an average of five search sessions with 12 total queries each workday. The search queries are often targeted at a particular code location and programmers are typically looking for code with which they are somewhat familiar. Further, programmers are generally seeking answers to questions about how to use an API, what code does, why something is failing, or where code is located.",
"title": ""
},
{
"docid": "2dc24d2ecaf2494543128f5e9e5f4864",
"text": "Design of a multiphase hybrid permanent magnet (HPM) generator for series hybrid electric vehicle (SHEV) application is presented in this paper. The proposed hybrid excitation topology together with an integral passive rectifier replaces the permanent magnet (PM) machine and active power electronics converter in hybrid/electric vehicles, facilitating the control over constant PM flux-linkage. The HPM topology includes two rotor elements: a PM and a wound field (WF) rotor with a 30% split ratio, coupled on the same shaft in one machine housing. Both rotors share a nine-phase stator that results in higher output voltage and power density when compared to three-phase design. The HPM generator design is based on a 3-kW benchmark PM machine to ensure the feasibility and validity of design tools and procedures. The WF rotor is designed to realize the same pole shape and number as in the PM section and to obtain the same flux-density in the air-gap while minimizing the WF input energy. Having designed and analyzed the machine using equivalent magnetic circuit and finite element analysis, a laboratory prototype HPM generator is built and tested with the measurements compared to predicted results confirming the designed characteristics and machine performance. The paper also presents comprehensive machine loss and mass audits.",
"title": ""
},
{
"docid": "5a9209f792ddd738d44f17b1175afe64",
"text": "PURPOSE\nIncrease in muscle force, endurance, and flexibility is desired in elite athletes to improve performance and to avoid injuries, but it is often hindered by the occurrence of myofascial trigger points. Dry needling (DN) has been shown effective in eliminating myofascial trigger points.\n\n\nMETHODS\nThis randomized controlled study in 30 elite youth soccer players of a professional soccer Bundesliga Club investigated the effects of four weekly sessions of DN plus water pressure massage on thigh muscle force and range of motion of hip flexion. A group receiving placebo laser plus water pressure massage and a group with no intervention served as controls. Data were collected at baseline (M1), treatment end (M2), and 4 wk follow-up (M3). Furthermore, a 5-month muscle injury follow-up was performed.\n\n\nRESULTS\nDN showed significant improvement of muscular endurance of knee extensors at M2 (P = 0.039) and M3 (P = 0.008) compared with M1 (M1:294.6 ± 15.4 N·m·s, M2:311 ± 25 N·m·s; M3:316.0 ± 28.6 N·m·s) and knee flexors at M2 compared with M1 (M1:163.5 ± 10.9 N·m·s, M2:188.5 ± 16.3 N·m·s) as well as hip flexion (M1: 81.5° ± 3.3°, M2:89.8° ± 2.8°; M3:91.8° ± 3.8°). Compared with placebo (3.8° ± 3.8°) and control (1.4° ± 2.9°), DN (10.3° ± 3.5°) showed a significant (P = 0.01 and P = 0.0002) effect at M3 compared with M1 on hip flexion; compared with nontreatment control (-10 ± 11.9 N·m), DN (5.2 ± 10.2 N·m) also significantly (P = 0.049) improved maximum force of knee extensors at M3 compared with M1. During the rest of the season, muscle injuries were less frequent in the DN group compared with the control group.\n\n\nCONCLUSION\nDN showed a significant effect on muscular endurance and hip flexion range of motion that persisted 4 wk posttreatment. Compared with placebo, it showed a significant effect on hip flexion that persisted 4 wk posttreatment, and compared with nonintervention control, it showed a significant effect on maximum force of knee extensors 4 wk posttreatment in elite soccer players.",
"title": ""
},
{
"docid": "7a52fecf868040da5db3bd6fcbdcc0b2",
"text": "Mobile edge computing (MEC) is a promising paradigm to provide cloud-computing capabilities in close proximity to mobile devices in fifth-generation (5G) networks. In this paper, we study energy-efficient computation offloading (EECO) mechanisms for MEC in 5G heterogeneous networks. We formulate an optimization problem to minimize the energy consumption of the offloading system, where the energy cost of both task computing and file transmission are taken into consideration. Incorporating the multi-access characteristics of the 5G heterogeneous network, we then design an EECO scheme, which jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under the latency constraints. Numerical results demonstrate energy efficiency improvement of our proposed EECO scheme.",
"title": ""
},
{
"docid": "b18f53b2a33546a361d3efa1787510ef",
"text": "How do International Monetary Fund (IMF) policy reforms-so-called 'conditionalities'-affect government health expenditures? We collected archival documents on IMF programmes from 1995 to 2014 to identify the pathways and impact of conditionality on government health spending in 16 West African countries. Based on a qualitative analysis of the data, we find that IMF policy reforms reduce fiscal space for investment in health, limit staff expansion of doctors and nurses, and lead to budget execution challenges in health systems. Further, we use cross-national fixed effects models to evaluate the relationship between IMF-mandated policy reforms and government health spending, adjusting for confounding economic and demographic factors and for selection bias. Each additional binding IMF policy reform reduces government health expenditure per capita by 0.248 percent (95% CI -0.435 to -0.060). Overall, our findings suggest that IMF conditionality impedes progress toward the attainment of universal health coverage.",
"title": ""
},
{
"docid": "733f5029329072adf5635f0b4d0ad1cb",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "78e21364224b9aa95f86ac31e38916ef",
"text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "203c797bea19fa0d4d66d65832ccbded",
"text": "In soccer, scoring goals is a fundamental objective which depends on many conditions and constraints. Considering the RoboCup soccer 2D-simulator, this paper presents a data mining-based decision system to identify the best time and direction to kick the ball towards the goal to maximize the overall chances of scoring during a simulated soccer match. Following the CRISP-DM methodology, data for modeling were extracted from matches of major international tournaments (10691 kicks), knowledge about soccer was embedded via transformation of variables and a Multilayer Perceptron was used to estimate the scoring chance. Experimental performance assessment to compare this approach against previous LDA-based approach was conducted from 100 matches. Several statistical metrics were used to analyze the performance of the system and the results showed an increase of 7.7% in the number of kicks, producing an overall increase of 78% in the number of goals scored.",
"title": ""
},
{
"docid": "9f16e90dc9b166682ac9e2a8b54e611a",
"text": "Lua is a programming language designed as scripting language, which is fast, lightweight, and suitable for embedded applications. Due to its features, Lua is widely used in the development of games and interactive applications for digital TV. However, during the development phase of such applications, some errors may be introduced, such as deadlock, arithmetic overflow, and division by zero. This paper describes a novel verification approach for software written in Lua, using as backend the Efficient SMTBased Context-Bounded Model Checker (ESBMC). Such an approach, called bounded model checking - Lua (BMCLua), consists in translating Lua programs into ANSI-C source code, which is then verified with ESBMC. Experimental results show that the proposed verification methodology is effective and efficient, when verifying safety properties in Lua programs. The performed experiments have shown that BMCLua produces an ANSI-C code that is more efficient for verification, when compared with other existing approaches. To the best of our knowledge, this work is the first that applies bounded model checking to the verification of Lua programs.",
"title": ""
},
{
"docid": "300b599e2e3cc3b63bc38276f9621a16",
"text": "Swarm intelligence (SI) is based on collective behavior of selforganized systems. Typical swarm intelligence schemes include Particle Swarm Optimization (PSO), Ant Colony System (ACS), Stochastic Diffusion Search (SDS), Bacteria Foraging (BF), the Artificial Bee Colony (ABC), and so on. Besides the applications to conventional optimization problems, SI can be used in controlling robots and unmanned vehicles, predicting social behaviors, enhancing the telecommunication and computer networks, etc. Indeed, the use of swarm optimization can be applied to a variety of fields in engineering and social sciences. In this paper, we review some popular algorithms in the field of swarm intelligence for problems of optimization. The overview and experiments of PSO, ACS, and ABC are given. Enhanced versions of these are also introduced. In addition, some comparisons are made between these algorithms.",
"title": ""
},
{
"docid": "28e1c4c2622353fc87d3d8a971b9e874",
"text": "In-memory key/value store (KV-store) is a key building block for many systems like databases and large websites. Two key requirements for such systems are efficiency and availability, which demand a KV-store to continuously handle millions of requests per second. A common approach to availability is using replication, such as primary-backup (PBR), which, however, requires M+1 times memory to tolerate M failures. This renders scarce memory unable to handle useful user jobs.\n This article makes the first case of building highly available in-memory KV-store by integrating erasure coding to achieve memory efficiency, while not notably degrading performance. A main challenge is that an in-memory KV-store has much scattered metadata. A single KV put may cause excessive coding operations and parity updates due to excessive small updates to metadata. Our approach, namely Cocytus, addresses this challenge by using a hybrid scheme that leverages PBR for small-sized and scattered data (e.g., metadata and key), while only applying erasure coding to relatively large data (e.g., value). To mitigate well-known issues like lengthy recovery of erasure coding, Cocytus uses an online recovery scheme by leveraging the replicated metadata information to continuously serve KV requests. To further demonstrate the usefulness of Cocytus, we have built a transaction layer by using Cocytus as a fast and reliable storage layer to store database records and transaction logs. We have integrated the design of Cocytus to Memcached and extend it to support in-memory transactions. Evaluation using YCSB with different KV configurations shows that Cocytus incurs low overhead for latency and throughput, can tolerate node failures with fast online recovery, while saving 33% to 46% memory compared to PBR when tolerating two failures. A further evaluation using the SmallBank OLTP benchmark shows that in-memory transactions can run atop Cocytus with high throughput, low latency, and low abort rate and recover fast from consecutive failures.",
"title": ""
},
{
"docid": "b5fd22854e75a29507cde380999705a2",
"text": "This study presents a high-efficiency-isolated single-input multiple-output bidirectional (HISMB) converter for a power storage system. According to the power management, the proposed HISMB converter can operate at a step-up state (energy release) and a step-down state (energy storage). At the step-up state, it can boost the voltage of a low-voltage input power source to a high-voltage-side dc bus and middle-voltage terminals. When the high-voltage-side dc bus has excess energy, one can reversely transmit the energy. The high-voltage dc bus can take as the main power, and middle-voltage output terminals can supply powers for individual middle-voltage dc loads or to charge auxiliary power sources (e.g., battery modules). In this study, a coupled-inductor-based HISMB converter accomplishes the bidirectional power control with the properties of voltage clamping and soft switching, and the corresponding device specifications are adequately designed. As a result, the energy of the leakage inductor of the coupled inductor can be recycled and released to the high-voltage-side dc bus and auxiliary power sources, and the voltage stresses on power switches can be greatly reduced. Moreover, the switching losses can be significantly decreased because of all power switches with zero-voltage-switching features. Therefore, the objectives of high-efficiency power conversion, electric isolation, bidirectional energy transmission, and various output voltage with different levels can be obtained. The effectiveness of the proposed HISMB converter is verified by experimental results of a kW-level prototype in practical applications.",
"title": ""
},
{
"docid": "b9d9fc6782c6ed9952d28309199e141d",
"text": "Recently, Edge Computing has emerged as a new computing paradigm dedicated for mobile applications for performance enhancement and energy efficiency purposes. Specifically, it benefits today's interactive applications on power-constrained devices by offloading compute-intensive tasks to the edge nodes which is in close proximity. Meanwhile, Field Programmable Gate Array (FPGA) is well known for its excellence in accelerating compute-intensive tasks such as deep learning algorithms in a high performance and energy efficiency manner due to its hardware-customizable nature. In this paper, we make the first attempt to leverage and combine the advantages of these two, and proposed a new network-assisted computing model, namely FPGA-based edge computing. As a case study, we choose three computer vision (CV)-based interactive mobile applications, and implement their backend computation parts on FPGA. By deploying such application-customized accelerator modules for computation offloading at the network edge, we experimentally demonstrate that this approach can effectively reduce response time for the applications and energy consumption for the entire system in comparison with traditional CPU-based edge/cloud offloading approach.",
"title": ""
}
] |
scidocsrr
|
c70bb419225959cbe49a6461f384f56b
|
Brain tumor segmentation using Cuckoo Search optimization for Magnetic Resonance Images
|
[
{
"docid": "9f76ca13fd4e61905f82a1009982adb9",
"text": "Image segmentation is an important processing step in many image, video and computer vision applications. Extensive research has been done in creating many different approaches and algorithms for image segmentation, but it is still difficult to assess whether one algorithm produces more accurate segmentations than another, whether it be for a particular image or set of images, or more generally, for a whole class of images. To date, the most common method for evaluating the effectiveness of a segmentation method is subjective evaluation, in which a human visually compares the image segmentation results for separate segmentation algorithms, which is a tedious process and inherently limits the depth of evaluation to a relatively small number of segmentation comparisons over a predetermined set of images. Another common evaluation alternative is supervised evaluation, in which a segmented image is compared against a manuallysegmented or pre-processed reference image. Evaluation methods that require user assistance, such as subjective evaluation and supervised evaluation, are infeasible in many vision applications, so unsupervised methods are necessary. Unsupervised evaluation enables the objective comparison of both different segmentation methods and different parameterizations of a single method, without requiring human visual comparisons or comparison with a manually-segmented or pre-processed reference image. Additionally, unsupervised methods generate results for individual images and images whose characteristics may not be known until evaluation time. Unsupervised methods are crucial to real-time segmentation evaluation, and can furthermore enable self-tuning of algorithm parameters based on evaluation results. In this paper, we examine the unsupervised objective evaluation methods that have been proposed in the literature. An extensive evaluation of these methods are presented. The advantages and shortcomings of the underlying design mechanisms in these methods are discussed and analyzed through analytical evaluation and empirical evaluation. Finally, possible future directions for research in unsupervised evaluation are proposed. 2007 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "a38e863016bfcead5fd9af46365d4d5c",
"text": "Social networks generate a large amount of text content over time because of continuous interaction between participants. The mining of such social streams is more challenging than traditional text streams, because of the presence of both text content and implicit network structure within the stream. The problem of event detection is also closely related to clustering, because the events can only be inferred from aggregate trend changes in the stream. In this paper, we will study the two related problems of clustering and event detection in social streams. We will study both the supervised and unsupervised case for the event detection problem. We present experimental results illustrating the effectiveness of incorporating network structure in event discovery over purely content-based",
"title": ""
},
{
"docid": "2272d3ac8770f456c1cf2e461eba2da9",
"text": "EXECUTiVE SUMMARY This quarter, work continued on the design and construction of a robotic fingerspelling hand. The hand is being designed to aid in communication for individuals who are both deaf and blind. In the winter quarter, research was centered on determining an effective method of actuation for the robotic hand. This spring 2008 quarter, time was spent designing the mechanisms needed to mimic the size and motions of a human hand. Several methods were used to determine a proper size for the robotic hand, including using the ManneQuinPro human modeling system to approximate the size of an average male human hand and using the golden ratio to approximate the length of bone sections within the hand. After a proper average hand size was determined, a finger mechanism was designed in the SolidWorks design program that could be built and used in the robotic hand.",
"title": ""
},
{
"docid": "1212637c91d8c57299c922b6bde91ce8",
"text": "BACKGROUND\nIn the late 1980's, occupational science was introduced as a basic discipline that would provide a foundation for occupational therapy. As occupational science grows and develops, some question its relationship to occupational therapy and criticize the direction and extent of its growth and development.\n\n\nPURPOSE\nThis study was designed to describe and critically analyze the growth and development of occupational science and characterize how this has shaped its current status and relationship to occupational therapy.\n\n\nMETHOD\nUsing a mixed methods design, 54 occupational science documents published in the years 1990 and 2000 were critically analyzed to describe changes in the discipline between two points in time. Data describing a range of variables related to authorship, publication source, stated goals for occupational science and type of research were collected.\n\n\nRESULTS\nDescriptive statistics, themes and future directions are presented and discussed.\n\n\nPRACTICE IMPLICATIONS\nThrough the support of a discipline that is dedicated to the pursuit of a full understanding of occupation, occupational therapy will help to create a new and complex body of knowledge concerning occupation. However, occupational therapy must continue to make decisions about how knowledge produced within occupational science and other disciplines can be best used in practice.",
"title": ""
},
{
"docid": "9554640e49aea8bec5283463d5a2be1f",
"text": "In this paper, we study the problem of packing unequal circle s into a2D rectangular container. We solve this problem by proposing two greedy algorithms. Th e first algorithm, denoted by B1.0, selects the next circle to place according to the maximum hole degree rule , which is inspired from human activity in packing. The second algorithm, denoted by B1.5, improves B1.0 with aself look-ahead search strategy . The comparisons with the published methods on several inst ances taken from the literature show the good performance of our ap p oach.",
"title": ""
},
{
"docid": "0a263c6abbfc97faa169b95d415c9896",
"text": "We introduce ChronoStream, a distributed system specifically designed for elastic stateful stream computation in the cloud. ChronoStream treats internal state as a first-class citizen and aims at providing flexible elastic support in both vertical and horizontal dimensions to cope with workload fluctuation and dynamic resource reclamation. With a clear separation between application-level computation parallelism and OS-level execution concurrency, ChronoStream enables transparent dynamic scaling and failure recovery by eliminating any network I/O and state-synchronization overhead. Our evaluation on dozens of computing nodes shows that ChronoStream can scale linearly and achieve transparent elasticity and high availability without sacrificing system performance or affecting collocated tenants.",
"title": ""
},
{
"docid": "8a56dfbe83fbdd45d85c6b2ac793338b",
"text": "Idioms of distress communicate suffering via reference to shared ethnopsychologies, and better understanding of idioms of distress can contribute to effective clinical and public health communication. This systematic review is a qualitative synthesis of \"thinking too much\" idioms globally, to determine their applicability and variability across cultures. We searched eight databases and retained publications if they included empirical quantitative, qualitative, or mixed-methods research regarding a \"thinking too much\" idiom and were in English. In total, 138 publications from 1979 to 2014 met inclusion criteria. We examined the descriptive epidemiology, phenomenology, etiology, and course of \"thinking too much\" idioms and compared them to psychiatric constructs. \"Thinking too much\" idioms typically reference ruminative, intrusive, and anxious thoughts and result in a range of perceived complications, physical and mental illnesses, or even death. These idioms appear to have variable overlap with common psychiatric constructs, including depression, anxiety, and PTSD. However, \"thinking too much\" idioms reflect aspects of experience, distress, and social positioning not captured by psychiatric diagnoses and often show wide within-cultural variation, in addition to between-cultural differences. Taken together, these findings suggest that \"thinking too much\" should not be interpreted as a gloss for psychiatric disorder nor assumed to be a unitary symptom or syndrome within a culture. We suggest five key ways in which engagement with \"thinking too much\" idioms can improve global mental health research and interventions: it (1) incorporates a key idiom of distress into measurement and screening to improve validity of efforts at identifying those in need of services and tracking treatment outcomes; (2) facilitates exploration of ethnopsychology in order to bolster cultural appropriateness of interventions; (3) strengthens public health communication to encourage engagement in treatment; (4) reduces stigma by enhancing understanding, promoting treatment-seeking, and avoiding unintentionally contributing to stigmatization; and (5) identifies a key locally salient treatment target.",
"title": ""
},
{
"docid": "d06393c467e19b0827eea5f86bbf4e98",
"text": "This paper presents the results of a systematic review of existing literature on the integration of agile software development with user-centered design approaches. It shows that a common process model underlies such approaches and discusses which artifacts are used to support the collaboration between designers and developers.",
"title": ""
},
{
"docid": "02e1994a5f6ecd3f6f4cc362b6e5af3b",
"text": "Risk management has been recognized as an effective way to reduce system development failure. Information system development (ISD) is a highly complex and unpredictable activity associated with high risks. With more and more organizations outsource or offshore substantial resources in system development, organizations face up new challenges and risks not common to traditional development models. Classical risk management approaches have relied on tactical, bottomup analysis, which do not readily scale to distributed environment. Therefore, risk management in distributed environment is becoming a critical area of concern. This paper uses a systemic approach developed by Software Engineering Institute to identify risks of ISD in distributed environment. Four key risk factors were identified from prior literature: objective, preparation, execution, and environment. In addition, the impact of these four risk factors on the success of information system development will also be examined.",
"title": ""
},
{
"docid": "db3f317940f308407d217bbedf14aaf0",
"text": "Imagine your daily activities. Perhaps you will be at home today, relaxing and completing chores. Maybe you are a scientist, and plan to conduct a long series of experiments in a laboratory. You might work in an office building: you walk about your floor, greeting others, getting coffee, preparing documents, etc. There are many activities you perform regularly in large environments. If a system understood your intentions it could help you achieve your goals, or automate aspects of your environment. More generally, an understanding of human intentions would benefit, and is perhaps prerequisite for, AI systems that assist and augment human capabilities. We present a framework that continuously forecasts long-term spatial and semantic intentions (what you will do and where you will go) of a first-person camera wearer. We term our algorithm “Demonstrating Agent Rewards for K-futures Online” (DARKO). We use a first-person camera to meet the challenge of observing the wearer’s behavior everywhere. In Figure 1, DARKO forecasts multiple quantities: (1) the user intends to go to the shower (out of all possible destinations in their house), (2) their trajectory through Figure 1: Forecasting future behavior from first-person video. The overhead map shows where the person is likely to go, predicted from the first frame. Each s",
"title": ""
},
{
"docid": "3dcf758545558c5d3c98947c30f99842",
"text": "Problematic smartphone use is an important public health challenge and is linked with poor mental health outcomes. However, little is known about the mechanisms that maintain this behavior. We recruited a sample of 308 participants from Amazon’s Mechanical Turk labor market. Participants responded to standardized measures of problematic smartphone use, and frequency of smartphone use, depression and anxiety and possible mechanisms including behavioral activation, need for touch, fear of missing out (FoMO), and emotion regulation. Problematic smartphone use was most correlated with anxiety, need for touch and FoMO. The frequency of use was most correlated (inversely) with depression. In regression models, problematic smartphone use was associated with FoMO, depression (inversely), anxiety, and need for touch. Frequency of use was associated with need for touch, and (inversely) with depressive symptoms. Behavioral activation mediated associations between smartphone use (both problematic and usage frequency) and depression and anxiety symptoms. Emotional suppression also mediated the association between problematic smartphone use and anxiety. Results demonstrate the importance of social and tactile need fulfillment variables such as FoMO and need for touch as critical mechanisms that can explain problematic smartphone use and its association with depression and",
"title": ""
},
{
"docid": "7655ddc0c703bb96df16b8a67958c34e",
"text": "This paper describes the design and experiment results of 25 Gbps, 4 channels optical transmitter which consist of a vertical-cavity surface emitting laser (VCSEL) driver with an asymmetric pre-emphasis circuit and an electrical receiver. To make data transfers faster in directly modulated a VCSEL-based optical communications, the driver circuit requires an asymmetric pre-emphasis signal to compensate for the nonlinear characteristics of VCSEL. An asymmetric pre-emphasis signal can be created by the adjusting a duty ratio with a delay circuit. A test chip was fabricated in the 65-nm standard CMOS process and demonstrated. An experimental evaluation showed that this transmitter enlarged the eye opening of a 25 Gbps, PRBS=29-1 test signal by 8.8% and achieve four channels fully optical link with an optical receiver at a power of 10.3 mW=Gbps=ch at 25 Gbps.",
"title": ""
},
{
"docid": "0f5bbaeb27ef89892ce2125a8cc94af7",
"text": "Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) are the two most common types of acoustic models used in statistical parametric approaches for generating low-level speech waveforms from high-level symbolic inputs via intermediate acoustic feature sequences. However, these models have their limitations in representing complex, nonlinear relationships between the speech generation inputs and the acoustic features. Inspired by the intrinsically hierarchical process of human speech production and by the successful application of deep neural networks (DNNs) to automatic speech recognition (ASR), deep learning techniques have also been applied successfully to speech generation, as reported in recent literature. This article systematically reviews these emerging speech generation approaches, with the dual goal of helping readers gain a better understanding of the existing techniques as well as stimulating new work in the burgeoning area of deep learning for parametric speech generation.",
"title": ""
},
{
"docid": "bc28f28d21605990854ac9649d244413",
"text": "Mobile devices can provide people with contextual information. This information may benefit a primary activity, assuming it is easily accessible. In this paper, we present DisplaySkin, a pose-aware device with a flexible display circling the wrist. DisplaySkin creates a kinematic model of a user's arm and uses it to place information in view, independent of body pose. In doing so, DisplaySkin aims to minimize the cost of accessing information without being intrusive. We evaluated our pose-aware display with a rotational pointing task, which was interrupted by a notification on DisplaySkin. Results show that a pose-aware display reduces the time required to respond to notifications on the wrist.",
"title": ""
},
{
"docid": "1a1077a20e261e6a846706720a567094",
"text": "Proposed new actuation mechanism realizes active or semi-active mobility for flexible long cables such as fiberscopes and scope cameras. A ciliary vibration mechanism was developed using flexible ciliary tapes that can be attached easily to existing cables. Driving characteristics of the active cables were confirmed through experiments and numerical analyses. Finally, the actuation mechanism was applied for an advanced scope camera that can reduce friction with obstacles and avoid stuck or tangled cables",
"title": ""
},
{
"docid": "872d589cd879dee7d88185851b9546ab",
"text": "Considering few treatments are available to slow or stop neurodegenerative disorders, such as Alzheimer’s disease and related dementias (ADRD), modifying lifestyle factors to prevent disease onset are recommended. The Voice, Activity, and Location Monitoring system for Alzheimer’s disease (VALMA) is a novel ambulatory sensor system designed to capture natural behaviours across multiple domains to profile lifestyle risk factors related to ADRD. Objective measures of physical activity and sleep are provided by lower limb accelerometry. Audio and GPS location records provide verbal and mobility activity, respectively. Based on a familiar smartphone package, data collection with the system has proven to be feasible in community-dwelling older adults. Objective assessments of everyday activity will impact diagnosis of disease and design of exercise, sleep, and social interventions to prevent and/or slow disease progression.",
"title": ""
},
{
"docid": "a1f930147ad3c3ef48b6352e83d645d0",
"text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.",
"title": ""
},
{
"docid": "408f58b7dd6cb1e6be9060f112773888",
"text": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios.",
"title": ""
},
{
"docid": "37b5ab95b1b488c5aee9a5cfed87c095",
"text": "A key step in the understanding of printed documents is their classification based on the nature of information they contain and their layout. In this work we consider a dynamic scenario in which document classes are not known a priori and new classes can appear at any time. This open world setting is both realistic and highly challenging. We use an SVM-based classifier based only on image-level features and use a nearest-neighbor approach for detecting new classes. We assess our proposal on a real-world dataset composed of 562 invoices belonging to 68 different classes. These documents were digitalized after being handled by a corporate environment, thus they are quite noisy---e.g., big stamps and handwritten signatures at unfortunate positions and alike. The experimental results are highly promising.",
"title": ""
},
{
"docid": "bd9009579020d6ed1b4de90d41f1c353",
"text": "The design, prototyping, and characterization of a radiation pattern reconfigurable antenna (RA) targeting 5G communications are presented. The RA is based on a reconfigurable parasitic layer technique in which a driven dipole antenna is located along the central axis of a 3-D parasitic layer structure enclosing it. The reconfigurable parasitic structure is similar to a hexagonal prism, where the top/bottom bases are formed by a hexagonal domed structure. The surfaces of the parasitic structure house electrically small metallic pixels with various geometries. The adjacent pixels are connected by PIN diode switches to change the geometry of the parasitic surface, thus providing reconfigurability in the radiation pattern. This RA is designed to operate over a 4.8–5.2 GHz frequency band, producing various radiation patterns with a beam-steering capability in both the azimuth (<inline-formula> <tex-math notation=\"LaTeX\">$0 {^{\\circ }} <\\phi < 360 {^{\\circ }}$ </tex-math></inline-formula>) and elevation planes (<inline-formula> <tex-math notation=\"LaTeX\">$-18 {^{\\circ }} <\\theta < 18 {^{\\circ }}$ </tex-math></inline-formula>). Small-cell access points equipped with RAs are used to investigate the system level performances for 5G heterogeneous networks. The results show that using distributed mode optimization, RA equipped small-cell systems could provide up to 29% capacity gains and 13% coverage improvements as compared to legacy omnidirectional antenna equipped systems.",
"title": ""
},
{
"docid": "226607ad7be61174871fcab384ac31b4",
"text": "Traditional image stitching using parametric transforms such as homography, only produces perceptually correct composites for planar scenes or parallax free camera motion between source frames. This limits mosaicing to source images taken from the same physical location. In this paper, we introduce a smoothly varying affine stitching field which is flexible enough to handle parallax while retaining the good extrapolation and occlusion handling properties of parametric transforms. Our algorithm which jointly estimates both the stitching field and correspondence, permits the stitching of general motion source images, provided the scenes do not contain abrupt protrusions.",
"title": ""
}
] |
scidocsrr
|
7c864c37d20aa08948af106b46b42ca3
|
UA-DETRAC 2017 : Report of AVSS 2017 & IT 4 S Challenge on Advance Traffic Monitoring
|
[
{
"docid": "198311a68ad3b9ee8020b91d0b029a3c",
"text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.",
"title": ""
},
{
"docid": "a77eddf9436652d68093946fbe1d2ed0",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
}
] |
[
{
"docid": "5a8d4bfb89468d432b7482062a0cbf2d",
"text": "While “no one size fits all” is a sound philosophy for system designers to follow, it poses multiple challenges for application developers and system administrators. It can be hard for an application developer to pick one system when the needs of her application match the features of multiple “one size” systems. The choice becomes considerably harder when different components of an application fit the features of different “one size” systems. Considerable manual effort goes into creating and tuning such multi-system applications. An application’s data and workload properties may change over time, often in unpredictable and bursty ways. Consequently, the “one size” system that is best for an application can change over time. Adapting to change can be hard when application development is coupled tightly with any individual “one size” system. In this paper, we make the case for developing a new breed of Database Management Systems that we term DBMS. A DBMS contains multiple “one size” systems internally. An application specifies its execution requirements on aspects like performance, availability, consistency, change, and cost to the DBMS declaratively. For all requests (e.g., queries) made by the application, the DBMS will select the execution plan that meets the application’s requirements best. A unique aspect of the execution plan in a DBMS is that the plan includes the selection of one or more “one size” systems. The plan is then deployed and managed automatically on the selected system(s). If application requirements change beyond what was planned for originally by the DBMS, then the application can be reoptimized and redeployed; usually with no additional effort required from the application developer. The DBMS approach has the potential to address the challenges that application developers and system administrators face from the vast and growing number of “one size” systems today. However, this approach poses many research challenges that we discuss in this paper. We are taking the DBMS approach in a platform, called Cyclops, that we are building for continuous query execution. We will use Cyclops throughout the paper to give concrete illustrations of the benefits and challenges of the DBMS approach. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6 Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
},
{
"docid": "17e761f30e9f8cffa84a5a2c142e4665",
"text": "In this paper, a neural-dynamic optimization-based nonlinear model predictive control (NMPC) is developed for controlling leader-follower mobile robots formation. Consider obstacles in the environments, a control strategy is proposed for the formations which includes separation-bearing-orientation scheme (SBOS) for regular leader-follower formation and separation-distance scheme (SDS) for obstacle avoidance. During the formation motion, the leader robot shall track a desired trajectory and the desire leader-follower relationship can be maintained through SBOS method; meanwhile, the followers can avoid the collision by applying the SDS. The formation-error kinematics of both SBOS and SDS are derived and a constrained quadratic programming (QP) can be obtained by transforming the MPC method. Then, over a finite-receding horizon, the QP problem can be solved by utilizing the primal-dual neural network (PDNN) with parallel capability. The computation complexity can be greatly reduced by the implemented neural-dynamic optimization. Compared with other existing formation control approaches, the developed solution in this paper is rooted in NMPC techniques with input constraints and the novel QP problem formulation. Finally, experimental studies of the proposed formation control approach have been performed on several mobile robots to verify the effectiveness.",
"title": ""
},
{
"docid": "41c718697d19ee3ca0914255426a38ab",
"text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.",
"title": ""
},
{
"docid": "338efe667e608779f4f41d1cdb1839bb",
"text": "In ASP.NET, Programmers maybe use POST or GET to pass parameter's value. Two methods are easy to come true. But In ASP.NET, It is not easy to pass parameter's value. In ASP.NET, Programmers maybe use many methods to pass parameter's value, such as using Application, Session, Querying, Cookies, and Forms variables. In this paper, by way of pass value from WebForm1.aspx to WebForm2.aspx and show out the value on WebForm2. We can give and explain actually examples in ASP.NET language to introduce these methods.",
"title": ""
},
{
"docid": "643be78202e4d118e745149ed389b5ef",
"text": "Little clinical research exists on the contribution of the intrinsic foot muscles (IFM) to gait or on the specific clinical evaluation or retraining of these muscles. The purpose of this clinical paper is to review the potential functions of the IFM and their role in maintaining and dynamically controlling the medial longitudinal arch. Clinically applicable methods of evaluation and retraining of these muscles for the effective management of various foot and ankle pain syndromes are discussed.",
"title": ""
},
{
"docid": "8f0073815a64e4f5d3e4e8cb9290fa65",
"text": "In this paper, we investigate the benefits of applying a form of network coding known as random linear coding (RLC) to unicast applications in disruption-tolerant networks (DTNs). Under RLC, nodes store and forward random linear combinations of packets as they encounter each other. For the case of a single group of packets originating from the same source and destined for the same destination, we prove a lower bound on the probability that the RLC scheme achieves the minimum time to deliver the group of packets. Although RLC significantly reduces group delivery delays, it fares worse in terms of average packet delivery delay and network transmissions. When replication control is employed, RLC schemes reduce group delivery delays without increasing the number of transmissions. In general, the benefits achieved by RLC are more significant under stringent resource (bandwidth and buffer) constraints, limited signaling, highly dynamic networks, and when applied to packets in the same flow. For more practical settings with multiple continuous flows in the network, we show the importance of deploying RLC schemes with a carefully tuned replication control in order to achieve reduction in average delay, which is observed to be as large as 20% when buffer space is constrained.",
"title": ""
},
{
"docid": "21511302800cd18d21dbc410bec3cbb2",
"text": "We investigate theoretical and practical aspects of the design of far-field RF power extraction systems consisting of antennas, impedance matching networks and rectifiers. Fundamental physical relationships that link the operating bandwidth and range are related to technology dependent quantities like threshold voltage and parasitic capacitances. This allows us to design efficient planar antennas, coupled resonator impedance matching networks and low-power rectifiers in standard CMOS technologies (0.5-mum and 0.18-mum) and accurately predict their performance. Experimental results from a prototype power extraction system that operates around 950 MHz and integrates these components together are presented. Our measured RF power-up threshold (in 0.18-mum, at 1 muW load) was 6 muWplusmn10%, closely matching the predicted value of 5.2 muW.",
"title": ""
},
{
"docid": "9696e2f6ff6e16f378ae377798ee3332",
"text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.06.054 * Corresponding author. Address: School of Compu ogy, Beijing Jiaotong University, Beijing 100044, Chin E-mail address: jnchen06@163.com (J. Chen). As an important preprocessing technology in text classification, feature selection can improve the scalability, efficiency and accuracy of a text classifier. In general, a good feature selection method should consider domain and algorithm characteristics. As the Naïve Bayesian classifier is very simple and efficient and highly sensitive to feature selection, so the research of feature selection specially for it is significant. This paper presents two feature evaluation metrics for the Naïve Bayesian classifier applied on multiclass text datasets: Multi-class Odds Ratio (MOR), and Class Discriminating Measure (CDM). Experiments of text classification with Naïve Bayesian classifiers were carried out on two multi-class texts collections. As the results indicate, CDM and MOR gain obviously better selecting effect than other feature selection approaches. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a7fe6b1ba27c13c95d1a48ca401e25fd",
"text": "BACKGROUND\nselecting the correct statistical test and data mining method depends highly on the measurement scale of data, type of variables, and purpose of the analysis. Different measurement scales are studied in details and statistical comparison, modeling, and data mining methods are studied based upon using several medical examples. We have presented two ordinal-variables clustering examples, as more challenging variable in analysis, using Wisconsin Breast Cancer Data (WBCD).\n\n\nORDINAL-TO-INTERVAL SCALE CONVERSION EXAMPLE\na breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests.\n\n\nRESULTS\nthe sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable.\n\n\nCONCLUSION\nby using appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is granted. Moreover, descriptive and inferential statistics in addition to modeling approach must be selected based on the scale of the variables.",
"title": ""
},
{
"docid": "8016e80e506dcbae5c85fdabf1304719",
"text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.",
"title": ""
},
{
"docid": "e457ab9e14f6fa104a15421d9263815a",
"text": "Many aquaculture systems generate high amounts of wastewater containing compounds such as suspended solids, total nitrogen and total phosphorus. Today, aquaculture is imperative because fish demand is increasing. However, the load of waste is directly proportional to the fish production. Therefore, it is necessary to develop more intensive fish culture with efficient systems for wastewater treatment. A number of physical, chemical and biological methods used in conventional wastewater treatment have been applied in aquaculture systems. Constructed wetlands technology is becoming more and more important in recirculating aquaculture systems (RAS) because wetlands have proven to be well-established and a cost-effective method for treating wastewater. This review gives an overview about possibilities to avoid the pollution of water resources; it focuses initially on the use of systems combining aquaculture and plants with a historical review of aquaculture and the treatment of its effluents. It discusses the present state, taking into account the load of pollutants in wastewater such as nitrates and phosphates, and finishes with recommendations to prevent or at least reduce the pollution of water resources in the future.",
"title": ""
},
{
"docid": "a2d699f3c600743c732b26071639038a",
"text": "A novel rectifying circuit topology is proposed for converting electromagnetic pulse waves (PWs), that are collected by a wideband antenna, into dc voltage. The typical incident signal considered in this paper consists of 10-ns pulses modulated around 2.4 GHz with a repetition period of 100 ns. The proposed rectifying circuit topology comprises a double-current architecture with inductances that collect the energy during the pulse delivery as well as an output capacitance that maintains the dc output voltage between the pulses. Experimental results show that the efficiency of the rectifier reaches 64% for a mean available incident power of 4 dBm. Similar performances are achieved when a wideband antenna is combined with the rectifier in order to realize a rectenna. By increasing the repetition period of the incident PWs to 400 ns, the rectifier still operates with an efficiency of 52% for a mean available incident pulse power of −8 dBm. Finally, the proposed PW rectenna is tested for a wireless energy transmission application in a low- $Q$ cavity. The time reversal technique is applied to focus PWs around the desired rectenna. Results show that the rectenna is still efficient when noisy PW is handled.",
"title": ""
},
{
"docid": "29d08d266bc84ba761283bb8ae827d0b",
"text": "Statistical classifiers typically build (parametric) probabilistic models of the training data, and compute the probability that an unknown sample belongs to each of the possible classes using these models. We utilize two established measures to compare the performance of statistical classifiers namely; classification accuracy (or error rate) and the area under ROC. Naïve Bayes has obtained much relevance in data classification for machine learning and datamining. In our work, a comparative analysis of the accuracy performance of statistical classifiers namely Naïve Bayes (NB), MDL discretized NB, 4 different variants of NB and 8 popular non-NB classifiers was carried out on 21 medical datasets using classification accuracy and true positive rate. Our results indicate that the classification accuracy of Naïve Bayes (MDL discretized) on the average is the best performer. The significance of this work through the results of the comparative analysis, we are of the opinion that medical datamining with generative methods like Naïve Bayes is computationally simple yet effective and are to be used whenever possible as the benchmark for statistical classifiers.",
"title": ""
},
{
"docid": "f18a19159e71e4d2a92a465217f93366",
"text": "Extra-linguistic factors influence language use, and are accounted for by speakers and listeners. Most natural language processing (NLP) tasks to date, however, treat language as uniform. This assumption can harm performance. We investigate the effect of including demographic information on performance in a variety of text-classification tasks. We find that by including age or gender information, we consistently and significantly improve performance over demographic-agnostic models. These results hold across three text-classification tasks in five languages.",
"title": ""
},
{
"docid": "eb083b4c46d49a6cc639a89b74b1f269",
"text": "ROC analyses generated low area under the curve (.695, 95% confidence interval (.637.752)) and cutoff scores with poor sensitivity/specificity balance. BDI-II. Because the distribution of BDI-II scores was not normal, percentile ranks for raw scores were provided for the total sample and separately by gender. symptoms two scales were used: The Beck Depression Inventory-II (BDIII) smokers and non smokers, we found that the mean scores on the BDI-II (9.21 vs.",
"title": ""
},
{
"docid": "4855ecd626160518339ee2caf8f9c2cf",
"text": "The Metamorphoses Greek myth includes a story about a woman raised as a male falling in love with another woman, and being transformed into a man prior to a wedding ceremony and staying with her. It is therefore considered that people who desire to live as though they have the opposite gender have existed since ancient times. People who express a sense of discomfort with their anatomical sex and related roles have been reported in the medical literature since the middle of the 19th century. However, homosexual, fetishism, gender identity disorder, and associated conditions were mixed together and regarded as types of sexual perversion that were considered ethically objectionable until the 1950s. The first performance of sex-reassignment surgery in 1952 attracted considerable attention, and the sexologist Harry Benjamin reported a case of 'a woman kept in the body of a man', which was called transsexualism. John William Money studied the sexual consciousness about disorders of sex development and advocated the concept of gender in 1957. Thereafter the disparity between anatomical sex and gender identity was referred to as the psychopathological condition of gender identity disorder, and this was used for its diagnostic name when it was introduced into DSM-III in 1980. However, gender identity disorder encompasses a spectrum of conditions, and DSM-III -R categorized it into three types: transsexualism, nontranssexualism, and not otherwise specified. The first two types were subsequently combined and standardized into the official diagnostic name of 'gender identity disorder' in DSM-IV. In contrast, gender identity disorder was categorized into four groups (including transsexualism and dual-role transvestism) in ICD-10. A draft proposal of DSM-5 has been submitted, in which the diagnostic name of gender identity disorder has been changed to gender dysphoria. Also, it refers to 'assigned gender' rather than to 'sex', and includes disorders of sexual development. Moreover, the subclassifications regarding sexual orientation have been deleted. The proposed DSM-5 reflects an attempt to include only a medical designation of people who have suffered due to the gender disparity, thereby respecting the concept of transgender in accepting the diversity of the role of gender. This indicates that transgender issues are now at a turning point.",
"title": ""
},
{
"docid": "f715f471118b169502941797d17ceac6",
"text": "Software is a knowledge intensive product, which can only evolve if there is effective and efficient information exchange between developers. Complying to coding conventions improves information exchange by improving the readability of source code. However, without some form of enforcement, compliance to coding conventions is limited. We look at the problem of information exchange in code and propose gamification as a way to motivate developers to invest in compliance. Our concept consists of a technical prototype and its integration into a Scrum environment. By means of two experiments with agile software teams and subsequent surveys, we show that gamification can effectively improve adherence to coding conventions.",
"title": ""
},
{
"docid": "7e8b58b88a1a139f9eb6642a69eb697a",
"text": "We present a fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers. The complex structure of the light field is thus reduced to a comparatively low-dimensional representation, which can be decoded in a variety of ways. The different pathways of upconvolution we currently support are for disparity estimation and separation of the lightfield into diffuse and specular intrinsic components. The key idea is that we can jointly perform unsupervised training for the autoencoder path of the network, and supervised training for the other decoders. This way, we find features which are both tailored to the respective tasks and generalize well to datasets for which only example light fields are available. We provide an extensive evaluation on synthetic light field data, and show that the network yields good results on previously unseen real world data captured by a Lytro Illum camera and various gantries.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0b9dde7982cf2b99a979dbc0d6dfceba",
"text": "PURPOSE\nTo develop a reliable and valid questionnaire of bilingual language status with predictable relationships between self-reported and behavioral measures.\n\n\nMETHOD\nIn Study 1, the internal validity of the Language Experience and Proficiency Questionnaire (LEAP-Q) was established on the basis of self-reported data from 52 multilingual adult participants. In Study 2, criterion-based validity was established on the basis of standardized language tests and self-reported measures from 50 adult Spanish-English bilinguals. Reliability and validity of the questionnaire were established on healthy adults whose literacy levels were equivalent to that of someone with a high school education or higher.\n\n\nRESULTS\nFactor analyses revealed consistent factors across both studies and suggested that the LEAP-Q was internally valid. Multiple regression and correlation analyses established criterion-based validity and suggested that self-reports were reliable indicators of language performance. Self-reported reading proficiency was a more accurate predictor of first-language performance, and self-reported speaking proficiency was a more accurate predictor of second-language performance. Although global measures of self-reported proficiency were generally predictive of language ability, deriving a precise estimate of performance on a particular task required that specific aspects of language history be taken into account.\n\n\nCONCLUSION\nThe LEAP-Q is a valid, reliable, and efficient tool for assessing the language profiles of multilingual, neurologically intact adult populations in research settings.",
"title": ""
}
] |
scidocsrr
|
5c04bb6549016dbcaf0b700e1a48b69b
|
Time-series data mining
|
[
{
"docid": "96be7a58f4aec960e2ad2273dea26adb",
"text": "Because time series are a ubiquitous and increasingly prevalent type of data, there has been much research effort devoted to time series data mining recently. As with all data mining problems, the key to effective and scalable algorithms is choosing the right representation of the data. Many high level representations of time series have been proposed for data mining. In this work, we introduce a new technique based on a bit level approximation of the data. The representation has several important advantages over existing techniques. One unique advantage is that it allows raw data to be directly compared to the reduced representation, while still guaranteeing lower bounds to Euclidean distance. This fact can be exploited to produce faster exact algorithms for similarly search. In addition, we demonstrate that our new representation allows time series clustering to scale to much larger datasets.",
"title": ""
}
] |
[
{
"docid": "00129c31c4f37d3d44540c4ad97e5cca",
"text": "To understand how function arises from the interactions between neurons, it is necessary to use methods that allow the monitoring of brain activity at the single-neuron, single-spike level and the targeted manipulation of the diverse neuron types selectively in a closed-loop manner. Large-scale recordings of neuronal spiking combined with optogenetic perturbation of identified individual neurons has emerged as a suitable method for such tasks in behaving animals. To fully exploit the potential power of these methods, multiple steps of technical innovation are needed. We highlight the current state of the art in electrophysiological recording methods, combined with optogenetics, and discuss directions for progress. In addition, we point to areas where rapid development is in progress and discuss topics where near-term improvements are possible and needed.",
"title": ""
},
{
"docid": "a537edc6579892249d157e2dc2f31077",
"text": "An efficient decoupling feeding network is proposed in this letter. It is composed of two directional couplers and two sections of transmission line for connection use. By connecting the two couplers, an indirect coupling with controlled magnitude and phase is introduced, which can be used to cancel out the direct coupling caused by space waves and surface waves between array elements. To demonstrate the method, a two-element microstrip antenna array with the proposed network has been designed, fabricated and measured. Both simulated and measured results have simultaneously proved that the proposed method presents excellent decoupling performance. The measured mutual coupling can be reduced to below -58 dB at center frequency. Meanwhile it has little influence on return loss and radiation patterns. The decoupling mechanism is simple and straightforward which can be easily applied in phased array antennas and MIMO systems.",
"title": ""
},
{
"docid": "ed2464f8cf0495e10d8b2a75a7d8bc3b",
"text": "Personalized services such as news recommendations are becoming an integral part of our digital lives. The problem is that they extract a steep cost in terms of privacy. The service providers collect and analyze user's personal data to provide the service, but can infer sensitive information about the user in the process. In this work we ask the question \"How can we provide personalized news recommendation without sharing sensitive data with the provider?\"\n We propose a local private intelligence assistance framework (PrIA), which collects user data and builds a profile about the user and provides recommendations, all on the user's personal device. It decouples aggregation and personalization: it uses the existing aggregation services on the cloud to obtain candidate articles but makes the personalized recommendations locally. Our proof-of-concept implementation and small scale user study shows the feasibility of a local news recommendation system. In building a private profile, PrIA avoids sharing sensitive information with the cloud-based recommendation service. However, the trade-off is that unlike cloud-based services, PrIA cannot leverage collective knowledge from large number of users. We quantify this trade-off by comparing PrIA with Google's cloud-based recommendation service. We find that the average precision of PrIA's recommendation is only 14% lower than that of Google's service. Rather than choose between privacy or personalization, this result motivates further study of systems that can provide both with acceptable trade-offs.",
"title": ""
},
{
"docid": "64bb57e2cc7d278b490b3cd7389585b2",
"text": "Prior data pertaining to transient entrainment and associated phenomena have been best explained by pacing capture of a reentrant circuit. On this basis, we hypothesized that rapid pacing from a single site of two different constant pacing rates could constantly capture an appropriately selected bipolar electrogram recording site from one direction with a constant stimulus-to-electrogram interval during pacing at one rate, yet be constantly captured from another direction with a different constant stimulus-to-electrogram interval when pacing at a different constant pacing rate. To test this hypothesis, we studied a group of patients, each with a representative tachycardia (ventricular tachycardia, circus-movement tachycardia involving an atrioventricular bypass pathway, atrial tachycardia, and atrial flutter). For each tachycardia, pacing was performed from a single site for at least two different constant rates faster than the spontaneous rate of the tachycardia. We observed in these patients that a local bipolar recording site was constantly captured from different directions at two different pacing rates without interrupting the tachycardia at pacing termination. The evidence that the same site was captured from a different direction at two different pacing rates was supported by demonstrating a change in conduction time to that site associated with a change in the bipolar electrogram morphology at that site when comparing pacing at each rate. The mean conduction time (stimulus-to-recording site electrogram interval) was 319 +/- 69 msec while pacing at a mean cycle length of 265 +/- 50 msec, yet only 81 +/- 38 msec while pacing at a second mean cycle length of 233 +/- 51 msec, a mean change in conduction time of 238 +/- 56 msec. Remarkably, the faster pacing rate resulted in a shorter conduction time. The fact that the same electrode recording site was activated from different directions without interruption of the spontaneous tachycardia at pacing termination is difficult to explain on any mechanistic basis other than reentry. Also, these changes in conduction time and electrogram morphology occurred in parallel with the demonstration of progressive fusion beats on the electrocardiogram, the latter being an established criterion for transient entrainment.(ABSTRACT TRUNCATED AT 400 WORDS)",
"title": ""
},
{
"docid": "caead07ebeea66cb5d8e57c956a11289",
"text": "End-to-end bandwidth estimation tools like Iperf though fairly accurate are intrusive. In this paper, we describe how with an instrumented TCP stack (Web100), we can estimate the end-to-end bandwidth accurately, while consuming significantly less network bandwidth and time. We modified Iperf to use Web100 to detect the end of slow-start and estimate the end-toend bandwidth by measuring the amount of data sent for a short period (1 second) after the slow-start, when the TCP throughput is relatively stable. We obtained bandwidth estimates differing by less than 10% when compared to running Iperf for 20 seconds, and savings in bandwidth estimation time of up to 94% and savings in network traffic of up to 92%.",
"title": ""
},
{
"docid": "f34562a98d4a9768f08bc607aec796a5",
"text": "The greyfin croaker Pennahia anea is one of the most common croakers currently on retail sale in Hong Kong, but there are no regional studies on its biology or fishery. The reproductive biology of the species, based on 464 individuals obtained from local wet markets, was studied over 16 months (January 2008–April 2009) using gonadosomatic index (GSI) and gonad histology. Sizes used in this study ranged from 8.0 to 19.0 cm in standard length (SL). Both the larger and smaller size classes were missing from samples, implying that they are infrequently caught in the fishery. Based on GSI data, the approximate minimum sizes for male and female maturation were 12 cm SL. The size at 50% maturity for females was 14.3 cm SL, while all males in the samples were mature. Both GSI and gonad histology suggest that spawning activity occurred from March–April to June, with a peak in May. Since large croakers are declining in the local and regional fisheries, small species such as P. anea are becoming important, although they are mostly taken as bycatch. In view of unmanaged fishing pressure, and given the decline in large croakers and sizes of P. anea presently caught, proper management of the species is suggested.",
"title": ""
},
{
"docid": "ac0b86c5a0e7949c5e77610cee865e2b",
"text": "BACKGROUND\nDegenerative lumbosacral stenosis is a common problem in large breed dogs. For severe degenerative lumbosacral stenosis, conservative treatment is often not effective and surgical intervention remains as the last treatment option. The objective of this retrospective study was to assess the middle to long term outcome of treatment of severe degenerative lumbosacral stenosis with pedicle screw-rod fixation with or without evidence of radiological discospondylitis.\n\n\nRESULTS\nTwelve client-owned dogs with severe degenerative lumbosacral stenosis underwent pedicle screw-rod fixation of the lumbosacral junction. During long term follow-up, dogs were monitored by clinical evaluation, diagnostic imaging, force plate analysis, and by using questionnaires to owners. Clinical evaluation, force plate data, and responses to questionnaires completed by the owners showed resolution (n = 8) or improvement (n = 4) of clinical signs after pedicle screw-rod fixation in 12 dogs. There were no implant failures, however, no interbody vertebral bone fusion of the lumbosacral junction was observed in the follow-up period. Four dogs developed mild recurrent low back pain that could easily be controlled by pain medication and an altered exercise regime.\n\n\nCONCLUSIONS\nPedicle screw-rod fixation offers a surgical treatment option for large breed dogs with severe degenerative lumbosacral stenosis with or without evidence of radiological discospondylitis in which no other treatment is available. Pedicle screw-rod fixation alone does not result in interbody vertebral bone fusion between L7 and S1.",
"title": ""
},
{
"docid": "988c161ceae388f5dbcdcc575a9fa465",
"text": "This work presents an architecture for single source, single point noise cancellation that seeks adequate gain margin and high performance for both stationary and nonstationary noise sources by combining feedforward and feedback control. Gain margins and noise reduction performance of the hybrid control architecture are validated experimentally using an earcup from a circumaural hearing protector. Results show that the hybrid system provides 5 to 30 dB active performance in the frequency range 50-800 Hz for tonal noise and 18-27 dB active performance in the same frequency range for nonstationary noise, such as aircraft or helicopter cockpit noise, improving low frequency (> 100 Hz) performance by up to 15 dB over either control component acting individually.",
"title": ""
},
{
"docid": "e3663ed1bea4b2639369146db302d1bd",
"text": "In recent years, heterogeneous face biometrics has attracted more attentions in the face recognition community. After published in 2009, the HFB database has been applied by tens of research groups and widely used for Near infrared vs. Visible light (NIR-VIS) face recognition. Despite its success the HFB database has two disadvantages: a limited number of subjects, lacking specific evaluation protocols. To address these issues we collected the NIR-VIS 2.0 database. It contains 725 subjects, imaged by VIS and NIR cameras in four recording sessions. Because the 3D modality in the HFB database was less used in the literature, we don't consider it in the current version. In this paper, we describe the composition of the database, evaluation protocols and present the baseline performance of PCA on the database. Moreover, two interesting tricks, the facial symmetry and heterogeneous component analysis (HCA) are also introduced to improve the performance.",
"title": ""
},
{
"docid": "f028a403190899f96fcd6d6f9efbd2f1",
"text": "It is aimed to design a X-band monopulse microstrip antenna array that can be used almost in all modern tracking radars and having superior properties in angle detection and angular accuracy than the classical ones. In order to create a monopulse antenna array, a rectangular microstrip antenna is designed and 16 of it gathered together using the nonlinear central feeding to suppress the side lobe level (SLL) of the antenna. The monopulse antenna is created by the combining 4 of these 4×4 array antennas with a microstrip comparator designed using four branch line coupler. Good agreement is noted between the simulation and measurement results.",
"title": ""
},
{
"docid": "588129d869fefae4abb657a8396232e0",
"text": "A cold-adapted lipase producing bacterium, designated SS-33T, was isolated from sea sediment collected from the Bay of Bengal, India, and subjected to a polyphasic taxonomic study. Strain SS-33T exhibited the highest 16S rRNA gene sequence similarity with Staphylococcus cohnii subsp. urealyticus (97.18 %), Staphylococcus saprophyticus subsp. bovis (97.16 %) and Staphylococcus cohnii subsp. cohnii (97.04 %). Phylogenetic analysis based on the 16S rRNA gene sequences showed that strain SS-33T belongs to the genus Staphylococcus. Cells of strain SS-33T were Gram-positive, coccus-shaped, non-spore-forming, non-motile, catalase-positive and oxidase-negative. The major fatty acid detected in strain SS-33T was anteiso-C15:0 and the menaquinone was MK-7. The genomic DNA G + C content was 33 mol%. The DNA-DNA hybridization among strain SS-33T and the closely related species indicated that strain SS-33T represents a novel species of the genus Staphylococcus. On the basis of the morphological, physiological and chemotaxonomic characteristics, the results of phylogenetic analysis and the DNA-DNA hybridization, a novel species is proposed for strain SS-33T, with the name Staphylococcus lipolyticus sp. nov. The strain type is SS-33T (=MTCC 10101T = JCM 16560T). Staphylococcus lipolyticus SS-33T hydrolyzed various substrates including tributyrin, olive oil, Tween 20, Tween 40, Tween 60, and Tween 80 at low temperatures, as well as mesophilic temperatures. Lipase from strain SS-33T was partially purified by acetone precipitation. The molecular weight of lipase protein was determined 67 kDa by SDS-PAGE. Zymography was performed to monitor the lipase activity in Native-PAGE. Calcium ions increased lipase activity twofold. The optimum pH of lipase was pH 7.0 and optimum temperature was 30 °C. However, lipase exhibited 90 % activity of its optimum temperature at 10 °C and became more stable at 10 °C as compared to 30 °C. The lipase activity and stability at low temperature has wide ranging applications in various industrial processes. Therefore, cold-adapted mesophilic lipase from strain SS-33T may be used for industrial applications. This is the first report of the production of cold-adapted mesophilic lipase by any Staphylococcus species.",
"title": ""
},
{
"docid": "d7dbaa82fcabd2071d59cb0847a583a0",
"text": "CONTEXT\nA number of studies suggest a positive association between breastfeeding and cognitive development in early and middle childhood. However, the only previous study that investigated the relationship between breastfeeding and intelligence in adults had several methodological shortcomings.\n\n\nOBJECTIVE\nTo determine the association between duration of infant breastfeeding and intelligence in young adulthood.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nProspective longitudinal birth cohort study conducted in a sample of 973 men and women and a sample of 2280 men, all of whom were born in Copenhagen, Denmark, between October 1959 and December 1961. The samples were divided into 5 categories based on duration of breastfeeding, as assessed by physician interview with mothers at a 1-year examination.\n\n\nMAIN OUTCOME MEASURES\nIntelligence, assessed using the Wechsler Adult Intelligence Scale (WAIS) at a mean age of 27.2 years in the mixed-sex sample and the Børge Priens Prøve (BPP) test at a mean age of 18.7 years in the all-male sample. Thirteen potential confounders were included as covariates: parental social status and education; single mother status; mother's height, age, and weight gain during pregnancy and cigarette consumption during the third trimester; number of pregnancies; estimated gestational age; birth weight; birth length; and indexes of pregnancy and delivery complications.\n\n\nRESULTS\nDuration of breastfeeding was associated with significantly higher scores on the Verbal, Performance, and Full Scale WAIS IQs. With regression adjustment for potential confounding factors, the mean Full Scale WAIS IQs were 99.4, 101.7, 102.3, 106.0, and 104.0 for breastfeeding durations of less than 1 month, 2 to 3 months, 4 to 6 months, 7 to 9 months, and more than 9 months, respectively (P =.003 for overall F test). The corresponding mean scores on the BPP were 38.0, 39.2, 39.9, 40.1, and 40.1 (P =.01 for overall F test).\n\n\nCONCLUSION\nIndependent of a wide range of possible confounding factors, a significant positive association between duration of breastfeeding and intelligence was observed in 2 independent samples of young adults, assessed with 2 different intelligence tests.",
"title": ""
},
{
"docid": "56c7c065c390d1ed5f454f663289788d",
"text": "This paper presents a novel approach to character identification, that is an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network that takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the embeddings learned and creates cluster embeddings for entity linking. Our coreference resolution model shows comparable results to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, showing the F1 score of 86.76% and the accuracy of 95.30% for character identification.",
"title": ""
},
{
"docid": "cd59460d293aa7ecbb9d7b96ed451b9a",
"text": "PURPOSE\nThe prevalence of work-related upper extremity musculoskeletal disorders and visual symptoms reported in the USA has increased dramatically during the past two decades. This study examined the factors of computer use, workspace design, psychosocial factors, and organizational ergonomics resources on musculoskeletal and visual discomfort and their impact on the safety and health of computer work employees.\n\n\nMETHODS\nA large-scale, cross-sectional survey was administered to a US manufacturing company to investigate these relationships (n = 1259). Associations between these study variables were tested along with moderating effects framed within a conceptual model.\n\n\nRESULTS\nSignificant relationships were found between computer use and psychosocial factors of co-worker support and supervisory relations with visual and musculoskeletal discomfort. Co-worker support was found to be significantly related to reports of eyestrain, headaches, and musculoskeletal discomfort. Supervisor relations partially moderated the relationship between workspace design satisfaction and visual and musculoskeletal discomfort.\n\n\nCONCLUSION\nThis study provides guidance for developing systematic, preventive measures and recommendations in designing office ergonomics interventions with the goal of reducing musculoskeletal and visual discomfort while enhancing office and computer workers' performance and safety.",
"title": ""
},
{
"docid": "b6c9844bdad60c5373cac2bcd018d899",
"text": "Cloud computing is currently gaining enormous momentum due to a number of promised benefits: ease of use in terms of deployment, administration, and maintenance, along with high scalability and flexibility to create new services. However, as more personal and business applications migrate to the cloud, service quality will become an important differentiator between providers. In particular, quality of experience as perceived by users has the potential to become the guiding paradigm for managing quality in the cloud. In this article, we discuss technical challenges emerging from shifting services to the cloud, as well as how this shift impacts QoE and QoE management. Thereby, a particular focus is on multimedia cloud applications. Together with a novel QoE-based classification scheme of cloud applications, these challenges drive the research agenda on QoE management for cloud applications.",
"title": ""
},
{
"docid": "48ce635355fbb5ffb7d6166948b4f135",
"text": "Computational generation of literary artifacts very often resorts to template-like schemas that can be instantiated into complex structures. With this view in mind, the present paper reviews a number of existing attempts to provide an elementary set of patterns for basic plots. An attempt is made to formulate these descriptions of possible plots in terms of character functions, an abstraction of plot-bearing elements of a story originally formulated by Vladimir Propp. These character functions act as the building blocks of the Propper system, an existing framework for computational story generation. The paper explores the set of extensions required to the original set of character functions to allow for a basic representation of the analysed schemata, and a solution for automatic generation of stories based on this formulation of the narrative schemas. This solution uncovers important insights on the relative expressive power of the representation of narrative in terms of character functions, and their impact on the generative potential of the framework is discussed. 1998 ACM Subject Classification F.4.1 Knowledge Representation Formalisms and Methods",
"title": ""
},
{
"docid": "7b5f90b4b0b11ffdb25ececb2eaf56f6",
"text": "The human ABO(H) blood group phenotypes arise from the evolutionarily oldest genetic system found in primate populations. While the blood group antigen A is considered the ancestral primordial structure, under the selective pressure of life-threatening diseases blood group O(H) came to dominate as the most frequently occurring blood group worldwide. Non-O(H) phenotypes demonstrate impaired formation of adaptive and innate immunoglobulin specificities due to clonal selection and phenotype formation in plasma proteins. Compared with individuals with blood group O(H), blood group A individuals not only have a significantly higher risk of developing certain types of cancer but also exhibit high susceptibility to malaria tropica or infection by Plasmodium falciparum. The phenotype-determining blood group A glycotransferase(s), which affect the levels of anti-A/Tn cross-reactive immunoglobulins in phenotypic glycosidic accommodation, might also mediate adhesion and entry of the parasite to host cells via trans-species O-GalNAc glycosylation of abundantly expressed serine residues that arise throughout the parasite's life cycle, while excluding the possibility of antibody formation against the resulting hybrid Tn antigen. In contrast, human blood group O(H), lacking this enzyme, is indicated to confer a survival advantage regarding the overall risk of developing cancer, and individuals with this blood group rarely develop life-threatening infections involving evolutionarily selective malaria strains.",
"title": ""
},
{
"docid": "9d803b0ce1f1af621466b1d7f97b7edf",
"text": "This research paper addresses the methodology and approaches to managing criminal computer forensic investigations in a law enforcement environment with management controls, operational controls, and technical controls. Management controls cover policy and standard operating procedures (SOP's), methodology, and guidance. Operational controls cover SOP requirements, seizing evidence, evidence handling, best practices, and education, training and awareness. Technical controls cover acquisition and analysis procedures, data integrity, rules of evidence, presenting findings, proficiency testing, and data archiving.",
"title": ""
},
{
"docid": "f7a6102ec2ebab9970233e90060bfb9c",
"text": "The malar region has been a crucial target in many facial rejuvenation techniques because the beauty and youthful contour of a convex midface and a smooth eyelid–cheek sulcus are key features of a pleasing face-lift result. The full midface subperiosteal lift has helped to address these issues. However, the desire of patients currently for a rapid recovery and return to work with a natural-looking result has influenced procedural selection. Another concern is for safer procedures with reduced potential risk. Progressively fewer invasive techniques, such as the minimal access cranial suspension (MACS) lift, have been a response to these core concerns. After 3 years of performing the conventional three purse-string suture MACS lift, the author developed a practical procedural modification. For a total of 17 patients, the author combined limited regional subperiosteal lift and periosteal fixation with a simple sling approach to the more fully released malar tissue mass to make a single-point suspension just above the lateral orbital rim. The percutaneous sling lift appears to offer a degree and naturalness of rejuvenation of the malar region similar to those of the MACS lift and the full subperiosteal midface lift, but with fewer suspension points and less undermining. Also, the author observed less ecchymosis and edema than with the full midface subperiosteal lift, as would be expected. In all 17 cases, the need for the second and third purse-string sutures was eliminated. The early results for the percutaneous sling lift indicate that it offers promising results, rapid recovery, and reduced risk of serious complications.",
"title": ""
},
{
"docid": "2db786fb0d27992950e7b8238a76226d",
"text": "Alberto González, Advisor This study draws concepts from rhetorical criticism, vernacular rhetoric, visual rhetoric, and whiteness studies, to investigate how Asian/Asian Americans’ online identities are being constructed and mediated by Internet memes. This study examines the use of Internet memes as persuasive discourses for entertainment, spreading stereotypes, and online activism by examining the meme images and texts, including their content, rhetorical components, and structure. Internet memes that directly depict Asian/Asian Americans are collected from three popular meme websites: Reddit, Know Your Meme, and Tumblr. The findings indicate that Internet memes complicate the construction of racial identity, invoking the negotiation and conflicts concerning racial identities described by dominant as well as vernacular discourses. They not only function as entertaining jokes but also reflect the social conflicts surrounding race. However, the prevalence and development of memes also bring new possibilities for social justice movements. Furthermore, the study provides implications of memes for users and anti-racist activities, as well as suggests future research directions mainly in the context of globalization.",
"title": ""
}
] |
scidocsrr
|
eb3ab27f99915abd020a21b269292bca
|
MahNMF: Manhattan Non-negative Matrix Factorization
|
[
{
"docid": "a21d1956026b29bc67b92f8508a62e1c",
"text": "We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.",
"title": ""
},
{
"docid": "9edfe5895b369c0bab8d83838661ea0a",
"text": "(57) Data collected from devices and human condition may be used to forewarn of critical events such as machine/structural failure or events from brain/heart wave data stroke. By moni toring the data, and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (un structured data) into discrete-phase-space states, and hence into a graph (Structured data) for extraction of condition change. ABSTRACT",
"title": ""
},
{
"docid": "e2867713be67291ee8c25afa3e2d1319",
"text": "In recent years the <i>l</i><sub>1</sub>, <sub>∞</sub> norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the <i>l</i><sub>1</sub> framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of <i>l</i><sub>1</sub>, <sub>∞</sub> regularized problems. The main challenge in developing such a method resides on being able to compute efficient projections to the <i>l</i><sub>1</sub>, <sub>∞</sub> ball. We present an algorithm that works in <i>O</i>(<i>n</i> log <i>n</i>) time and <i>O</i>(<i>n</i>) memory where <i>n</i> is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that <i>l</i><sub>1</sub>, <sub>∞</sub> leads to better performance than both <i>l</i><sub>2</sub> and <i>l</i><sub>1</sub> regularization and that it is is effective in discovering jointly sparse solutions.",
"title": ""
}
] |
[
{
"docid": "2d87e26389b9d4ebf896bd9cbd281e69",
"text": "Finger-vein biometrics has been extensively investigated for personal authentication. One of the open issues in finger-vein verification is the lack of robustness against image-quality degradation. Spurious and missing features in poor-quality images may degrade the system’s performance. Despite recent advances in finger-vein quality assessment, current solutions depend on domain knowledge. In this paper, we propose a deep neural network (DNN) for representation learning to predict image quality using very limited knowledge. Driven by the primary target of biometric quality assessment, i.e., verification error minimization, we assume that low-quality images are falsely rejected in a verification system. Based on this assumption, the low- and high-quality images are labeled automatically. We then train a DNN on the resulting data set to predict the image quality. To further improve the DNN’s robustness, the finger-vein image is divided into various patches, on which a patch-based DNN is trained. The deepest layers associated with the patches form together a complementary and an over-complete representation. Subsequently, the quality of each patch from a testing image is estimated and the quality scores from the image patches are conjointly input to probabilistic support vector machines (P-SVM) to boost quality-assessment performance. To the best of our knowledge, this is the first proposed work of deep learning-based quality assessment, not only for finger-vein biometrics, but also for other biometrics in general. The experimental results on two public finger-vein databases show that the proposed scheme accurately identifies high- and low-quality images and significantly outperforms existing approaches in terms of the impact on equal error-rate decrease.",
"title": ""
},
{
"docid": "bb1554d174df80e7db20e943b4a69249",
"text": "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7].\n The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use?\n In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships.",
"title": ""
},
{
"docid": "c7c5fde8197d87f2551a2897d5fd4487",
"text": "The Parallel Meaning Bank is a corpus of translations annotated with shared, formal meaning representations comprising over 11 million words divided over four languages (English, German, Italian, and Dutch). Our approach is based on cross-lingual projection: automatically produced (and manually corrected) semantic annotations for English sentences are mapped onto their word-aligned translations, assuming that the translations are meaning-preserving. The semantic annotation consists of five main steps: (i) segmentation of the text in sentences and lexical items; (ii) syntactic parsing with Combinatory Categorial Grammar; (iii) universal semantic tagging; (iv) symbolization; and (v) compositional semantic analysis based on Discourse Representation Theory. These steps are performed using statistical models trained in a semisupervised manner. The employed annotation models are all language-neutral. Our first results are promising.",
"title": ""
},
{
"docid": "0efc0e61946979158277aa9314227426",
"text": "Many chronic diseases possess a shared biology. Therapies designed for patients at risk of multiple diseases need to account for the shared impact they may have on related diseases to ensure maximum overall well-being. Learning from data in this setting differs from classical survival analysis methods since the incidence of an event of interest may be obscured by other related competing events. We develop a semiparametric Bayesian regression model for survival analysis with competing risks, which can be used for jointly assessing a patient’s risk of multiple (competing) adverse outcomes. We construct a Hierarchical Bayesian Mixture (HBM) model to describe survival paths in which a patient’s covariates influence both the estimation of the type of adverse event and the subsequent survival trajectory through Multivariate Random Forests. In addition variable importance measures, which are essential for clinical interpretability are induced naturally by our model. We aim with this setting to provide accurate individual estimates but also interpretable conclusions for use as a clinical decision support tool. We compare our method with various state-of-the-art benchmarks on both synthetic and clinical data.",
"title": ""
},
{
"docid": "e28ba2ea209537cf9867428e3cf7fdd7",
"text": "People take their mobile phones everywhere they go. In Saudi Arabia, the mobile penetration is very high and students use their phones for different reasons in the classroom. The use of mobile devices in classroom triggers an alert of the impact it might have on students’ learning. This study investigates the association between the use of mobile phones during classroom and the learners’ performance and satisfaction. Results showed that students get distracted, and that this diversion of their attention is reflected in their academic success. However, this is not applicable for all. Some students received high scores even though they declared using mobile phones in classroom, which triggers a request for a deeper study.",
"title": ""
},
{
"docid": "443191f41aba37614c895ba3533f80ed",
"text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.",
"title": ""
},
{
"docid": "d6d275b719451982fa67d442c55c186c",
"text": "Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.",
"title": ""
},
{
"docid": "17dce24f26d7cc196e56a889255f92a8",
"text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.",
"title": ""
},
{
"docid": "ee5b04d7b62186775a7b6ab77b8bbd60",
"text": "Answers submitted to CQA forums are often elaborate, contain spam, are marred by slurs and business promotions. It is difficult for a reader to go through numerous such answers to gauge community opinion. As a result summarization becomes a prioritized task. However, there is a dearth of neural approaches for CQA summarization due to the lack of large scale annotated dataset. We create CQASUMM, the first annotated CQA summarization dataset by filtering the 4.4 million Yahoo! Answers L6 dataset. We sample threads where the best answer can double up as a reference and build hundred word summaries from them. We provide scripts1 to reconstruct the dataset and introduce the new task of Community Question Answering Summarization.\n Multi document summarization(MDS) has been widely studied using news corpora. However documents in CQA have higher variance and contradicting opinion. We compare the popular MDS techniques and evaluate their performance on our CQA corpora. We find that most MDS workflows are built for the entirely factual news corpora, whereas our corpus has a fair share of opinion based instances too. We therefore introduce OpinioSumm, a new MDS which outperforms the best baseline by 4.6% w.r.t ROUGE-1 score.",
"title": ""
},
{
"docid": "51a750fcc6cff4e51095aa80ce25c7d2",
"text": "We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family.",
"title": ""
},
{
"docid": "3770720cff3a36596df097835f4f10a9",
"text": "As mobile computing technologies have been more powerful and inclusive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many researches on MALL consider the emerging mobile technologies have considerable potentials for the effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research or review on mobile assisted language learning tends to focus on more detailed applications of newly emerging mobile technology, rather than has given a broader point focusing on types of mobile device itself. In this paper, I thus reviewed recent research and conference papers for the last decade, which utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.",
"title": ""
},
{
"docid": "b145483a8c91b846876f571f5a138f48",
"text": "Please cite this article in press as: N. Gra doi:10.1016/j.imavis.2008.04.014 This paper presents a novel approach for combining a set of registered images into a composite mosaic with no visible seams and minimal texture distortion. To promote execution speed in building large area mosaics, the mosaic space is divided into disjoint regions of image intersection based on a geometric criterion. Pair-wise image blending is performed independently in each region by means of watershed segmentation and graph cut optimization. A contribution of this work – use of watershed segmentation on image differences to find possible cuts over areas of low photometric difference – allows for searching over a much smaller set of watershed segments, instead of over the entire set of pixels in the intersection zone. Watershed transform seeks areas of low difference when creating boundaries of each segment. Constraining the overall cutting lines to be a sequence of watershed segment boundaries results in significant reduction of search space. The solution is found efficiently via graph cut, using a photometric criterion. The proposed method presents several advantages. The use of graph cuts over image pairs guarantees the globally optimal solution for each intersection region. The independence of such regions makes the algorithm suitable for parallel implementation. The separated use of the geometric and photometric criteria leads to reduced memory requirements and a compact storage of the input data. Finally, it allows the efficient creation of large mosaics, without user intervention. We illustrate the performance of the approach on image sequences with prominent 3-D content and moving objects. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2f2c36452ab45c4234904d9b11f28eb7",
"text": "Bitcoin is a potentially disruptive new crypto-currency based on a decentralized opensource protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing. The security analysis done by Bitcoin’s creator Satoshi Nakamoto [12] assumes that block propagation delays are negligible compared to the time between blocks—an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer [5], we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. The block generation rate can be securely increased to more than one block per second – a 600 fold speedup compared to today’s rate, while still allowing the network to processes many transactions per second.",
"title": ""
},
{
"docid": "2265121606a423d581ca696a9b7cee31",
"text": "Heterochromatin protein 1 (HP1) was first described in Drosophila melanogaster as a heterochromatin associated protein with dose-dependent effect on gene silencing. The HP1 family is evolutionarily highly conserved and there are multiple members within the same species. The multi-functionality of HP1 reflects its ability to interact with diverse nuclear proteins, ranging from histones and transcriptional co-repressors to cohesion and DNA replication factors. As its name suggests, HP1 is well-known as a silencing protein found at pericentromeres and telomeres. In contrast to previous views that heterochromatin is transcriptionally inactive; noncoding RNAs transcribed from heterochromatic DNA repeats regulates the assembly and function of heterochromatin ranging from fission yeast to animals. Moreover, more recent progress has shed light on the paradoxical properties of HP1 in the nucleus and has revealed, unexpectedly, its existence in the euchromatin. Therefore, HP1 proteins might participate in both transcription repression in heterochromatin and euchromatin.",
"title": ""
},
{
"docid": "6b01a80b6502cb818024e0ac3b00114b",
"text": "BACKGROUND\nArithmetical skills are essential to the effective exercise of citizenship in a numerate society. How these skills are acquired, or fail to be acquired, is of great importance not only to individual children but to the organisation of formal education and its role in society.\n\n\nMETHOD\nThe evidence on the normal and abnormal developmental progression of arithmetical abilities is reviewed; in particular, evidence for arithmetical ability arising from innate specific cognitive skills (innate numerosity) vs. general cognitive abilities (the Piagetian view) is compared.\n\n\nRESULTS\nThese include evidence from infancy research, neuropsychological studies of developmental dyscalculia, neuroimaging and genetics. The development of arithmetical abilities can be described in terms of the idea of numerosity -- the number of objects in a set. Early arithmetic is usually thought of as the effects on numerosity of operations on sets such as set union. The child's concept of numerosity appears to be innate, as infants, even in the first week of life, seem to discriminate visual arrays on the basis of numerosity. Development can be seen in terms of an increasingly sophisticated understanding of numerosity and its implications, and in increasing skill in manipulating numerosities. The impairment in the capacity to learn arithmetic -- dyscalculia -- can be interpreted in many cases as a deficit in the concept in the child's concept of numerosity. The neuroanatomical bases of arithmetical development and other outstanding issues are discussed.\n\n\nCONCLUSIONS\nThe evidence broadly supports the idea of an innate specific capacity for acquiring arithmetical skills, but the effects of the content of learning, and the timing of learning in the course of development, requires further investigation.",
"title": ""
},
{
"docid": "3ba011d181a4644c8667b139c63f50ff",
"text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.",
"title": ""
},
{
"docid": "0e54be77f69c6afbc83dfabc0b8b4178",
"text": "Spinal muscular atrophy (SMA) is a neurodegenerative disease characterized by loss of motor neurons in the anterior horn of the spinal cord and resultant weakness. The most common form of SMA, accounting for 95% of cases, is autosomal recessive proximal SMA associated with mutations in the survival of motor neurons (SMN1) gene. Relentless progress during the past 15 years in the understanding of the molecular genetics and pathophysiology of SMA has resulted in a unique opportunity for rational, effective therapeutic trials. The goal of SMA therapy is to increase the expression levels of the SMN protein in the correct cells at the right time. With this target in sight, investigators can now effectively screen potential therapies in vitro, test them in accurate, reliable animal models, move promising agents forward to clinical trials, and accurately diagnose patients at an early or presymptomatic stage of disease. A major challenge for the SMA community will be to prioritize and develop the most promising therapies in an efficient, timely, and safe manner with the guidance of the appropriate regulatory agencies. This review will take a historical perspective to highlight important milestones on the road to developing effective therapies for SMA.",
"title": ""
},
{
"docid": "6bbbddca9ba258afb25d6e8af9bfec82",
"text": "With the ever increasing popularity of electronic commerce, the evaluation of antecedents and of customer satisfaction have become very important for the cyber shopping store (CSS) and for researchers. The various models of customer satisfaction that researchers have provided so far are mostly based on the traditional business channels and thus may not be appropriate for CSSs. This research has employed case and survey methods to study the antecedents of customer satisfaction. Though case methods a research model with hypotheses is developed. And through survey methods, the relationships between antecedents and satisfaction are further examined and analyzed. We find five antecedents of customer satisfaction to be more appropriate for online shopping on the Internet. Among them homepage presentation is a new and unique antecedent which has not existed in traditional marketing.",
"title": ""
},
{
"docid": "df5ef1235844aa1593203f96cd2130bd",
"text": "It is generally well acknowledged that humans are capable of having a theory of mind (ToM) of others. We present here a model which borrows mechanisms from three dissenting explanations of how ToM develops and functions, and show that our model behaves in accordance with various ToM experiments (Wellman, Cross, & Watson, 2001; Leslie, German, & Polizzi, 2005).",
"title": ""
},
{
"docid": "ed23845ded235d204914bd1140f034c3",
"text": "We propose a general framework to learn deep generative models via Variational Gradient Flow (VGrow) on probability spaces. The evolving distribution that asymptotically converges to the target distribution is governed by a vector field, which is the negative gradient of the first variation of the f -divergence between them. We prove that the evolving distribution coincides with the pushforward distribution through the infinitesimal time composition of residual maps that are perturbations of the identity map along the vector field. The vector field depends on the density ratio of the pushforward distribution and the target distribution, which can be consistently learned from a binary classification problem. Connections of our proposed VGrow method with other popular methods, such as VAE, GAN and flow-based methods, have been established in this framework, gaining new insights of deep generative learning. We also evaluated several commonly used divergences, including KullbackLeibler, Jensen-Shannon, Jeffrey divergences as well as our newly discovered “logD” divergence which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with stateof-the-art GANs. ∗Yuling Jiao (yulingjiaomath@whu.edu.cn) †Can Yang (macyang@ust.hk) 1 ar X iv :1 90 1. 08 46 9v 2 [ cs .L G ] 7 F eb 2 01 9",
"title": ""
}
] |
scidocsrr
|
90ac93734d1255e3fed9569138c05db8
|
Generalizing the Convolution Operator to Extend CNNs to Irregular Domains
|
[
{
"docid": "be593352763133428b837f1c593f30cf",
"text": "Deep Learning’s recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.",
"title": ""
},
{
"docid": "645395d46f653358d942742711d50c0b",
"text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets",
"title": ""
}
] |
[
{
"docid": "0cd96187b257ee09060768650432fe6d",
"text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.",
"title": ""
},
{
"docid": "ee5b46719023b5dbae96997bbf9925b0",
"text": "The teaching of reading in different languages should be informed by an effective evidence base. Although most children will eventually become competent, indeed skilled, readers of their languages, the pre-reading (e.g. phonological awareness) and language skills that they bring to school may differ in systematic ways for different language environments. A thorough understanding of potential differences is required if literacy teaching is to be optimized in different languages. Here we propose a theoretical framework based on a psycholinguistic grain size approach to guide the collection of evidence in different countries. We argue that the development of reading depends on children's phonological awareness in all languages studied to date. However, we propose that because languages vary in the consistency with which phonology is represented in orthography, there are developmental differences in the grain size of lexical representations, and accompanying differences in developmental reading strategies across orthographies.",
"title": ""
},
{
"docid": "5387c752db7b4335a125df91372099b3",
"text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.",
"title": ""
},
{
"docid": "91599bb49aef3e65ee158ced65277d80",
"text": "We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.",
"title": ""
},
{
"docid": "947bb564a2a4207d33ca545d8194add4",
"text": "Classical theories of the firm assume access to reliable signals to measure the causal impact of choice variables on profit. For advertising expenditure we show, using twenty-five online field experiments (representing $2.8 million) with major U.S. retailers and brokerages, that this assumption typically does not hold. Statistical evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign—a “small” impact on a noisy dependent variable can generate positive returns. A concise statistical argument shows that the required sample size for an experiment to generate sufficiently informative confidence intervals is typically in excess of ten million person-weeks. This also implies that heterogeneity bias (or model misspecification) unaccounted for by observational methods only needs to explain a tiny fraction of the variation in sales to severely bias estimates. The weak informational feedback means most firms cannot even approach profit maximization.",
"title": ""
},
{
"docid": "553ec50cb948fb96d96b5481ada71399",
"text": "Enormous amount of online information, available in legal domain, has made legal text processing an important area of research. In this paper, we attempt to survey different text summarization techniques that have taken place in the recent past. We put special emphasis on the issue of legal text summarization, as it is one of the most important areas in legal domain. We start with general introduction to text summarization, briefly touch the recent advances in single and multi-document summarization, and then delve into extraction based legal text summarization. We discuss different datasets and metrics used in summarization and compare performances of different approaches, first in general and then focused to legal text. we also mention highlights of different summarization techniques. We briefly cover a few software tools used in legal text summarization. We finally conclude with some future research directions.",
"title": ""
},
{
"docid": "577b9ea82dd60b394ad3024452986d96",
"text": "Financial fraud is an issue with far reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods involving manual detection are not only time consuming, expensive and inaccurate, but in the age of big data they are also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive review of financial fraud detection research using such data mining methods, with a particular focus on computational intelligence (CI)-based techniques. Over fifty scientific literature, primarily spanning the period 2004-2014, were analysed in this study; literature that reported empirical studies focusing specifically on CI-based financial fraud detection were considered in particular. Research gap was identified as none of the existing review articles addresses the association among fraud types, CIbased detection algorithms and their performance, as reported in the literature. We have presented a comprehensive classification as well as analysis of existing fraud detection literature based on key aspects such as detection algorithm used, fraud type investigated, and performance of the detection methods for specific financial fraud types. Some of the key issues and challenges associated with the current practices and potential future direction of research have also",
"title": ""
},
{
"docid": "338d3b05db192186bb6caf6f36904dd0",
"text": "The threat of malicious insiders to organizations is persistent and increasing. We examine 15 real cases of insider threat sabotage of IT systems to identify several key points in the attack time-line, such as when the insider clearly became disgruntled, began attack preparations, and carried out the attack. We also determine when the attack stopped, when it was detected, and when action was taken on the insider. We found that 7 of the insiders we studied clearly became disgruntled more than 28 days prior to attack, but 9 did not carry out malicious acts until less than a day prior to attack. Of the 15 attacks, 8 ended within a day, 12 were detected within a week, and in 10 cases action was taken on the insider within a month. This exercise is a proof-of-concept for future work on larger data sets, and in this paper we detail our study methods and results, discuss challenges we faced, and identify potential new research directions.",
"title": ""
},
{
"docid": "3256b2050c603ca16659384a0e98a22c",
"text": "In this paper, we propose a Hough transform-based method to identify low-contrast defects in unevenly illuminated images, and especially focus on the inspection of mura defects in liquid crystal display (LCD) panels. The proposed method works on 1-D gray-level profiles in the horizontal and vertical directions of the surface image. A point distinctly deviated from the ideal line of a profile can be identified as a defect one. A 1-D gray-level profile in the unevenly illuminated image results in a nonstationary line signal. The most commonly used technique for straight line detection in a noisy image is Hough transform (HT). The standard HT requires a sufficient number of points lie exactly on the same straight line at a given parameter resolution so that the accumulator will show a distinct peak in the parameter space. It fails to detect a line in a nonstationary signal. In the proposed HT scheme, the points that contribute to the vote do not have to lie on a line. Instead, a distance tolerance to the line sought is first given. Any point with the distance to the line falls within the tolerance will be accumulated by taking the distance as the voting weight. A fast search procedure to tighten the possible ranges of line parameters is also proposed for mura detection in LCD images.",
"title": ""
},
{
"docid": "e775fbbad557e2335268111ab7fc1875",
"text": "In recent times the rate at which information is being processed and shared through the internet has tremendously increased. Internet users are in need of systems and tools that will help them manage this information overload. Search engines and recommendation systems have been recently adopted to help solve this problem. The aim of this research is to model a spontaneous research paper recommender system that recommends serendipitous research papers from two large normally mismatched information spaces or domains using BisoNets. Set and graph theory methods were employed to model the problem, whereas text mining methodologies were used to develop nodes and links of the BisoNets. Nodes were constructed from keywords, while links between nodes were established through weighting that was determined from the co-occurrence of corresponding keywords in the same title and domain. Preliminary results from the word clouds indicates that there is no obvious relationship between the two domains. The strongest links in the established information networks can be exploited to display associations that can be discovered between the two matrices. Research paper recommender systems exploit these latent relationships to recommend serendipitous articles when Bisociative Knowledge Discovery techniques and methodologies are utilized appropriately.",
"title": ""
},
{
"docid": "9d849042d1775cf9008678f98f1a3452",
"text": "Nonuniform sampling can be utilized to achieve certain desirable results. Periodic nonuniform sampling can decrease the required sampling rate for signals. Random sampling can be used as a digital alias-free signal processing method in analog-to-digital conversion. In this paper, we first present the fractional spectrum estimation of signals that are bandlimited in the fractional Fourier domain based on the general periodic random sampling approach. To show the estimation effect, the unbiasedness, the variance, and the optimal estimation condition are analyzed. The reconstruction of the fractional spectrum from the periodic random samples is also proposed. Second, the effects of sampling jitters and observation errors on the performance of the fractional spectrum estimation are analyzed, where the new defined fractional characteristic function is used to compensate the estimation bias from sampling jitters. Furthermore, we investigate the fractional spectral analysis from two widely used random sampling schemes, i.e., simple random sampling and stratified random sampling. Finally, all of the analysis results are applied and verified using a radar signal processing system.",
"title": ""
},
{
"docid": "5ccda95046b0e5d1cfc345011b1e350d",
"text": "Considerable emphasis is currently placed on reducing healthcare-associated infection through improving hand hygiene compliance among healthcare professionals. There is also increasing discussion in the lay media of perceived poor hand hygiene compliance among healthcare staff. Our aim was to report the outcomes of a systematic search for peer-reviewed, published studies - especially clinical trials - that focused on hand hygiene compliance among healthcare professionals. Literature published between December 2009, after publication of the World Health Organization (WHO) hand hygiene guidelines, and February 2014, which was indexed in PubMed and CINAHL on the topic of hand hygiene compliance, was searched. Following examination of relevance and methodology of the 57 publications initially retrieved, 16 clinical trials were finally included in the review. The majority of studies were conducted in the USA and Europe. The intensive care unit emerged as the predominant focus of studies followed by facilities for care of the elderly. The category of healthcare worker most often the focus of the research was the nurse, followed by the healthcare assistant and the doctor. The unit of analysis reported for hand hygiene compliance was 'hand hygiene opportunity'; four studies adopted the 'my five moments for hand hygiene' framework, as set out in the WHO guidelines, whereas other papers focused on unique multimodal strategies of varying design. We concluded that adopting a multimodal approach to hand hygiene improvement intervention strategies, whether guided by the WHO framework or by another tested multimodal framework, results in moderate improvements in hand hygiene compliance.",
"title": ""
},
{
"docid": "4fc67f5a4616db0906b943d7f13c856d",
"text": "Overview. A blockchain is best understood in the model of state-machine replication [8], where a service maintains some state and clients invoke operations that transform the state and generate outputs. A blockchain emulates a “trusted” computing service through a distributed protocol, run by nodes connected over the Internet. The service represents or creates an asset, in which all nodes have some stake. The nodes share the common goal of running the service but do not necessarily trust each other for more. In a “permissionless” blockchain such as the one underlying the Bitcoin cryptocurrency, anyone can operate a node and participate through spending CPU cycles and demonstrating a “proof-of-work.” On the other hand, blockchains in the “permissioned” model control who participates in validation and in the protocol; these nodes typically have established identities and form a consortium. A report of Swanson compares the two models [9].",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
},
{
"docid": "27464fdcd9a56975bf381773fd4da76d",
"text": "Although evidence with respect to its prevalence is mixed, it is clear that fathers perpetrate a serious proportion of filicide. There also seems to be a consensus that paternal filicide has attracted less research attention than its maternal counterpart and is therefore less well understood. National registries are a very rich source of data, but they generally provide limited information about the perpetrator as psychiatric, psychological and behavioral data are often lacking. This paper presents a fully documented case of a paternal filicide. Noteworthy is that two motives were present: spousal revenge as well as altruism. The choice of the victim was in line with emerging evidence indicating that children with disabilities in general and with autism in particular are frequent victims of filicide-suicide. Finally, a schizoid personality disorder was diagnosed. Although research is quite scarce on that matter, some research outcomes have showed an association between schizoid personality disorder and homicide and violence.",
"title": ""
},
{
"docid": "7eac260700c56178533ec687159ac244",
"text": "Chat robot, a computer program that simulates human conversation, or chat, through artificial intelligence an intelligence chat bot will be used to give information or answers to any question asked by user related to bank. It is more like a virtual assistant, people feel like they are talking with real person. They speak the same language we do, can answer questions. In banks, at user care centres and enquiry desks, human is insufficient and usually takes long time to process the single request which results in wastage of time and also reduce quality of user service. The primary goal of this chat bot is user can interact with mentioning their queries in plain English and the chat bot can resolve their queries with appropriate response in return The proposed system would help duplicate the user utility experience with one difference that employee and yet get the queries attended and resolved. It can extend daily life, by providing solutions to help desks, telephone answering systems, user care centers. This paper defines the dataset that we have prepared from FAQs of bank websites, architecture and methodology used for developing such chatbot. Also this paper discusses the comparison of seven ML classification algorithm used for getting the class of input to chat bot.",
"title": ""
},
{
"docid": "9c09cf2c1fd62e7d24f472e03b615017",
"text": "Summarization is the process of reducing a text document to create a summary that retains the most important points of the original document. Extractive summarizers work on the given text to extract sentences that best convey the message hidden in the text. Most extractive summarization techniques revolve around the concept of finding keywords and extracting sentences that have more keywords than the rest. Keyword extraction usually is done by extracting relevant words having a higher frequency than others, with stress on important ones'. Manual extraction or annotation of keywords is a tedious process brimming with errors involving lots of manual effort and time. In this paper, we proposed an algorithm to extract keyword automatically for text summarization in e-newspaper datasets. The proposed algorithm is compared with the experimental result of articles having the similar title in four different e-Newspapers to check the similarity and consistency in summarized results.",
"title": ""
},
{
"docid": "9cf81f7fc9fdfcf5718aba0a67b89a45",
"text": "Many modern games provide environments in which agents perform decision making at several levels of granularity. In the domain of real-time strategy games, an effective agent must make high-level strategic decisions while simultaneously controlling individual units in battle. We advocate reactive planning as a powerful technique for building multi-scale game AI and demonstrate that it enables the specification of complex, real-time agents in a unified agent architecture. We present several idioms used to enable authoring of an agent that concurrently pursues strategic and tactical goals, and an agent for playing the real-time strategy game StarCraft that uses these design patterns.",
"title": ""
},
{
"docid": "ce0f21b03d669b72dd954352e2c35ab1",
"text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.",
"title": ""
},
{
"docid": "9d7a67f2cd12a6fd033ad102fb9c526e",
"text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.",
"title": ""
}
] |
scidocsrr
|
65e66ad82fb578764ca436453dbc2756
|
User acceptance of a G2B system: a case of electronic procurement system in Malaysia
|
[
{
"docid": "a4197ab8a70142ac331599c506996bc9",
"text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.",
"title": ""
},
{
"docid": "669fcb6f51aa8883d037e1de18b1513f",
"text": "Purpose – The purpose of this paper is to present a multi-faceted summary and classification of the existing literature in the field of quality of service for e-government and outline the main components of a quality model for e-government services. Design/methodology/approach – Starting with fundamental quality principles the paper examines and analyzes 36 different quality approaches concerning public sector services, e-services in general and more specifically e-government services. Based on the dimensions measured by each approach the paper classifies the approaches and concludes on the basic factors needed for the development of a complete quality model of e-government services. Findings – Based on the classification of literature approaches, the paper provides information about the main components of a quality model that may be used for the continuous monitoring and measuring of public e-services’ quality. The classification forms the basis for answering questions that must be addressed by the quality model, such as: What to assess?; Who will perform the assessment? and How the assessment will be done? Practical implications – This model can be used by the management of public organizations in order to measure and monitor the quality of e-services delivered to citizens. Originality/value – The results of the work presented in this paper form the basis for the development of a quality model for e-government services.",
"title": ""
}
] |
[
{
"docid": "0ccf6d97ff8a6b664a73056ec8e39dc7",
"text": "1. Resilient healthcare This integrative review focuses on the methodological strategies employed by studies on resilient healthcare. Resilience engineering (RE), which involves the study of coping with complexity (Woods and Hollnagel, 2006) in modern socio-technical systems (Bergström et al., 2015); emerged in about 2000. The RE discipline is quickly developing, and it has been applied to healthcare, aviation, the petrochemical industry, nuclear power plants, railways, manufacturing, natural disasters and other fields (Righi et al., 2015). The term ‘resilient healthcare’ (RHC) refers to the application of the concepts and methods of RE in the healthcare field, specifically regarding patient safety (Hollnagel et al., 2013a). Instead of the traditional risk management approach based on retrospective analyses of errors, RHC focuses on ‘everyday clinical work’, specifically on the ways it unfolds in practice (Braithwaite et al., 2017). Wears et al. (2015) defined RHC as follows. The ability of the health care system (a clinic, a ward, a hospital, a county) to adjust its functioning prior to, during, or following events (changes, disturbances or opportunities), and thereby sustain required operations under both expected and unexpected conditions. (p. xxvii) After more than a decade of theoretical development in the field of resilience, scholars are beginning to identify its methodological challenges (Woods, 2015; Nemeth and Herrera, 2015). The lack of welldefined constructs to conceptualize resilience challenges the ability to operationalize those constructs in empirical research (Righi et al., 2015; Wiig and Fahlbruch, forthcoming). Further, studying complexity requires challenging methodological designs to obtain evidence about the tested constructs to inform and further develop theory (Bergström and Dekker, 2014). It is imperative to gather emerging knowledge on applied methodology in empirical RHC research to map and discuss the methodological strategies in the healthcare domain. The insights gained might create and refine methodological designs to enable further development of RHC concepts and theory. This study aimed to describe and synthesize the methodological strategies currently applied in https://doi.org/10.1016/j.ssci.2018.08.025 Received 10 October 2016; Received in revised form 13 August 2018; Accepted 27 August 2018 ⁎ Corresponding author. E-mail addresses: siv.hilde.berg@sus.no (S.H. Berg), Kristin.akerjordet@uis.no (K. Akerjordet), mirjam.ekstedt@lnu.se (M. Ekstedt), karina.aase@uis.no (K. Aase). Safety Science 110 (2018) 300–312 Available online 05 September 2018 0925-7535/ © 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/BY-NC-ND/4.0/). T empirical RHC research in terms of the empirical fields, applied research designs, methods, analytical strategies, main topics and data collection sources at different systemic levels, and to assess the quality of those studies. We argue that one implication of studying sociotechnical systems is that multiple levels in a given system must be addressed, as proposed by, for example, Rasmussen (1997). As such, this study synthesized the ways that RHC studies have approached empirical data at various systemic levels. 2. Methodology in resilient healthcare research ‘Research methodology’ is a strategy or plan of action that shapes the choices and uses of various methods and links them to desired outcomes (Crotty, 1998). 
This study broadly used the term ‘methodological strategy’ to denote an observed study’s overall research design, data collection sources, data collection methods and analytical methods at different systemic levels. The methodological issues discussed in the RHC literature to date have concerned the methods used to study everyday clinical practice, healthcare complexity and the operationalization of the constructs measuring resilience. 2.1. Methods of studying healthcare complexity RE research is characterized by its study of complexities. In a review of the rationale behind resilience research, Bergström et al. (2015) found that RE researchers typically justified their research by referring to the complexity of modern socio-technical systems that makes them inherently risky. Additionally, in the healthcare field, references are made to the complex adaptive system (CAS) perspective (Braithwaite et al., 2013). CAS emerged from complexity theory, and it takes a dynamic approach to human and nonhuman agents (Urry, 2003). Healthcare is part of a complex socio-technical system and an example of a CAS comprising professionals, patients, managers, policymakers and technologies, all of which interact with and rely on trade-offs and adjustments to succeed in everyday clinical work (Braithwaite et al., 2013). Under complexity theory, complex systems are viewed as open systems that interact with their environments, implying a need to understand the systems’ environments before understanding the systems. Because these environments are complex, no standard methodology can provide a complete understanding (Bergström and Dekker, 2014), and the opportunities for experimental research are limited. Controlled studies might not be able to identify the complex interconnections and multiple variables that influence care; thus, non-linear methods are necessary to describe and understand those systems. Consequently, research on complexity imposes methodological challenges related to the development of valid evidence (Braithwaite et al., 2013). It has been argued that triangulation is necessary to study complex work settings in order to reveal actual phenomena and minimize bias leading to misinterpretation (Nemeth et al., 2011). Methodological triangulation has been suggested, as well as data triangulation, as a strategic way to increase the internal and external validity of RE/RHC research (Nemeth et al., 2011; Mendonca, 2008). Data triangulation involves collecting data from various sources, such as reports, policy documents, multiple professional groups and patient feedback, whereas methodological triangulation involves combining different qualitative methods or mixing qualitative and quantitative methods. Multiple methods have been suggested for research on everyday clinical practice and healthcare complexity. Hollnagel (2014) suggested qualitative methods, such as qualitative interviews, field observations and organizational development techniques (e.g. appreciative inquiry and cooperative inquiry). Nemeth and Herrera (2015) proposed observation in actual settings as a core value of the RE field of practice. Drawing on the methods of cognitive system engineering, Nemeth et al. (2011) described the uses of cognitive task analysis (CTA) to study resilience. CTA comprises numerous methods, one of which is the critical decision method (CDM). CDM is a retrospective interview in which subjects are asked about critical events and decisions. 
Other proposed methods for studying complex work settings were work domain analysis (WDA), process tracing, artefact analysis and rapid prototyping. System modelling, using methods such as trend analysis, cluster analysis, social network analysis and log linear modelling, has been proposed as a way to study resilience from a socio-technical/CAS perspective (Braithwaite et al., 2013; Anderson et al., 2013). The functional resonance analysis method (FRAM) has been employed to study interactions and dependencies as they develop in specific situations. FRAM is presented as a way to study how complex and dynamic sociotechnical systems work (Hollnagel, 2012). In addition, Leveson et al. (2006) suggested STAMP, a model of accident causation based on systems theory, as a method to analyse resilience. 2.2. Operationalization of resilience A vast amount of the RE literature has been devoted to developing theories on resilience, emphasizing that the domain is in a theory development stage (Righi et al., 2015). This process of theory development is reflected in the diverse definitions and indicators of resilience proposed over the past decade e.g. 3, (Woods, 2006, 2011; Wreathall, 2006). Numerous constructs have been developed, such as resilient abilities (Woods, 2011; Hollnagel, 2008, 2010; Nemeth et al., 2008; Hollnagel et al., 2013b), Safety-II (Hollnagel, 2014), Work-as-done (WAD) and Work-as-imagined (WAI) (Hollnagel et al., 2015), and performance variability (Hollnagel, 2014). The operationalization of these constructs has been a topic of discussion. According to Westrum (2013), one challenge to determining measures of resilience in healthcare relates to the characteristics of resilience as a family of related ideas rather than as a single construct. The applied definitions of ‘resilience’ in RE research have focused on a given system’s adaptive capacities and its abilities to adopt or absorb disturbing conditions. This conceptual understanding of resilience has been applied to RHC [6, p. xxvii]. By understanding resilience as a ‘system’s ability’, the healthcare system is perceived as a separate ontological category. The system is regarded as a unit that might have individual goals, actions or abilities not necessarily shared by its members. Therefore, RHC is greater than the sum of its members’ individual actions, which is a perspective found in methodological holism (Ylikoski, 2012). The challenge is to operationalize the study of ‘the system as a whole’. Some scholars have advocated on behalf of locating the empirical basis of resilience by studying individual performances and aggregating those data to develop a theory of resilience (Mendonca, 2008; Furniss et al., 2011). This approach uses the strategy of finding the properties of the whole (the healthcare system) within the parts at the micro level, which is found in methodological individualism. The WAD and performance variability constructs bring resilience closer to an empirical ground by fr",
"title": ""
},
{
"docid": "a86114aeee4c0bc1d6c9a761b50217d4",
"text": "OBJECTIVE\nThe purpose of this study was to investigate the effect of antidepressant treatment on hippocampal volumes in patients with major depression.\n\n\nMETHOD\nFor 38 female outpatients, the total time each had been in a depressive episode was divided into days during which the patient was receiving antidepressant medication and days during which no antidepressant treatment was received. Hippocampal gray matter volumes were determined by high resolution magnetic resonance imaging and unbiased stereological measurement.\n\n\nRESULTS\nLonger durations during which depressive episodes went untreated with antidepressant medication were associated with reductions in hippocampal volume. There was no significant relationship between hippocampal volume loss and time depressed while taking antidepressant medication or with lifetime exposure to antidepressants.\n\n\nCONCLUSIONS\nAntidepressants may have a neuroprotective effect during depression.",
"title": ""
},
{
"docid": "f033c98f752c8484dc616425ebb7ce5b",
"text": "Ethnography is the study of social interactions, behaviours, and perceptions that occur within groups, teams, organisations, and communities. Its roots canbe traced back to anthropological studies of small, rural (andoften remote) societies thatwereundertaken in the early 1900s, when researchers such as Bronislaw Malinowski and Alfred Radcliffe-Brown participated in these societies over long periods and documented their social arrangements and belief systems. This approach was later adopted by members of the Chicago School of Sociology (for example, Everett Hughes, Robert Park, Louis Wirth) and applied to a variety of urban settings in their studies of social life. The central aim of ethnography is to provide rich, holistic insights into people’s views and actions, as well as the nature (that is, sights, sounds) of the location they inhabit, through the collection of detailed observations and interviews. As Hammersley states, “The task [of ethnographers] is to document the culture, the perspectives and practices, of the people in these settings.The aim is to ‘get inside’ theway each groupof people sees theworld.” Box 1 outlines the key features of ethnographic research. Examples of ethnographic researchwithin thehealth services literature include Strauss’s study of achieving and maintaining order between managers, clinicians, and patients within psychiatric hospital settings; Taxis and Barber’s exploration of intravenous medication errors in acute care hospitals; Costello’s examination of death and dying in elderly care wards; and Østerlund’s work on doctors’ and nurses’ use of traditional and digital information systems in their clinical communications. Becker and colleagues’ Boys in White, an ethnographic study of medical education in the late 1950s, remains a classic in this field. Newer developments in ethnographic inquiry include auto-ethnography, in which researchers’ own thoughts andperspectives fromtheir social interactions form the central element of a study; meta-ethnography, in which qualitative research texts are analysed and synthesised to empirically create new insights and knowledge; and online (or virtual) ethnography, which extends traditional notions of ethnographic study from situated observation and face to face researcher-participant interaction to technologically mediated interactions in online networks and communities.",
"title": ""
},
{
"docid": "cc12a6ccdfbe2242eb4f9f72d5a17cd2",
"text": "Software is everywhere, from mission critical systems such as industrial power stations, pacemakers and even household appliances. This growing dependence on technology and the increasing complexity software has serious security implications as it means we are potentially surrounded by software that contain exploitable vulnerabilities. These challenges have made binary analysis an important area of research in computer science and has emphasized the need for building automated analysis systems that can operate at scale, speed and efficacy; all while performing with the skill of a human expert. Though great progress has been made in this area of research, there remains limitations and open challenges to be addressed. Recognizing this need, DARPA sponsored the Cyber Grand Challenge (CGC), a competition to showcase the current state of the art in systems that perform; automated vulnerability detection, exploit generation and software patching. This paper is a survey of the vulnerability detection and exploit generation techniques, underlying technologies and related works of two of the winning systems Mayhem and Mechanical Phish. Keywords—Cyber reasoning systems, automated binary analysis, automated exploit generation, dynamic symbolic execution, fuzzing",
"title": ""
},
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
},
{
"docid": "3af338a01d1419189b7706375feec0c2",
"text": "Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. It also describes the propulsion theory of creative contributions. Finally, it draws",
"title": ""
},
{
"docid": "1657df28bba01b18fb26bb8c823ad4b4",
"text": "Come with us to read a new book that is coming recently. Yeah, this is a new coming book that many people really want to read will you be one of them? Of course, you should be. It will not make you feel so hard to enjoy your life. Even some people think that reading is a hard to do, you must be sure that you can do it. Hard will be felt when you have no ideas about what kind of book to read. Or sometimes, your reading material is not interesting enough.",
"title": ""
},
{
"docid": "a9a7916c7cb3d2c56457b0cc5cb0471c",
"text": "In this paper, we propose a novel approach to integrating inertial sensor data into a pose-graph free dense mapping algorithm that we call GravityFusion. A range of dense mapping algorithms have recently been proposed, though few integrate inertial sensing. We build on ElasticFusion, a particularly elegant approach that fuses color and depth information directly into small surface patches called surfels. Traditional inertial integration happens at the level of camera motion, however, a pose graph is not available here. Instead, we present a novel approach that incorporates the gravity measurements directly into the map: Each surfel is annotated by a gravity measurement, and that measurement is updated with each new observation of the surfel. We use mesh deformation, the same mechanism used for loop closure in ElasticFusion, to enforce a consistent gravity direction among all the surfels. This eliminates drift in two degrees of freedom, avoiding the typical curving of maps that are particularly pronounced in long hallways, as we qualitatively show in the experimental evaluation.",
"title": ""
},
{
"docid": "585c589cdab52eaa63186a70ac81742d",
"text": "BACKGROUND\nThere has been a rapid increase in the use of technology-based activity trackers to promote behavior change. However, little is known about how individuals use these trackers on a day-to-day basis or how tracker use relates to increasing physical activity.\n\n\nOBJECTIVE\nThe aims were to use minute level data collected from a Fitbit tracker throughout a physical activity intervention to examine patterns of Fitbit use and activity and their relationships with success in the intervention based on ActiGraph-measured moderate to vigorous physical activity (MVPA).\n\n\nMETHODS\nParticipants included 42 female breast cancer survivors randomized to the physical activity intervention arm of a 12-week randomized controlled trial. The Fitbit One was worn daily throughout the 12-week intervention. ActiGraph GT3X+ accelerometer was worn for 7 days at baseline (prerandomization) and end of intervention (week 12). Self-reported frequency of looking at activity data on the Fitbit tracker and app or website was collected at week 12.\n\n\nRESULTS\nAdherence to wearing the Fitbit was high and stable, with a mean of 88.13% of valid days over 12 weeks (SD 14.49%). Greater adherence to wearing the Fitbit was associated with greater increases in ActiGraph-measured MVPA (binteraction=0.35, P<.001). Participants averaged 182.6 minutes/week (SD 143.9) of MVPA on the Fitbit, with significant variation in MVPA over the 12 weeks (F=1.91, P=.04). The majority (68%, 27/40) of participants reported looking at their tracker or looking at the Fitbit app or website once a day or more. Changes in Actigraph-measured MVPA were associated with frequency of looking at one's data on the tracker (b=-1.36, P=.07) but not significantly associated with frequency of looking at one's data on the app or website (P=.36).\n\n\nCONCLUSIONS\nThis is one of the first studies to explore the relationship between use of a commercially available activity tracker and success in a physical activity intervention. A deeper understanding of how individuals engage with technology-based trackers may enable us to more effectively use these types of trackers to promote behavior change.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02332876; https://clinicaltrials.gov/ct2/show/NCT02332876?term=NCT02332876 &rank=1 (Archived by WebCite at http://www.webcitation.org/6wplEeg8i).",
"title": ""
},
{
"docid": "2ce789863ff0d3359f741adddb09b9f1",
"text": "The largest source of sound events is web videos. Most videos lack sound event labels at segment level, however, a significant number of them do respond to text queries, from a match found using metadata by search engines. In this paper we explore the extent to which a search query can be used as the true label for detection of sound events in videos. We present a framework for large-scale sound event recognition on web videos. The framework crawls videos using search queries corresponding to 78 sound event labels drawn from three datasets. The datasets are used to train three classifiers, and we obtain a prediction on 3.7 million web video segments. We evaluated performance using the search query as true label and compare it with human labeling. Both types of ground truth exhibited close performance, to within 10%, and similar performance trend with increasing number of evaluated segments. Hence, our experiments show potential for using search query as a preliminary true label for sound event recognition in web videos.",
"title": ""
},
{
"docid": "38f19c7087d5529e2f6b84beca42de3a",
"text": "We investigate the design challenges of constructing effective and efficient neural sequence labeling systems, by reproducing twelve neural sequence labeling models, which include most of the state-of-the-art structures, and conduct a systematic model comparison on three benchmarks (i.e. NER, Chunking, and POS tagging). Misconceptions and inconsistent conclusions in existing literature are examined and clarified under statistical experiments. In the comparison and analysis process, we reach several practical conclusions which can be useful to practitioners.",
"title": ""
},
{
"docid": "7fd21ee95850fec1f1e00b766eebbc06",
"text": "HP’s StoreAll with Express Query is a scalable commercial file archiving product that offers sophisticated file metadata management and search capabilities [3]. A new REST API enables fast, efficient searching to find all files that meet a given set of metadata criteria and the ability to tag files with custom metadata fields. The product brings together two significant systems: a scale out file system and a metadata database based on LazyBase [10]. In designing and building the combined product, we identified several real-world issues in using a pipelined database system in a distributed environment, and overcame several interesting design challenges that were not contemplated by the original research prototype. This paper highlights our experiences.",
"title": ""
},
{
"docid": "3d9f1288235847f6c4e9b2c0966c51e9",
"text": "Over the past decade, many laboratories have begun to explore brain-computer interface (BCI) technology as a radically new communication option for those with neuromuscular impairments that prevent them from using conventional augmentative communication methods. BCI's provide these users with communication channels that do not depend on peripheral nerves and muscles. This article summarizes the first international meeting devoted to BCI research and development. Current BCI's use electroencephalographic (EEG) activity recorded at the scalp or single-unit activity recorded from within cortex to control cursor movement, select letters or icons, or operate a neuroprosthesis. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI which recognizes the commands contained in the input and expresses them in device control. Current BCI's have maximum information transfer rates of 5-25 b/min. Achievement of greater speed and accuracy depends on improvements in signal processing, translation algorithms, and user training. These improvements depend on increased interdisciplinary cooperation between neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective methods for evaluating alternative methods. The practical use of BCI technology depends on the development of appropriate applications, identification of appropriate user groups, and careful attention to the needs and desires of individual users. BCI research and development will also benefit from greater emphasis on peer-reviewed publications, and from adoption of standard venues for presentations and discussion.",
"title": ""
},
{
"docid": "1d0a84f55e336175fa60d3fa9eec9664",
"text": "In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80% corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.",
"title": ""
},
{
"docid": "1590742097219610170bd62eb3799590",
"text": "In this paper, we develop a vision-based system that employs a combined RGB and depth descriptor to classify hand gestures. The method is studied for a human-machine interface application in the car. Two interconnected modules are employed: one that detects a hand in the region of interaction and performs user classification, and another that performs gesture recognition. The feasibility of the system is demonstrated using a challenging RGBD hand gesture data set collected under settings of common illumination variation and occlusion.",
"title": ""
},
{
"docid": "36867b8478a8bd6be79902efd5e9d929",
"text": "Most state-of-the-art commercial storage virtualization systems focus only on one particular storage attribute, capacity. This paper describes the design, implementation and evaluation of a multi-dimensional storage virtualization system called Stonehenge, which is able to virtualize a cluster-based physical storage system along multiple dimensions, including bandwidth, capacity, and latency. As a result, Stonehenge is able to multiplex multiple virtual disks, each with a distinct bandwidth, capacity, and latency attribute, on a single physical storage system as if they are separate physical disks. A key enabling technology for Stonehenge is an efficiency-aware real-time disk scheduling algorithm called dual-queue disk scheduling, which maximizes disk utilization efficiency while providing Quality of Service (QoS) guarantees. To optimize disk utilization efficiency, Stonehenge exploits run-time measurements extensively, for admission control, computing latency-derived bandwidth requirement, and predicting disk service time.",
"title": ""
},
{
"docid": "c743c63848ca96f0eb47090ea648d897",
"text": "Cyber-Physical Systems (CPSs) are the future generation of highly connected embedded systems having applications in diverse domains including Oil and Gas. Employing Product Line Engineering (PLE) is believed to bring potential benefits with respect to reduced cost, higher productivity, higher quality, and faster time-to-market. However, relatively few industrial field studies are reported regarding the application of PLE to develop large-scale systems, and more specifically CPSs. In this paper, we report about our experiences and insights gained from investigating the application of model-based PLE at a large international organization developing subsea production systems (typical CPSs) to manage the exploitation of oil and gas production fields. We report in this paper 1) how two systematic domain analyses (on requirements engineering and product configuration/derivation) were conducted to elicit CPS PLE requirements and challenges, 2) key results of the domain analysis (commonly observed in other domains), and 3) our initial experience of developing and applying two Model Based System Engineering (MBSE) PLE solution to address some of the requirements and challenges elicited during the domain analyses.",
"title": ""
},
{
"docid": "cbf10563c5eb251f765b93be554b7439",
"text": "BACKGROUND\nAlthough fine-needle aspiration (FNA) is a safe and accurate diagnostic procedure for assessing thyroid nodules, it has limitations in diagnosing follicular neoplasms due to its relatively high false-positive rate. The purpose of the present study was to evaluate the diagnostic role of core-needle biopsy (CNB) for thyroid nodules with follicular neoplasm (FN) in comparison with FNA.\n\n\nMETHODS\nA series of 107 patients (24 men, 83 women; mean age, 47.4 years) from 231 FNAs and 107 patients (29 men, 78 women; mean age, 46.3 years) from 186 CNBs with FN readings, all of whom underwent surgery, from October 2008 to December 2013 were retrospectively analyzed. The false-positive rate, unnecessary surgery rate, and malignancy rate for the FNA and CNB patients according to the final diagnosis following surgery were evaluated.\n\n\nRESULTS\nThe CNB showed a significantly lower false-positive and unnecessary surgery rate than the FNA (4.7% versus 30.8%, 3.7% versus 26.2%, p < 0.001, respectively). In the FNA group, 33 patients (30.8%) had non-neoplasms, including nodular hyperplasia (n = 32) and chronic lymphocytic thyroiditis (n = 1). In the CNB group, 5 patients (4.7%) had non-neoplasms, all of which were nodular hyperplasia. Moreover, the CNB group showed a significantly higher malignancy rate than FNA (57.9% versus 28%, p < 0.001).\n\n\nCONCLUSIONS\nCNB showed a significantly lower false-positive rate and a higher malignancy rate than FNA in diagnosing FN. Therefore, CNB could minimize unnecessary surgery and provide diagnostic confidence when managing patients with FN to perform surgery.",
"title": ""
},
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
},
{
"docid": "59e49a798fed8479df98435003f4647e",
"text": "The recent advancement of motion recognition using Microsoft Kinect stimulates many new ideas in motion capture and virtual reality applications. Utilizing a pattern recognition algorithm, Kinect can determine the positions of different body parts from the user. However, due to the use of a single-depth camera, recognition accuracy drops significantly when the parts are occluded. This hugely limits the usability of applications that involve interaction with external objects, such as sport training or exercising systems. The problem becomes more critical when Kinect incorrectly perceives body parts. This is because applications have limited information about the recognition correctness, and using those parts to synthesize body postures would result in serious visual artifacts. In this paper, we propose a new method to reconstruct valid movement from incomplete and noisy postures captured by Kinect. We first design a set of measurements that objectively evaluates the degree of reliability on each tracked body part. By incorporating the reliability estimation into a motion database query during run time, we obtain a set of similar postures that are kinematically valid. These postures are used to construct a latent space, which is known as the natural posture space in our system, with local principle component analysis. We finally apply frame-based optimization in the space to synthesize a new posture that closely resembles the true user posture while satisfying kinematic constraints. Experimental results show that our method can significantly improve the quality of the recognized posture under severely occluded environments, such as a person exercising with a basketball or moving in a small room.",
"title": ""
}
] |
scidocsrr
|
ade1510581160486c98f3131a7f24f81
|
Theia: A Fast and Scalable Structure-from-Motion Library
|
[
{
"docid": "bf1bd9bdbe8e4a93e814ea9dc91e6eb3",
"text": "A new robust matching method is proposed. The progressive sample consensus (PROSAC) algorithm exploits the linear ordering defined on the set of correspondences by a similarity function used in establishing tentative correspondences. Unlike RANSAC, which treats all correspondences equally and draws random samples uniformly from the full set, PROSAC samples are drawn from progressively larger sets of top-ranked correspondences. Under the mild assumption that the similarity measure predicts correctness of a match better than random guessing, we show that PROSAC achieves large computational savings. Experiments demonstrate it is often significantly faster (up to more than hundred times) than RANSAC. For the derived size of the sampled set of correspondences as a function of the number of samples already drawn, PROSAC converges towards RANSAC in the worst case. The power of the method is demonstrated on wide-baseline matching problems.",
"title": ""
},
{
"docid": "c1797ddf6dd23374e17490d09d6e70b2",
"text": "This paper presents a general solution to the determination of the pose of a perspective camera with unknown focal length from images of four 3D reference points. Our problem is a generalization of the P3P and P4P problems previously developed for fully calibrated cameras. Given four 2D-to-3D correspondences, we estimate camera position, orientation and recover the camera focal length. We formulate the problem and provide a minimal solution from four points by solving a system of algebraic equations. We compare the Hidden variable resultant and Grobner basis techniques for solving the algebraic equations of our problem. By evaluating them on synthetic and on real-data, we show that the Grobner basis technique provides stable results.",
"title": ""
}
] |
[
{
"docid": "97353be7c54dd2ded69815bf93545793",
"text": "In recent years, with the rapid development of deep learning, it has achieved great success in the field of image recognition. In this paper, we applied the convolution neural network (CNN) on supermarket commodity identification, contributing to the study of supermarket commodity identification. Different from the QR code identification of supermarket commodity, our work applied the CNN using the collected images of commodity as input. This method has the characteristics of fast and non-contact. In this paper, we mainly did the following works: 1. Collected a small dataset of supermarket goods. 2. Built Different convolutional neural network frameworks in caffe and trained the dataset using the built networks. 3. Improved train methods by finetuning the trained model.",
"title": ""
},
{
"docid": "4287db8deb3c4de5d7f2f5695c3e2e70",
"text": "The brain is complex and dynamic. The spatial scales of interest to the neurobiologist range from individual synapses (approximately 1 microm) to neural circuits (centimeters); the timescales range from the flickering of channels (less than a millisecond) to long-term memory (years). Remarkably, fluorescence microscopy has the potential to revolutionize research on all of these spatial and temporal scales. Two-photon excitation (2PE) laser scanning microscopy allows high-resolution and high-sensitivity fluorescence microscopy in intact neural tissue, which is hostile to traditional forms of microscopy. Over the last 10 years, applications of 2PE, including microscopy and photostimulation, have contributed to our understanding of a broad array of neurobiological phenomena, including the dynamics of single channels in individual synapses and the functional organization of cortical maps. Here we review the principles of 2PE microscopy, highlight recent applications, discuss its limitations, and point to areas for future research and development.",
"title": ""
},
{
"docid": "58a75098bc32cb853504a91ddc53e1e8",
"text": "In this study, forest type mapping data set taken from UCI (University of California, Irvine) machine learning repository database has been classified using different machine learning algorithms including Multilayer Perceptron, k-NN, J48, Naïve Bayes, Bayes Net and KStar. In this dataset, there are 27 spectral values showing the type of three different forests (Sugi, Hinoki, mixed broadleaf). As the performance measure criteria, the classification accuracy has been used to evaluate the classifier algorithms and then to select the best method. The best classification rates have been obtained 90.43% with MLP, and 89.1013% with k-NN classifier (for k=5). As can be seen from the obtained results, the machine learning algorithms including MLP and k-NN classifier have obtained very promising results in the classification of forest type with 27 spectral features.",
"title": ""
},
{
"docid": "8171294a51cb3a83c43243ed96948c3d",
"text": "The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share common sparse support. Even though MMV problems have been traditionally addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees the accurate recovery in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between the probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid 1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies the parts of support using CS, after which the remaining supports are estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we show that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal -bound with finite number of snapshots even in cases where the signals are linearly dependent.",
"title": ""
},
{
"docid": "7af9293fbe12f3e859ee579d0f8739a5",
"text": "We present the findings from a Dutch field study of 30 outsourcing deals totaling to more than 100 million Euro, where both customers and corresponding IT-outsourcing providers participated. The main objective of the study was to examine from a number of well-known factors whether they discriminate between IT-outsourcing success and failure in the early phase of service delivery and to determine their impact on the chance on a successful deal. We investigated controllable factors to increase the odds during sourcing and rigid factors as a warning sign before closing a deal. Based on 250 interviews we collected 28 thousand data points. From the data and the perceived failure or success of the closed deals we investigated the discriminative power of the determinants (ex post). We found three statistically significant controllable factors that discriminated in an early phase between failure and success. They are: working according to the transition plan, demand management and, to our surprise, communication within the supplier organisation (so not between client and supplier). These factors also turned out to be the only significant factors for a (logistic) model predicting the chance of a successful IT-outsourcing. Improving demand management and internal communication at the supplier increases the odds the most. Sticking to the transition plan only modestly. Other controllable factors were not significant in our study. They are managing the business case, transfer of staff or assets, retention of expertise and communication within the client organisation. Of the rigid factors, the motive to outsource, cultural differences, and the type of work were insignificant. The motive of the supplier was significant: internal motivations like increasing profit margins or business volume decreased the chance of success while external motivations like increasing market share or becoming a player increased the success rate. From the data we inferred that the degree of experience with sourcing did not show to be a convincing factor of success. Hiring sourcing consultants worked contra-productive: it lowered chances of success.",
"title": ""
},
{
"docid": "b6d71f472848de18eadff0944eab6191",
"text": "Traditional approaches for object discovery assume that there are common characteristics among objects, and then attempt to extract features specific to objects in order to discriminate objects from background. However, the assumption “common features” may not hold, considering different variations between and within objects. Instead, we look at this problem from a different angle: if we can identify background regions, then the rest should belong to foreground. In this paper, we propose to model background to localize possible object regions. Our method is based on the observations: (1) background has limited categories, such as sky, tree, water, ground, etc., and can be easier to recognize, while there are millions of objects in our world with different shapes, colors and textures; (2) background is occluded because of foreground objects. Thus, we can localize objects based on voting from fore/background occlusion boundary. Our contribution lies: (1) we use graph-based image segmentation to yield high quality segments, which effectively leverages both flat segmentation and hierarchical segmentation approaches; (2) we model background to infer and rank object hypotheses. More specifically, we use background appearance and discriminative patches around fore/background boundary to build the background model. The experimental results show that our method can generate good quality object proposals and rank them where objects are covered highly within a small pool of proposed regions. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c4caf2968f7f2509b199d8d0ce5eec2d",
"text": "for competition that is based on information, their ability to exploit intangible assets has become far more decisive than their ability to invest in and manage physical assets. Several years ago, in recognition of this change, we introduced a concept we called the balanced scorecard. The balanced scorecard supplemented traditional fi nancial measures with criteria that measured performance from three additional perspectives – those of customers, internal business processes, and learning and growth. (See the exhibit “Translating Vision and Strategy: Four Perspectives.”) It therefore enabled companies to track fi nancial results while simultaneously monitoring progress in building the capabilities and acquiring the intangible assets they would need for future growth. The scorecard wasn’t Editor’s Note: In 1992, Robert S. Kaplan and David P. Norton’s concept of the balanced scorecard revolutionized conventional thinking about performance metrics. By going beyond traditional measures of fi nancial performance, the concept has given a generation of managers a better understanding of how their companies are really doing. These nonfi nancial metrics are so valuable mainly because they predict future fi nancial performance rather than simply report what’s already happened. This article, fi rst published in 1996, describes how the balanced scorecard can help senior managers systematically link current actions with tomorrow’s goals, focusing on that place where, in the words of the authors, “the rubber meets the sky.” Using the Balanced Scorecard as a Strategic Management System",
"title": ""
},
{
"docid": "734fc66c7c745498ca6b2b7fc6780919",
"text": "In this paper, we investigate the use of an unsupervised label clustering technique and demonstrate that it enables substantial improvements in visual relationship prediction accuracy on the Person in Context (PIC) dataset. We propose to group object labels with similar patterns of relationship distribution in the dataset into fewer categories. Label clustering not only mitigates both the large classification space and class imbalance issues, but also potentially increases data samples for each clustered category. We further propose to incorporate depth information as an additional feature into the instance segmentation model. The additional depth prediction path supplements the relationship prediction model in a way that bounding boxes or segmentation masks are unable to deliver. We have rigorously evaluated the proposed techniques and performed various ablation analysis to validate the benefits of them.",
"title": ""
},
{
"docid": "471f4399e42aa0b00effac824a309ad6",
"text": "Resource management in Cloud Computing has been dominated by system-level virtual machines to enable the management of resources using a coarse grained approach, largely in a manner independent from the applications running on these infrastructures. However, in such environments, although different types of applications can be running, the resources are delivered equally to each one, missing the opportunity to manage the available resources in a more efficient and application driven way. So, as more applications target managed runtimes, high level virtualization is a relevant abstraction layer that has not been properly explored to enhance resource usage, control, and effectiveness. We propose a VM economics model to manage cloud infrastructures, governed by a quality-of-execution (QoE) metric and implemented by an extended virtual machine. The Adaptive and Resource-Aware Java Virtual Machine (ARA-JVM) is a cluster-enabled virtual execution environment with the ability to monitor base mechanisms (e.g. thread cheduling, garbage collection, memory or network consumptions) to assess application's performance and reconfigure these mechanisms in runtime according to previously defined resource allocation policies. Reconfiguration is driven by incremental gains in quality-of-execution (QoE), used by the VM economics model to balance relative resource savings and perceived performance degradation. Our work in progress, aims to allow cloud providers to exchange resource slices among virtual machines, continually addressing where those resources are required, while being able to determine where the reduction will be more economically effective, i.e., will contribute in lesser extent to performance degradation.",
"title": ""
},
{
"docid": "e388d63d917358d6c3733c0b2e598511",
"text": "This paper integrates theory, ethnography, and collaborative artwork to explore improvisational activity as both topic and tool of multidisciplinary HCI inquiry. Building on theories of improvisation drawn from art, music, HCI and social science, and two ethnographic studies based on interviews, participant observation and collaborative art practice, we seek to elucidate the improvisational nature of practice in both art and ordinary action, including human-computer interaction. We identify five key features of improvisational action -- reflexivity, transgression, tension, listening, and interdependence -- and show how these can deepen and extend both linear and open-ended methodologies in HCI and design. We conclude by highlighting collaborative engagement based on 'intermodulation' as a tool of multidisciplinary inquiry for HCI research and design.",
"title": ""
},
{
"docid": "002b890e5a9065027bc8749487b208e7",
"text": "The Manuka rendering architecture has been designed in the spirit of the classic reyes rendering architecture: to enable the creation of visually rich computer generated imagery for visual effects in movie production. Following in the footsteps of reyes over the past 30 years, this means supporting extremely complex geometry, texturing, and shading. In the current generation of renderers, it is essential to support very accurate global illumination as a means to naturally tie together different assets in a picture.\n This is commonly achieved with Monte Carlo path tracing, using a paradigm often called shade on hit, in which the renderer alternates tracing rays with running shaders on the various ray hits. The shaders take the role of generating the inputs of the local material structure, which is then used by path-sampling logic to evaluate contributions and to inform what\n further rays to cast through the scene. We propose a shade before hit paradigm instead and minimise I/O strain on the system, leveraging locality of reference by running pattern generation shaders before we execute light transport simulation by path sampling.\n We describe a full architecture built around this approach, featuring spectral light transport and a flexible implementation of multiple importance sampling (mis), resulting in a system able to support a comparable amount of extensibility to what made the reyes rendering architecture successful over many decades.",
"title": ""
},
{
"docid": "aeadbf476331a67bec51d5d6fb6cc80b",
"text": "Gamification, an emerging idea for using game-design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, little research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users, that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and at the same time, advance",
"title": ""
},
{
"docid": "cda6d8c94602170e2534fc29973ecff8",
"text": "In 1912, Max Wertheimer published his paper on phi motion, widely recognized as the start of Gestalt psychology. Because of its continued relevance in modern psychology, this centennial anniversary is an excellent opportunity to take stock of what Gestalt psychology has offered and how it has changed since its inception. We first introduce the key findings and ideas in the Berlin school of Gestalt psychology, and then briefly sketch its development, rise, and fall. Next, we discuss its empirical and conceptual problems, and indicate how they are addressed in contemporary research on perceptual grouping and figure-ground organization. In particular, we review the principles of grouping, both classical (e.g., proximity, similarity, common fate, good continuation, closure, symmetry, parallelism) and new (e.g., synchrony, common region, element and uniform connectedness), and their role in contour integration and completion. We then review classic and new image-based principles of figure-ground organization, how it is influenced by past experience and attention, and how it relates to shape and depth perception. After an integrated review of the neural mechanisms involved in contour grouping, border ownership, and figure-ground perception, we conclude by evaluating what modern vision science has offered compared to traditional Gestalt psychology, whether we can speak of a Gestalt revival, and where the remaining limitations and challenges lie. A better integration of this research tradition with the rest of vision science requires further progress regarding the conceptual and theoretical foundations of the Gestalt approach, which is the focus of a second review article.",
"title": ""
},
{
"docid": "2b00f2b02fa07cdd270f9f7a308c52c5",
"text": "A noninvasive and easy-operation measurement of the heart rate has great potential in home healthcare. We present a simple and high running efficiency method for measuring heart rate from a video. By only tracking one feature point which is selected from a small ROI (Region of Interest) in the head area, we extract trajectories of this point in both X-axis and Y-axis. After a series of processes including signal filtering, interpolation, the Independent Component Analysis (ICA) is used to obtain a periodic signal, and then the heart rate can be calculated. We evaluated on 10 subjects and compared to a commercial heart rate measuring instrument (YUYUE YE680B) and achieved high degree of agreement. A running time comparison experiment to the previous proposed motion-based method is carried out and the result shows that the time cost is greatly reduced in our method.",
"title": ""
},
{
"docid": "2fb41c981494c285f663f74e1dae6299",
"text": "OMNIGLOT is a dataset containing 1,623 hand-written characters from 50 various alphabets. Each character is represented by about 20 images that makes the problem very challenging. The dataset is split into 24,345 training datapoints and 8,070 test images. We randomly pick 1,345 training examples for validation. During training we applied dynamic binarization of data similarly to dynamic MNIST.",
"title": ""
},
{
"docid": "869748c038d81976938b50652827f89c",
"text": "Complex elbow fractures are exceedingly challenging to treat. Treatment of severe distal humeral fractures fails because of either displacement or nonunion at the supracondylar level or stiffness resulting from prolonged immobilization. Coronal shear fractures of the capitellum and trochlea are difficult to repair and may require extensile exposure. Olecranon fracture-dislocations are complex fractures of the olecranon associated with subluxation or dislocation of the radial head and/or the coronoid process. The radioulnar relationship usually is preserved in anterior but disrupted in posterior fracture-dislocations. A skeletal distractor can be useful in facilitating reduction. Coronoid fractures can be classified according to whether the fracture involves the tip, the anteromedial facet, or the base (body) of the coronoid. Anteromedial coronoid fractures are actually varus posteromedial rotatory fracture subluxations and are often serious injuries. These patterns of injury predict associated injuries and instability as well as surgical approach and treatment. The radial head is the bone most commonly fractured in the adult elbow. If the coronoid is fractured, the radial head becomes a critical factor in elbow stability. Its role becomes increasingly important as other soft-tissue and bony constraints are compromised. Articular injury to the radial head is commonly more severe than noted on plain radiographs. Fracture fragments are often anterior. Implants applied to the surface of the radial head must be placed in a safe zone.",
"title": ""
},
{
"docid": "7e1438d99cf737335fbdc871ecaa1486",
"text": "Based on LDA(Latent Dirichlet Allocation) topic model, a generative model for multi-document summarization, namely Titled-LDA that simultaneously models the content of documents and the titles of document is proposed. This generative model represents each document with a mixture of topics, and extends these approaches to title modeling by allowing the mixture weights for topics to be determined by the titles of the document. In the mixing stage, the algorithm can learn the weight in an adaptive asymmetric learning way based on two kinds of information entropies. In this way, the final model incorporated the title information and the content information appropriately, which helped the performance of summarization. The experiments showed that the proposed algorithm achieved better performance compared the other state-of-the-art algorithms on DUC2002 corpus.",
"title": ""
},
{
"docid": "1d9b50bf7fa39c11cca4e864bbec5cf3",
"text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.",
"title": ""
},
{
"docid": "16b8a948e76a04b1703646d5e6111afe",
"text": "Nanotechnology offers many potential benefits to cancer research through passive and active targeting, increased solubility/bioavailablility, and novel therapies. However, preclinical characterization of nanoparticles is complicated by the variety of materials, their unique surface properties, reactivity, and the task of tracking the individual components of multicomponent, multifunctional nanoparticle therapeutics in in vivo studies. There are also regulatory considerations and scale-up challenges that must be addressed. Despite these hurdles, cancer research has seen appreciable improvements in efficacy and quite a decrease in the toxicity of chemotherapeutics because of 'nanotech' formulations, and several engineered nanoparticle clinical trials are well underway. This article reviews some of the challenges and benefits of nanomedicine for cancer therapeutics and diagnostics.",
"title": ""
},
{
"docid": "acc960b2fd1066efce4655da837213f4",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.12.082 ⇑ Corresponding author. Tel.: +562 978 4834. E-mail addresses: goberreu@ing.uchile.cl (G. Ober (J.D. Velásquez). URL: http://wi.dii.uchile.cl/ (J.D. Velásquez). Plagiarism detection is of special interest to educational institutions, and with the proliferation of digital documents on the Web the use of computational systems for such a task has become important. While traditional methods for automatic detection of plagiarism compute the similarity measures on a document-to-document basis, this is not always possible since the potential source documents are not always available. We do text mining, exploring the use of words as a linguistic feature for analyzing a document by modeling the writing style present in it. The main goal is to discover deviations in the style, looking for segments of the document that could have been written by another person. This can be considered as a classification problem using self-based information where paragraphs with significant deviations in style are treated as outliers. This so-called intrinsic plagiarism detection approach does not need comparison against possible sources at all, and our model relies only on the use of words, so it is not language specific. We demonstrate that this feature shows promise in this area, achieving reasonable results compared to benchmark models. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
e5d837e197c7527a53d1a4487e340db0
|
Social media and loneliness: Why an Instagram picture may be worth more than a thousand Twitter words
|
[
{
"docid": "96bb4155000096c1cba6285ad82c9a4d",
"text": "0747-5632/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.chb.2011.10.002 ⇑ Corresponding author. Tel.: +65 6790 6636; fax: + E-mail addresses: leecs@ntu.edu.sg (C.S. Lee), malo 1 Tel.: +65 67905772; fax: +65 6791 5214. Recent events indicate that sharing news in social media has become a phenomenon of increasing social, economic and political importance because individuals can now participate in news production and diffusion in large global virtual communities. Yet, knowledge about factors influencing news sharing in social media remains limited. Drawing from the uses and gratifications (U&G) and social cognitive theories (SCT), this study explored the influences of information seeking, socializing, entertainment, status seeking and prior social media sharing experience on news sharing intention. A survey was designed and administered to 203 students in a large local university. Results from structural equation modeling (SEM) analysis revealed that respondents who were driven by gratifications of information seeking, socializing, and status seeking were more likely to share news in social media platforms. Prior experience with social media was also a significant determinant of news sharing intention. Implications and directions for future work are discussed. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "f9876540ce148d7b27bab53839f1bf19",
"text": "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.",
"title": ""
},
{
"docid": "f5b3519d4ec0fd7f9cb67bf409bec5ac",
"text": "The AECOO industry is highly fragmented; therefore, efficient information sharing and exchange between various players are evidently needed. Furthermore, the information about facility components should be managed throughout the lifecycle and be easily accessible for all players in the AECOO industry. BIM is emerging as a method of creating, sharing, exchanging and managing the information throughout the lifecycle between all the stakeholders. RFID, on the other hand, has emerged as an automatic data collection and information storage technology, and has been used in different applications in AECOO. This research proposes permanently attaching RFID tags to facility components where the memory of the tags is populated with accumulated lifecycle information of the components taken from a standard BIM database. This information is used to enhance different processes throughout the lifecycle. A conceptual RFID-based system structure and data storage/retrieval design are elaborated. To explore the technical feasibility of the proposed approach, two case studies have been implemented and tested.",
"title": ""
},
{
"docid": "14fe4e2fb865539ad6f767b9fc9c1ff5",
"text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.",
"title": ""
},
{
"docid": "1557392e8482bafe53eb50fccfd60157",
"text": "A common practice among servers in restaurants is to give their dining parties an unexpected gift in the form of candy when delivering the check. Two studies were conducted to evaluate the impact of this gesture on the tip percentages received by servers. Study 1 found that customers who received a small piece of chocolate along with the check tipped more than did customers who received no candy. Study 2 found that tips varied with the amount of the candy given to the customers as well as with the manner in which it was offered. It is argued that reciprocity is a stronger explanation for these findings than either impression management or the good mood effect.",
"title": ""
},
{
"docid": "cb8dc6127632eb50f1a51a2ea115ad83",
"text": "This paper proposes a new design of a SPOKE-type permanent magnet brushless direct current (BLDC) motor by using pushing magnet. A numerical analysis is developed to calculate the maximum value of air-gap flux density. First, the analytical model of the SPOKE-type motor was established, and Laplace equations of magnetic scalar potential and a series of boundary conditions were given. Then, the analytical expressions of magnet field strength and magnet flux density were obtained in the air gap produced by ferrite permanent magnets. The developed analytical model was obtained by solving the magnetic scalar potential. Finally, the air-gap field distribution and back-electromotive force of spoke type machine was analyzed. The analysis works for internal rotor motor topologies, and either radial or parallel magnetized permanent magnets. This paper validates results of the analytical model by finite-element analysis as well as with the experimental analysis for SPOKE-type BLDC motors.",
"title": ""
},
{
"docid": "0724e800d88d1d7cd1576729f975b09a",
"text": "Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distribution and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco bay region. Overall the recurrent neural network model yields the best prediction accuracies compared with LMBP and RBF networks. While at the present earthquake prediction cannot be made with a high degree of certainty this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.",
"title": ""
},
{
"docid": "28ccab4b6b7c9c70bc07e4b3219d99d4",
"text": "The Wireless Networking After Next (WNaN) radio is a handheld-sized radio that delivers unicast, multicast, and disruption-tolerant traffic in networks of hundreds of radios. This paper addresses scalability of the network from the routing control traffic point of view. Starting from a basic version of an existing mobile ad-hoc network (MANET) proactive link-state routing protocol, we describe the enhancements that were necessary to provide good performance in these conditions. We focus on techniques to reduce control traffic while maintaining route integrity. We present simulation results from 250-node mobile networks demonstrating the effectiveness of the routing mechanisms. Any MANET with design parameters and constraints similar to the WNaN radio will benefit from these improvements.",
"title": ""
},
{
"docid": "56e7fba1f9730b85c52403c2ddad9417",
"text": "probe such as a category name, and that a norm is This work was supported by the Office of Naval Research under Grant NR 197-058, and by grants from the National Science and Engineering Research Council of Canada and from the Social Sciences and Humanities Research Council of Canada (410-68-0583). Dale Griffin, Leslie McPherson, and Daniel Read provided valuable assistance. Many friends and colleagues commented helpfully on earlier versions. The comments of Anne Treisman and Amos Tversky were especially influential. The preparation of the manuscript benefited greatly from a workshop entitled \"The Priority of the Specific,\" organized by Lee Brooks and Larry Jacoby at Elora, Ontario, in June of 1983. Correspondence concerning this article should be addressed to Daniel Kahneman, who is now at the Department of Psychology, University of California, Berkeley, California 94720, or Dale Miller, Department of Psychology, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada. produced by aggregating the set of recruited representations. The assumptions of distributed activation and rapid aggregation are not unique to this treatment. Related ideas have been advanced in theories of adaptation level (Helson, 1964; Restle, 1978a, 1978b) and other theories of context effects in judgment (N. H. Anderson, 1981; Birnbaum, 1982; Parducci, 1965, 1974); in connectionist models of distributed processing (Hinton & Anderson, 1981; McClelland, 1985; McClelland & Rumelhart, 1985); and in holographic models of memory (Eich, 1982; Metcalfe Eich, 1985; Murdock, 1982). The present analysis relates most closely to exemplar models of concepts (Brooks, 1978, in press; Hintzman, in press; Hintzman & Ludlam, 1980; Jacoby & Brooks, 1984; Medin & Schaffer, 1978; Smith & Medin, 1981). We were drawn to exemplar models in large part because they provide the only satisfactory account of the norms evoked by questions about arbitrary categories, such as \"Is this person friendlier than most other people on your block?\" Exemplar models assume that several representations are evoked at once and that activation varies in degree. They do not require the representations of exemplars to be accessible to conscious and explicit retrieval, and they allow representations to be fragmentary. The present model of norms adopts all of these assumptions. In addition, we propose that events are sometimes compared to counterfactual alternatives that are constructed ad hoc rather than retrieved from past experience. These ideas extend previous work on the availability and simulation heuristics (Kahneman & Tversky, 1982; Tversky & Kahneman, 1973). A constructive process must be invoked to explain some cases of surprise. Thus, an observer who knows Marty's affection for his aunt and his propensity for emotional displays may be surprised if Marty does not cry at her funeral—even if Marty rarely cries and if no one else cries at that funeral. Surprise is produced in such cases by the contrast between a stimulus and a counterfactual alternative that is constructed, not retrieved. Constructed elements also play a crucial role in counterfactual emotions such as frustration or regret, in which reality is compared to an imagined view of what might have been (Kahneman & Tversky, 1982). At the core of the present analysis are the rules and constraints that govern the spontaneous retrieval or construction of alter-",
"title": ""
},
{
"docid": "8218ce22ac1cccd73b942a184c819d8c",
"text": "The extended SMAS facelift techniques gave plastic surgeons the ability to correct the nasolabial fold and medial cheek. Retensioning the SMAS transmits the benefit through the multilinked fibrous support system of the facial soft tissues. The effect is to provide a recontouring of the ptotic soft tissues, which fills out the cheeks as it reduces nasolabial fullness. Indirectly, dermal tightening occurs to a lesser but more natural degree than with traditional facelift surgery. Although details of current techniques may be superseded, the emerging surgical principles are becoming more clearly defined. This article presents these principles and describes the author's current surgical technique.",
"title": ""
},
{
"docid": "dd01611bcbc8a50fbe20bdc676326ce5",
"text": "PURPOSE\nWe evaluated the accuracy of magnetic resonance imaging in determining the size and shape of localized prostate cancer.\n\n\nMATERIALS AND METHODS\nThe subjects were 114 men who underwent multiparametric magnetic resonance imaging before radical prostatectomy with patient specific mold processing of the specimen from 2013 to 2015. T2-weighted images were used to contour the prostate capsule and cancer suspicious regions of interest. The contours were used to design and print 3-dimensional custom molds, which permitted alignment of excised prostates with magnetic resonance imaging scans. Tumors were reconstructed in 3 dimensions from digitized whole mount sections. Tumors were then matched with regions of interest and the relative geometries were compared.\n\n\nRESULTS\nOf the 222 tumors evident on whole mount sections 118 had been identified on magnetic resonance imaging. For the 118 regions of interest mean volume was 0.8 cc and the longest 3-dimensional diameter was 17 mm. However, for matched pathological tumors, of which most were Gleason score 3 + 4 or greater, mean volume was 2.5 cc and the longest 3-dimensional diameter was 28 mm. The median tumor had a 13.5 mm maximal extent beyond the magnetic resonance imaging contour and 80% of cancer volume from matched tumors was outside region of interest boundaries. Size estimation was most accurate in the axial plane and least accurate along the base-apex axis.\n\n\nCONCLUSIONS\nMagnetic resonance imaging consistently underestimates the size and extent of prostate tumors. Prostate cancer foci had an average diameter 11 mm longer and a volume 3 times greater than T2-weighted magnetic resonance imaging segmentations. These results may have important implications for the assessment and treatment of prostate cancer.",
"title": ""
},
{
"docid": "645f320514b0fa5a8b122c4635bc3df6",
"text": "A critical decision problem for top management, and the focus of this study, is whether the CEO (chief executive officer) and CIO (chief information officer) should commit their time to formal planning with the expectation of producing an information technology (IT)-based competitive advantage. Using the perspective of the resource-based view, a model is presented that examines how strategic IT alignment can produce enhanced organizational strategies that yield competitive advantage. One hundred sixty-one CIOs provided data using a postal survey. Results supported seven of the eight hypotheses. They showed that information intensity is an important antecedent to strategic IT alignment, that strategic IT alignment is best explained by multiple constructs which operationalize both process and content measures, and that alignment between the IT plan and the business plan is significantly related to the use of IT for competitive advantage. Study results raise questions about the effect of CEO participation, which appears to be the weak link in the process, and also about the perception of the CIO on the importance of CEO involvement. The paper contributes to our understanding of how knowledge sharing in the alignment process contributes to the creation of superior organizational strategies, provides a framework of the alignment-performance relationship, and furnishes several new constructs. Subject Areas: Competitive Advantage, Information Systems Planning, Knowledge Sharing, Resource-Based View, Strategic Planning, and Structural Equation Modeling.",
"title": ""
},
{
"docid": "8f227f66fc7c86c19edae8036c571579",
"text": "Traditionally, the most commonly used source of bibliometric data is Thomson ISI Web of Knowledge, in particular the Web of Science and the Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). This paper presents an alternative source of data (Google Scholar, GS) as well as 3 alternatives to the JIF to assess journal impact (h-index, g-index and the number of citations per paper). Because of its broader range of data sources, the use of GS generally results in more comprehensive citation coverage in the area of management and international business. The use of GS particularly benefits academics publishing in sources that are not (well) covered in ISI. Among these are books, conference papers, non-US journals, and in general journals in the field of strategy and international business. The 3 alternative GS-based metrics showed strong correlations with the traditional JIF. As such, they provide academics and universities committed to JIFs with a good alternative for journals that are not ISI-indexed. However, we argue that these metrics provide additional advantages over the JIF and that the free availability of GS allows for a democratization of citation analysis as it provides every academic access to citation data regardless of their institution’s financial means.",
"title": ""
},
{
"docid": "0397514e0d4a87bd8b59d9b317f8c660",
"text": "Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials like the whole monocoque chassis and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfying results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.",
"title": ""
},
{
"docid": "90ba7add9e8b265c787efd6ebddb1a58",
"text": "Program Synthesis by Sketching by Armando Solar-Lezama Doctor in Philosophy in Engineering-Electrical Engineering and Computer Science University of California, Berkeley Rastislav Bodik, Chair The goal of software synthesis is to generate programs automatically from highlevel speci cations. However, e cient implementations for challenging programs require a combination of high-level algorithmic insights and low-level implementation details. Deriving the low-level details is a natural job for a computer, but the synthesizer can not replace the human insight. Therefore, one of the central challenges for software synthesis is to establish a synergy between the programmer and the synthesizer, exploiting the programmer's expertise to reduce the burden on the synthesizer. This thesis introduces sketching, a new style of synthesis that o ers a fresh approach to the synergy problem. Previous approaches have relied on meta-programming, or variations of interactive theorem proving to help the synthesizer deduce an e cient implementation. The resulting systems are very powerful, but they require the programmer to master new formalisms far removed from traditional programming models. To make synthesis accessible, programmers must be able to provide their insight e ortlessly, using formalisms they already understand. In Sketching, insight is communicated through a partial program, a sketch that expresses the high-level structure of an implementation but leaves holes in place of the lowlevel details. This form of synthesis is made possible by a new SAT-based inductive synthesis procedure that can e ciently synthesize an implementation from a small number of test cases. This algorithm forms the core of a new counterexample guided inductive synthesis procedure (CEGIS) which combines the inductive synthesizer with a validation procedure to automatically generate test inputs and ensure that the generated program satis es its",
"title": ""
},
{
"docid": "5dee244ee673909c3ba3d3d174a7bf83",
"text": "Fingerprint has remained a very vital index for human recognition. In the field of security, series of Automatic Fingerprint Identification Systems (AFIS) have been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree with which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for the fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by Window Vista Home Basic operating system as platform and Matrix Laboratory (MatLab) as frontend engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. The results also show the necessity of each level of the enhancement. KeywordAFIS; Pattern recognition; pattern matching; fingerprint; minutiae; image enhancement.",
"title": ""
},
{
"docid": "5d170dcd5d2c9c1f4e5645217444fd98",
"text": "In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations to help adapt to new tasks and domains. MTDNN extends the model proposed in Liu et al. (2015) by incorporating a pre-trained bidirectional transformer language model, known as BERT (Devlin et al., 2018). MT-DNN obtains new state-of-the-art results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, pushing the GLUE benchmark to 82.2% (1.8% absolute improvement). We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations. Our code and pre-trained models will be made publicly available.",
"title": ""
},
{
"docid": "f08b294c1107372d81c39f13ee2caa34",
"text": "The success of deep learning methodologies draws a huge attention to their applications in medical image analysis. One of the applications of deep learning is in segmentation of retinal vessel and severity classification of diabetic retinopathy (DR) from retinal funduscopic image. This paper studies U-Net model performance in segmenting retinal vessel with different settings of dropout and batch normalization and use it to investigate the effect of retina vessel in DR classification. Pre-trained Inception V1 network was used to classify the DR severity. Two sets of retinal images, with and without the presence of vessel, were created from MESSIDOR dataset. The vessel extraction process was done using the best trained U-Net on DRIVE dataset. Final analysis showed that retinal vessel is a good feature in classifying both severe and early cases of DR stage.",
"title": ""
},
{
"docid": "3f9f01e3b3f5ab541cbe78fb210cf744",
"text": "The reliable and effective localization system is the basis of Automatic Guided Vehicle (AGV) to complete given tasks automatically in warehouse environment. However, there are no obvious features that can be used for localization of AGV to be extracted in warehouse environment and it dose make it difficult to realize the localization of AGV. So in this paper, we concentrate on the problem of optimal landmarks placement in warehouse so as to improve the reliability of localization. Firstly, we take the practical warehouse environment into consideration and transform the problem of landmarks placement into an optimization problem which aims at maximizing the difference degree between each basic unit of localization. Then Genetic Algorithm (GA) is used to solve the optimization problem. Then we match the observed landmarks with the already known ones stored in the map and the Triangulation method is used to estimate the position of AGV after the matching has been done. Finally, experiments in a real warehouse environment validate the effectiveness and reliability of our method.",
"title": ""
},
{
"docid": "6ce2529ff446db2d647337f30773cdc9",
"text": "The physical demands in soccer have been studied intensively, and the aim of the present review is to provide an overview of metabolic changes during a game and their relation to the development of fatigue. Heart-rate and body-temperature measurements suggest that for elite soccer players the average oxygen uptake during a match is around 70% of maximum oxygen uptake (VO2max). A top-class player has 150 to 250 brief intense actions during a game, indicating that the rates of creatine-phosphate (CP) utilization and glycolysis are frequently high during a game, which is supported by findings of reduced muscle CP levels and severalfold increases in blood and muscle lactate concentrations. Likewise, muscle pH is lowered and muscle inosine monophosphate (IMP) elevated during a soccer game. Fatigue appears to occur temporarily during a game, but it is not likely to be caused by elevated muscle lactate, lowered muscle pH, or change in muscle-energy status. It is unclear what causes the transient reduced ability of players to perform maximally. Muscle glycogen is reduced by 40% to 90% during a game and is probably the most important substrate for energy production, and fatigue toward the end of a game might be related to depletion of glycogen in some muscle fibers. Blood glucose and catecholamines are elevated and insulin lowered during a game. The blood free-fatty-acid levels increase progressively during a game, probably reflecting an increasing fat oxidation compensating for the lowering of muscle glycogen. Thus, elite soccer players have high aerobic requirements throughout a game and extensive anaerobic demands during periods of a match leading to major metabolic changes, which might contribute to the observed development of fatigue during and toward the end of a game.",
"title": ""
},
{
"docid": "bf16ccf68804d05201ad7a6f0a2920fe",
"text": "The purpose of this paper is to review and discuss public performance management in general and performance appraisal and pay for performance specifically. Performance is a topic that is a popular catch-cry and performance management has become a new organizational ideology. Under the global economic crisis, almost every public and private organization is struggling with a performance challenge, one way or another. Various aspects of performance management have been extensively discussed in the literature. Many researchers and experts assert that sets of guidelines for design of performance management systems would lead to high performance (Kaplan and Norton, 1996, 2006). A long time ago, the traditional performance measurement was developed from cost and management accounting and such purely financial perspective of performance measures was perceived to be inappropriate so that multi-dimensional performance management was development in the 1970s (Radnor and McGuire, 2004).",
"title": ""
}
] |
scidocsrr
|
777bbe1278ca8be1d239feb3d34eceec
|
BSIF: Binarized statistical image features
|
[
{
"docid": "13cb08194cf7254932b49b7f7aff97d1",
"text": "When there are many people who don't need to expect something more than the benefits to take, we will suggest you to have willing to reach all benefits. Be sure and surely do to take this computer vision using local binary patterns that gives the best reasons to read. When you really need to get the reason why, this computer vision using local binary patterns book will probably make you feel curious.",
"title": ""
}
] |
[
{
"docid": "a9d516ede8966dde5e79ea1304bbedb9",
"text": "Successful implementation of Information Technology can be judged or predicted from the user acceptance. Technology acceptance model (TAM) is a model that is built to analyze and understand the factors that influence the acceptance of the use of technologies based on the user's perspective. In other words, TAM offers a powerful explanation related to acceptance of the technology and its behavior. TAM model has been applied widely to evaluate various information systems or information technology (IS/IT), but it is the lack of research related to the evaluation of the TAM model itself. This study aims to determine whether the model used TAM is still relevant today considering rapid development of information & communication technology (ICT). In other words, this study would like to test whether the TAM measurement indicators are valid and can represent each dimension of the model. The method used is quantitative method with factor analysis approach. The results showed that all indicators valid and can represent each dimension of TAM, those are perceived usefulness, perceived ease of use and behavioral intention to use. Thus the TAM model is still relevant used to measure the user acceptance of technology.",
"title": ""
},
{
"docid": "5aa14ba34672f4afa9c27f7f863d8c57",
"text": "Knowledge distillation is an effective approach to transferring knowledge from a teacher neural network to a student target network for satisfying the low-memory and fast running requirements in practice use. Whilst being able to create stronger target networks compared to the vanilla non-teacher based learning strategy, this scheme needs to train additionally a large teacher model with expensive computational cost. In this work, we present a Self-Referenced Deep Learning (SRDL) strategy. Unlike both vanilla optimisation and existing knowledge distillation, SRDL distils the knowledge discovered by the in-training target model back to itself to regularise the subsequent learning procedure therefore eliminating the need for training a large teacher model. SRDL improves the model generalisation performance compared to vanilla learning and conventional knowledge distillation approaches with negligible extra computational cost. Extensive evaluations show that a variety of deep networks benefit from SRDL resulting in enhanced deployment performance on both coarse-grained object categorisation tasks (CIFAR10, CIFAR100, Tiny ImageNet, and ImageNet) and fine-grained person instance identification tasks (Market-1501).",
"title": ""
},
{
"docid": "909ec68a644cfd1d338270ee67144c23",
"text": "We have constructed an optical tweezer using two lasers (830 nm and 1064 nm) combined with micropipette manipulation having sub-pN force sensitivity. Sample position is controlled within nanometer accuracy using XYZ piezo-electric stage. The position of the bead in the trap is monitored using single particle laser backscattering technique. The instrument is automated to operate in constant force, constant velocity or constant position measurement. We present data on single DNA force-extension, dynamics of DNA integration on membranes and optically trapped bead–cell interactions. A quantitative analysis of single DNA and protein mechanics, assembly and dynamics opens up new possibilities in nanobioscience.",
"title": ""
},
{
"docid": "cde1b5f21bdc05aa5a86aa819688d63c",
"text": "This paper presents two fuzzy portfolio selection models where the objective is to minimize the downside risk constrained by a given expected return. We assume that the rates of returns on securities are approximated as LR-fuzzy numbers of the same shape, and that the expected return and risk are evaluated by interval-valued means. We establish the relationship between those mean-interval definitions for a given fuzzy portfolio by using suitable ordering relations. Finally, we formulate the portfolio selection problem as a linear program when the returns on the assets are of trapezoidal form. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ea0e8b5bf62de6205bd993610f663f50",
"text": "Design Thinking has collected theories and best-practices to foster creativity and innovation in group processes. This is in particular valuable for sketchy and complex problems. Other disciplines can learn from this body-of-behaviors and values to tackle their complex problems. In this paper, using four Design Thinking qualities, we propose a framework to identify the level of Design Thinkingness in existing analytical software engineering tools: Q1) Iterative Creation Cycles, Q2) Human Integration in Design, Q3) Suitability for Heterogeneity, and Q4) Media Accessibility. We believe that our framework can also be used to transform tools in various engineering areas to support abductive and divergent thinking processes. We argue, based on insights gained from the successful transformation of classical business process modeling into tangible business process modeling. This was achieved by incorporating rapid prototyping, human integration, knowledge base heterogeneity and the media-models theory. The latter is given special attention as it allows us to break free from the limiting factors of the exiting analytic tools.",
"title": ""
},
{
"docid": "786a70f221a70038f930352e8022ae29",
"text": "We present IndoNet, a multilingual lexical knowledge base for Indian languages. It is a linked structure of wordnets of 18 different Indian languages, Universal Word dictionary and the Suggested Upper Merged Ontology (SUMO). We discuss various benefits of the network and challenges involved in the development. The system is encoded in Lexical Markup Framework (LMF) and we propose modifications in LMF to accommodate Universal Word Dictionary and SUMO. This standardized version of lexical knowledge base of Indian Languages can now easily be linked to similar global resources.",
"title": ""
},
{
"docid": "63c550438679c0353c2f175032a73369",
"text": "Large screens or projections in public and private settings have become part of our daily lives, as they enable the collaboration and presentation of information in many diverse ways. When discussing the shown information with other persons, we often point to a displayed object with our index finger or a laser pointer in order to talk about it. Although mobile phone-based interactions with remote screens have been investigated intensively in the last decade, none of them considered such direct pointing interactions for application in everyday tasks. In this paper, we present the concept and design space of PointerPhone which enables users to directly point at objects on a remote screen with their mobile phone and interact with them in a natural and seamless way. We detail the design space and distinguish three categories of interactions including low-level interactions using the mobile phone as a precise and fast pointing device, as well as an input and output device. We detail the category of widgetlevel interactions. Further, we demonstrate versatile high-level interaction techniques and show their application in a collaborative presentation scenario. Based on the results of a qualitative study, we provide design implications for application designs.",
"title": ""
},
{
"docid": "6d777bd24d9e869189c388af94384fa1",
"text": "OBJECTIVE\nThe aim of this study was to explore the efficacy of insulin-loaded trimethylchitosan nanoparticles on certain destructive effects of diabetes type one.\n\n\nMATERIALS AND METHODS\nTwenty-five male Wistar rats were randomly divided into three control groups (n=5) and two treatment groups (n=5). The control groups included normal diabetic rats without treatment and diabetic rats treated with the nanoparticles. The treatment groups included diabetic rats treated with the insulin-loaded trimethylchitosan nanoparticles and the diabetic rats treated with trade insulin. The experiment period was eight weeks and the rats were treated for the last two weeks.\n\n\nRESULT\nThe livers of the rats receiving both forms of insulin showed less severe microvascular steatosis and fatty degeneration, and ameliorated blood glucose, serum biomarkers, and oxidant/antioxidant parameters with no significant differences. The gene expression of pyruvate kinase could be compensated by both the treatment protocols and the new coated form of insulin could not significantly influence the gene expression of glucokinase (p<0.05). The result of the present study showed the potency of the nanoparticle form of insulin to attenuate hyperglycemia, oxidative stress, and inflammation in diabetes, which indicate the bioavailability of insulin-encapsulated trimethylchitosan nanoparticles.",
"title": ""
},
{
"docid": "376ea61271c36d1d8edbd869da910666",
"text": "Purpose – Many thought leaders are promoting information technology (IT) governance and its supporting practices as an approach to improve business/IT alignment. This paper aims to further explore this assumed positive relationship between IT governance practices and business/IT alignment. Design/methodology/approach – This paper explores the relationship between the use of IT governance practices and business/IT alignment, by creating a business/IT alignment maturity benchmark and qualitatively comparing the use of IT governance practices in the extreme cases. Findings – The main conclusion of the research is that all extreme case organisations are leveraging a broad set of IT governance practices, and that IT governance practices need to obtain at least a maturity level 2 (on a scale of 5) to positively influence business/IT alignment. Also, a list of 11 key enabling IT governance practices is identified. Research limitations/implications – This research adheres to the process theory, implying a limited definition of prediction. An important opportunity for future research lies in the domain of complementary statistical correlation research. Practical implications – This research identifies key IT governance practices that organisations can leverage to improve business/IT alignment. Originality/value – This research contributes to new theory building in the IT governance and alignment domain and provides practitioners with insight on how to implement IT governance in their organisations.",
"title": ""
},
{
"docid": "49d714c778b820fca5946b9a587d1e17",
"text": "The current Web of Data is producing increasingly large RDF datasets. Massive publication efforts of RDF data driven by initiatives like the Linked Open Data movement, and the need to exchange large datasets has unveiled the drawbacks of traditional RDF representations, inspired and designed by a documentcentric and human-readable Web. Among the main problems are high levels of verbosity/redundancy and weak machine-processable capabilities in the description of these datasets. This scenario calls for efficient formats for publication and exchange. This article presents a binary RDF representation addressing these issues. Based on a set of metrics that characterizes the skewed structure of real-world RDF data, we develop a proposal of an RDF representation that modularly partitions and efficiently represents three components of RDF datasets: Header information, a Dictionary, and the actual Triples structure (thus called HDT). Our experimental evaluation shows that datasets in HDT format can be compacted by more than fifteen times as compared to current naive representations, improving both parsing and processing while keeping a consistent publication scheme. Specific compression techniques over HDT further improve these compression rates and prove to outperform existing compression solutions for efficient RDF exchange. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d657085072f829db812a2735d0e7f41c",
"text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.",
"title": ""
},
{
"docid": "6cf7a5286a03190b0910380830968351",
"text": "In this paper, the mechanical and aerodynamic design, carbon composite production, hierarchical control system design and vertical flight tests of a new unmanned aerial vehicle, which is capable of VTOL (vertical takeoff and landing) like a helicopter and long range horizontal flight like an airplane, are presented. Real flight tests show that the aerial vehicle can successfully operate in VTOL mode. Kalman filtering is employed to obtain accurate roll and pitch angle estimations.",
"title": ""
},
{
"docid": "5ed8f3b58ae1320411f15a4d7c0f5634",
"text": "With the advent of the ubiquitous era, context-based music recommendation has become one of rapidly emerging applications. Context-based music recommendation requires multidisciplinary efforts including low level feature extraction, music mood classification and human emotion prediction. Especially, in this paper, we focus on the implementation issues of context-based mood classification and music recommendation. For mood classification, we reformulate it into a regression problem based on support vector regression (SVR). Through the use of the SVR-based mood classifier, we achieved 87.8% accuracy. For music recommendation, we reason about the user's mood and situation using both collaborative filtering and ontology technology. We implement a prototype music recommendation system based on this scheme and report some of the results that we obtained.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "1987ba476be524db448cce1835460a33",
"text": "We report on the main features of the IJCAI’07 program, including its theme, and its schedule and organization. In particular, we discuss an effective and novel presentation format at IJCAI in which oral and poster papers were presented in the same sessions categorized by topic area.",
"title": ""
},
{
"docid": "2343f238e92a74e3f456b2215b18ad20",
"text": "Nonlinear activation function is one of the main building blocks of artificial neural networks. Hyperbolic tangent and sigmoid are the most used nonlinear activation functions. Accurate implementation of these transfer functions in digital networks faces certain challenges. In this paper, an efficient approximation scheme for hyperbolic tangent function is proposed. The approximation is based on a mathematical analysis considering the maximum allowable error as design parameter. Hardware implementation of the proposed approximation scheme is presented, which shows that the proposed structure compares favorably with previous architectures in terms of area and delay. The proposed structure requires less output bits for the same maximum allowable error when compared to the state-of-the-art. The number of output bits of the activation function determines the bit width of multipliers and adders in the network. Therefore, the proposed activation function results in reduction in area, delay, and power in VLSI implementation of artificial neural networks with hyperbolic tangent activation function.",
"title": ""
},
{
"docid": "9960d17cb019350a279e4daccccb8e87",
"text": "Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of latest algorithmic and methodical developments, and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.",
"title": ""
},
{
"docid": "10706a3915da7a66696816af7bd1f638",
"text": "In this paper, we present a family of fluxgate magnetic sensors on printed circuit boards (PCBs), suitable for an electronic compass. This fabrication process is simple and inexpensive and uses commercially available thin ferromagnetic materials. We developed and analyzed the prototype sensors with software tools based on the finite-element method. We developed both singleand double-axis planar fluxgate magnetic sensors as well as front-end circuitry based on second-harmonic detection. Two amorphous magnetic materials, Vitrovac 6025X (25 mum thick) and Vitrovac 6025Z (20 mum thick), were used as the ferromagnetic core. We found that the same structures can be made with Metglas ferromagnetic core. The double-axis fluxgate magnetic sensor has a sensitivity of about 1.25 mV/muT with a linearity error of 1.5% full scale, which is suitable for detecting Earth's magnetic field (plusmn60 muT full-scale) in an electronic compass",
"title": ""
},
{
"docid": "8d9be82bfc32a4631f1b1f24e1d962a9",
"text": "Determine an optimal set of design parameter of PR whose DW fits a prescribed workspace as closely as possible is an important and foremost design task before manufacturing. In this paper, an optimal design method of a linear Delta robot (LDR) to obtain the prescribed cuboid dexterous workspace (PCDW) is proposed. The optical algorithms are based on the concept of performance chart. The performance chart shows the relationship between a criterion and design parameters graphically and globally. The kinematic problem is analyzed in brief to determine the design parameters and their relation. Two algorithms are designed to determine the maximal inscribed rectangle of dexterous workspace in the O-xy plane and plot the performance chart. As an applying example, a design result of the LDR with a prescribed cuboid dexterous workspace is presented. The optical results shown that every corresponding maximal inscribed rectangle can be obtained for every given RATE by the algorithm and the error of RATE is less than 0.05.The method and the results of this paper are very useful for the design and comparison of the parallel robot. Key-Words: Parallel Robot, Cuboid Dexterous Workspace, Optimal Design, performance chart ∗ This work is supported by Zhejiang Province Education Funded Grant #20051392.",
"title": ""
},
{
"docid": "ed34383cada585951e1dcc62445d08c2",
"text": "The increasing volume of e-mail and other technologically enabled communications are widely regarded as a growing source of stress in people’s lives. Yet research also suggests that new media afford people additional flexibility and control by enabling them to communicate from anywhere at any time. Using a combination of quantitative and qualitative data, this paper builds theory that unravels this apparent contradiction. As the literature would predict, we found that the more time people spent handling e-mail, the greater was their sense of being overloaded, and the more e-mail they processed, the greater their perceived ability to cope. Contrary to assumptions of prior studies, we found no evidence that time spent working mediates e-mail-related overload. Instead, e-mail’s material properties entwined with social norms and interpretations in a way that led informants to single out e-mail as a cultural symbol of the overload they experience in their lives. Moreover, by serving as a symbol, e-mail distracted people from recognizing other sources of overload in their work lives. Our study deepens our understanding of the impact of communication technologies on people’s lives and helps untangle those technologies’ seemingly contradictory influences.",
"title": ""
}
] |
scidocsrr
|
45a2867442af48a54aa14e21d04e1ad4
|
CitNetExplorer: A new software tool for analyzing and visualizing citation networks
|
[
{
"docid": "1c11672bab0fae36cbfde410ac902852",
"text": "To better understand the topic of this colloquium, we have created a series of databases related to knowledge domains (dynamic systems [small world/Milgram], information visualization [Tufte], co-citation [Small], bibliographic coupling [Kessler], and scientometrics [Scientometrics]). I have used a software package called HistCite which generates chronological maps of subject (topical) collections resulting from searches of the ISI Web of Science or ISI citation indexes (SCI, SSCI, and/or AHCI) on CD-ROM. When a marked list is created on WoS, an export file is created which contains all cited references for each source document captured. These bibliographic collections, saved as ASCII files, are processed by HistCite in order to generate chronological and other tables as well as historiographs which highlight the most-cited works in and outside the collection. HistCite also includes a module for detecting and editing errors or variations in cited references as well as a vocabulary analyzer which generates both ranked word lists and word pairs used in the collection. Ideally the system will be used to help the searcher quickly identify the most significant work on a topic and trace its year-by-year historical development. In addition to the collections mentioned above, historiographs based on collections of papers that cite the Watson-Crick 1953 classic paper identifying the helical structure of DNA were created. Both year-by-year as well as month-by-month displays of papers from 1953 to 1958 were necessary to highlight the publication activity of those years.",
"title": ""
},
{
"docid": "5e07328bf13a9dd2486e9dddbe6a3d8f",
"text": "We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer’s functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.",
"title": ""
}
] |
[
{
"docid": "c5ee2a4e38dfa27bc9d77edcd062612f",
"text": "We perform transaction-level analyses of entrusted loans – the largest component of shadow banking in China. There are two types – affiliated and non-affiliated. The latter involve a much higher interest rate than the former and official bank loan rates, and largely flow into the real estate industry. Both involve firms with privileged access to cheap capital to channel funds to less privileged firms and increase when credit is tight. The pricing of entrusted loans, especially that of non-affiliated loans, incorporates fundamental and informational risks. Stock market reactions suggest that both affiliated and non-affiliated loans are fairly-compensated investments.",
"title": ""
},
{
"docid": "82cf154da3bc34c4311cc3ae1b0bfce3",
"text": "Literature in advertising and information systems suggests that advertising in both traditional media and the Internet is either easily ignored by the audience or is perceived with little value. However, these studies assumed that the audience was passive and failed to consider the motives of the users. In light of this, the present study measures consumers attitudes toward advertisements for different purposes/functions (brand building and directional) and different media (traditional and Internet-based). Literature suggests the following factors that contribute to consumers perceptions of ads: entertainment, irritation, informativeness, credibility, and demographic. We believe that interactivity is also a factor that contributes to consumers perceptions. By understanding consumers attitude towards advertising, designers and marketers can better strategize their advertising designs. A better understanding of interactivity can also help to improve the effectiveness of interactive media such as the Internet. A methodology for studying the factors that contribute to consumers perceptions of ads is proposed and implications for Internet-based advertising and e-commerce is discussed.",
"title": ""
},
{
"docid": "c89b94565b7071420017deae01295e23",
"text": "Using cross-sectional data from three waves of the Youth Tobacco Policy Study, which examines the impact of the UK's Tobacco Advertising and Promotion Act (TAPA) on adolescent smoking behaviour, we examined normative pathways between tobacco marketing awareness and smoking intentions. The sample comprised 1121 adolescents in Wave 2 (pre-ban), 1123 in Wave 3 (mid-ban) and 1159 in Wave 4 (post-ban). Structural equation modelling was used to assess the direct effect of tobacco advertising and promotion on intentions at each wave, and also the indirect effect, mediated through normative influences. Pre-ban, higher levels of awareness of advertising and promotion were independently associated with higher levels of perceived sibling approval which, in turn, was positively related to intentions. Independent paths from perceived prevalence and benefits fully mediated the effects of advertising and promotion awareness on intentions mid- and post-ban. Advertising awareness indirectly affected intentions via the interaction between perceived prevalence and benefits pre-ban, whereas the indirect effect on intentions of advertising and promotion awareness was mediated by the interaction of perceived prevalence and benefits mid-ban. Our findings indicate that policy measures such as the TAPA can significantly reduce adolescents' smoking intentions by signifying smoking to be less normative and socially unacceptable.",
"title": ""
},
{
"docid": "fe97095f2af18806e7032176c6ac5d89",
"text": "Targeted social engineering attacks in the form of spear phishing emails, are often the main gimmick used by attackers to infiltrate organizational networks and implant state-of-the-art Advanced Persistent Threats (APTs). Spear phishing is a complex targeted attack in which, an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different, and more powerful than normal phishing, is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, by using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack emails sent to 5,912 non victims; and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data, and achieved an overall maximum accuracy of 97.76% in identifying spear phishing emails. We used a combination of social features from LinkedIn profiles, and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28% without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. To the best of our knowledge, this is one of the first attempts to make use of a combination of stylometric features extracted from emails, and social features extracted from an online social network to detect targeted spear phishing emails.",
"title": ""
},
{
"docid": "d25c4ed5656c5972591fb7da4f86be83",
"text": "Opinion mining or sentimental analysis plays important role in the data mining process. In the proposed method, opinions are classified using various statistical measures to provide ratings to help the sentimental analysis of big data. Experimental results demonstrate the efficiency of the proposed method to help in analysis of quality of product, marketers evaluation of success of a new product launched, determine which versions of a product or service are popular and identify demographics like or dislike of product features, etc.",
"title": ""
},
{
"docid": "773813311ca5cb2f68662faab7040678",
"text": "This paper presents a latent variable structured prediction model for discriminative supervised clustering of items called the Latent Left-linking Model (LM). We present an online clustering algorithm for LM based on a feature-based item similarity function. We provide a learning framework for estimating the similarity function and present a fast stochastic gradient-based learning technique. In our experiments on coreference resolution and document clustering, LM outperforms several existing online as well as batch supervised clustering techniques.",
"title": ""
},
{
"docid": "9a283f62dad38887bc6779c3ea61979d",
"text": "Recent evidence supports that alterations in hepatocyte-derived exosomes (HDE) may play a role in the pathogenesis of drug-induced liver injury (DILI). HDE-based biomarkers also hold promise to improve the sensitivity of existing in vitro assays for predicting DILI liability. Primary human hepatocytes (PHH) provide a physiologically relevant in vitro model to explore the mechanistic and biomarker potential of HDE in DILI. However, optimal methods to study exosomes in this culture system have not been defined. Here we use HepG2 and HepaRG cells along with PHH to optimize methods for in vitro HDE research. We compared the quantity and purity of HDE enriched from HepG2 cell culture medium by 3 widely used methods: ultracentrifugation (UC), OptiPrep density gradient ultracentrifugation (ODG), and ExoQuick (EQ)-a commercially available exosome precipitation reagent. Although EQ resulted in the highest number of particles, UC resulted in more exosomes as indicated by the relative abundance of exosomal CD63 to cellular prohibitin-1 as well as the comparative absence of contaminating extravesicular material. To determine culture conditions that best supported exosome release, we also assessed the effect of Matrigel matrix overlay at concentrations ranging from 0 to 0.25 mg/ml in HepaRG cells and compared exosome release from fresh and cryopreserved PHH from same donor. Sandwich culture did not impair exosome release, and freshly prepared PHH yielded a higher number of HDE overall. Taken together, our data support the use of UC-based enrichment from fresh preparations of sandwich-cultured PHH for future studies of HDE in DILI.",
"title": ""
},
{
"docid": "8cc3af1b9bb2ed98130871c7d5bae23a",
"text": "BACKGROUND\nAnimal experiments have convincingly demonstrated that prenatal maternal stress affects pregnancy outcome and results in early programming of brain functions with permanent changes in neuroendocrine regulation and behaviour in offspring.\n\n\nAIM\nTo evaluate the existing evidence of comparable effects of prenatal stress on human pregnancy and child development.\n\n\nSTUDY DESIGN\nData sources used included a computerized literature search of PUBMED (1966-2001); Psychlit (1987-2001); and manual search of bibliographies of pertinent articles.\n\n\nRESULTS\nRecent well-controlled human studies indicate that pregnant women with high stress and anxiety levels are at increased risk for spontaneous abortion and preterm labour and for having a malformed or growth-retarded baby (reduced head circumference in particular). Evidence of long-term functional disorders after prenatal exposure to stress is limited, but retrospective studies and two prospective studies support the possibility of such effects. A comprehensive model of putative interrelationships between maternal, placental, and fetal factors is presented.\n\n\nCONCLUSIONS\nApart from the well-known negative effects of biomedical risks, maternal psychological factors may significantly contribute to pregnancy complications and unfavourable development of the (unborn) child. These problems might be reduced by specific stress reduction in high anxious pregnant women, although much more research is needed.",
"title": ""
},
{
"docid": "57cbffa039208b85df59b7b3bc1718d5",
"text": "This paper provides an in-depth analysis of the technological and social factors that led to the successful adoption of groupware by a virtual team in a educational setting. Drawing on a theoretical framework based on the concept of technological frames, we conducted an action research study to analyse the chronological sequence of events in groupware adoption. We argue that groupware adoption can be conceptualised as a three-step process of expanding and aligning individual technological frames towards groupware. The first step comprises activities that bring knowledge of new technological opportunities to the participants. The second step involves facilitating the participants to articulate and evaluate their work practices and their use of tech© Scandinavian Journal of Information Systems, 2006, 18(2):29-68 nology. The third and final step deals with the participants' commitment to, and practical enactment of, groupware technology. The alignment of individual technological frames requires the articulation and re-evaluation of experience with collaborative practice and with the use of technology. One of the key findings is that this activity cannot take place at the outset of groupware adoption.",
"title": ""
},
{
"docid": "76cef1b6d0703127c3ae33bcf71cdef8",
"text": "Risks have a significant impact on a construction project’s performance in terms of cost, time and quality. As the size and complexity of the projects have increased, an ability to manage risks throughout the construction process has become a central element preventing unwanted consequences. How risks are shared between the project actors is to a large extent governed by the procurement option and the content of the related contract documents. Therefore, selecting an appropriate project procurement option is a key issue for project actors. The overall aim of this research is to increase the understanding of risk management in the different procurement options: design-bid-build contracts, designbuild contracts and collaborative form of partnering. Deeper understanding is expected to contribute to a more effective risk management and, therefore, a better project output and better value for both clients and contractors. The study involves nine construction projects recently performed in Sweden and comprises a questionnaire survey and a series of interviews with clients, contractors and consultants involved in these construction projects. The findings of this work show a lack of an iterative approach to risk management, which is a weakness in current procurement practices. This aspect must be addressed if the risk management process is to serve projects and, thus, their clients. The absence of systematic risk management is especially noted in the programme phase, where it arguably has the greatest potential impact. The production phase is where most interest and activity are to be found. As a matter of practice, the communication of risks between the actors simply does not work to the extent that it must if projects are to be delivered with certainty, irrespective of the form of procurement. A clear connection between the procurement option and risk management in construction projects has been found. Traditional design-bid-build contracts do not create opportunities for open discussion of project risks and joint risk management. A number of drivers of and obstacles to effective risk management have been explored in the study. Every actor’s involvement in dialogue, effective communication and information exchange, open attitudes and trustful relationship are the factors that support open discussion of project risks and, therefore, contribute to successful risk management. Based on the findings, a number of recommendations facilitating more effective risk management have been developed for the industry practitioners. Keywords--Risk Management, Risk Allocation, Construction Project, Construction Contract, Design-BidBuild, Design-Build, Partnering",
"title": ""
},
{
"docid": "0f71e64aaf081b6624f442cb95b2220c",
"text": "Objective\nElectronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training.\n\n\nMethods\nThe most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification.\n\n\nResults\nWe validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference.\n\n\nConclusion\nThe accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level - phenotypic big data.",
"title": ""
},
{
"docid": "52f912cd5a8def1122d7ce6ba7f47271",
"text": "System event logs have been frequently used as a valuable resource in data-driven approaches to enhance system health and stability. A typical procedure in system log analytics is to first parse unstructured logs, and then apply data analysis on the resulting structured data. Previous work on parsing system event logs focused on offline, batch processing of raw log files. But increasingly, applications demand online monitoring and processing. We propose an online streaming method Spell, which utilizes a longest common subsequence based approach, to parse system event logs. We show how to dynamically extract log patterns from incoming logs and how to maintain a set of discovered message types in streaming fashion. Evaluation results on large real system logs demonstrate that even compared with the offline alternatives, Spell shows its superiority in terms of both efficiency and effectiveness.",
"title": ""
},
{
"docid": "0cae4ea322daaaf33a42427b69e8ba9f",
"text": "Background--By leveraging cloud services, organizations can deploy their software systems over a pool of resources. However, organizations heavily depend on their business-critical systems, which have been developed over long periods. These legacy applications are usually deployed on-premise. In recent years, research in cloud migration has been carried out. However, there is no secondary study to consolidate this research. Objective--This paper aims to identify, taxonomically classify, and systematically compare existing research on cloud migration. Method--We conducted a systematic literature review (SLR) of 23 selected studies, published from 2010 to 2013. We classified and compared the selected studies based on a characterization framework that we also introduce in this paper. Results--The research synthesis results in a knowledge base of current solutions for legacy-to-cloud migration. This review also identifies research gaps and directions for future research. Conclusion--This review reveals that cloud migration research is still in early stages of maturity, but is advancing. It identifies the needs for a migration framework to help improving the maturity level and consequently trust into cloud migration. This review shows a lack of tool support to automate migration tasks. This study also identifies needs for architectural adaptation and self-adaptive cloud-enabled systems.",
"title": ""
},
{
"docid": "e2950089f76e1509ad2aa74ea5c738eb",
"text": "In this review the knowledge status of and future research options on a green gas supply based on biogas production by co-digestion is explored. Applications and developments of the (bio)gas supply in The Netherlands have been considered, whereafter literature research has been done into the several stages from production of dairy cattle manure and biomass to green gas injection into the gas grid. An overview of a green gas supply chain has not been made before. In this study it is concluded that on installation level (micro-level) much practical knowledge is available and on macro-level knowledge about availability of biomass. But on meso-level (operations level of a green gas supply) very little research has been done until now. Future research should include the modeling of a green gas supply chain on an operations level, i.e. questions must be answered as where to build digesters based on availability of biomass. Such a model should also advise on technology of upgrading depending on scale factors. Future research might also give insight in the usability of mixing (partly upgraded) biogas with natural gas. The preconditions for mixing would depend on composition of the gas, the ratio of gases to be mixed and the requirements on the mixture.",
"title": ""
},
{
"docid": "42dfa7988f31403dba1c390741aa164c",
"text": "This study explored friendship variables in relation to body image, dietary restraint, extreme weight-loss behaviors (EWEBs), and binge eating in adolescent girls. From 523 girls, 79 friendship cliques were identified using social network analysis. Participants completed questionnaires that assessed body image concerns, eating, friendship relations, and psychological family, and media variables. Similarity was greater for within than for between friendship cliques for body image concerns, dietary restraint, and EWLBs, but not for binge eating. Cliques high in body image concerns and dieting manifested these concerns in ways consistent with a high weight/shape-preoccupied subculture. Friendship attitudes contributed significantly to the prediction of individual body image concern and eating behaviors. Use of EWLBs by friends predicted an individual's own level of use.",
"title": ""
},
{
"docid": "8bcbb5d7ae6c57d60ff34abc1259349c",
"text": "Habitat remnants in urbanized areas typically conserve biodiversity and serve the recreation and urban open-space needs of human populations. Nevertheless, these goals can be in conflict if human activity negatively affects wildlife. Hence, when considering habitat remnants as conservation refuges it is crucial to understand how human activities and land uses affect wildlife use of those and adjacent areas. We used tracking data (animal tracks and den or bed sites) on 10 animal species and information on human activity and environmental factors associated with anthropogenic disturbance in 12 habitat fragments across San Diego County, California, to examine the relationships among habitat fragment characteristics, human activity, and wildlife presence. There were no significant correlations of species presence and abundance with percent plant cover for all species or with different land-use intensities for all species, except the opossum (Didelphis virginiana), which preferred areas with intensive development. Woodrats (Neotoma spp.) and cougars (Puma concolor) were associated significantly and positively and significantly and negatively, respectively, with the presence and prominence of utilities. Woodrats were also negatively associated with the presence of horses. Raccoons (Procyon lotor) and coyotes (Canis latrans) were associated significantly and negatively and significantly and positively, respectively, with plant bulk and permanence. Cougars and gray foxes (Urocyon cinereoargenteus) were negatively associated with the presence of roads. Roadrunners (Geococcyx californianus) were positively associated with litter. The only species that had no significant correlations with any of the environmental variables were black-tailed jackrabbits (Lepus californicus) and mule deer (Odocoileus hemionus). Bobcat tracks were observed more often than gray foxes in the study area and bobcats correlated significantly only with water availability, contrasting with results from other studies. Our results appear to indicate that maintenance of habitat fragments in urban areas is of conservation benefit to some animal species, despite human activity and disturbance, as long as the fragments are large.",
"title": ""
},
{
"docid": "25ca94db4d6a4a2f24831d78d198b129",
"text": "In recent years, Sentiment Analysis has become one of the most interesting topics in AI research due to its promising commercial benefits. An important step in a Sentiment Analysis system for text mining is the preprocessing phase, but it is often underestimated and not extensively covered in literature. In this work, our aim is to highlight the importance of preprocessing techniques and show how they can improve system accuracy. In particular, some different preprocessing methods are presented and the accuracy of each of them is compared with the others. The purpose of this comparison is to evaluate which techniques are effective. In this paper, we also present the reasons why the accuracy improves, by means of a precise analysis of each method.",
"title": ""
},
{
"docid": "951ad18af2b3c9b0ca06147b0c804f65",
"text": "Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and location of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model in three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks.",
"title": ""
},
{
"docid": "ad45d9a69112010f84ff8d0fae04596d",
"text": "PURPOSE\nWe document the postpubertal outcome of feminizing genitoplasty.\n\n\nMATERIALS AND METHODS\nA total of 14 girls, mean age 13.1 years, with congenital adrenal hyperplasia were assessed under anesthesia by a pediatric urologist, plastic/reconstructive surgeon and gynecologist. Of these patients 13 had previously undergone feminizing genitoplasty in early childhood at 4 different specialist centers in the United Kingdom.\n\n\nRESULTS\nThe outcome of clitoral surgery was unsatisfactory (clitoral atrophy or prominent glans) in 6 girls, including 3 whose genitoplasty had been performed by 3 different specialist pediatric urologists. Additional vaginal surgery was necessary for normal comfortable intercourse in 13 patients. Fibrosis and scarring were most evident in those who had undergone aggressive attempts at vaginal reconstruction in infancy.\n\n\nCONCLUSIONS\nThese disappointing results, even in the hands of specialists, highlight the importance of late followup and challenge the prevailing assumption that total correction can be achieved with a single stage operation in infancy. Although simple exteriorization of a low vagina can reasonably be combined with cosmetic correction of virilized external genitalia in infancy, we now believe that in some cases it may be best to defer definitive reconstruction of the intermediate or high vagina until after puberty. The psychological issues surrounding sexuality in these patients are inadequately researched and poorly understood.",
"title": ""
},
{
"docid": "e648aa29c191885832b4deee5af9b5b5",
"text": "Development of controlled release transdermal dosage form is a complex process involving extensive research. Transdermal patches have been developed to improve clinical efficacy of the drug and to enhance patient compliance by delivering smaller amount of drug at a predetermined rate. This makes evaluation studies even more important in order to ensure their desired performance and reproducibility under the specified environmental conditions. These studies are predictive of transdermal dosage forms and can be classified into following types:",
"title": ""
}
] |
scidocsrr
|
35ecb6181280a474aa2de6c410750227
|
Parallelizing Skip Lists for In-Memory Multi-Core Database Systems
|
[
{
"docid": "5ea65d6e878d2d6853237a74dbc5a894",
"text": "We study indexing techniques for main memory, including hash indexes, binary search trees, T-trees, B+-trees, interpolation search, and binary search on arrays. In a decision-support context, our primary concerns are the lookup time, and the space occupied by the index structure. Our goal is to provide faster lookup times than binary search by paying attention to reference locality and cache behavior, without using substantial extra space. We propose a new indexing technique called “Cache-Sensitive Search Trees” (CSS-trees). Our technique stores a directory structure on top of a sorted array. Nodes in this directory have size matching the cache-line size of the machine. We store the directory in an array and do not store internal-node pointers; child nodes can be found by performing arithmetic on array offsets. We compare the algorithms based on their time and space requirements. We have implemented all of the techniques, and present a performance study on two popular modern machines. We demonstrate that with ∗This research was supported by a David and Lucile Packard Foundation Fellowship in Science and Engineering, by an NSF Young Investigator Award, by NSF grant number IIS-98-12014, and by NSF CISE award CDA-9625374. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. a small space overhead, we can reduce the cost of binary search on the array by more than a factor of two. We also show that our technique dominates B+-trees, T-trees, and binary search trees in terms of both space and time. A cache simulation verifies that the gap is due largely to cache misses.",
"title": ""
},
{
"docid": "f10660b168700e38e24110a575b5aafa",
"text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.",
"title": ""
},
{
"docid": "00f88387c8539fcbed2f6ec4f953438d",
"text": "We present Masstree, a fast key-value database designed for SMP machines. Masstree keeps all data in memory. Its main data structure is a trie-like concatenation of B+-trees, each of which handles a fixed-length slice of a variable-length key. This structure effectively handles arbitrary-length possiblybinary keys, including keys with long shared prefixes. +-tree fanout was chosen to minimize total DRAM delay when descending the tree and prefetching each tree node. Lookups use optimistic concurrency control, a read-copy-update-like technique, and do not write shared data structures; updates lock only affected nodes. Logging and checkpointing provide consistency and durability. Though some of these ideas appear elsewhere, Masstree is the first to combine them. We discuss design variants and their consequences.\n On a 16-core machine, with logging enabled and queries arriving over a network, Masstree executes more than six million simple queries per second. This performance is comparable to that of memcached, a non-persistent hash table server, and higher (often much higher) than that of VoltDB, MongoDB, and Redis.",
"title": ""
},
{
"docid": "45c006e52bdb9cfa73fd4c0ebf692dfe",
"text": "Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries. To overcome these shortcomings, we present ART, an adaptive radix tree (trie) for efficient indexing in main memory. Its lookup performance surpasses highly tuned, read-only search trees, while supporting very efficient insertions and deletions as well. At the same time, ART is very space efficient and solves the problem of excessive worst-case space consumption, which plagues most radix trees, by adaptively choosing compact and efficient data structures for internal nodes. Even though ART's performance is comparable to hash tables, it maintains the data in sorted order, which enables additional operations like range scan and prefix lookup.",
"title": ""
}
] |
[
{
"docid": "ddb36948e400c970309bd0886bfcfccb",
"text": "1 Introduction \"S pace\" and \"place\" are familiar words denoting common \"Sexperiences. We live in space. There is no space for an-< • / other building on the lot. The Great Plains look spacious. Place is security, space is freedom: we are attached to the one and long for the other. There is no place like home. What is home? It is the old homestead, the old neighborhood, home-town, or motherland. Geographers study places. Planners would like to evoke \"a sense of place.\" These are unexceptional ways of speaking. Space and place are basic components of the lived world; we take them for granted. When we think about them, however, they may assume unexpected meanings and raise questions we have not thought to ask. What is space? Let an episode in the life of the theologian Paul Tillich focus the question so that it bears on the meaning of space in experience. Tillich was born and brought up in a small town in eastern Germany before the turn of the century. The town was medieval in character. Surrounded by a wall and administered from a medieval town hall, it gave the impression of a small, protected, and self-contained world. To an imaginative child it felt narrow and restrictive. Every year, however young Tillich was able to escape with his family to the Baltic Sea. The flight to the limitless horizon and unrestricted space 3 4 Introduction of the seashore was a great event. Much later Tillich chose a place on the Atlantic Ocean for his days of retirement, a decision that undoubtedly owed much to those early experiences. As a boy Tillich was also able to escape from the narrowness of small-town life by making trips to Berlin. Visits to the big city curiously reminded him of the sea. Berlin, too, gave Tillich a feeling of openness, infinity, unrestricted space. 1 Experiences of this kind make us ponder anew the meaning of a word like \"space\" or \"spaciousness\" that we think we know well. What is a place? What gives a place its identity, its aura? These questions occurred to the physicists Niels Bohr and Werner Heisenberg when they visited Kronberg Castle in Denmark. Bohr said to Heisenberg: Isn't it strange how this castle changes as soon as one imagines that Hamlet lived here? As scientists we believe that a castle consists only of stones, and admire the way the …",
"title": ""
},
{
"docid": "a86dac3d0c47757ce8cad41499090b8e",
"text": "We propose a theory of regret regulation that distinguishes regret from related emotions, specifies the conditions under which regret is felt, the aspects of the decision that are regretted, and the behavioral implications. The theory incorporates hitherto scattered findings and ideas from psychology, economics, marketing, and related disciplines. By identifying strategies that consumers may employ to regulate anticipated and experienced regret, the theory identifies gaps in our current knowledge and thereby outlines opportunities for future research.",
"title": ""
},
{
"docid": "76cc47710ab6fa91446844368821c991",
"text": "Recommender systems (RSs) have been successfully applied to alleviate the problem of information overload and assist users' decision makings. Multi-criteria recommender systems is one of the RSs which utilizes users' multiple ratings on different aspects of the items (i.e., multi-criteria ratings) to predict user preferences. Traditional approaches simply treat these multi-criteria ratings as addons, and aggregate them together to serve for item recommendations. In this paper, we propose the novel approaches which treat criteria preferences as contextual situations. More specifically, we believe that part of multiple criteria preferences can be viewed as contexts, while others can be treated in the traditional way in multi-criteria recommender systems. We compare the recommendation performance among three settings: using all the criteria ratings in the traditional way, treating all the criteria preferences as contexts, and utilizing selected criteria ratings as contexts. Our experiments based on two real-world rating data sets reveal that treating criteria preferences as contexts can improve the performance of item recommendations, but they should be carefully selected. The hybrid model of using selected criteria preferences as contexts and the remaining ones in the traditional way is finally demonstrated as the overall winner in our experiments.",
"title": ""
},
{
"docid": "bdb41d1633c603f4b68dfe0191eb822b",
"text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.",
"title": ""
},
{
"docid": "3e9aa3bcc728f8d735f6b02e0d7f0502",
"text": "Linda Marion is a doctoral student at Drexel University. E-mail: Linda.Marion@drexel.edu. Abstract This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. There was no identifiable “digital librarian” category.",
"title": ""
},
{
"docid": "66432ab91b459c3de8e867c8214029d8",
"text": "Distributional hypothesis lies in the root of most existing word representation models by inferring word meaning from its external contexts. However, distributional models cannot handle rare and morphologically complex words very well and fail to identify some finegrained linguistic regularity as they are ignoring the word forms. On the contrary, morphology points out that words are built from some basic units, i.e., morphemes. Therefore, the meaning and function of such rare words can be inferred from the words sharing the same morphemes, and many syntactic relations can be directly identified based on the word forms. However, the limitation of morphology is that it cannot infer the relationship between two words that do not share any morphemes. Considering the advantages and limitations of both approaches, we propose two novel models to build better word representations by modeling both external contexts and internal morphemes in a jointly predictive way, called BEING and SEING. These two models can also be extended to learn phrase representations according to the distributed morphology theory. We evaluate the proposed models on similarity tasks and analogy tasks. The results demonstrate that the proposed models can outperform state-of-the-art models significantly on both word and phrase representation learning.",
"title": ""
},
{
"docid": "44ff7fa960b3c91cd66c5fbceacfba3d",
"text": "God gifted sense of vision to the human being is an important aspect of our life. But there are some unfortunate people who lack the ability of visualizing things. The visually impaired have to face many challenges in their daily life. The problem gets worse when there is an obstacle in front of them. Blind stick is an innovative stick designed for visually disabled people for improved navigation. The paper presents a theoretical system concept to provide a smart ultrasonic aid for blind people. The system is intended to provide overall measures – Artificial vision and object detection. The aim of the overall system is to provide a low cost and efficient navigation aid for a visually impaired person who gets a sense of artificial vision by providing information about the environmental scenario of static and dynamic objects around them. Ultrasonic sensors are used to calculate distance of the obstacles around the blind person to guide the user towards the available path. Output is in the form of sequence of beep sound which the blind person can hear.",
"title": ""
},
{
"docid": "5a2be4e590d31b0cb553215f11776a15",
"text": "This paper presents a review of the state of the art and a discussion on vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) applied to the inspection of power utility assets and other similar civil applications. The first part of the paper presents the authors' view on specific benefits and operation constraints associated with the use of UAVs in power industry applications. The second part cites more than 70 recent publications related to this field of application. Among them, some present complete technologies while others deal with specific subsystems relevant to the application of such mobile platforms to power line inspection. The authors close with a discussion of key factors for successful application of VTOL UAVs to power industry infrastructure inspection.",
"title": ""
},
{
"docid": "ebb8e498650191ea148ce1b97f443b21",
"text": "Many learning algorithms use a metric defined over the input s ace as a principal tool, and their performance critically depends on the quality of this metric. We address the problem of learning metrics using side-information in the form of equi valence constraints. Unlike labels, we demonstrate that this type of side-information can sometim es be automatically obtained without the need of human intervention. We show how such side-inform ation can be used to modify the representation of the data, leading to improved clustering and classification. Specifically, we present the Relevant Component Analysis (R CA) algorithm, which is a simple and efficient algorithm for learning a Mahalanobis metric. W e show that RCA is the solution of an interesting optimization problem, founded on an informa tion theoretic basis. If dimensionality reduction is allowed within RCA, we show that it is optimally accomplished by a version of Fisher’s linear discriminant that uses constraints. Moreover, unde r certain Gaussian assumptions, RCA can be viewed as a Maximum Likelihood estimation of the within cl ass covariance matrix. We conclude with extensive empirical evaluations of RCA, showing its ad v ntage over alternative methods.",
"title": ""
},
{
"docid": "08731e24a7ea5e8829b03d79ef801384",
"text": "A new power-rail ESD clamp circuit designed with PMOS as main ESD clamp device has been proposed and verified in a 65nm 1.2V CMOS process. The new proposed design with adjustable holding voltage controlled by the ESD detection circuit has better immunity against mis-trigger or transient-induced latch-on event. The layout area and the standby leakage current of this new proposed design are much superior to that of traditional RC-based power-rail ESD clamp circuit with NMOS as main ESD clamp device.",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
},
{
"docid": "3cc84fda5e04ccd36f5b632d9da3a943",
"text": "We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.",
"title": ""
},
{
"docid": "c5dacb6e808c30b0e7c603c3ee93fe2b",
"text": "Deep learning presents many opportunities for image-based plant phenotyping. Here we consider the capability of deep convolutional neural networks to perform the leaf counting task. Deep learning techniques typically require large and diverse datasets to learn generalizable models without providing a priori an engineered algorithm for performing the task. This requirement is challenging, however, for applications in the plant phenotyping field, where available datasets are often small and the costs associated with generating new data are high. In this work we propose a new method for augmenting plant phenotyping datasets using rendered images of synthetic plants. We demonstrate that the use of high-quality 3D synthetic plants to augment a dataset can improve performance on the leaf counting task. We also show that the ability of the model to generate an arbitrary distribution of phenotypes mitigates the problem of dataset shift when training and testing on different datasets. Finally, we show that real and synthetic plants are significantly interchangeable when training a neural network on the leaf counting task.",
"title": ""
},
{
"docid": "62b8d1ecb04506794f81a47fccb63269",
"text": "This paper addresses the mode collapse for generative adversarial networks (GANs). We view modes as a geometric structure of data distribution in a metric space. Under this geometric lens, we embed subsamples of the dataset from an arbitrary metric space into the `2 space, while preserving their pairwise distance distribution. Not only does this metric embedding determine the dimensionality of the latent space automatically, it also enables us to construct a mixture of Gaussians to draw latent space random vectors. We use the Gaussian mixture model in tandem with a simple augmentation of the objective function to train GANs. Every major step of our method is supported by theoretical analysis, and our experiments on real and synthetic data confirm that the generator is able to produce samples spreading over most of the modes while avoiding unwanted samples, outperforming several recent GAN variants on a number of metrics and offering new features.",
"title": ""
},
{
"docid": "a5c58dbcbf2dc9c298f5fda2721f87a0",
"text": "The purpose of this study was to investigate how university students perceive their involvement in the cyberbullying phenomenon, and its impact on their well-being. Thus, this study presents a preliminary approach of how college students’ perceived involvement in acts of cyberbullying can be measured. Firstly, Exploratory Factor Analysis (N = 349) revealed a unidimensional structure of the four scales included in the Cyberbullying Inventory for College Students. Then, Item Response Theory (N = 170) was used to analyze the unidimensionality of each scale and the interactions between participants and items. Results revealed good item reliability and Cronbach’s a for each scale. Results also showed the potential of the instrument and how college students underrated their involvement in acts of cyberbullying. Additionally, aggression types, coping strategies and sources of help to deal with cyberbullying were identified and discussed. Lastly, age, gender and course-related issues were considered in the analysis. Implications for researchers and practitioners are discussed. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "54bf44e04920bdaa7388dbbbbd34a1a8",
"text": "TIDs have been detected using various measurement techniques, including HF sounders, incoherent scatter radars, in-situ measurements, and optical techniques. However, there is still much we do not yet know or understand about TIDs. Observations of TIDs have tended to be sparse, and there is a need for additional observations to provide new scientific insight into the geophysical source phenomenology and wave propagation physics. The dense network of GPS receivers around the globe offers a relatively new data source to observe and monitor TIDs. In this paper, we use Total Electron Content (TEC) measurements from 4000 GPS receivers throughout the continental United States to observe TIDs associated with the 11 March 2011 Tohoku tsunami. The tsunami propagated across the Pacific to the US west coast over several hours, and corresponding TIDs were observed over Hawaii, and via the network of GPS receivers in the US. The network of GPS receivers in effect provides a 2D spatial map of TEC perturbations, which can be used to calculate TID parameters, including horizontal wavelength, speed, and period. Well-formed, planar traveling ionospheric disturbances were detected over the west coast of the US ten hours after the earthquake. Fast Fourier transform analysis of the observed waveforms revealed that the period of the wave was 15.1 minutes with a horizontal wavelength of 194.8 km, phase velocity of 233.0 m/s, and an azimuth of 105.2 (propagating nearly due east in the direction of the tsunami wave). These results are consistent with TID observations in airglow measurements from Hawaii earlier in the day, and with other GPS TEC observations. The vertical wavelength of the TID was found to be 43.5 km. The TIDs moved at the same velocity as the tsunami itself. Much work is still needed in order to fully understand the ocean-atmosphere coupling mechanisms, which could lead to the development of effective tsunami detection/warning systems. The work presented in this paper demonstrates a technique for the study of ionospheric perturbations that can affect navigation, communications and surveillance systems.",
"title": ""
},
{
"docid": "adeebdc680819ca992f9d53e4866122a",
"text": "Large numbers of black kites (Milvus migrans govinda) forage with house crows (Corvus splendens) at garbage dumps in many Indian cities. Such aggregation of many individuals results in aggressiveness where adoption of a suitable behavioral approach is crucial. We studied foraging behavior of black kites in dumping sites adjoining two major corporation markets of Kolkata, India. Black kites used four different foraging tactics which varied and significantly influenced foraging attempts and their success rates. Kleptoparasitism was significantly higher than autonomous foraging events; interspecific kleptoparasitism was highest in occurrence with a low success rate, while ‘autonomous-ground’ was least adopted but had the highest success rate.",
"title": ""
},
{
"docid": "ecd144226fdb065c2325a0d3131fd802",
"text": "The unknown and the invisible exploit the unwary and the uninformed for illicit financial gain and reputation damage.",
"title": ""
},
{
"docid": "df27cb7c7ab82ef44aebfeb45d6c3cf1",
"text": "Nowadays, data is created by humans as well as automatically collected by physical things, which embed electronics, software, sensors and network connectivity. Together, these entities constitute the Internet of Things (IoT). The automated analysis of its data can provide insights into previously unknown relationships between things, their environment and their users, facilitating an optimization of their behavior. Especially the real-time analysis of data, embedded into physical systems, can enable new forms of autonomous control. These in turn may lead to more sustainable applications, reducing waste and saving resources. IoT’s distributed and dynamic nature, resource constraints of sensors and embedded devices as well as the amounts of generated data are challenging even the most advanced automated data analysis methods known today. In particular, the IoT requires a new generation of distributed analysis methods. Many existing surveys have strongly focused on the centralization of data in the cloud and big data analysis, which follows the paradigm of parallel high-performance computing. However, bandwidth and energy can be too limited for the transmission of raw data, or it is prohibited due to privacy constraints. Such communication-constrained scenarios require decentralized analysis algorithms which at least partly work directly on the generating devices. After listing data-driven IoT applications, in contrast to existing surveys, we highlight the differences between cloudbased and decentralized analysis from an algorithmic perspective. We present the opportunities and challenges of research on communication-efficient decentralized analysis algorithms. Here, the focus is on the difficult scenario of vertically partitioned data, which covers common IoT use cases. The comprehensive bibliography aims at providing readers with a good starting point for their own work.",
"title": ""
},
{
"docid": "731a3a94245b67df3e362ac80f41155f",
"text": "Opportunistic networking offers many appealing application perspectives from local social-networking applications to supporting communications in remote areas or in disaster and emergency situations. Yet, despite the increasing penetration of smartphones, opportunistic networking is not feasible with most popular mobile devices. There is still no support for WiFi Ad-Hoc and protocols such as Bluetooth have severe limitations (short range, pairing). We believe that WiFi Ad-Hoc communication will not be supported by most popular mobile OSes (i.e., iOS and Android) and that WiFi Direct will not bring the desired features. Instead, we propose WiFi-Opp, a realistic opportunistic setup relying on (i) open stationary APs and (ii) spontaneous mobile APs (i.e., smartphones in AP or tethering mode), a feature used to share Internet access, which we use to enable opportunistic communications. We compare WiFi-Opp to WiFi Ad-Hoc by replaying real-world contact traces and evaluate their performance in terms of capacity for content dissemination as well as energy consumption. While achieving comparable throughput, WiFi-Opp is up to 10 times more energy efficient than its Ad-Hoc counterpart. Eventually, a proof of concept demonstrates the feasibility of WiFi-Opp, which opens new perspectives for opportunistic networking.",
"title": ""
}
] |
scidocsrr
|
a31565af70a5a6229d4b9623366bda3f
|
Creativity: Self-Referential Mistaking, Not Negating
|
[
{
"docid": "db70e6c202dc2c7f72fab88057f274af",
"text": "Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analyzed in terms of how model-building observers infer from measurements the computational capabilities embedded in nonlinear processes. An observer’s notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtlely, though, on how those resources are organized. The descriptive power of the observer’s chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data. This paper presents an overview of an inductive framework — hierarchical -machine reconstruction — in which the emergence of complexity is associated with the innovation of new computational model classes. Complexity metrics for detecting structure and quantifying emergence, along with an analysis of the constraints on the dynamics of innovation, are outlined. Illustrative examples are drawn from the onset of unpredictability in nonlinear systems, finitary nondeterministic processes, and cellular automata pattern recognition. They demonstrate how finite inference resources drive the innovation of new structures and so lead to the emergence of complexity.",
"title": ""
}
] |
[
{
"docid": "815cdf2829b60ff44b38878b16f17695",
"text": "Nowadays, Vending Machines are well known among Japan, Malaysia and Singapore. The quantity of machines in these countries is on the top worldwide. This is due to the modern lifestyles which require fast food processing with high quality. This paper describes the designing of multi select machine using Finite State Machine Model with Auto-Billing Features. Finite State Machine (FSM) modeling is the most crucial part in developing proposed model as this reduces the hardware. In this paper the process of four state (user Selection, Waiting for money insertion, product delivery and servicing) has been modeled using MEALY Machine Model. The proposed model is tested using Spartan 3 development board and its performance is compared with CMOS based machine.",
"title": ""
},
{
"docid": "2a68a1bcdd4b764f7981c76199f96cc9",
"text": "In this paper we present a method for logo detection in image collections and streams. The proposed method is based on features, extracted from reference logo images and test images. Extracted features are combined with respect to their similarity in their descriptors' space and afterwards with respect to their geometric consistency on the image plane. The contribution of this paper is a novel method for fast geometric consistency test. Using state of the art fast matching methods, it produces pairs of similar features between the test image and the reference logo image and then examines which pairs are forming a consistent geometry on both the test and the reference logo image. It is noteworthy that the proposed method is scale, rotation and translation invariant. The key advantage of the proposed method is that it exhibits a much lower computational complexity and better performance than the state of the art methods. Experimental results on large scale datasets are presented to support these statements.",
"title": ""
},
{
"docid": "07409cd81cc5f0178724297245039878",
"text": "In recent years, the number of sensor network deployments for real-life applications has rapidly increased and it is expected to expand even more in the near future. Actually, for a credible deployment in a real environment three properties need to be fulfilled, i.e., energy efficiency, scalability and reliability. In this paper we focus on IEEE 802.15.4 sensor networks and show that they can suffer from a serious MAC unreliability problem, also in an ideal environment where transmission errors never occur. This problem arises whenever power management is enabled - for improving the energy efficiency - and results in a very low delivery ratio, even when the number of nodes in the network is very low (e.g., 5). We carried out an extensive analysis, based on simulations and real measurements, to investigate the ultimate reasons of this problem. We found that it is caused by the default MAC parameter setting suggested by the 802.15.4 standard. We also found that, with a more appropriate parameter setting, it is possible to achieve the desired level of reliability (as well as a better energy efficiency). However, in some scenarios this is possible only by choosing parameter values formally not allowed by the standard.",
"title": ""
},
{
"docid": "bc0804d1fb9494d73f2b4ef39f0a5e78",
"text": "OBJECTIVE\nStudies have shown that stress can delay the healing of experimental punch biopsy wounds. This study examined the relationship between the healing of natural wounds and anxiety and depression.\n\n\nMETHODS\nFifty-three subjects (31 women and 22 men) were studied. Wound healing was rated using a five-point Likert scale. Anxiety and depression were measured using the Hospital Anxiety and Depression Scale (HAD), a well-validated psychometric questionnaire. Psychological and clinical wound assessments were each conducted with raters and subjects blinded to the results of the other assessment.\n\n\nRESULTS\nDelayed healing was associated with a higher mean HAD score (p = .0348). Higher HAD anxiety and depression scores (indicating \"caseness\") were also associated with delayed healing (p = .0476 and p = .0311, respectively). Patients scoring in the top 50% of total HAD scores were four times more likely to have delayed healing than those scoring in the bottom 50% (confidence interval = 1.06-15.08).\n\n\nCONCLUSIONS\nThe relationship between healing of chronic wounds and anxiety and depression as measured by the HAD was statistically significant. Further research in the form of a longitudinal study and/or an interventional study is proposed.",
"title": ""
},
{
"docid": "f97491ae5324d737aadc42e3c402d838",
"text": "Diese Arbeit verknüpft Lernziele, didaktische Methoden und Techniken zur Bearbeitung und Bewertung von Programmieraufgaben in E-Learning-Plattformen. Das Ziel ist dabei, sowohl eine Bewertungsgrundlage für den Einsatz einer Plattform für beliebige Lehrveranstaltungen der Programmierlehre zu schaffen als auch ein Gesamtkonzept für eine idealisierte E-Learning-Anwendung zu präsentieren. Dabei steht bewusst die Kompetenzbildung der Studierenden im Zentrum der Überlegungen – die dafür benötigte technische Unterstützung wird aus den didaktischen Methoden zur Vermittlung der Kompetenzen abgeleitet.",
"title": ""
},
{
"docid": "a5f7a243e68212e211d9d89da06ceae1",
"text": "A new technique to achieve a circularly polarized probe-fed single-layer microstrip-patch antenna with a wideband axial ratio is proposed. The antenna is a modified form of the conventional E-shaped patch, used to broaden the impedance bandwidth of a basic patch antenna. By letting the two parallel slots of the E patch be unequal, asymmetry is introduced. This leads to two orthogonal currents on the patch and, hence, circularly polarized fields are excited. The proposed technique exhibits the advantage of the simplicity of the E-shaped patch design, which requires only the slot lengths, widths, and position parameters to be determined. Investigations of the effect of various dimensions of the antenna have been carried out via parametric analysis. Based on these investigations, a design procedure for a circularly polarized E-shaped patch was developed. A prototype has been designed, following the suggested procedure for the IEEE 802.11big WLAN band. The performance of the fabricated antenna was measured and compared with simulation results. Various examples with different substrate thicknesses and material types are presented and compared with the recently proposed circularly polarized U-slot patch antennas.",
"title": ""
},
{
"docid": "42aca9ffdd5c0d2a2f310280d12afa1a",
"text": "Communication skills courses are an essential component of undergraduate and postgraduate training and effective communication skills are actively promoted by medical defence organisations as a means of decreasing litigation. This article discusses active listening, a difficult discipline for anyone to practise, and examines why this is particularly so for doctors. It draws together themes from key literature in the field of communication skills, and examines how these theories apply in general practice.",
"title": ""
},
{
"docid": "0f2b09447d0cf8189264eda201a5dd8e",
"text": "Owing to its critical role in human cognition, the neural basis of language has occupied the interest of neurologists, psychologists, and cognitive neuroscientists for over 150 years. The language system was initially conceptualized as a left hemisphere circuit with discrete comprehension and production centers. Since then, advances in neuroscience have allowed a much more complex and nuanced understanding of the neural organization of language to emerge. In the course of mapping this complicated architecture, one especially important discovery has been the degree to which the map itself can change. Evidence from lesion studies, neuroimaging, and neuromodulation research demonstrates that the representation of language in the brain is altered by injury of the normal language network, that it changes over the course of language recovery, and that it is influenced by successful treatment interventions. This special issue of RNN is devoted to plasticity in the language system and focuses on changes that occur in the setting of left hemisphere stroke, the most common cause of aphasia. Aphasia—the acquired loss of language ability—is one of the most common and debilitating cognitive consequences of stroke, affecting approximately 20–40% of stroke survivors and impacting",
"title": ""
},
{
"docid": "2d8f76cef3d0c11441bbc8f5487588cb",
"text": "Abstract. It seems natural to assume that the more It seems natural to assume that the more closely robots come to resemble people, the more likely they are to elicit the kinds of responses people direct toward each other. However, subtle flaws in appearance and movement only seem eerie in very humanlike robots. This uncanny phenomenon may be symptomatic of entities that elicit a model of a human other but do not measure up to it. If so, a very humanlike robot may provide the best means of finding out what kinds of behavior are perceived as human, since deviations from a human other are more obvious. In pursuing this line of inquiry, it is essential to identify the mechanisms involved in evaluations of human likeness. One hypothesis is that an uncanny robot elicits an innate fear of death and culturally-supported defenses for coping with death’s inevitability. An experiment, which borrows from the methods of terror management research, was performed to test this hypothesis. Across all questions subjects who were exposed to a still image of an uncanny humanlike robot had on average a heightened preference for worldview supporters and a diminished preference for worldview threats relative to the control group.",
"title": ""
},
{
"docid": "5bee27378a98ff5872f7ae5e899f81e2",
"text": "An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition. It includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. A Bayes linear classifier and an improved dynamic time-warping algorithm are utilized in the framework. In addition, a prototype system, including a wearable gesture sensing device (embedded with a three-axis accelerometer and four SEMG sensors) and an application program with the proposed algorithmic framework for a mobile phone, is developed to realize gesture-based real-time interaction. With the device worn on the forearm, the user is able to manipulate a mobile phone using 19 predefined gestures or even personalized ones. Results suggest that the developed prototype responded to each gesture instruction within 300 ms on the mobile phone, with the average accuracy of 95.0% in user-dependent testing and 89.6% in user-independent testing. Such performance during the interaction testing, along with positive user experience questionnaire feedback, demonstrates the utility of the framework.",
"title": ""
},
{
"docid": "3f1a9a0e601187836177a54d5fa7cbeb",
"text": "For the last twenty years, different kinds of information systems are developed for different purposes, depending on the need of the business . In today’s business world, there are varieties of information systems such as transaction processing systems (TPS), office automation systems (OAS), management information systems (MIS), decision support system (DSS), and executive information systems (EIS), Expert System (ES) etc . Each plays a different role in organizational hierarchy and management operations. This study attempts to explain the role of each type of information systems in business organizations.",
"title": ""
},
{
"docid": "a814fedf9bedf31911f8db43b0d494a5",
"text": "A critical period for language learning is often defined as a sharp decline in learning outcomes with age. This study examines the relevance of the critical period for English speaking proficiency among immigrants in the US. It uses microdata from the 2000 US Census, a model of language acquisition, and a flexible specification of an estimating equation based on 64 age-at-migration dichotomous variables. Self-reported English speaking proficiency among immigrants declines more-or-less monotonically with age at migration, and this relationship is not characterized by any sharp decline or discontinuity that might be considered consistent with a “critical” period. The findings are robust across the various immigrant samples, and between the genders. (110 words).",
"title": ""
},
{
"docid": "97f2f0dd427c5f18dae178bc2fd620ba",
"text": "NOTICE The contents of this report reflect the views of the author, who is responsible for the facts and accuracy of the data presented herein. The contents do not necessarily reflect policy of the Department of Transportation. This report does not constitute a standard, specification, or regulation. Abstract This report summarizes the historical development of the resistance factors developed for the geotechnical foundation design sections of the AASHTO LRFD Bridge Design Specifications, and recommends how to specifically implement recent developments in resistance factors for geotechnical foundation design. In addition, recommendations regarding the load factor for downdrag loads, based on statistical analysis of available load test data and reliability theory, are provided. The scope of this report is limited to shallow and deep foundation geotechnical design at the strength limit state. 17. Forward With the advent of the AASHTO Load and Resistance Factor (LRFD) Bridge Design Specifications in 1992, there has been considerable focus on the geotechnical aspects of those specifications, since most geotechnical engineers are unfamiliar with LRFD concepts. This is especially true regarding the quantification of the level of safety needed for design. Up to the time of writing of this report, the geotechnical profession has typically used safety factors within an allowable stress design (ASD) framework (also termed working stress design, or WSD). For those agencies that use Load Factor Design (LFD), the safety factors for the foundation design are used in combination with factored loads in accordance with the AASHTO Standard Specifications for Highway Bridges (2002). The adaptation of geotechnical design and the associated safety factors to what would become the first edition of the AASHTO LRFD Bridge Design Specifications began in earnest with the publication of the results of NCHRP Project 24-4 as NCHRP Report 343 (Barker, et al., 1991). The details of the calibrations they conducted are provided in an unpublished Appendix to that report (Appendix A). This is the primary source of resistance factors for foundation design as currently published in AASHTO (2004). Since that report was published, changes have occurred in the specifications regarding load factors and design methodology that have required re-evaluation of the resistance factors. Furthermore, new studies have been or are being conducted that are yet to be implemented in the LRFD specifications. In 2002, the AASHTO Bridge Subcommittee initiated an effort, with the help of the Federal Highway Administration (FHWA), to rewrite the foundation design sections of the AASHTO …",
"title": ""
},
{
"docid": "127ef38020617fda8598971b3f10926f",
"text": "Web services are important for creating distributed applications on the Web. In fact, they're a key enabler for service-oriented architectures that focus on service reuse and interoperability. The World Wide Web Consortium (W3C) has recently finished work on two important standards for describing Web services the Web Services Description Language (WSDL) 2.0 and Semantic Annotations for WSDL and XML Schema (SAWSDL). Here, the authors discuss the latter, which is the first standard for adding semantics to Web service descriptions.",
"title": ""
},
{
"docid": "3956a033021add41b1f4e80864e3b196",
"text": "Recently, most of malicious web pages include obfuscated codes in order to circumvent the detection of signature-based detection systems .It is difficult to decide whether the sting is obfuscated because the shape of obfuscated strings are changed continuously. In this paper, we propose a novel methodology that can detect obfuscated strings in the malicious web pages. We extracted three metrics as rules for detecting obfuscated strings by analyzing patterns of normal and malicious JavaScript codes. They are N-gram, Entropy, and Word Size. N-gram checks how many each byte code is used in strings. Entropy checks distributed of used byte codes. Word size checks whether there is used very long string. Based on the metrics, we implemented a practical tool for our methodology and evaluated it using read malicious web pages. The experiment results showed that our methodology can detect obfuscated strings in web pages effectively.",
"title": ""
},
{
"docid": "56d4abc61377dc2afa3ded978d318646",
"text": "Clothoids, i.e. curves Z(s) in RI whoem curvatures xes) are linear fitting functions of arclength ., have been nued for some time for curve fitting purposes in engineering applications. The first part of the paper deals with some basic interpolation problems for lothoids and studies the existence and uniqueness of their solutions. The second part discusses curve fitting problems for clothoidal spines, i.e. C2-carves, which are composed of finitely many clothoids. An iterative method is described for finding a clothoidal spline Z(aJ passing through given Points Z1 cR 2 . i = 0,1L.. n+ 1, which minimizes the integral frX(S) 2 ds.",
"title": ""
},
{
"docid": "9581483f301b3522b88f6690b2668217",
"text": "AI researchers employ not only the scientific method, but also methodology from mathematics and engineering. However, the use of the scientific method – specifically hypothesis testing – in AI is typically conducted in service of engineering objectives. Growing interest in topics such as fairness and algorithmic bias show that engineering-focused questions only comprise a subset of the important questions about AI systems. This results in the AI Knowledge Gap: the number of unique AI systems grows faster than the number of studies that characterize these systems’ behavior. To close this gap, we argue that the study of AI could benefit from the greater inclusion of researchers who are well positioned to formulate and test hypotheses about the behavior of AI systems. We examine the barriers preventing social and behavioral scientists from conducting such studies. Our diagnosis suggests that accelerating the scientific study of AI systems requires new incentives for academia and industry, mediated by new tools and institutions. To address these needs, we propose a two-sided marketplace called TuringBox. On one side, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks designed to evaluate and characterize algorithmic behavior. We discuss this market’s potential to democratize the scientific study of AI behavior, and thus narrow the AI Knowledge Gap. 1 The Many Facets of AI Research Although AI is a sub-discipline of computer science, AI researchers do not exclusively use the scientific method in their work. For example, the methods used by early AI researchers often drew from logic, a subfield of mathematics, and are distinct from the scientific method we think of today. Indeed AI has adopted many techniques and approaches over time. In this section, we distinguish and explore the history of these ∗Equal contribution. methodologies with a particular emphasis on characterizing the evolving science of AI.",
"title": ""
},
{
"docid": "45f6bb33f098a61c4166e3b942501604",
"text": "Estimating human age automatically via facial image analysis has lots of potential real-world applications, such as human computer interaction and multimedia communication. However, it is still a challenging problem for the existing computer vision systems to automatically and effectively estimate human ages. The aging process is determined by not only the person's gene, but also many external factors, such as health, living style, living location, and weather conditions. Males and females may also age differently. The current age estimation performance is still not good enough for practical use and more effort has to be put into this research direction. In this paper, we introduce the age manifold learning scheme for extracting face aging features and design a locally adjusted robust regressor for learning and prediction of human ages. The novel approach improves the age estimation accuracy significantly over all previous methods. The merit of the proposed approaches for image-based age estimation is shown by extensive experiments on a large internal age database and the public available FG-NET database.",
"title": ""
},
{
"docid": "86b330069b20d410eb2186479fe7f500",
"text": "Pattern classification systems are commonly used in adversarial applications, like biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. As this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities,whose exploitation may severely affect their performance, and consequently limit their practical utility. Extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction, which has not yet been pursued in a systematic way. In this paper, we address one of the main open issues: evaluating at design phase the security of pattern classifiers, namely, the performance degradation under potential attacks they may incur during operation. We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier’s behavior in adversarial environments, and lead to better design choices micans infotech +91 90036 28940 +91 94435 11725 MICANS INFOTECH, NO: 8 , 100 FEET ROAD,PONDICHERRY. WWW.MICANSINFOTECH.COM ; MICANSINFOTECH@GMAIL.COM +91 90036 28940; +91 94435 11725 IEEE Projects 100% WORKING CODE + DOCUMENTATION+ EXPLAINATION – BEST PRICE LOW PRICE GUARANTEED",
"title": ""
}
] |
scidocsrr
|
b3d10d3125708a84dcb956d775f80f92
|
Non-inverting buck-boost power-factor-correction converter with wide input-voltage-range applications
|
[
{
"docid": "8d495d909cb2a93929b34d85c371693b",
"text": "1 This work is supported by Philips Research, Briarcliff Manor, NY, through Colorado Power Electronics Center Abstract – Single-switch step-up/step-down converters, such as the buck-boost, SEPIC and Cuk, have relatively high voltage and current stresses on components compared to the buck or the boost converter. A buck-boost converter with two independently controlled switches can work as a boost or as a buck converter depending on input-output conditions, and thus achieves lower stresses on components. Using the converter synthesis method from [1], families of two-switch buck-boost converters are generated, including several new converter topologies. The two-switch buck-boost converters are evaluated and compared in terms of component stresses in universal-input power-factor-corrector applications. Among them, one new two-switch converter is identified that has low inductor conduction losses (50% of the boost converter), low inductor volt-seconds (72% of the boost converter), and about the same switch conduction losses and voltage stresses as the boost converter.",
"title": ""
}
] |
[
{
"docid": "73bf9a956ea7a10648851c85ef740db0",
"text": "Printed atmospheric spark gaps as ESD-protection on PCBs are examined. At first an introduction to the physic behind spark gaps. Afterward the time lag (response time) vs. voltage is measured with high load impedance. The dependable clamp voltage (will be defined later) is measured as a function of the load impedance and the local field in the air gap is simulated with FIT simulation software. At last the observed results are discussed on the basic of the physic and the simulations.",
"title": ""
},
{
"docid": "836001910512e8bd7f71f4ac7448a6dd",
"text": "We have developed a high-speed 1310-nm Al-MQW buried-hetero laser having 29-GHz bandwidth (BW). The laser was used to compare 28-Gbaud four-level pulse-amplitude-modulation (PAM4) and 56-Gb/s nonreturn to zero (NRZ) transmission performance. In both cases, it was possible to meet the 10-km link budget, however, 56-Gb/s NRZ operation achieved a 2-dB better sensitivity, attributable to the wide BW of the directly modulated laser and the larger eye amplitude for the NRZ format. On the other hand, the advantages for 28-Gbaud PAM4 were the reduced BW requirement for both the transmitter and the receiver PIN diode, which enabled us to use a lower bias to the laser and a PIN with a higher responsivity, or conversely enable the possibility of high temperature operation with lower power consumption. Both formats showed a negative dispersion penalty compared to back-to-back sensitivity using a negative fiber dispersion of -60 ps/nm, which was expected from the observed chirp characteristics of the laser. The reliability study up to 11 600 h at 85 °C under accelerated conditions showed no decrease in the output power at a constant bias of 60 mA.",
"title": ""
},
{
"docid": "39d67fe0ea08adf64d1122d4c173a9af",
"text": "Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex background and occlusion pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance of varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function as the measure to effectively handle the occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.",
"title": ""
},
{
"docid": "162ad6b8d48f5d6c76067d25b320a947",
"text": "Image Understanding is fundamental to systems that need to extract contents and infer concepts from images. In this paper, we develop an architecture for understanding images, through which a system can recognize the content and the underlying concepts of an image and, reason and answer questions about both using a visual module, a reasoning module, and a commonsense knowledge base. In this architecture, visual data combines with background knowledge and; iterates through visual and reasoning modules to answer questions about an image or to generate a textual description of an image. We first provide motivations of such a Deep Image Understanding architecture and then, we describe the necessary components it should include. We also introduce our own preliminary implementation of this architecture and empirically show how this more generic implementation compares with a recent end-to-end Neural approach on specific applications. We address the knowledge-representation challenge in such an architecture by representing an image using a directed labeled graph (called Scene Description Graph). Our implementation uses generic visual recognition techniques and commonsense reasoning1 to extract such graphs from images. Our experiments show that the extracted graphs capture the syntactic and semantic content of an image with reasonable accuracy.",
"title": ""
},
{
"docid": "67733a15509caa529f2dd6068461c91d",
"text": "We used broadband ferromagnetic resonance (FMR) spectroscopy to measure the second- and fourth-order perpendicular magnetic anisotropies in Ta/(t) Co<sub>60</sub>Fe<sub>20</sub>B<sub>20</sub>/MgO layers over a Co<sub>60</sub>Fe<sub>20</sub>B<sub>20</sub> thickness range of 5.0 nm ≥ t ≥ 0.8 nm. Fort > 1.0 nm, the easy axis is in the plane of the film, but when t <; 1.0 nm, the easy axis is directed perpendicular to the surface. However, the presence of a substantial higher order perpendicular anisotropy results in an easy cone state when t = 1.0 nm. Angular-dependent FMR measurements verify the presence of the easy cone state. Measurement of the spectroscopic g-factor via FMR for both the in-plane and out-of-plane geometries suggests a significant change in electronic and/or physical structure at t ≈ 1.0 nm thickness.",
"title": ""
},
{
"docid": "e766e5a45936c53767898c591e6126f8",
"text": "Video completion is a computer vision technique to recover the missing values in video sequences by filling the unknown regions with the known information. In recent research, tensor completion, a generalization of matrix completion for higher order data, emerges as a new solution to estimate the missing information in video with the assumption that the video frames are homogenous and correlated. However, each video clip often stores the heterogeneous episodes and the correlations among all video frames are not high. Thus, the regular tenor completion methods are not suitable to recover the video missing values in practical applications. To solve this problem, we propose a novel spatiallytemporally consistent tensor completion method for recovering the video missing data. Instead of minimizing the average of the trace norms of all matrices unfolded along each mode of a tensor data, we introduce a new smoothness regularization along video time direction to utilize the temporal information between consecutive video frames. Meanwhile, we also minimize the trace norm of each individual video frame to employ the spatial correlations among pixels. Different to previous tensor completion approaches, our new method can keep the spatio-temporal consistency in video and do not assume the global correlation in video frames. Thus, the proposed method can be applied to the general and practical video completion applications. Our method shows promising results in all evaluations on both 3D biomedical image sequence and video benchmark data sets. Video completion is the process of filling in missing pixels or replacing undesirable pixels in a video. The missing values in a video can be caused by many situations, e.g., the natural noise in video capture equipment, the occlusion from the obstacles in environment, segmenting or removing interested objects from videos. Video completion is of great importance to many applications such as video repairing and editing, movie post-production (e.g., remove unwanted objects), etc. Missing information recovery in images is called inpaint∗To whom all correspondence should be addressed. This work was partially supported by US NSF IIS-1117965, IIS-1302675, IIS-1344152. Copyright c © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ing, which is usually accomplished by inferring or guessing the missing information from the surrounding regions, i.e. the spatial information. Video completion can be considered as an extension of 2D image inpainting to 3D. Video completion uses the information from the past and the future frames to fill the pixels in the missing region, i.e. the spatiotemporal information, which has been getting increasing attention in recent years. In computer vision, an important application area of artificial intelligence, there are many video completion algorithms. The most representative approaches include video inpainting, analogous to image inpainting (Bertalmio, Bertozzi, and Sapiro 2001), motion layer video completion, which splits the video sequence into different motion layers and completes each motion layer separately (Shiratori et al. 2006), space-time video completion, which is based on texture synthesis and is good but slow (Wexler, Shechtman, and Irani 2004), and video repairing, which repairs static background with motion layers and repairs moving foreground using model alignment (Jia et al. 2004). 
Many video completion methods are less effective because the video is often treated as a set of independent 2D images. Although the temporal independence assumption simplifies the problem, losing temporal consistency in recovered pixels leads to the unsatisfactory performance. On the other hand, temporal information can improve the video completion results (Wexler, Shechtman, and Irani 2004; Matsushita et al. 2005), but to exploit it the computational speeds of most methods are significantly reduced. Thus, how to efficiently and effectively utilize both spatial and temporal information is a challenging problem in video completion. In most recent work, Liu et al. (2013) estimated the missing data in video via tensor completion which was generalized from matrix completion methods. In these methods, the rank or rank approximation (trace norm) is used, as a powerful tool, to capture the global information. The tensor completion method (Liu et al. 2013) minimizes the trace norm of a tensor, i.e. the average of the trace norms of all matrices unfolded along each mode. Thus, it assumes the video frames are highly correlated in the temporal direction. If the video records homogenous episodes and all frames describe the similar information, this assumption has no problem. However, one video clip usually includes multiple different episodes and the frames from different episodes",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "9e4adad2e248895d80f28cf6134f68c1",
"text": "Maltodextrin (MX) is an ingredient in high demand in the food industry, mainly for its useful physical properties which depend on the dextrose equivalent (DE). The DE has however been shown to be an inaccurate parameter for predicting the performance of the MXs in technological applications, hence commercial MXs were characterized by mass spectrometry (MS) to determine their molecular weight distribution (MWD) and degree of polymerization (DP). Samples were subjected to different water activities (aw). Water adsorption was similar at low aw, but radically increased with the DP at higher aw. The decomposition temperature (Td) showed some variations attributed to the thermal hydrolysis induced by the large amount of adsorbed water and the supplied heat. The glass transition temperature (Tg) linearly decreased with both, aw and DP. The microstructural analysis by X-ray diffraction showed that MXs did not crystallize with the adsorption of water, preserving their amorphous structure. The optical micrographs showed radical changes in the overall appearance of the MXs, indicating a transition from a glassy to a rubbery state. Based on these characterizations, different technological applications for the MXs were suggested.",
"title": ""
},
{
"docid": "616d20b1359cc1cf4fcfb1a0318d721e",
"text": "The Burj Khalifa Project is the tallest structure ever built by man; the tower is 828 meters tall and compromise of 162 floors above grade and 3 basement levels. Early integration of aerodynamic shaping and wind engineering played a major role in the architectural massing and design of this multi-use tower, where mitigating and taming the dynamic wind effects was one of the most important design criteria set forth at the onset of the project design. This paper provides brief description of the tower structural systems, focuses on the key issues considered in construction planning of the key structural components, and briefly outlines the execution of one of the most comprehensive structural health monitoring program in tall buildings.",
"title": ""
},
{
"docid": "462e3be75902bf8a39104c75ec2bea53",
"text": "A new model for associative memory, based on a correlation matrix, is suggested. In this model information is accumulated on memory elements as products of component data. Denoting a key vector by q(p), and the data associated with it by another vector x(p), the pairs (q(p), x(p)) are memorized in the form of a matrix {see the Equation in PDF File} where c is a constant. A randomly selected subset of the elements of Mxq can also be used for memorizing. The recalling of a particular datum x(r) is made by a transformation x(r)=Mxqq(r). This model is failure tolerant and facilitates associative search of information; these are properties that are usually assigned to holographic memories. Two classes of memories are discussed: a complete correlation matrix memory (CCMM), and randomly organized incomplete correlation matrix memories (ICMM). The data recalled from the latter are stochastic variables but the fidelity of recall is shown to have a deterministic limit if the number of memory elements grows without limits. A special case of correlation matrix memories is the auto-associative memory in which any part of the memorized information can be used as a key. The memories are selective with respect to accumulated data. The ICMM exhibits adaptive improvement under certain circumstances. It is also suggested that correlation matrix memories could be applied for the classification of data.",
"title": ""
},
{
"docid": "3a2c37a96407b79f6ddd9d38f9b79741",
"text": "This paper proposes a network-aware resource management scheme that improves the quality of experience (QoE) for adaptive video streaming in CDNs and Information-Centric Networks (ICN) in general, and Dynamic Adaptive Streaming over HTTP (DASH) in particular. By utilizing the DASH manifest, the network (by way of a logically centralized controller) computes the available link resources and schedules the chunk dissemination to edge caches ahead of the end-user's requests. Our approach is optimized for multi-rate DASH videos. We implemented our resource management scheme, and demonstrated that in the scenario when network conditions evolve quickly, our approach can maintain smooth high quality playback. We show on actual video server data and in our own simulation environment that a significant reduction in peak bandwidth of 20% can be achieved using our approach.",
"title": ""
},
{
"docid": "a5a0e1b984eac30c225190c0cba63ab4",
"text": "The traditional intrusion detection system is not flexible in providing security in cloud computing because of the distributed structure of cloud computing. This paper surveys the intrusion detection and prevention techniques and possible solutions in Host Based and Network Based Intrusion Detection System. It discusses DDoS attacks in Cloud environment. Different Intrusion Detection techniques are also discussed namely anomaly based techniques and signature based techniques. It also surveys different approaches of Intrusion Prevention System.",
"title": ""
},
{
"docid": "16932e01fdea801f28ec6c4194f70352",
"text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.",
"title": ""
},
{
"docid": "60bdd255a19784ed2d19550222e61b69",
"text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.",
"title": ""
},
{
"docid": "050540ce54975f34b752ddb25b001bf4",
"text": "This paper describes a custom Low Power Motor Controller (LoPoMoCo) that was developed for a 34-axis robot system currently being designed for Minimally Invasive Surgery (MIS) of the upper airways. The robot system will includes three robot arms equipped with small snake-like mechanisms, which challenge the controller design due to their requirement for precise sensing and control of low motor currents. The controller hardware also provides accurate velocity estimate from incremental encoder feedback and can selectively be operated in speed or torque control mode. The experimental results demonstrate that the controller can measure applied loads with a resolution of , even though the transmission is nonbackdriveable. Although the controller was designed for this particular robot, it is applicable to other systems requiring torque monitoring capabilities.",
"title": ""
},
{
"docid": "a48278ee8a21a33ff87b66248c6b0b8a",
"text": "We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components. The temporal dependencies within and across different tasks are encoded succinctly as recurrent connections. The dialog system responses beyond SLU component are also exploited as effective external features. We show with extensive experiments on a number of datasets that the proposed joint learning framework generates state-of-the-art results for both classification and tagging, and the contextual modeling based on recurrent and external features significantly improves the context sensitivity of SLU models.",
"title": ""
},
{
"docid": "6aee20acd54b5d6f2399106075c9fee1",
"text": "BACKGROUND\nThe aim of this study was to compare the effectiveness of the ampicillin plus ceftriaxone (AC) and ampicillin plus gentamicin (AG) combinations for treating Enterococcus faecalis infective endocarditis (EFIE).\n\n\nMETHODS\nAn observational, nonrandomized, comparative multicenter cohort study was conducted at 17 Spanish and 1 Italian hospitals. Consecutive adult patients diagnosed of EFIE were included. Outcome measurements were death during treatment and at 3 months of follow-up, adverse events requiring treatment withdrawal, treatment failure requiring a change of antimicrobials, and relapse.\n\n\nRESULTS\nA larger percentage of AC-treated patients (n = 159) had previous chronic renal failure than AG-treated patients (n = 87) (33% vs 16%, P = .004), and AC patients had a higher incidence of cancer (18% vs 7%, P = .015), transplantation (6% vs 0%, P = .040), and healthcare-acquired infection (59% vs 40%, P = .006). Between AC and AG-treated EFIE patients, there were no differences in mortality while on antimicrobial treatment (22% vs 21%, P = .81) or at 3-month follow-up (8% vs 7%, P = .72), in treatment failure requiring a change in antimicrobials (1% vs 2%, P = .54), or in relapses (3% vs 4%, P = .67). However, interruption of antibiotic treatment due to adverse events was much more frequent in AG-treated patients than in those receiving AC (25% vs 1%, P < .001), mainly due to new renal failure (≥25% increase in baseline creatinine concentration; 23% vs 0%, P < .001).\n\n\nCONCLUSIONS\nAC appears as effective as AG for treating EFIE patients and can be used with virtually no risk of renal failure and regardless of the high-level aminoglycoside resistance status of E. faecalis.",
"title": ""
},
{
"docid": "65af21566422d9f0a11f07d43d7ead13",
"text": "Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which is originally proposed for object recognition. Different from traditional convolutional neural networks (CNN), this model has intra-layer recurrent connections in the convolutional layers. Therefore each convolutional layer becomes a two-dimensional recurrent neural network. The units receive constant feed-forward inputs from the previous layer and recurrent inputs from their neighborhoods. While recurrent iterations proceed, the region of context captured by each unit expands. In this way, feature extraction and context modulation are seamlessly integrated, which is different from typical methods that entail separate modules for the two steps. To further utilize the context, a multi-scale RCNN is proposed. Over two benchmark datasets, Standford Background and Sift Flow, the model outperforms many state-of-the-art models in accuracy and efficiency.",
"title": ""
},
{
"docid": "101bcd956dcdb0fff3ecf78aa841314a",
"text": "HCI research has increasingly examined how sensing technologies can help people capture and visualize data about their health-related behaviors. Yet, few systems help people reflect more fundamentally on the factors that influence behaviors such as physical activity (PA). To address this research gap, we take a novel approach, examining how such reflections can be stimulated through a medium that generations of families have used for reflection and teaching: storytelling. Through observations and interviews, we studied how 13 families interacted with a low-fidelity prototype, and their attitudes towards this tool. Our prototype used storytelling and interactive prompts to scaffold reflection on factors that impact children's PA. We contribute to HCI research by characterizing how families interacted with a story-driven reflection tool, and how such a tool can encourage critical processes for behavior change. Informed by the Transtheoretical Model, we present design implications for reflective informatics systems.",
"title": ""
},
{
"docid": "378c3b785db68bd5efdf1ad026c901ea",
"text": "Intrinsically switched tunable filters are switched on and off using the tuning elements that tune their center frequencies and/or bandwidths, without requiring an increase in the tuning range of the tuning elements. Because external RF switches are not needed, substantial improvements in insertion loss, linearity, dc power consumption, control complexity, size, and weight are possible compared to conventional approaches. An intrinsically switched varactor-tuned bandstop filter and bandpass filter bank are demonstrated here for the first time. The intrinsically switched bandstop filter prototype has a second-order notch response with more than 50 dB of rejection continuously tunable from 665 to 1000 MHz (50%) with negligible passband ripple in the intrinsic off state. The intrinsically switched tunable bandpass filter bank prototype, comprised of three third-order bandpass filters, has a constant 50-MHz bandwidth response continuously tunable from 740 to 1644 MHz (122%) with less than 5 dB of passband insertion loss and more than 40 dB of isolation between bands.",
"title": ""
}
] |
scidocsrr
|
32a2c438985fb1e2c9d3e19b35a3da50
|
Stochastic properties of the random waypoint mobility model: epoch length, direction distribution, and cell change rate
|
[
{
"docid": "d339f7d94334a2ccc256c29c63fd936f",
"text": "The random waypoint model is a frequently used mobility model for simulation–based studies of wireless ad hoc networks. This paper investigates the spatial node distribution that results from using this model. We show and interpret simulation results on a square and circular system area, derive an analytical expression of the expected node distribution in one dimension, and give an approximation for the two–dimensional case. Finally, the concept of attraction areas and a modified random waypoint model, the random borderpoint model, is analyzed by simulation.",
"title": ""
}
] |
[
{
"docid": "9a8133fbfe2c9422b6962dd88505a9e9",
"text": "The amino acid sequences of 301 glycosyl hydrolases and related enzymes have been compared. A total of 291 sequences corresponding to 39 EC entries could be classified into 35 families. Only ten sequences (less than 5% of the sample) could not be assigned to any family. With the sequences available for this analysis, 18 families were found to be monospecific (containing only one EC number) and 17 were found to be polyspecific (containing at least two EC numbers). Implications on the folding characteristics and mechanism of action of these enzymes and on the evolution of carbohydrate metabolism are discussed. With the steady increase in sequence and structural data, it is suggested that the enzyme classification system should perhaps be revised.",
"title": ""
},
{
"docid": "3b8e716e658176cebfbdb313c8cb22ac",
"text": "To realize the vision of Internet-of-Things (IoT), numerous IoT devices have been developed for improving daily lives, in which smart home devices are among the most popular ones. Smart locks rely on smartphones to ease the burden of physical key management and keep tracking the door opening/close status, the security of which have aroused great interests from the security community. As security is of utmost importance for the IoT environment, we try to investigate the security of IoT by examining smart lock security. Specifically, we focus on analyzing the security of August smart lock. The threat models are illustrated for attacking August smart lock. We then demonstrate several practical attacks based on the threat models toward August smart lock including handshake key leakage, owner account leakage, personal information leakage, and denial-of-service (DoS) attacks. We also propose the corresponding defense methods to counteract these attacks.",
"title": ""
},
{
"docid": "8fd97add7e3b48bad9fd82dc01422e59",
"text": "Anaerobic nitrate-dependent Fe(II) oxidation is widespread in various environments and is known to be performed by both heterotrophic and autotrophic microorganisms. Although Fe(II) oxidation is predominantly biological under acidic conditions, to date most of the studies on nitrate-dependent Fe(II) oxidation were from environments of circumneutral pH. The present study was conducted in Lake Grosse Fuchskuhle, a moderately acidic ecosystem receiving humic acids from an adjacent bog, with the objective of identifying, characterizing and enumerating the microorganisms responsible for this process. The incubations of sediment under chemolithotrophic nitrate-dependent Fe(II)-oxidizing conditions have shown the enrichment of TM3 group of uncultured Actinobacteria. A time-course experiment done on these Actinobacteria showed a consumption of Fe(II) and nitrate in accordance with the expected stoichiometry (1:0.2) required for nitrate-dependent Fe(II) oxidation. Quantifications done by most probable number showed the presence of 1 × 104 autotrophic and 1 × 107 heterotrophic nitrate-dependent Fe(II) oxidizers per gram fresh weight of sediment. The analysis of microbial community by 16S rRNA gene amplicon pyrosequencing showed that these actinobacterial sequences correspond to ∼0.6% of bacterial 16S rRNA gene sequences. Stable isotope probing using 13CO2 was performed with the lake sediment and showed labeling of these Actinobacteria. This indicated that they might be important autotrophs in this environment. Although these Actinobacteria are not dominant members of the sediment microbial community, they could be of functional significance due to their contribution to the regeneration of Fe(III), which has a critical role as an electron acceptor for anaerobic microorganisms mineralizing sediment organic matter. To the best of our knowledge this is the first study to show the autotrophic nitrate-dependent Fe(II)-oxidizing nature of TM3 group of uncultured Actinobacteria.",
"title": ""
},
{
"docid": "2601ff3b4af85883017d8fb7e28e5faa",
"text": "The heterogeneous nature of the applications, technologies and equipment that today's networks have to support has made the management of such infrastructures a complex task. The Software-Defined Networking (SDN) paradigm has emerged as a promising solution to reduce this complexity through the creation of a unified control plane independent of specific vendor equipment. However, designing a SDN-based solution for network resource management raises several challenges as it should exhibit flexibility, scalability and adaptability. In this paper, we present a new SDN-based management and control framework for fixed backbone networks, which provides support for both static and dynamic resource management applications. The framework consists of three layers which interact with each other through a set of interfaces. We develop a placement algorithm to determine the allocation of managers and controllers in the proposed distributed management and control layer. We then show how this layer can satisfy the requirements of two specific applications for adaptive load-balancing and energy management purposes.",
"title": ""
},
{
"docid": "73d3c622c98fba72ae2156df52c860d3",
"text": "We suggest analyzing neural networks through the prism of space constraints. We observe that most training algorithms applied in practice use bounded memory, which enables us to use a new notion introduced in the study of spacetime tradeoffs that we call mixing complexity. This notion was devised in order to measure the (in)ability to learn using a bounded-memory algorithm. In this paper we describe how we use mixing complexity to obtain new results on what can and cannot be learned using neural networks.",
"title": ""
},
{
"docid": "bd376c939a5935838cbec64c55ff88ee",
"text": "We consider the problem of autonomous navigation in an unstr ctu ed outdoor environment. The goal is for a small outdoor robot to come into a ne w area, learn about and map its environment, and move to a given goal at modest spe ed (1 m/s). This problem is especially difficult in outdoor, off-road enviro nments, where tall grass, shadows, deadfall, and other obstacles predominate. Not su rpri ingly, the biggest challenge is acquiring and using a reliable map of the new are a. Although work in outdoor navigation has preferentially used laser rangefi d rs [13, 2, 6], we use stereo vision as the main sensor. Vision sensors allow us to u e more distant objects as landmarks for navigation, and to learn and use color and te xture models of the environment, in looking further ahead than is possible with range sensors alone. In this paper we show how to build a consistent, globally corr ect map in real time, using a combination of the following vision-based tec hniques:",
"title": ""
},
{
"docid": "81840452c52d61024ba5830437e6a2c4",
"text": "Motivated by a real world application, we study the multiple knapsack problem with assignment restrictions (MKAR). We are given a set of items, each with a positive real weight, and a set of knapsacks, each with a positive real capacity. In addition, for each item a set of knapsacks that can hold that item is specified. In a feasible assignment of items to knapsacks, each item is assigned to at most one knapsack, assignment restrictions are satisfied, and knapsack capacities are not exceeded. We consider the objectives of maximizing assigned weight and minimizing utilized capacity. We focus on obtaining approximate solutions in polynomial computational time. We show that simple greedy approaches yield 1/3-approximation algorithms for the objective of maximizing assigned weight. We give two different 1/2-approximation algorithms: the first one solves single knapsack problems successively and the second one is based on rounding the LP relaxation solution. For the bicriteria problem of minimizing utilized capacity subject to a minimum requirement on assigned weight, we give an (1/3,2)-approximation algorithm.",
"title": ""
},
{
"docid": "e0f797ff66a81b88bbc452e86864d7bc",
"text": "A key challenge in radar micro-Doppler classification is the difficulty in obtaining a large amount of training data due to costs in time and human resources. Small training datasets limit the depth of deep neural networks (DNNs), and, hence, attainable classification accuracy. In this work, a novel method for diversifying Kinect-based motion capture (MOCAP) simulations of human micro-Doppler to span a wider range of potential observations, e.g. speed, body size, and style, is proposed. By applying three transformations, a small set of MOCAP measurements is expanded to generate a large training dataset for network initialization of a 30-layer deep residual neural network. Results show that the proposed training methodology and residual DNN yield improved bottleneck feature performance and the highest overall classification accuracy among other DNN architectures, including transfer learning from the 1.5 million sample ImageNet database.",
"title": ""
},
{
"docid": "89513d2cf137e60bf7f341362de2ba84",
"text": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.",
"title": ""
},
{
"docid": "f0efa93a150ca1be1351277ea30e370b",
"text": "We describe an effort to train a RoboCup soccer-playing agent playing in the Simulation League using casebased reasoning. The agent learns (builds a case base) by observing the behaviour of existing players and determining the spatial configuration of the objects the existing players pay attention to. The agent can then use the case base to determine what actions it should perform given similar spatial configurations. When observing a simple goal-driven, rule-based, stateless agent, the trained player appears to imitate the behaviour of the original and experimental results confirm the observed behaviour. The process requires little human intervention and can be used to train agents exhibiting diverse behaviour in an automated manner.",
"title": ""
},
{
"docid": "15cde62b96f8c87bedb6f721befa3ae4",
"text": "To investigate the dispersion mechanism(s) of ternary dry powder inhaler (DPI) formulations by comparison of the interparticulate adhesions and in vitro performance of a number of carrier–drug–fines combinations. The relative levels of adhesion and cohesion between a lactose carrier and a number of drugs and fine excipients were quantified using the cohesion–adhesion balance (CAB) approach to atomic force microscopy. The in vitro performance of formulations produced using these materials was quantified and the particle size distribution of the aerosol clouds produced from these formulations determined by laser diffraction. Comparison between CAB ratios and formulation performance suggested that the improvement in performance brought about by the addition of fines to which the drug was more adhesive than cohesive might have been due to the formation of agglomerates of drug and fines particles. This was supported by aerosol cloud particle size data. The mechanism(s) underlying the improved performance of ternary formulations where the drug was more cohesive than adhesive to the fines was unclear. The performance of ternary DPI formulations might be increased by the preferential formation of drug–fines agglomerates, which might be subject to greater deagglomeration forces during aerosolisation than smaller agglomerates, thus producing better formulation performance.",
"title": ""
},
{
"docid": "c3838ee9c296364d2bea785556dfd2fb",
"text": "Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.",
"title": ""
},
{
"docid": "c11e1e156835d98707c383711f4e3953",
"text": "We present an approach for automatically generating provably correct abstractions from C source code that are useful for practical implementation verification. The abstractions are easier for a human verification engineer to reason about than the implementation and increase the productivity of interactive code proof. We guarantee soundness by automatically generating proofs that the abstractions are correct.\n In particular, we show two key abstractions that are critical for verifying systems-level C code: automatically turning potentially overflowing machine-word arithmetic into ideal integers, and transforming low-level C pointer reasoning into separate abstract heaps. Previous work carrying out such transformations has either done so using unverified translations, or required significant proof engineering effort.\n We implement these abstractions in an existing proof-producing specification transformation framework named AutoCorres, developed in Isabelle/HOL, and demonstrate its effectiveness in a number of case studies. We show scalability on multiple OS microkernels, and we show how our changes to AutoCorres improve productivity for total correctness by porting an existing high-level verification of the Schorr-Waite algorithm to a low-level C implementation with minimal effort.",
"title": ""
},
{
"docid": "be220ab28653645e5186a8cefc120215",
"text": "OBJECTIVE\nBoluses are used in high-energy radiotherapy in order to overcome the skin sparing effect. In practice though, commonly used flat boluses fail to make a perfect contact with the irregular surface of the patient's skin, resulting in air gaps. Hence, we fabricated a customized bolus using a 3-dimensional (3D) printer and evaluated its feasibility for radiotherapy.\n\n\nMETHODS\nWe designed two kinds of bolus for production on a 3D printer, one of which was the 3D printed flat bolus for the Blue water phantom and the other was a 3D printed customized bolus for the RANDO phantom. The 3D printed flat bolus was fabricated to verify its physical quality. The resulting 3D printed flat bolus was evaluated by assessing dosimetric parameters such as D1.5 cm, D5 cm, and D10 cm. The 3D printed customized bolus was then fabricated, and its quality and clinical feasibility were evaluated by visual inspection and by assessing dosimetric parameters such as Dmax, Dmin, Dmean, D90%, and V90%.\n\n\nRESULTS\nThe dosimetric parameters of the resulting 3D printed flat bolus showed that it was a useful dose escalating material, equivalent to a commercially available flat bolus. Analysis of the dosimetric parameters of the 3D printed customized bolus demonstrated that it is provided good dose escalation and good contact with the irregular surface of the RANDO phantom.\n\n\nCONCLUSIONS\nA customized bolus produced using a 3D printer could potentially replace commercially available flat boluses.",
"title": ""
},
{
"docid": "07295446da02d11750e05f496be44089",
"text": "As robots become more ubiquitous and capable of performing complex tasks, the importance of enabling untrained users to interact with them has increased. In response, unconstrained natural-language interaction with robots has emerged as a significant research area. We discuss the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system. Our approach learns a parser based on example pairs of English commands and corresponding control language expressions. We evaluate this approach in the context of following route instructions through an indoor environment, and demonstrate that our system can learn to translate English commands into sequences of desired actions, while correctly capturing the semantic intent of statements involving complex control structures. The procedural nature of our formal representation allows a robot to interpret route instructions online while moving through a previously unknown environment. 1 Motivation and Problem Statement In this paper, we discuss our work on grounding natural language–interpreting human language into semantically informed structures in the context of robotic perception and actuation. To this end, we explore the question of interpreting natural language commands so they can be executed by a robot, specifically in the context of following route instructions through a map. Natural language (NL) is a rich, intuitive mechanism by which humans can interact with systems around them, offering sufficient signal to support robot task planning. Human route instructions include complex language constructs, which robots must be able to execute without being given a fully specified world model such as a map. Our goal is to investigate whether it is possible to learn a parser that produces · All authors are affiliated with the University of Washington, Seattle, USA. · Email: {cynthia,eherbst,lsz,fox}@cs.washington.edu",
"title": ""
},
{
"docid": "0c177af9c2fffa6c4c667d1b4a4d3d79",
"text": "In the last decade, a large number of different software component models have been developed, with different aims and using different principles and technologies. This has resulted in a number of models which have many similarities, but also principal differences, and in many cases unclear concepts. Component-based development has not succeeded in providing standard principles, as has, for example, object-oriented development. In order to increase the understanding of the concepts and to differentiate component models more easily, this paper identifies, discusses, and characterizes fundamental principles of component models and provides a Component Model Classification Framework based on these principles. Further, the paper classifies a large number of component models using this framework.",
"title": ""
},
{
"docid": "4b7eb2b8f4d4ec135ab1978b4811eca4",
"text": "This paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy from monocular image sequences is developed by effectively integrating the object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared with existing methods, the proposed approach does not require any manual initialization for tracking, runs much faster than the state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach.",
"title": ""
},
{
"docid": "7462d739a80bf654d6f9df78b4a6e6e3",
"text": "Multi-class pattern classification has many applications including text document classification, speech recognition, object recognition, etc. Multi-class pattern classification using neural networks is not a trivial extension from two-class neural networks. This paper presents a comprehensive and competitive study in multi-class neural learning with focuses on issues including neural network architecture, encoding schemes, training methodology and training time complexity. Our study includes multi-class pattern classification using either a system of multiple neural networks or a single neural network, and modeling pattern classes using one-against-all, one-against-one, one-againsthigher-order, and P-against-Q. We also discuss implementations of these approaches and analyze training time complexity associated with each approach. We evaluate six different neural network system architectures for multi-class pattern classification along the dimensions of imbalanced data, large number of pattern classes, large vs. small training data through experiments conducted on well-known benchmark data. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "df114f9765d4c0bba7371c243bad8608",
"text": "CAPTCHAs are automated tests to tell computers and humans apart. They are designed to be easily solvable by humans, but unsolvable by machines. With Convolutional Neural Networks these tests can also be solved automatically. However, the strength of CNNs relies on the training data that the classifier is learnt on and especially on the size of the training set. Hence, it is intractable to solve the problem with CNNs in case of insufficient training data. We propose an Active Deep Learning strategy that makes use of the ability to gain new training data for free without any human intervention which is possible in the special case of CAPTCHAs. We discuss how to choose the new samples to re-train the network and present results on an auto-generated CAPTCHA dataset. Our approach dramatically improves the performance of the network if we initially have only few labeled training data.",
"title": ""
}
] |
scidocsrr
|
513ecae3dde0ac74c17e01d0aad02629
|
Automatic program repair with evolutionary computation
|
[
{
"docid": "c15492fea3db1af99bc8a04bdff71fdc",
"text": "The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.",
"title": ""
},
{
"docid": "552545ea9de47c26e1626efc4a0f201e",
"text": "For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the \"alphabet\" used to describe those systems.",
"title": ""
},
{
"docid": "f8742208fef05beb86d77f1d5b5d25ef",
"text": "The latest book on Genetic Programming, Poli, Langdon and McPhee’s (with contributions from John R. Koza) A Field Guide to Genetic Programming represents an exciting landmark with the authors choosing to make their work freely available by publishing using a form of the Creative Commons License[1]. In so doing they have created a must-read resource which is, to use their words, ’aimed at both newcomers and old-timers’. The book is freely available from the authors companion website [2] and Lulu.com [3] in both pdf and html form. For those who desire the more traditional page turning exercise, inexpensive printed copies can be ordered from Lulu.com. The Field Guides companion website also provides a link to the TinyGP code printed over eight pages of Appendix B, and a Discussion Group centered around the book. The book is divided into four parts with fourteen chapters and two appendices. Part I introduces the basics of Genetic Programming, Part II overviews more advanced topics, Part III highlights some of the real world applications and discusses issues facing the GP researcher or practitioner, while Part IV contains two appendices, the first introducing some key resources and the second appendix describes the TinyGP code. The pdf and html forms of the book have an especially useful feature, providing links to the articles available on-line at the time of publication, and to bibtex entries of the GP Bibliography. Following an overview of the book in chapter 1, chapter 2 introduces the basic concepts of GP focusing on the tree representation, initialisation, selection, and the search operators. Chapter 3 is centered around the preparatory steps in applying GP to a problem, which is followed by an outline of a sample run of GP on a simple instance of symbolic regression in Chapter 4. Overall these chapters provide a compact and useful introduction to GP. The first of the Advanced GP chapters in Part II looks at alternative strategies for initialisation and the search operators for tree-based GP. An overview of Modular, Grammatical and Developmental GP is provided in Chapter 6. While the chapter title",
"title": ""
},
{
"docid": "2b471e61a6b95221d9ca9c740660a726",
"text": "We propose a low-overhead sampling infrastructure for gathering information from the executions experienced by a program's user community. Several example applications illustrate ways to use sampled instrumentation to isolate bugs. Assertion-dense code can be transformed to share the cost of assertions among many users. Lacking assertions, broad guesses can be made about predicates that predict program errors and a process of elimination used to whittle these down to the true bug. Finally, even for non-deterministic bugs such as memory corruption, statistical modeling based on logistic regression allows us to identify program behaviors that are strongly correlated with failure and are therefore likely places to look for the error.",
"title": ""
}
] |
[
{
"docid": "11a28e11ba6e7352713b8ee63291cd9c",
"text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.",
"title": ""
},
{
"docid": "054b5be56ae07c58b846cf59667734fc",
"text": "Optical motion capture systems have become a widely used technology in various fields, such as augmented reality, robotics, movie production, etc. Such systems use a large number of cameras to triangulate the position of optical markers. The marker positions are estimated with high accuracy. However, especially when tracking articulated bodies, a fraction of the markers in each timestep is missing from the reconstruction. In this paper, we propose to use a neural network approach to learn how human motion is temporally and spatially correlated, and reconstruct missing markers positions through this model. We experiment with two different models, one LSTM-based and one time-window-based. Both methods produce state-of-the-art results, while working online, as opposed to most of the alternative methods, which require the complete sequence to be known. The implementation is publicly available at https://github.com/Svitozar/NN-for-Missing-Marker-Reconstruction.",
"title": ""
},
{
"docid": "0f25a4cd8a0a94f6666caadb6d4be3d3",
"text": "The tradeoff between the switching energy and electro-thermal robustness is explored for 1.2-kV SiC MOSFET, silicon power MOSFET, and 900-V CoolMOS body diodes at different temperatures. The maximum forward current for dynamic avalanche breakdown is decreased with increasing supply voltage and temperature for all technologies. The CoolMOS exhibited the largest latch-up current followed by the SiC MOSFET and silicon power MOSFET; however, when expressed as current density, the SiC MOSFET comes first followed by the CoolMOS and silicon power MOSFET. For the CoolMOS, the alternating p and n pillars of the superjunctions in the drift region suppress BJT latch-up during reverse recovery by minimizing lateral currents and providing low-resistance paths for carriers. Hence, the temperature dependence of the latch-up current for CoolMOS was the lowest. The switching energy of the CoolMOS body diode is the largest because of its superjunction architecture which means the drift region have higher doping, hence more reverse charge. In spite of having a higher thermal resistance, the SiC MOSFET has approximately the same latch-up current while exhibiting the lowest switching energy because of the least reverse charge. The silicon power MOSFET exhibits intermediate performance on switching energy with lowest dynamic latching current.",
"title": ""
},
{
"docid": "c02fb121399e1ed82458fb62179d2560",
"text": "Most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. This approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. To overcome this problem, we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. Each tier builds on the previous tier’s entity cluster output. Further, our model propagates global information by sharing attributes (e.g., gender and number) across mentions in the same cluster. This cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. The framework is highly modular: new coreference modules can be plugged in without any change to the other modules. In spite of its simplicity, our approach outperforms many state-of-the-art supervised and unsupervised models on several standard corpora. This suggests that sievebased approaches could be applied to other NLP tasks.",
"title": ""
},
{
"docid": "44e7ba0be5275047587e9afd22f1de2a",
"text": "Dialogue state tracking plays an important role in statistical dialogue management. Domain-independent rule-based approaches are attractive due to their efficiency, portability and interpretability. However, recent rule-based models are still not quite competitive to statistical tracking approaches. In this paper, a novel framework is proposed to formulate rule-based models in a general way. In the framework, a rule is considered as a special kind of polynomial function satisfying certain linear constraints. Under some particular definitions and assumptions, rule-based models can be seen as feasible solutions of an integer linear programming problem. Experiments showed that the proposed approach can not only achieve competitive performance compared to statistical approaches, but also have good generalisation ability. It is one of the only two entries that outperformed all the four baselines in the third Dialog State Tracking Challenge.",
"title": ""
},
{
"docid": "42bc10578e76a0d006ee5d11484b1488",
"text": "In this paper, we present a wrapper-based acoustic group feature selection system for the INTERSPEECH 2015 Computational Paralinguistics Challenge (ComParE) 2015, Eating Condition (EC) Sub-challenge. The wrapper-based method has two components: the feature subset evaluation and the feature space search. The feature subset evaluation is performed using Support Vector Machine (SVM) classifiers. The wrapper method combined with complex algorithms such as SVM is computationally intensive. To address this, the feature space search uses Best Incremental Ranked Subset (BIRS), a fast and efficient algorithm. Moreover, we investigate considering the feature space in meaningful groups rather than individually. The acoustic feature space is partitioned into groups with each group representing a Low Level Descriptor (LLD). This partitioning reduces the time complexity of the search algorithm and makes the problem more tractable while attempting to gain insight into the relevant acoustic feature groups. Our wrapper-based system achieves improvement over the challenge baseline on the EC Sub-challenge test set using a variant of BIRS algorithm and LLD groups.",
"title": ""
},
{
"docid": "9f32b1e95e163c96ebccb2596a2edb8d",
"text": "This paper is devoted to the control of a cable driven redundant parallel manipulator, which is a challenging problem due the optimal resolution of its inherent redundancy. Additionally to complicated forward kinematics, having a wide workspace makes it difficult to directly measure the pose of the end-effector. The goal of the controller is trajectory tracking in a large and singular free workspace, and to guarantee that the cables are always under tension. A control topology is proposed in this paper which is capable to fulfill the stringent positioning requirements for these type of manipulators. Closed-loop performance of various control topologies are compared by simulation of the closed-loop dynamics of the KNTU CDRPM, while the equations of parallel manipulator dynamics are implicit in structure and only special integration routines can be used for their integration. It is shown that the proposed joint space controller is capable to satisfy the required tracking performance, despite the inherent limitation of task space pose measurement.",
"title": ""
},
{
"docid": "4bf6c59cdd91d60cf6802ae99d84c700",
"text": "This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block’s contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade’s use of two Plan 9 file systems.",
"title": ""
},
{
"docid": "19c5d5563e41fac1fd29833662ad0b6c",
"text": "This paper discusses our contribution to the third RTE Challenge – the SALSA RTE system. It builds on an earlier system based on a relatively deep linguistic analysis, which we complement with a shallow component based on word overlap. We evaluate their (combined) performance on various data sets. However, earlier observations that the combination of features improves the overall accuracy could be replicated only partly.",
"title": ""
},
{
"docid": "17cc2f4ae2286d36748b203492d406e6",
"text": "In this paper, we consider sentence simplification as a special form of translation with the complex sentence as the source and the simple sentence as the target. We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from the Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems.",
"title": ""
},
{
"docid": "04644fb390a5d3690295551491f63167",
"text": "Massive graphs, such as online social networks and communication networks, have become common today. To efficiently analyze such large graphs, many distributed graph computing systems have been developed. These systems employ the \"think like a vertex\" programming paradigm, where a program proceeds in iterations and at each iteration, vertices exchange messages with each other. However, using Pregel's simple message passing mechanism, some vertices may send/receive significantly more messages than others due to either the high degree of these vertices or the logic of the algorithm used. This forms the communication bottleneck and leads to imbalanced workload among machines in the cluster. In this paper, we propose two effective message reduction techniques: (1)vertex mirroring with message combining, and (2)an additional request-respond API. These techniques not only reduce the total number of messages exchanged through the network, but also bound the number of messages sent/received by any single vertex. We theoretically analyze the effectiveness of our techniques, and implement them on top of our open-source Pregel implementation called Pregel+. Our experiments on various large real graphs demonstrate that our message reduction techniques significantly improve the performance of distributed graph computation.",
"title": ""
},
{
"docid": "ce3cd1edffb0754e55658daaafe18df6",
"text": "Fact finders in legal trials often need to evaluate a mass of weak, contradictory and ambiguous evidence. There are two general ways to accomplish this task: by holistically forming a coherent mental representation of the case, or by atomistically assessing the probative value of each item of evidence and integrating the values according to an algorithm. Parallel constraint satisfaction (PCS) models of cognitive coherence posit that a coherent mental representation is created by discounting contradicting evidence, inflating supporting evidence and interpreting ambivalent evidence in a way coherent with the emerging decision. This leads to inflated support for whichever hypothesis the fact finder accepts as true. Using a Bayesian network to model the direct dependencies between the evidence, the intermediate hypotheses and the main hypothesis, parameterised with (conditional) subjective probabilities elicited from the subjects, I demonstrate experimentally how an atomistic evaluation of evidence leads to a convergence of the computed posterior degrees of belief in the guilt of the defendant of those who convict and those who acquit. The atomistic evaluation preserves the inherent uncertainty that largely disappears in a holistic evaluation. Since the fact finders’ posterior degree of belief in the guilt of the defendant is the relevant standard of proof in many legal systems, this result implies that using an atomistic evaluation of evidence, the threshold level of posterior belief in guilt required for a conviction may often not be reached. ⃰ Max Planck Institute for Research on Collective Goods, Bonn",
"title": ""
},
{
"docid": "b49698c3df4e432285448103cda7f2dd",
"text": "Acoustic emission (AE)-signal-based techniques have recently been attracting researchers' attention to rotational machine health monitoring and diagnostics due to the advantages of the AE signals over the extensively used vibration signals. Unlike vibration-based methods, the AE-based techniques are in their infant stage of development. From the perspective of machine health monitoring and fault detection, developing an AE-based methodology is important. In this paper, a methodology for rotational machine health monitoring and fault detection using empirical mode decomposition (EMD)-based AE feature quantification is presented. The methodology incorporates a threshold-based denoising technique into EMD to increase the signal-to-noise ratio of the AE bursts. Multiple features are extracted from the denoised signals and then fused into a single compressed AE feature. The compressed AE features are then used for fault detection based on a statistical method. A gear fault detection case study is conducted on a notional split-torque gearbox using AE signals to demonstrate the effectiveness of the methodology. A fault detection performance comparison using the compressed AE features with the existing EMD-based AE features reported in the literature is also conducted.",
"title": ""
},
{
"docid": "7e08ddffc3a04c6dac886e14b7e93907",
"text": "The paper introduces a penalized matrix estimation procedure aiming at solutions which are sparse and low-rank at the same time. Such structures arise in the context of social networks or protein interactions where underlying graphs have adjacency matrices which are block-diagonal in the appropriate basis. We introduce a convex mixed penalty which involves `1-norm and trace norm simultaneously. We obtain an oracle inequality which indicates how the two effects interact according to the nature of the target matrix. We bound generalization error in the link prediction problem. We also develop proximal descent strategies to solve the optimization problem efficiently and evaluate performance on synthetic and real data sets.",
"title": ""
},
{
"docid": "43184dfe77050618402900bc309203d5",
"text": "A prototype of Air Gap RLSA has been designed and simulated using hybrid air gap and FR4 dielectric material. The 28% wide bandwidth has been recorded through this approach. A 12.35dBi directive gain also recorded from the simulation. The 13.3 degree beamwidth of the radiation pattern is sufficient for high directional application. Since the proposed application was for Point to Point Link, this study concluded the Air Gap RLSA is a new candidate for this application.",
"title": ""
},
{
"docid": "2488c17b39dd3904e2f17448a8519817",
"text": "Young healthy participants spontaneously use different strategies in a virtual radial maze, an adaptation of a task typically used with rodents. Functional magnetic resonance imaging confirmed previously that people who used spatial memory strategies showed increased activity in the hippocampus, whereas response strategies were associated with activity in the caudate nucleus. Here, voxel based morphometry was used to identify brain regions covarying with the navigational strategies used by individuals. Results showed that spatial learners had significantly more gray matter in the hippocampus and less gray matter in the caudate nucleus compared with response learners. Furthermore, the gray matter in the hippocampus was negatively correlated to the gray matter in the caudate nucleus, suggesting a competitive interaction between these two brain areas. In a second analysis, the gray matter of regions known to be anatomically connected to the hippocampus, such as the amygdala, parahippocampal, perirhinal, entorhinal and orbitofrontal cortices were shown to covary with gray matter in the hippocampus. Because low gray matter in the hippocampus is a risk factor for Alzheimer's disease, these results have important implications for intervention programs that aim at functional recovery in these brain areas. In addition, these data suggest that spatial strategies may provide protective effects against degeneration of the hippocampus that occurs with normal aging.",
"title": ""
},
{
"docid": "570eca9884edb7e4a03ed95763be20aa",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "b23d7f18a7abcaa6d3984ef7ca0609e0",
"text": "FFT algorithm is the popular software design for spectrum analyzer, but doesnpsilat work well for parallel hardware system due to complex calculation and huge memory requirement. Observing the key components of a spectrum analyzer are the intensities for respective frequencies, we propose a Goertzel algorithm to directly extract the intensity factors for respective frequency components in the input signal. Goertzel algorithm dispenses with the memory for z-1 and z-2 processing, and only needs two multipliers and three adders for real number calculation. In this paper, we present the spectrum extraction algorithm and implement a spectrum extractor with high speed and low area consumption in a FPGA (field programmable gate array) chip. It proves the feasibility of implementing a handheld concurrent multi-channel real-time spectrum analysis IP into a low gate counts and low power consumption CPLD (complex programmable logic device) chip.",
"title": ""
},
{
"docid": "1a4d07d9a48668f7fa3bcf301c25f7f2",
"text": "A novel low-loss planar dielectric waveguide is proposed. It is based on a high-permittivity dielectric slab parallel to a metal ground. The guiding channel is limited at the sides by a number of air holes which are lowering the effective permittivity. A mode with the electric field primarily parallel to the ground plane is used, similar to the E11x mode of an insulated image guide. A rather thick gap layer between the ground and the high-permittivity slab makes this mode to show the highest effective permittivity. The paper discusses the mode dispersion behaviour and presents measured characteristics of a power divider circuit operating at a frequency of about 8 GHz. Low leakage of about 14% is observed at the discontinuities forming the power divider. Using a compact dipole antenna structure, excitation efficiency of more than 90% is obtained.",
"title": ""
},
{
"docid": "3fc2ec702c66501de0eea9f5f0cac511",
"text": "Emotional eating is a change in consumption of food in response to emotional stimuli, and has been linked in negative physical and psychological outcomes. Observers have noticed over the years a correlation between emotions, mood and food choice, in ways that vary from strong and overt to subtle and subconscious. Specific moods such as anger, fear, sadness and joy have been found to affect eating responses and eating itself can play a role in influencing one’s emotions. With such an obvious link between emotions and eating behavior, the research over the years continues to delve further into the phenomenon. This includes investigating individuals of different weight categories, as well as children, adolescents and parenting styles. EXPLORING THE ASSOCIATION BETWEEN EMOTIONS AND EATING BEHAVIOR v",
"title": ""
}
] |
scidocsrr
|
03c192db794d741241a84ccd46c5ba9b
|
Learning time-series shapelets
|
[
{
"docid": "8609f49cc78acc1ba25e83c8e68040a6",
"text": "Time series shapelets are small, local patterns in a time series that are highly predictive of a class and are thus very useful features for building classifiers and for certain visualization and summarization tasks. While shapelets were introduced only recently, they have already seen significant adoption and extension in the community. Despite their immense potential as a data mining primitive, there are two important limitations of shapelets. First, their expressiveness is limited to simple binary presence/absence questions. Second, even though shapelets are computed offline, the time taken to compute them is significant. In this work, we address the latter problem by introducing a novel algorithm that finds shapelets in less time than current methods by an order of magnitude. Our algorithm is based on intelligent caching and reuse of computations, and the admissible pruning of the search space. Because our algorithm is so fast, it creates an opportunity to consider more expressive shapelet queries. In particular, we show for the first time an augmented shapelet representation that distinguishes the data based on conjunctions or disjunctions of shapelets. We call our novel representation Logical-Shapelets. We demonstrate the efficiency of our approach on the classic benchmark datasets used for these problems, and show several case studies where logical shapelets significantly outperform the original shapelet representation and other time series classification techniques. We demonstrate the utility of our ideas in domains as diverse as gesture recognition, robotics, and biometrics.",
"title": ""
}
] |
[
{
"docid": "058515182c568c8df202542f28c15203",
"text": "Plant diseases have turned into a dilemma as it can cause significant reduction in both quality and quantity of agricultural products. Automatic detection of plant diseases is an essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect the symptoms of diseases as soon as they appear on plant leaves. The proposed system is a software solution for automatic detection and classification of plant leaf diseases. The developed processing scheme consists of four main steps, first a color transformation structure for the input RGB image is created, then the green pixels are masked and removed using specific threshold value followed by segmentation process, the texture statistics are computed for the useful segments, finally the extracted features are passed through the classifier. The proposed algorithm’s efficiency can successfully detect and classify the examined diseases with an accuracy of 94%. Experimental results on a database of about 500 plant leaves confirm the robustness of the proposed approach.",
"title": ""
},
{
"docid": "9d7a441731e9d0c62dd452ccb3d19f7b",
"text": " In many countries, especially in under developed and developing countries proper health care service is a major concern. The health centers are far and even the medical personnel are deficient when compared to the requirement of the people. For this reason, health services for people who are unhealthy and need health monitoring on regular basis is like impossible. This makes the health monitoring of healthy people left far more behind. In order for citizens not to be deprived of the primary care it is always desirable to implement some system to solve this issue. The application of Internet of Things (IoT) is wide and has been implemented in various areas like security, intelligent transport system, smart cities, smart factories and health. This paper focuses on the application of IoT in health care system and proposes a novel architecture of making use of an IoT concept under fog computing. The proposed architecture can be used to acknowledge the underlying problem of deficient clinic-centric health system and change it to smart patientcentric health system.",
"title": ""
},
{
"docid": "472946ba2e62d3d8a0a42c7e908bf18f",
"text": "BACKGROUND\nAntidepressants, aiming at monoaminergic neurotransmission, exhibit delayed onset of action, limited efficacy, and poor compliance. Glutamatergic neurotransmission is involved in depression. However, it is unclear whether enhancement of the N-methyl-D-aspartate (NMDA) subtype glutamate receptor can be a treatment for depression.\n\n\nMETHODS\nWe studied sarcosine, a glycine transporter-I inhibitor that potentiates NMDA function, in animal models and in depressed patients. We investigated its effects in forced swim test, tail suspension test, elevated plus maze test, novelty-suppressed feeding test, and chronic unpredictable stress test in rats and conducted a 6-week randomized, double-blinded, citalopram-controlled trial in 40 patients with major depressive disorder. Clinical efficacy and side effects were assessed biweekly, with the main outcomes of Hamilton Depression Rating Scale, Global Assessment of Function, and remission rate. The time course of response and dropout rates was also compared.\n\n\nRESULTS\nSarcosine decreased immobility in the forced swim test and tail suspension test, reduced the latency to feed in the novelty-suppressed feeding test, and reversed behavioral deficits caused by chronic unpredictable stress test, which are characteristics for an antidepressant. In the clinical study, sarcosine substantially improved scores of Hamilton Depression Rating Scale, Clinical Global Impression, and Global Assessment of Function more than citalopram treatment. Sarcosine-treated patients were much more likely and quicker to remit and less likely to drop out. Sarcosine was well tolerated without significant side effects.\n\n\nCONCLUSIONS\nOur preliminary findings suggest that enhancing NMDA function can improve depression-like behaviors in rodent models and in human depression. Establishment of glycine transporter-I inhibition as a novel treatment for depression waits for confirmation by further proof-of-principle studies.",
"title": ""
},
{
"docid": "f48712851095fa3b33898c38ebcfaa95",
"text": "Most existing image-based crop disease recognition algorithms rely on extracting various kinds of features from leaf images of diseased plants. They have a common limitation as the features selected for discriminating leaf images are usually treated as equally important in the classification process. We propose a novel cucumber disease recognition approach which consists of three pipelined procedures: segmenting diseased leaf images by K-means clustering, extracting shape and color features from lesion information, and classifying diseased leaf images using sparse representation (SR). A major advantage of this approach is that the classification in the SR space is able to effectively reduce the computation cost and improve the recognition performance. We perform a comparison with four other feature extraction based methods using a leaf image dataset on cucumber diseases. The proposed approach is shown to be effective in recognizing seven major cucumber diseases with an overall recognition rate of 85.7%, higher than those of the other methods. 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cf4509b8d2b458f608a7e72165cdf22b",
"text": "Nowadays, blockchain is becoming a synonym for distributed ledger technology. However, blockchain is only one of the specializations in the field and is currently well-covered in existing literature, but mostly from a cryptographic point of view. Besides blockchain technology, a new paradigm is gaining momentum: directed acyclic graphs. The contribution presented in this paper is twofold. Firstly, the paper analyzes distributed ledger technology with an emphasis on the features relevant to distributed systems. Secondly, the paper analyses the usage of directed acyclic graph paradigm in the context of distributed ledgers, and compares it with the blockchain-based solutions. The two paradigms are compared using representative implementations: Bitcoin, Ethereum and Nano. We examine representative solutions in terms of the applied data structures for maintaining the ledger, consensus mechanisms, transaction confirmation confidence, ledger size, and scalability.",
"title": ""
},
{
"docid": "0c7512ac95d72436e31b9b05199eefdd",
"text": "Usable security has unique usability challenges bec ause the need for security often means that standard human-comput er-in eraction approaches cannot be directly applied. An important usability goal for authentication systems is to support users in s electing better passwords, thus increasing security by expanding th e effective password space. In click-based graphical passwords, poorly chosen passwords lead to the emergence of hotspots – portions of the image where users are more likely to select cli ck-points, allowing attackers to mount more successful diction ary attacks. We use persuasion to influence user choice in click -based graphical passwords, encouraging users to select mo re random, and hence more secure, click-points. Our approach i s to introduce persuasion to the Cued Click-Points graphical passw ord scheme (Chiasson, van Oorschot, Biddle, 2007) . Our resulting scheme significantly reduces hotspots while still maintain ing its usability.",
"title": ""
},
{
"docid": "b912b32d9f1f4e7a5067450b98870a71",
"text": "As of May 2013, 56 percent of American adults had a smartphone, and most of them used it to access the Internet. One-third of smartphone users report that their phone is the primary way they go online. Just as the Internet changed retailing in the late 1990s, many argue that the transition to mobile, sometimes referred to as “Web 3.0,” will have a similarly disruptive effect (Brynjolfsson et al. 2013). In this paper, we aim to document some early effects of how mobile devices might change Internet and retail commerce. We present three main findings based on an analysis of eBay’s mobile shopping application and core Internet platform. First, and not surprisingly, the early adopters of mobile e-commerce applications appear",
"title": ""
},
{
"docid": "dc98ddb6033ca1066f9b0ba5347a3d0c",
"text": "Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. This work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.",
"title": ""
},
{
"docid": "5700ba2411f9b4e4ed59c8c5839dc87d",
"text": "Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g., MSR-RF: C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.",
"title": ""
},
{
"docid": "365a236fee3cfda081d7d8ab2b31d4a2",
"text": "Defining software requirements is a complex and difficult process, which often leads to costly project failures. Requirements emerge from a collaborative and interactive negotiation process that involves heterogeneous stakeholders (people involved in an elicitation process such as users, analysts, developers, and customers). Practical experience shows that prioritizing requirements is not as straightforward task as the literature suggests. A process for prioritizing requirements must not only be simple and fast, but it must obtain trustworthy results. The objective of this paper is to provide a classification framework to characterize prioritization proposals. We highlight differences among eleven selected approaches by emphasizing their most important features.",
"title": ""
},
{
"docid": "12cd45e8832650d620695d4f5680148f",
"text": "OBJECTIVE\nCurrent systems to evaluate outcomes from tissue-engineered cartilage (TEC) are sub-optimal. The main purpose of our study was to demonstrate the use of second harmonic generation (SHG) microscopy as a novel quantitative approach to assess collagen deposition in laboratory made cartilage constructs.\n\n\nMETHODS\nScaffold-free cartilage constructs were obtained by condensation of in vitro expanded Hoffa's fat pad derived stromal cells (HFPSCs), incubated in the presence or absence of chondrogenic growth factors (GF) during a period of 21 d. Cartilage-like features in constructs were assessed by Alcian blue staining, transmission electron microscopy (TEM), SHG and two-photon excited fluorescence microscopy. A new scoring system, using second harmonic generation microscopy (SHGM) index for collagen density and distribution, was adapted to the existing \"Bern score\" in order to evaluate in vitro TEC.\n\n\nRESULTS\nSpheroids with GF gave a relative high Bern score value due to appropriate cell morphology, cell density, tissue-like features and proteoglycan content, whereas spheroids without GF did not. However, both TEM and SHGM revealed striking differences between the collagen framework in the spheroids and native cartilage. Spheroids required a four-fold increase in laser power to visualize the collagen matrix by SHGM compared to native cartilage. Additionally, collagen distribution, determined as the area of tissue generating SHG signal, was higher in spheroids with GF than without GF, but lower than in native cartilage.\n\n\nCONCLUSION\nSHG represents a reliable quantitative approach to assess collagen deposition in laboratory engineered cartilage, and may be applied to improve currently established scoring systems.",
"title": ""
},
{
"docid": "85a541f5d83b3de1695a5c994a2be21f",
"text": "1Department of Occupational Therapy, Faculty of Rehabilitation, Tehran University of Medical Sciences, Tehran, 2Department of Epidemiology and Biostatistics, Faculty of Health, Isfahan University of Medical Sciences, Isfahan, 3Department of Paediatrics, Faculty of Medicine, Baqiyatollah University of Medical Sciences, and 4Department of Physiotherapy, Faculty of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran. Reprint requests and correspondence to: Dr. Leila Dehghan, Department of Occupational Therapy, Faculty of Rehabilitation, Tehran University of Medical Sciences, Piche Shemiran, Tehran, Iran. E-mail: ldehghan@tums.ac.ir EFFECT OF THE BOBATH TECHNIQUE, CONDUCTIVE EDUCATION AND EDUCATION TO PARENTS IN ACTIVITIES OF DAILY LIVING IN CHILDREN WITH CEREBRAL PALSY IN IRAN",
"title": ""
},
{
"docid": "65cae0002bcff888d6514aa2d375da40",
"text": "We study the problem of finding efficiently computable non-degenerate multilinear maps from G1 to G2, where G1 and G2 are groups of the same prime order, and where computing discrete logarithms in G1 is hard. We present several applications to cryptography, explore directions for building such maps, and give some reasons to believe that finding examples with n > 2",
"title": ""
},
{
"docid": "8ec871d495cf8d796654015896e2dcd2",
"text": "Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible. Current methods of vehicle coordination, which are all designed to work with human drivers, will be outdated. The bottleneck for roadway efficiency will no longer be the drivers, but rather the mechanism by which those drivers’ actions are coordinated. While open-road driving is a well-studied and more-or-less-solved problem, urban traffic scenarios, especially intersections, are much more challenging. We believe current methods for controlling traffic, specifically at intersections, will not be able to take advantage of the increased sensitivity and precision of autonomous vehicles as compared to human drivers. In this article, we suggest an alternative mechanism for coordinating the movement of autonomous vehicles through intersections. Drivers and intersections in this mechanism are treated as autonomous agents in a multiagent system. In this multiagent system, intersections use a new reservation-based approach built around a detailed communication protocol, which we also present. We demonstrate in simulation that our new mechanism has the potential to significantly outperform current intersection control technology—traffic lights and stop signs. Because our mechanism can emulate a traffic light or stop sign, it subsumes the most popular current methods of intersection control. This article also presents two extensions to the mechanism. The first extension allows the system to control human-driven vehicles in addition to autonomous vehicles. The second gives priority to emergency vehicles without significant cost to civilian vehicles. The mechanism, including both extensions, is implemented and tested in simulation, and we present experimental results that strongly attest to the efficacy of this approach.",
"title": ""
},
{
"docid": "9af703a47d382926698958fba88c1e1a",
"text": "Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attacking landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaption of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher then the security level of software developed using standard Scrum.",
"title": ""
},
{
"docid": "ef8be5104f9bc4a0f4353ed236b6afb8",
"text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.",
"title": ""
},
{
"docid": "5807ace0e7e4e9a67c46f29a3f2e70e3",
"text": "In this work we present a pedestrian navigation system for indoor environments based on the dead reckoning positioning method, 2D barcodes, and data from accelerometers and magnetometers. All the sensing and computing technologies of our solution are available in common smart phones. The need to create indoor navigation systems arises from the inaccessibility of the classic navigation systems, such as GPS, in indoor environments.",
"title": ""
},
{
"docid": "8a708ec1187ecb2fe9fa929b46208b34",
"text": "This paper proposes a new face verification method that uses multiple deep convolutional neural networks (DCNNs) and a deep ensemble, that extracts two types of low dimensional but discriminative and high-level abstracted features from each DCNN, then combines them as a descriptor for face verification. Our DCNNs are built from stacked multi-scale convolutional layer blocks to present multi-scale abstraction. To train our DCNNs, we use different resolutions of triplets that consist of reference images, positive images, and negative images, and triplet-based loss function that maximize the ratio of distances between negative pairs and positive pairs and minimize the absolute distances between positive face images. A deep ensemble is generated from features extracted by each DCNN, and used as a descriptor to train the joint Bayesian learning and its transfer learning method. On the LFW, although we use only 198,018 images and only four different types of networks, the proposed method with the joint Bayesian learning and its transfer learning method achieved 98.33% accuracy. In addition to further increase the accuracy, we combine the proposed method and high dimensional LBP based joint Bayesian method, and achieved 99.08% accuracy on the LFW. Therefore, the proposed method helps to improve the accuracy of face verification when training data is insufficient to train DCNNs.",
"title": ""
},
{
"docid": "97838cc3eb7b31d49db6134f8fc81c84",
"text": "We study the problem of semi-supervised question answering—-utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the modelgenerated data distribution and the humangenerated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text.",
"title": ""
},
{
"docid": "0765510720f450736135efd797097450",
"text": "In this paper we discuss the re-orientation of human-computer interaction as an aesthetic field. We argue that mainstream approaches lack of general openness and ability to assess experience aspects of interaction, but that this can indeed be remedied. We introduce the concept of interface criticism as a way to turn the conceptual re-orientation into handles for practical design, and we present and discuss an interface criticism guide.",
"title": ""
}
] |
scidocsrr
|
0a9850db7c80e1ec31309807d1b7b512
|
Monocular Visual-Inertial SLAM-Based Collision Avoidance Strategy for Fail-Safe UAV Using Fuzzy Logic Controllers - Comparison of Two Cross-Entropy Optimization Approaches
|
[
{
"docid": "b0d91cac5497879ea87bdf9034f3fd6d",
"text": "This paper presents an open-source indoor navigation system for quadrotor micro aerial vehicles(MAVs), implemented in the ROS framework. The system requires a minimal set of sensors including a planar laser range-finder and an inertial measurement unit. We address the issues of autonomous control, state estimation, path-planning, and teleoperation, and provide interfaces that allow the system to seamlessly integrate with existing ROS navigation tools for 2D SLAM and 3D mapping. All components run in real time onboard the MAV, with state estimation and control operating at 1 kHz. A major focus in our work is modularity and abstraction, allowing the system to be both flexible and hardware-independent. All the software and hardware components which we have developed, as well as documentation and test data, are available online.",
"title": ""
}
] |
[
{
"docid": "b5ab4c11feee31195fdbec034b4c99d9",
"text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends",
"title": ""
},
{
"docid": "77d616dc746e74db02215dcf2fdb6141",
"text": "It is almost a quarter of a century since the launch in 1968 of NASA's Pioneer 9 spacecraft on the first mission into deep-space that relied on coding to enhance communications on the critical downlink channel. [The channel code used was a binary convolutional code that was decoded with sequential decoding--we will have much to say about this code in the sequel.] The success of this channel coding system had repercussions that extended far beyond NASA's space program. It is no exaggeration to say that the Pioneer 9 mission provided communications engineers with the first incontrovertible demonstration of the practical utility of channel coding techniques and thereby paved the way for the successful application of coding to many other channels.",
"title": ""
},
{
"docid": "b4d7a8b6b24c85af9f62105194087535",
"text": "New technologies provide expanded opportunities for interaction design. The growing number of possible ways to interact, in turn, creates a new responsibility for designers: Besides the product's visual aesthetics, one has to make choices about the aesthetics of interaction. This issue recently gained interest in Human-Computer Interaction (HCI) research. Based on a review of 19 approaches, we provide an overview of today's state of the art. We focused on approaches that feature \"qualities\", \"dimensions\" or \"parameters\" to describe interaction. Those fell into two broad categories. One group of approaches dealt with detailed spatio-temporal attributes of interaction sequences (i.e., action-reaction) on a sensomotoric level (i.e., form). The other group addressed the feelings and meanings an interaction is enveloped in rather than the interaction itself (i.e., experience). Surprisingly, only two approaches addressed both levels simultaneously, making the explicit link between form and experience. We discuss these findings and its implications for future theory building.",
"title": ""
},
{
"docid": "cf9c23f046ca788d3e8927246568098b",
"text": "This study examined psychological well-being and coping in parents of children with ASD and parents of typically developing children. 73 parents of children with ASD and 63 parents of typically developing children completed a survey. Parents of children with ASD reported significantly more parenting stress symptoms (i.e., negative parental self-views, lower satisfaction with parent-child bond, and experiences of difficult child behaviors), more depression symptoms, and more frequent use of Active Avoidance coping, than parents of typically developing children. Parents of children with ASD did not differ significantly in psychological well-being and coping when compared as according to child's diagnosis. Study results reinforced the importance of addressing well-being and coping needs of parents of children with ASD.",
"title": ""
},
{
"docid": "11e220528f9d4b6a51cdb63268934586",
"text": "The function of DIRCM (directed infrared countermeasures) jamming is to cause the missile to miss its intended target by disturbing the seeker tracking process. The DIRCM jamming uses the pulsing flashes of infrared (IR) energy and its frequency, phase and intensity have the influence on the missile guidance system. In this paper, we analyze the DIRCM jamming effect of the spin-scan reticle seeker. The simulation results show that the jamming effect is greatly influenced by frequency, phase and intensity of the jammer signal.",
"title": ""
},
{
"docid": "322fd3b0c6c833bac9598b510dc40b98",
"text": "Quality assessment is an indispensable technique in a large body of media applications, i.e., photo retargeting, scenery rendering, and video summarization. In this paper, a fully automatic framework is proposed to mimic how humans subjectively perceive media quality. The key is a locality-preserved sparse encoding algorithm that accurately discovers human gaze shifting paths from each image or video clip. In particular, we first extract local image descriptors from each image/video, and subsequently project them into the so-called perceptual space. Then, a nonnegative matrix factorization (NMF) algorithm is proposed that represents each graphlet by a linear and sparse combination of the basis ones. Since each graphlet is visually/semantically similar to its neighbors, a locality-preserved constraint is encoded into the NMF algorithm. Mathematically, the saliency of each graphlet is quantified by the norm of its sparse codes. Afterward, we sequentially link them into a path to simulate human gaze allocation. Finally, a probabilistic quality model is learned based on such paths extracted from a collection of photos/videos, which are marked as high quality ones via multiple Flickr users. Comprehensive experiments have demonstrated that: 1) our quality model outperforms many of its competitors significantly, and 2) the learned paths are on average 89.5% consistent with real human gaze shifting paths.",
"title": ""
},
{
"docid": "e0f202362b9c51d92f268261a96bc11e",
"text": "Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. Although many generalizations and extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what is the natural scope of the acceleration concept. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional that we call the Bregman Lagrangian, which generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limit of all of these methods corresponds to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov's technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.",
"title": ""
},
{
"docid": "2720f2aa50ddfc9150d6c2718f4433d3",
"text": "This paper describes InP/InGaAs double heterojunction bipolar transistor (HBT) technology that uses SiN/SiO2 sidewall spacers. This technology enables the formation of ledge passivation and narrow base metals by i-line lithography. With this process, HBTs with various emitter sizes and emitter-base (EB) spacings can be fabricated on the same wafer. The impact of the emitter size and EB spacing on the current gain and high-frequency characteristics is investigated. The reduction of the current gain is <;5% even though the emitter width decreases from 0.5 to 0.25 μm. A high current gain of over 40 is maintained even for a 0.25-μm emitter HBT. The HBTs with emitter widths ranging from 0.25 to 0.5 μm also provide peak ft of over 430 GHz. On the other hand, peak fmax greatly increases from 330 to 464 GHz with decreasing emitter width from 0.5 to 0.25 μm. These results indicate that the 0.25-μm emitter HBT with the ledge passivaiton exhibits balanced high-frequency performance (ft = 452 GHz and fmax = 464 GHz), while maintaining a current gain of over 40.",
"title": ""
},
{
"docid": "e59f4a08d0c7c789a5d83e7d7dc9ec3a",
"text": "In this paper, we present a new approach for audio tampering detection based on microphone classification. The underlying algorithm is based on a blind channel estimation, specifically designed for recordings from mobile devices. It is applied to detect a specific type of tampering, i.e., to detect whether footprints from more than one microphone exist within a given content item. As will be shown, the proposed method achieves an accuracy above 95% for AAC, MP3 and PCM-encoded recordings.",
"title": ""
},
{
"docid": "2512c057299a86d3e461a15b67377944",
"text": "Compressive sensing (CS) is an alternative to Shan-non/Nyquist sampling for the acquisition of sparse or compressible signals. Instead of taking periodic samples, CS measures inner products with M random vectors, where M is much smaller than the number of Nyquist-rate samples. The implications of CS are promising for many applications and enable the design of new kinds of analog-to-digital converters, imaging systems, and sensor networks. In this paper, we propose and study a wideband compressive radio receiver (WCRR) architecture that can efficiently acquire and track FM and other narrowband signals that live within a wide frequency bandwidth. The receiver operates below the Nyquist rate and has much lower complexity than either a traditional sampling system or CS recovery system. Our methods differ from most standard approaches to the problem of CS recovery in that we do not assume that the signals of interest are confined to a discrete set of frequencies, and we do not rely on traditional recovery methods such as l1-minimization. Instead, we develop a simple detection system that identifies the support of the narrowband FM signals and then applies compressive filtering techniques based on discrete prolate spheroidal sequences to cancel interference and isolate the signals. Lastly, a compressive phase-locked loop (PLL) directly recovers the FM message signals.",
"title": ""
},
{
"docid": "eec7a9a6859e641c3cc0ade73583ef5c",
"text": "We propose an Apache Spark-based scale-up server architecture using Docker container-based partitioning method to improve performance scalability. The performance scalability problem of Apache Spark-based scale-up servers is due to garbage collection(GC) and remote memory access overheads when the servers are equipped with significant number of cores and Non-Uniform Memory Access(NUMA). The proposed method minimizes the problems using Docker container-based architecture effectively partitioning the original scale-up server into small logical servers. Our evaluation study based on benchmark programs revealed that the partitioning method showed performance improvement by ranging from 1.1x through 1.7x on a 120 core scale-up system. Our proof-of-concept scale-up server architecture provides the basis towards complete and practical design of partitioning-based scale-up servers showing performance scalability.",
"title": ""
},
{
"docid": "b7957cc83988e0be2da64f6d9837419c",
"text": "Description: A revision of the #1 text in the Human Computer Interaction field, Interaction Design, the third edition is an ideal resource for learning the interdisciplinary skills needed for interaction design, human-computer interaction, information design, web design and ubiquitous computing. The authors are acknowledged leaders and educators in their field, with a strong global reputation. They bring depth of scope to the subject in this new edition, encompassing the latest technologies and devices including social networking, Web 2.0 and mobile devices. The third edition also adds, develops and updates cases, examples and questions to bring the book in line with the latest in Human Computer Interaction. Interaction Design offers a cross-disciplinary, practical and process-oriented approach to Human Computer Interaction, showing not just what principles ought to apply to Interaction Design, but crucially how they can be applied. The book focuses on how to design interactive products that enhance and extend the way people communicate, interact and work. Motivating examples are included to illustrate both technical, but also social and ethical issues, making the book approachable and adaptable for both Computer Science and non-Computer Science users. Interviews with key HCI luminaries are included and provide an insight into current and future trends.",
"title": ""
},
{
"docid": "3d0103c34fcc6a65ad56c85a9fe10bad",
"text": "This paper approaches the problem of finding correspondences between images in which there are large changes in viewpoint, scale and illumination. Recent work has shown that scale-space ‘interest points’ may be found with good repeatability in spite of such changes. Furthermore, the high entropy of the surrounding image regions means that local descriptors are highly discriminative for matching. For descriptors at interest points to be robustly matched between images, they must be as far as possible invariant to the imaging process. In this work we introduce a family of features which use groups of interest points to form geometrically invariant descriptors of image regions. Feature descriptors are formed by resampling the image relative to canonical frames defined by the points. In addition to robust matching, a key advantage of this approach is that each match implies a hypothesis of the local 2D (projective) transformation. This allows us to immediately reject most of the false matches using a Hough transform. We reject remaining outliers using RANSAC and the epipolar constraint. Results show that dense feature matching can be achieved in a few seconds of computation on 1GHz Pentium III machines.",
"title": ""
},
{
"docid": "b96e2dba118d89942990337df26c7b20",
"text": "This paper introduces a high-speed all-hardware scale-invariant feature transform (SIFT) architecture with parallel and pipeline technology for real-time extraction of image features. The task-level parallel and pipeline structure are exploited between the hardware blocks, and the data-level parallel and pipeline architecture are exploited inside each block. Two identical random access memories are adopted with ping-pong operation to execute the key point detection module and the descriptor generation module in task-level parallelism. With speeding up the key point detection module of SIFT, the descriptor generation module has become the bottleneck of the system's performance; therefore, this paper proposes an optimized descriptor generation algorithm. A novel window-dividing method is proposed with square subregions arranged in 16 directions, and the descriptors are generated by reordering the histogram instead of window rotation. Therefore, the main orientation detection block and descriptor generation block run in parallel instead of interactively. With the optimized algorithm cooperating with pipeline structure inside each block, we not only improve the parallelism of the algorithm, but also avoid floating data calculation to save hardware consumption. Thus, the descriptor generation module leads the speed almost 15 times faster than a recent solution. The proposed system was implemented on field programmable gate array and the overall time to extract SIFT features for an image having 512×512 pixels is only 6.55 ms (sufficient for real-time applications), and the number of feature points can reach up to 2900.",
"title": ""
},
{
"docid": "11962ec2381422cfac77ad543b519545",
"text": "In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers. To address this, we introduce a new meta-algorithm that can take in a base learner such as least squares or stochastic gradient descent, and harden the learner to be resistant to outliers. Our method, Sever, possesses strong theoretical guarantees yet is also highly scalable—beyond running the base learner itself, it only requires computing the top singular vector of a certain n×d matrix. We apply Sever on a drug design dataset and a spam classification dataset, and find that in both cases it has substantially greater robustness than several baselines. On the spam dataset, with 1% corruptions, we achieved 7.4% test error, compared to 13.4%− 20.5% for the baselines, and 3% error on the uncorrupted dataset. Similarly, on the drug design dataset, with 10% corruptions, we achieved 1.42 mean-squared error test error, compared to 1.51-2.33 for the baselines, and 1.23 error on the uncorrupted dataset.",
"title": ""
},
{
"docid": "8f750438e7d78873fd33174d2e347ea5",
"text": "This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had some limited performance because they focused only on one part between the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify so varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing the artificial neural network based on the Allen's temporal relations. The experimental results show that our combined method provides the higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.",
"title": ""
},
{
"docid": "827493ff47cff1defaeafff2ef180dce",
"text": "We present a static analysis algorithm for detecting security vulnerabilities in PHP, a popular server-side scripting language for building web applications. Our analysis employs a novel three-tier architecture to capture information at decreasing levels of granularity at the intrablock, intraprocedural, and interprocedural level. This architecture enables us to handle dynamic features unique to scripting languages such as dynamic typing and code inclusion, which have not been adequately addressed by previous techniques. We demonstrate the effectiveness of our approach by running our tool on six popular open source PHP code bases and finding 105 previously unknown security vulnerabilities, most of which we believe are remotely exploitable.",
"title": ""
},
{
"docid": "d1b6007cfb2f8d6227817ab482758bc5",
"text": "Patient Health Monitoring is the one of the field that is rapidly growing very fast nowadays with the advancement of technologies many researchers have come with differentdesigns for patient health monitoring systems as per the technological development. With the widespread of internet, Internet of things is among of the emerged field recently in which many have been able to incorporate it into different applications. In this paper we introduce the system called Iot based patient health monitoring system using LabVIEW and Wireless Sensor Network (WSN).The system will be able to take patients physiological parameters and transmit it wirelessly via Xbees, displays the sensor data onLabVIEW and publish on webserver to enable other health care givers from far distance to visualize, control and monitor continuously via internet connectivity.",
"title": ""
},
{
"docid": "a3cd3ec70b5d794173db36cb9a219403",
"text": "We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot’s fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher. Introduction We consider the problem of grasping novel objects, in the presence of significant amounts of clutter. A key challenge in this setting is that a full 3-d model of the scene is typically not available. Instead, a robot’s depth sensors can usually estimate only the shape of the visible portions of the scene. In this paper, we propose an algorithm that, given such partial models of the scene, selects a grasp—that is, a configuration of the robot’s arm and fingers—to try to pick up an object. If a full 3-d model (including the occluded portions of a scene) were available, then methods such as formand forceclosure (Mason and Salisbury 1985; Bicchi and Kumar 2000; Pollard 2004) and other grasp quality metrics (Pelossof et al. 2004; Hsiao, Kaelbling, and Lozano-Perez 2007; Ciocarlie, Goldfeder, and Allen 2007) can be used to try to find a good grasp. However, given only the point cloud returned by stereo vision or other depth sensors, a straightforward application of these ideas is impossible, since we do not have a model of the occluded portions of the scene. Copyright c © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Image of an environment (left) and the 3-d pointcloud (right) returned by the Swissranger depth sensor. In detail, we will consider a robot that uses a camera, together with a depth sensor, to perceive a scene. The depth sensor returns a “point cloud,” corresponding to 3-d locations that it has found on the front unoccluded surfaces of the objects. (See Fig. 1.) Such point clouds are typically noisy (because of small errors in the depth estimates); but more importantly, they are also incomplete. 1 This work builds on Saxena et al. (2006a; 2006b; 2007; 2008) which applied supervised learning to identify visual properties that indicate good grasps, given a 2-d image of the scene. However, their algorithm only chose a 3-d “grasp point”—that is, the 3-d position (and 3-d orientation; Saxena et al. 2007) of the center of the end-effector. Thus, it did not generalize well to more complex arms and hands, such as to multi-fingered hands where one has to not only choose the 3d position (and orientation) of the hand, but also address the high dof problem of choosing the positions of all the fingers. 
Our approach begins by computing a number of features of grasp quality, using both 2-d image and the 3-d point cloud features. For example, the 3-d data is used to compute a number of grasp quality metrics, such as the degree to which the fingers are exerting forces normal to the surfaces of the object, and the degree to which they enclose the object. Using such features, we then apply a supervised learning algorithm to estimate the degree to which different configurations of the full arm and fingers reflect good grasps. We test our algorithm on two robots, on a variety of objects of shapes very different from ones in the training set, including a ski boot, a coil of wire, a game controller, and Forexample, standard stereo vision fails to return depth values for textureless portions of the object, thus its point clouds are typically very sparse. Further, the Swissranger gives few points only because of its low spatial resolution of 144 × 176. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)",
"title": ""
}
] |
scidocsrr
|
ec9c10e81b972a103b15041f17c2c8e9
|
Individual Tree Delineation in Windbreaks Using Airborne-Laser-Scanning Data and Unmanned Aerial Vehicle Stereo Images
|
[
{
"docid": "a0c37bb6608f51f7095d6e5392f3c2f9",
"text": "The main study objective was to develop robust processing and analysis techniques to facilitate the use of small-footprint lidar data for estimating plot-level tree height by measuring individual trees identifiable on the three-dimensional lidar surface. Lidar processing techniques included data fusion with multispectral optical data and local filtering with both square and circular windows of variable size. The lidar system used for this study produced an average footprint of 0.65 m and an average distance between laser shots of 0.7 m. The lidar data set was acquired over deciduous and coniferous stands with settings typical of the southeastern United States. The lidar-derived tree measurements were used with regression models and cross-validation to estimate tree height on 0.017-ha plots. For the pine plots, lidar measurements explained 97 percent of the variance associated with the mean height of dominant trees. For deciduous plots, regression models explained 79 percent of the mean height variance for dominant trees. Filtering for local maximum with circular windows gave better fitting models for pines, while for deciduous trees, filtering with square windows provided a slightly better model fit. Using lidar and optical data fusion to differentiate between forest types provided better results for estimating average plot height for pines. Estimating tree height for deciduous plots gave superior results without calibrating the search window size based on forest type. Introduction Laser scanner systems currently available have experienced a remarkable evolution, driven by advances in the remote sensing and surveying industry. Lidar sensors offer impressive performance that challange physical barriers in the optical and electronic domain by offering a high density of points at scanning frequencies of 50,000 pulses/second, multiple echoes per laser pulse, intensity measurements for the returning signal, and centimeter accuracy for horizontal and vertical positioning. Given a high density of points, processing algorithms can identify single trees or groups of trees in order to extract various measurements on their three-dimensional representation (e.g., Hyyppä and Inkinen, 2002). Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height Sorin C. Popescu and Randolph H. Wynne The foundations of lidar forest measurements lie with the photogrammetric techniques developed to assess tree height, volume, and biomass. Lidar characteristics, such as high sampling intensity, extensive areal coverage, ability to penetrate beneath the top layer of the canopy, precise geolocation, and accurate ranging measurements, make airborne laser systems useful for directly assessing vegetation characteristics. Early lidar studies had been used to estimate forest vegetation characteristics, such as percent canopy cover, biomass (Nelson et al., 1984; Nelson et al., 1988a; Nelson et al., 1988b; Nelson et al., 1997), and gross-merchantable timber volume (Maclean and Krabill, 1986). 
Research efforts investigated the estimation of forest stand characteristics with scanning lasers that provided lidar data with either relatively large laser footprints, i.e., 5 to 25 m (Harding et al., 1994; Lefsky et al., 1997; Weishampel et al., 1997; Blair et al., 1999; Lefsky et al., 1999; Means et al., 1999) or small footprints, but with only one laser return (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Hyyppä et al., 2001). A small-footprint lidar with the potential to record the entire time-varying distribution of returned pulse energy or waveform was used by Nilsson (1996) for measuring tree heights and stand volume. As more systems operate with high performance, research efforts for forestry applications of lidar have become very intense and resulted in a series of studies that proved that lidar technology is well suited for providing estimates of forest biophysical parameters. Needs for timely and accurate estimates of forest biophysical parameters have arisen in response to increased demands on forest inventory and analysis. The height of a forest stand is a crucial forest inventory attribute for calculating timber volume, site potential, and silvicultural treatment scheduling. Measuring of stand height by current manual photogrammetric or field survey techniques is time consuming and rather expensive. Tree heights have been derived from scanning lidar data sets and have been compared with ground-based canopy height measurements (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Næsset and Bjerknes, 2001; Næsset and Økland, 2002; Persson et al., 2002; Popescu, 2002; Popescu et al., 2002; Holmgren et al., 2003; McCombs et al., 2003). Despite the intense research efforts, practical applications of P H OTO G R A M M E T R I C E N G I N E E R I N G & R E M OT E S E N S I N G May 2004 5 8 9 Department of Forestry, Virginia Tech, 319 Cheatham Hall (0324), Blacksburg, VA 24061 (wynne@vt.edu). S.C. Popescu is presently with the Spatial Sciences Laboratory, Department of Forest Science, Texas A&M University, 1500 Research Parkway, Suite B223, College Station, TX 778452120 (s-popescu@tamu.edu). Photogrammetric Engineering & Remote Sensing Vol. 70, No. 5, May 2004, pp. 589–604. 0099-1112/04/7005–0589/$3.00/0 © 2004 American Society for Photogrammetry and Remote Sensing 02-099.qxd 4/5/04 10:44 PM Page 589",
"title": ""
}
] |
[
{
"docid": "24d77eb4ea6ecaa44e652216866ab8c8",
"text": "In the development of smart cities across the world VANET plays a vital role for optimized route between source and destination. The VANETs is based on infra-structure less network. It facilitates vehicles to give information about safety through vehicle to vehicle communication (V2V) or vehicle to infrastructure communication (V2I). In VANETs wireless communication between vehicles so attackers violate authenticity, confidentiality and privacy properties which further effect security. The VANET technology is encircled with security challenges these days. This paper presents overview on VANETs architecture, a related survey on VANET with major concern of the security issues. Further, prevention measures of those issues, and comparative analysis is done. From the survey, found out that encryption and authentication plays an important role in VANETS also some research direction defined for future work.",
"title": ""
},
{
"docid": "faf25bfda6d078195b15f5a36a32673a",
"text": "In high performance VLSI circuits, the power consumption is mainly related to signal transition, charging and discharging of parasitic capacitance in transistor during switching activity. Adiabatic switching is a reversible logic to conserve energy instead of dissipating power reuses it. In this paper, low power multipliers and compressor are designed using adiabatic logic. Compressors are the basic components in many applications like partial product summation in multipliers. The Vedic multiplier is designed using the compressor and the power result is analysed. The designs are implemented and the power results are obtained using TANNER EDA 12.0 tool. This paper presents a novel scheme for analysis of low power multipliers using adiabatic logic in inverter and in the compressor. The scheme is optimized for low power as well as high speed implementation over reported scheme. K e y w o r d s : A d i a b a t i c l o g i c , C o m p r e s s o r , M u l t i p l i e r s .",
"title": ""
},
{
"docid": "adf69030a68ed3bf6fc4d008c50ac5b5",
"text": "Many patients with low back and/or pelvic girdle pain feel relief after application of a pelvic belt. External compression might unload painful ligaments and joints, but the exact mechanical effect on pelvic structures, especially in (active) upright position, is still unknown. In the present study, a static three-dimensional (3-D) pelvic model was used to simulate compression at the level of anterior superior iliac spine and the greater trochanter. The model optimised forces in 100 muscles, 8 ligaments and 8 joints in upright trunk, pelvis and upper legs using a criterion of minimising maximum muscle stress. Initially, abdominal muscles, sacrotuberal ligaments and vertical sacroiliac joints (SIJ) shear forces mainly balanced a trunk weight of 500N in upright position. Application of 50N medial compression force at the anterior superior iliac spine (equivalent to 25N belt tension force) deactivated some dorsal hip muscles and reduced the maximum muscle stress by 37%. Increasing the compression up to 100N reduced the vertical SIJ shear force by 10% and increased SIJ compression force with 52%. Shifting the medial compression force of 100N in steps of 10N to the greater trochanter did not change the muscle activation pattern but further increased SIJ compression force by 40% compared to coxal compression. Moreover, the passive ligament forces were distributed over the sacrotuberal, the sacrospinal and the posterior ligaments. The findings support the cause-related designing of new pelvic belts to unload painful pelvic ligaments or muscles in upright posture.",
"title": ""
},
{
"docid": "d3ec3eeb5e56bdf862f12fe0d9ffe71c",
"text": "This paper will communicate preliminary findings from applied research exploring how to ensure that serious games are cost effective and engaging components of future training solutions. The applied research is part of a multimillion pound program for the Department of Trade and Industry, and involves a partnership between UK industry and academia to determine how bespoke serious games should be used to best satisfy learning needs in a range of contexts. The main objective of this project is to produce a minimum of three serious games prototypes for clients from different sectors (e.g., military, medical and business) each prototype addressing a learning need or learning outcome that helps solve a priority business problem or fulfill a specific training need. This paper will describe a development process that aims to encompass learner specifics and targeted learning outcomes in order to ensure that the serious game is successful. A framework for describing game-based learning scenarios is introduced, and an approach to the analysis that effectively profiles the learner within the learner group with respect to game-based learning is outlined. The proposed solution also takes account of relevant findings from serious games research on particular learner groups that might support the selection and specification of a game. A case study on infection control will be used to show how this approach to the analysis is being applied for a healthcare issue.",
"title": ""
},
{
"docid": "9e5cd32f56abf7ff9d98847970394236",
"text": "This paper presents the results of a detailed study of the singular configurations of 3planar parallel mechanisms with three identical legs. Only prismatic and revolute jo are considered. From the point of view of singularity analysis, there are ten diffe architectures. All of them are examined in a compact and systematic manner using p screw theory. The nature of each possible singular configuration is discussed an singularity loci for a constant orientation of the mobile platform are obtained. For so architectures, simplified designs with easy to determine singularities are identified. @DOI: 10.1115/1.1582878 #",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "8d8e7c9777f02c6a4a131f21a66ee870",
"text": "Teaching agile practices is becoming a priority in Software engineering curricula as a result of the increasing use of agile methods (AMs) such as Scrum in the software industry. Limitations in time, scope, and facilities within academic contexts hinder students’ hands-on experience in the use of professional AMs. To enhance students’ exposure to Scrum, we have developed Virtual Scrum, an educational virtual world that simulates a Scrum-based team room through virtual elements such as blackboards, a Web browser, document viewers, charts, and a calendar. A preliminary version of Virtual Scrum was tested with a group of 45 students running a capstone project with and without Virtual Scrum support. Students’ feedback showed that Virtual Scrum is a viable and effective tool to implement the different elements in a Scrum team room and to perform activities throughout the Scrum process. 2013 Wiley Periodicals, Inc. Comput Appl Eng Educ 23:147–156, 2015; View this article online at wileyonlinelibrary.com/journal/cae; DOI 10.1002/cae.21588",
"title": ""
},
{
"docid": "68470cd075d9c475b5ff93578ff7e86d",
"text": "Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is being able to recognize feelings in the conversation partner and reply accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficult by the paucity of large-scale publicly available datasets both for emotion and relevant dialogues. This work proposes a new task for empathetic dialogue generation and EMPATHETICDIALOGUES, a dataset of 25k conversations grounded in emotional contexts to facilitate training and evaluating dialogue systems. Our experiments indicate that models explicitly leveraging emotion predictions from previous utterances are perceived to be more empathetic by human evaluators, while improving on other metrics as well (e.g. perceived relevance of responses, BLEU scores).",
"title": ""
},
{
"docid": "0b50ec58f82b7ac4ad50eb90425b3aea",
"text": "OBJECTIVES\nThe study aimed (1) to examine if there are equivalent results in terms of union, alignment and elbow functionally comparing single- to dual-column plating of AO/OTA 13A2 and A3 distal humeral fractures and (2) if there are more implant-related complications in patients managed with bicolumnar plating compared to single-column plate fixation.\n\n\nDESIGN\nThis was a multi-centred retrospective comparative study.\n\n\nSETTING\nThe study was conducted at two academic level 1 trauma centres.\n\n\nPATIENTS/PARTICIPANTS\nA total of 105 patients were identified to have surgical management of extra-articular distal humeral fractures Arbeitsgemeinschaft für Osteosynthesefragen/Orthopaedic Trauma Association (AO/OTA) 13A2 and AO/OTA 13A3).\n\n\nINTERVENTION\nPatients were treated with traditional dual-column plating or a single-column posterolateral small-fragment pre-contoured locking plate used as a neutralisation device with at least five screws in the short distal segment.\n\n\nMAIN OUTCOME MEASUREMENTS\nThe patients' elbow functionality was assessed in terms of range of motion, union and alignment. In addition, the rate of complications between the groups including radial nerve palsy, implant-related complications (painful prominence and/or ulnar nerve neuritis) and elbow stiffness were compared.\n\n\nRESULTS\nPatients treated with single-column plating had similar union rates and alignment. However, single-column plating resulted in a significantly better range of motion with less complications.\n\n\nCONCLUSIONS\nThe current study suggests that exposure/instrumentation of only the lateral column is a reliable and preferred technique. This technique allows for comparable union rates and alignment with increased elbow functionality and decreased number of complications.",
"title": ""
},
{
"docid": "0db28b5ec56259c8f92f6cc04d4c2601",
"text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "9da6883a9fe700aeb84208efbf0a56a3",
"text": "With the increasing demand for more energy efficient buildings, the construction industry is faced with the challenge to ensure that the energy efficiency predicted during the design is realised once a building is in use. There is, however, significant evidence to suggest that buildings are not performing as well as expected and initiatives such as PROBE and CarbonBuzz aim to illustrate the extent of this so called „Performance Gap‟. This paper discusses the underlying causes of discrepancies between detailed energy modelling predictions and in-use performance of occupied buildings (after the twelve month liability period). Many of the causal factors relate to the use of unrealistic input parameters regarding occupancy behaviour and facilities management in building energy models. In turn, this is associated with the lack of feedback to designers once a building has been constructed and occupied. This paper aims to demonstrate how knowledge acquired from Post-Occupancy Evaluation (POE) can be used to produce more accurate energy performance models. A case study focused specifically on lighting, small power and catering equipment in a high density office building is presented. Results show that by combining monitored data with predictive energy modelling, it was possible to increase the accuracy of the model to within 3% of actual electricity consumption values. Future work will seek to use detailed POE data to develop a set of evidence based benchmarks for energy consumption in office buildings. It is envisioned that these benchmarks will inform designers on the impact of occupancy and management on the actual energy consumption of buildings. Moreover, it should enable the use of more realistic input parameters in energy models, bringing the predicted figures closer to reality.",
"title": ""
},
{
"docid": "ced98c32f887001d40e783ab7b294e1a",
"text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both of the tone mapped LDR image and the base map. This paper validates its effectiveness of our approach through some experiments.",
"title": ""
},
{
"docid": "f1fe8a9d2e4886f040b494d76bc4bb78",
"text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.",
"title": ""
},
{
"docid": "02d518721f8ab3c4b2abb854c9111267",
"text": "BACKGROUND\nDue to the excessive and pathologic effects of depression and anxiety, it is important to identify the role of protective factors, such as effective coping and social support. This study examined the associations between perceived social support and coping styles with depression and anxiety levels.\n\n\nMATERIALS AND METHODS\nThis cross sectional study was part of the Study on the Epidemiology of Psychological, Alimentary Health and Nutrition project. A total 4658 individuals aged ≥20 years was selected by cluster random sampling. Subjects completed questionnaires, which were used to describe perceived social support, coping styles, depression and anxiety. t-test, Chi-square test, pearson's correlation and Logistic regression analysis were used in data analyses.\n\n\nRESULTS\nThe results of Logistic regression analysis showed after adjusting demographic characteristics for odd ratio of anxiety, active copings such as positive re-interpretation and growth with odds ratios; 95% confidence interval: 0.82 (0.76, 0.89), problem engagement (0.92 [0.87, 0.97]), acceptance (0.82 [0.74, 0.92]) and also among perceived social supports, family (0.77 [0.71, 0.84]) and others (0.84 [0.76, 0.91]) were protective. In addition to, for odd ratio of depression, active copings such as positive re-interpretation and growth (0.74 [0.69, 0.79]), problem engagement (0.89 [0.86, 0.93]), and support seeking (0.96 [0.93, 0.99]) and all of social support types (family [0.75 (0.70, 0.80)], friends [0.90 (0.85, 0.95)] and others [0.80 (0.75, 0.86)]) were protective. Avoidance was risk factor for both of anxiety (1.19 [1.12, 1.27]) and depression (1.22 [1.16, 1.29]).\n\n\nCONCLUSION\nThis study shows active coping styles and perceived social supports particularly positive re-interpretation and family social support are protective factors for depression and anxiety.",
"title": ""
},
{
"docid": "eb8087d0f30945d45a0deb02b7f7bb53",
"text": "The use of teams, especially virtual teams, is growing significantly in corporations, branches of the government and nonprofit organizations. However, despite this prevalence, little is understood in terms of how to best train these teams for optimal performance. Team training is commonly cited as a factor for increasing team performance, yet, team training is often applied in a haphazard and brash manner, if it is even applied at all. Therefore, this paper attempts to identify the flow of a training model for virtual teams. Rooted in transactive memory systems, this theoretical model combines the science of encoding, storing and retrieving information with the science of team training.",
"title": ""
},
{
"docid": "210e9bc5f2312ca49438e6209ecac62e",
"text": "Image classification has become one of the main tasks in the field of computer vision technologies. In this context, a recent algorithm called CapsNet that implements an approach based on activity vectors and dynamic routing between capsules may overcome some of the limitations of the current state of the art artificial neural networks (ANN) classifiers, such as convolutional neural networks (CNN). In this paper, we evaluated the performance of the CapsNet algorithm in comparison with three well-known classifiers (Fisherfaces, LeNet, and ResNet). We tested the classification accuracy on four datasets with a different number of instances and classes, including images of faces, traffic signs, and everyday objects. The evaluation results show that even for simple architectures, training the CapsNet algorithm requires significant computational resources and its classification performance falls below the average accuracy values of the other three classifiers. However, we argue that CapsNet seems to be a promising new technique for image classification, and further experiments using more robust computation resources and refined CapsNet architectures may produce better outcomes.",
"title": ""
},
{
"docid": "19ea89fc23e7c4d564e4a164cfc4947a",
"text": "OBJECTIVES\nThe purpose of this study was to evaluate the proximity of the mandibular molar apex to the buccal bone surface in order to provide anatomic information for apical surgery.\n\n\nMATERIALS AND METHODS\nCone-beam computed tomography (CBCT) images of 127 mandibular first molars and 153 mandibular second molars were analyzed from 160 patients' records. The distance was measured from the buccal bone surface to the root apex and the apical 3.0 mm on the cross-sectional view of CBCT.\n\n\nRESULTS\nThe second molar apex and apical 3 mm were located significantly deeper relative to the buccal bone surface compared with the first molar (p < 0.01). For the mandibular second molars, the distance from the buccal bone surface to the root apex was significantly shorter in patients over 70 years of age (p < 0.05). Furthermore, this distance was significantly shorter when the first molar was missing compared to nonmissing cases (p < 0.05). For the mandibular first molars, the distance to the distal root apex of one distal-rooted tooth was significantly greater than the distance to the disto-buccal root apex (p < 0.01). In mandibular second molar, the distance to the apex of C-shaped roots was significantly greater than the distance to the mesial root apex of non-C-shaped roots (p < 0.01).\n\n\nCONCLUSIONS\nFor apical surgery in mandibular molars, the distance from the buccal bone surface to the apex and apical 3 mm is significantly affected by the location, patient age, an adjacent missing anterior tooth, and root configuration.",
"title": ""
},
{
"docid": "941df83e65700bc2e5ee7226b96e4f54",
"text": "This paper presents design and analysis of a three phase induction motor drive using IGBT‟s at the inverter power stage with volts hertz control (V/F) in closed loop using dsPIC30F2010 as a controller. It is a 16 bit high-performance digital signal controller (DSC). DSC is a single chip embedded controller that integrates the controller attributes of a microcontroller with the computation and throughput capabilities of a DSP in a single core. A 1HP, 3-phase, 415V, 50Hz induction motor is used as load for the inverter. Digital Storage Oscilloscope Textronix TDS2024B is used to record and analyze the various waveforms. The experimental results for V/F control of 3Phase induction motor using dsPIC30F2010 chip clearly shows constant volts per hertz and stable inverter line to line output voltage. Keywords--DSC, constant volts per hertz, PWM inverter, ACIM.",
"title": ""
},
{
"docid": "91f36db08fdc766d5dc86007dc7a02ad",
"text": "In the last few years communication technology has been improved, which increase the need of secure data communication. For this, many researchers have exerted much of their time and efforts in an attempt to find suitable ways for data hiding. There is a technique used for hiding the important information imperceptibly, which is Steganography. Steganography is the art of hiding information in such a way that prevents the detection of hidden messages. The process of using steganography in conjunction with cryptography, called as Dual Steganography. This paper tries to elucidate the basic concepts of steganography, its various types and techniques, and dual steganography. There is also some of research works done in steganography field in past few years.",
"title": ""
}
] |
scidocsrr
|
8887ddb4d570631146afc215538570ef
|
Adaptive Algorithms for Acoustic Echo Cancellation: A Review
|
[
{
"docid": "0991b582ad9fcc495eb534ebffe3b5f8",
"text": "A computationally cheap extension from single-microphone acoustic echo cancellation (AEC) to multi-microphone AEC is presented for the case of a single loudspeaker. It employs the idea of common-acoustical-pole and zero modeling of room transfer functions (RTFs). The RTF models used for multi-microphone AEC share a fixed common denominator polynomial, which is calculated off-line by means of a multi-channel warped linear prediction. By using the common denominator polynomial as a prefilter, only the numerator polynomial has to be estimated recursively for each microphone, hence adapting to changes in the RTFs. This approach allows to decrease the number of numerator coefficients by one order of magnitude for each microphone compared with all-zero modeling. In a first configuration, the prefiltering is done on the adaptive filter signal, hence achieving a pole-zero model of the RTF in the AEC. In a second configuration, the (inverse) prefiltering is done on the loudspeaker signal, hence achieving a dereverberation effect, in addition to AEC, on the microphone signals.",
"title": ""
}
] |
[
{
"docid": "9003a12f984d2bf2fd84984a994770f0",
"text": "Sulfated polysaccharides and their lower molecular weight oligosaccharide derivatives from marine macroalgae have been shown to possess a variety of biological activities. The present paper will review the recent progress in research on the structural chemistry and the bioactivities of these marine algal biomaterials. In particular, it will provide an update on the structural chemistry of the major sulfated polysaccharides synthesized by seaweeds including the galactans (e.g., agarans and carrageenans), ulvans, and fucans. It will then review the recent findings on the anticoagulant/antithrombotic, antiviral, immuno-inflammatory, antilipidemic and antioxidant activities of sulfated polysaccharides and their potential for therapeutic application.",
"title": ""
},
{
"docid": "282ace724b3c9a2e8b051499ba5e4bfe",
"text": "Fog computing, being an extension to cloud computing has addressed some issues found in cloud computing by providing additional features, such as location awareness, low latency, mobility support, and so on. Its unique features have also opened a way toward security challenges, which need to be focused for making it bug-free for the users. This paper is basically focusing on overcoming the security issues encountered during the data outsourcing from fog client to fog node. We have added Shibboleth also known as security and cross domain access control protocol between fog client and fog node for improved and secure communication between the fog client and fog node. Furthermore to prove whether Shibboleth meets the security requirement needed to provide the secure outsourcing. We have also formally verified the protocol against basic security properties using high level Petri net.",
"title": ""
},
{
"docid": "bc1d4ce838971d6a04d5bf61f6c3f2d8",
"text": "This paper presents a novel network slicing management and orchestration architectural framework. A brief description of business scenarios and potential customers of network slicing is provided, illustrating the need for ordering network services with very different requirements. Based on specific customer goals (of ordering and building an end-to-end network slice instance) and other requirements gathered from industry and standardization associations, a solution is proposed enabling the automation of end-to-end network slice management and orchestration in multiple resource domains. This architecture distinguishes between two main design time and runtime components: Network Slice Design and Multi-Domain Orchestrator, belonging to different competence service areas with different players in these domains, and proposes the required interfaces and data structures between these components.",
"title": ""
},
{
"docid": "fff85feeef18f7fa99819711e47e2d39",
"text": "This paper presents a robotic vehicle that can be operated by the voice commands given from the user. Here, we use the speech recognition system for giving &processing voice commands. The speech recognition system use an I.C called HM2007, which can store and recognize up to 20 voice commands. The R.F transmitter and receiver are used here, for the wireless transmission purpose. The micro controller used is AT89S52, to give the instructions to the robot for its operation. This robotic car can be able to avoid vehicle collision , obstacle collision and it is very secure and more accurate. Physically disabled persons can use these robotic cars and they can be used in many industries and for many applications Keywords—SpeechRecognitionSystem,AT89S52 micro controller, R. F. Transmitter and Receiver.",
"title": ""
},
{
"docid": "13fed0d1099638f536c5a950e3d54074",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/autumn2016/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. If you are skipping a question, please include it on your PDF/photo, but leave the question blank and tag it appropriately on Gradescope. This includes extra credit problems. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices. 1. [23 points] Uniform convergence You are hired by CNN to help design the sampling procedure for making their electoral predictions for the next presidential election in the (fictitious) country of Elbania. The country of Elbania is organized into states, and there are only two candidates running in this election: One from the Elbanian Democratic party, and another from the Labor Party of Elbania. The plan for making our electorial predictions is as follows: We'll sample m voters from each state, and ask whether they're voting democrat. We'll then publish, for each state, the estimated fraction of democrat voters. In this problem, we'll work out how many voters we need to sample in order to ensure that we get good predictions with high probability. One reasonable goal might be to set m large enough that, with high probability, we obtain uniformly accurate estimates of the fraction of democrat voters in every state. But this might require surveying very many people, which would be prohibitively expensive. So, we're instead going to demand only a slightly lower degree of accuracy. Specifically, we'll say that our prediction for a state is \" highly inaccurate \" if the estimated fraction of democrat voters differs from the actual fraction of democrat voters within that state by more than a tolerance factor γ. CNN knows that their viewers will tolerate some small number of states' estimates being highly inaccurate; however, their credibility would be damaged if they reported highly inaccurate estimates for too many states. So, rather than …",
"title": ""
},
{
"docid": "34be7f7bef24df9c51ee43d360a462c5",
"text": "Rasterization hardware provides interactive frame rates for rendering dynamic scenes, but lacks the ability of ray tracing required for efficient global illumination simulation. Existing ray tracing based methods yield high quality renderings but are far too slow for interactive use. We present a new parallel global illumination algorithm that perfectly scales, has minimal preprocessing and communication overhead, applies highly efficient sampling techniques based on randomized quasi-Monte Carlo integration, and benefits from a fast parallel ray tracing implementation by shooting coherent groups of rays. Thus a performance is achieved that allows for applying arbitrary changes to the scene, while simulating global illumination including shadows from area light sources, indirect illumination, specular effects, and caustics at interactive frame rates. Ceasing interaction rapidly provides high quality renderings.",
"title": ""
},
{
"docid": "6b48f3791d5af0c6bea607360b6ebb9e",
"text": "Despite recent progress in computer vision, fine-grained interpretation of satellite images remains challenging because of a lack of labeled training data. To overcome this limitation, we propose using Wikipedia as a previously untapped source of rich, georeferenced textual information with global coverage. We construct a novel large-scale, multi-modal dataset by pairing geo-referenced Wikipedia articles with satellite imagery of their corresponding locations. To prove the efficacy of this dataset, we focus on the African continent and train a deep network to classify images based on labels extracted from articles. We then fine-tune the model on a humanannotated dataset and demonstrate that this weak form of supervision can drastically reduce the quantity of humanannotated labels and time required for downstream tasks.",
"title": ""
},
{
"docid": "1c075aac5462cf6c6251d6c9c1a679c0",
"text": "Why You Can’t Find a Taxi in the Rain and Other Labor Supply Lessons from Cab Drivers In a seminal paper, Camerer, Babcock, Loewenstein, and Thaler (1997) find that the wage elasticity of daily hours of work New York City (NYC) taxi drivers is negative and conclude that their labor supply behavior is consistent with target earning (having reference dependent preferences). I replicate and extend the CBLT analysis using data from all trips taken in all taxi cabs in NYC for the five years from 2009-2013. Using the model of expectations-based reference points of Koszegi and Rabin (2006), I distinguish between anticipated and unanticipated daily wage variation and present evidence that only a small fraction of wage variation (about 1/8) is unanticipated so that reference dependence (which is relevant only in response to unanticipated variation) can, at best, play a limited role in determining labor supply. The overall pattern in my data is clear: drivers tend to respond positively to unanticipated as well as anticipated increases in earnings opportunities. This is consistent with the neoclassical optimizing model of labor supply and does not support the reference dependent preferences model. I explore heterogeneity across drivers in their labor supply elasticities and consider whether new drivers differ from more experienced drivers in their behavior. I find substantial heterogeneity across drivers in their elasticities, but the estimated elasticities are generally positive and only rarely substantially negative. I also find that new drivers with smaller elasticities are more likely to exit the industry while drivers who remain learn quickly to be better optimizers (have positive labor supply elasticities that grow with experience). JEL Classification: J22, D01, D03",
"title": ""
},
{
"docid": "d59a2c1673d093584c5f19212d6ba520",
"text": "Introduction and Motivation Today, a majority of data is fundamentally distributed in nature. Data for almost any task is collected over a broad area, and streams in at a much greater rate than ever before. In particular, advances in sensor technology and miniaturization have led to the concept of the sensor network: a (typically wireless) collection of sensing devices collecting detailed data about their surroundings. A fundamental question arises: how to query and monitor this rich new source of data? Similar scenarios emerge within the context of monitoring more traditional, wired networks, and in other emerging models such as P2P networks and grid-based computing. The prevailing paradigm in database systems has been understanding management of centralized data: how to organize, index, access, and query data that is held centrally on a single machine or a small number of closely linked machines. In these distributed scenarios, the axiom is overturned: now, data typically streams into remote sites at high rates. Here, it is not feasible to collect the data in one place: the volume of data collection is too high, and the capacity for data communication relatively low. For example, in battery-powered wireless sensor networks, the main drain on battery life is communication, which is orders of magnitude more expensive than computation or sensing. This establishes a fundamental concept for distributed stream monitoring: if we can perform more computational work within the network to reduce the communication needed, then we can significantly improve the value of our network, by increasing its useful life and extending the range of computation possible over the network. We consider two broad classes of approaches to such in-network query processing, by analogy to query types in traditional DBMSs. In the one shot model, a query is issued by a user at some site, and must be answered based on the current state of data in the network. We identify several possible approaches to this problem. For simple queries, partial computation of the result over a tree can reduce the data transferred significantly. For “holistic” queries, such as medians, count distinct and so on, clever composable summaries give a compact way to accurately approximate query answers. Lastly, careful modeling of correlations between measurements and other trends in the data can further reduce the number of sensors probed. In the continuous model, a query is placed by a user which re-",
"title": ""
},
{
"docid": "022a2f42669fdb337cfb4646fed9eb09",
"text": "A mobile agent with the task to classify its sensor pattern has to cope with ambiguous information. Active recognition of three-dimensional objects involves the observer in a search for discriminative evidence, e.g., by change of its viewpoint. This paper defines the recognition process as a sequential decision problem with the objective to disambiguate initial object hypotheses. Reinforcement learning provides then an efficient method to autonomously develop near-optimal decision strategies in terms of sensorimotor mappings. The proposed system learns object models from visual appearance and uses a radial basis function (RBF) network for a probabilistic interpretation of the two-dimensional views. The information gain in fusing successive object hypotheses provides a utility measure to reinforce actions leading to discriminative viewpoints. The system is verified in experiments with 16 objects and two degrees of freedom in sensor motion. Crucial improvements in performance are gained using the learned in contrast to random camera placements. © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "62e2ebbd0b32106f578e71b7494ea321",
"text": "The goal of text categorization is to classify documents into a certain number of predefined categories. The previous works in this area have used a large number of labeled training doculnents for supervised learning. One problem is that it is difficult to create the labeled training documents. While it is easy to collect the unlabeled documents, it is not so easy to manually categorize them for creating traiuing documents. In this paper, we propose an unsupervised learning method to overcome these difficulties. The proposed lnethod divides the documents into sentences, and categorizes each sentence using keyword lists of each category and sentence simihuity measure. And then, it uses the categorized sentences for refining. The proposed method shows a similar degree of performance, compared with the traditional supervised learning inethods. Therefore, this method can be used in areas where low-cost text categorization is needed. It also can be used for creating training documents.",
"title": ""
},
{
"docid": "4c0c6373c40bd42417fa2890fc80986b",
"text": "Regularized inversion methods for image reconstruction are used widely due to their tractability and their ability to combine complex physical sensor models with useful regularity criteria. Such methods were used in the recently developed Plug-and-Play prior method, which provides a framework to use advanced denoising algorithms as regularizers in inversion. However, the need to formulate regularized inversion as the solution to an optimization problem severely limits both the expressiveness of possible regularity conditions and the variety of provably convergent Plug-and-Play denoising operators. In this paper, we introduce the concept of consensus equilibrium (CE), which generalizes regularized inversion to include a much wider variety of regularity operators without the need for an optimization formulation. Consensus equilibrium is based on the solution of a set of equilibrium equations that balance data fit and regularity. In this framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving these equilibrium equations, which can be approached in multiple ways, including as a fixed point problem that generalizes the ADMM approach used in the Plug-and-Play method. We present the Douglas-Rachford (DR) algorithm for computing the CE solution as a fixed point and prove the convergence of this algorithm under conditions that include denoising operators that do not arise from optimization problems and that may not be nonexpansive. We give several examples to illustrate the idea of consensus equilibrium and the convergence properties of the DR algorithm and demonstrate this method on a sparse interpolation problem using electron microscopy data.",
"title": ""
},
{
"docid": "b4284204ae7d9ef39091a651583b3450",
"text": "Embedding learning, a.k.a. representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs. In recent publications the embedding models were extended to also consider temporal evolutions, temporal patterns and subsymbolic representations. In this paper we map embedding models, which were developed purely as solutions to technical problems for modelling temporal knowledge graphs, to various cognitive memory functions, in particular to semantic and concept memory, episodic memory, sensory memory, short-term memory, and working memory. We discuss learning, query answering, the path from sensory input to semantic decoding, and relationships between episodic memory and semantic memory. We introduce a number of hypotheses on human memory that can be derived from the developed mathematical models. There are three main hypotheses. The first one is that semantic memory is described as triples and that episodic memory is described as triples in time. A second main hypothesis is that generalized entities have unique latent representations which are shared across memory functions and that are the basis for prediction, decision support and other functionalities executed by working memory. A third main hypothesis is that the latent representation for a time t, which summarizes all sensory information available at time t, is the basis for episodic memory. The proposed model includes both a recall of previous memories and the mental imagery of future events and sensory impressions.",
"title": ""
},
{
"docid": "5a8729b6b08e79e7c27ddf779b0a5267",
"text": "Electric solid propellants are an attractive option for space propulsion because they are ignited by applied electric power only. In this work, the behavior of pulsed microthruster devices utilizing such a material is investigated. These devices are similar in function and operation to the pulsed plasma thruster, which typically uses Teflon as propellant. A Faraday probe, Langmuir triple probe, residual gas analyzer, pendulum thrust stand and high speed camera are utilized as diagnostic devices. These thrusters are made in batches, of which a few devices were tested experimentally in vacuum environments. Results indicate a plume electron temperature of about 1.7 eV, with an electron density between 10 and 10 cm. According to thermal equilibrium and adiabatic expansion calculations, these relatively hot electrons are mixed with ~2000 K neutral and ion species, forming a non-equilibrium gas. From time-of-flight analysis, this gas mixture plume has an effective velocity of 1500-1650 m/s on centerline. The ablated mass of this plume is 215 μg on average, of which an estimated 0.3% is ionized species while 45±11% is ablated at negligible relative speed. This late-time ablation occurs on a time scale three times that of the 0.5 ms pulse discharge, and does not contribute to the measured 0.21 mN-s impulse per pulse. Similar values have previously been measured in pulsed plasma thrusters. These observations indicate the electric solid propellant material in this configuration behaves similar to Teflon in an electrothermal pulsed plasma",
"title": ""
},
{
"docid": "4d4a413931365904cd460249448f3bf4",
"text": "For gaining proficiency in physical human-robot interaction (pHRI), it is crucial for engineering students to be provided with the opportunity to physically interact with and gain hands-on experience on design and control of force-feedback robotic devices. We present a single degree of freedom educational robot that features series elastic actuation and relies on closed loop force control to achieve the desired level of safety and transparency during physical interactions. The proposed device complements the existing impedance-type Haptic Paddle designs by demonstrating the challenges involved in the synergistic design and control of admittance-type devices. We present integration of this device into pHRI education, by providing guidelines for the use of the device to allow students to experience the performance trade-offs inherent in force control systems, due to the non-collocation between the force sensor and the actuator. These exercises enable students to modify the mechanical design in addition to the controllers, by assigning different levels of stiffness values to the compliant element, and characterize the effects of these design choices on the closed-loop force control performance of the device. We also report initial evaluations of the efficacy of the device for",
"title": ""
},
{
"docid": "41defd4d4926625cdb617e8482bf3177",
"text": "Common perception regards the nucleus as a densely packed object with higher refractive index (RI) and mass density than the surrounding cytoplasm. Here, the volume of isolated nuclei is systematically varied by electrostatic and osmotic conditions as well as drug treatments that modify chromatin conformation. The refractive index and dry mass of isolated nuclei is derived from quantitative phase measurements using digital holographic microscopy (DHM). Surprisingly, the cell nucleus is found to have a lower RI and mass density than the cytoplasm in four different cell lines and throughout the cell cycle. This result has important implications for conceptualizing light tissue interactions as well as biological processes in cells.",
"title": ""
},
{
"docid": "102077708fb1623c44c3b23d02387dd4",
"text": "Machine leaning apps require heavy computations, especially with the use of the deep neural network (DNN), so an embedded device with limited hardware cannot run the apps by itself. One solution for this problem is to offload DNN computations from the client to a nearby edge server. Existing approaches to DNN offloading with edge servers either specialize the edge server for fixed, specific apps, or customize the edge server for diverse apps, yet after migrating a large VM image that contains the client's back-end software system. In this paper, we propose a new and simple approach to offload DNN computations in the context of web apps. We migrate the current execution state of a web app from the client to the edge server just before executing a DNN computation, so that the edge server can execute the DNN computation with its powerful hardware. Then, we migrate the new execution state from the edge server to the client so that the client can continue to execute the app. We can save the execution state of the web app in the form of another web app called the snapshot, which immensely simplifies saving and restoring the execution state with a small overhead. We can offload any DNN app to any generic edge server, equipped with a browser and our offloading system. We address some issues related to offloading DNN apps such as how to send the DNN model and how to improve the privacy of user data. We also discuss how to install our offloading system on the edge server on demand. Our experiment with real DNN-based web apps shows that snapshot-based offloading achieves a promising performance result, comparable to running the app entirely on the server.",
"title": ""
},
{
"docid": "ee31719bce1b770e5347b7aa3189d94a",
"text": "Signature-based intrusion detection systems use a set of attack descriptions to analyze event streams, looking for evidence of malicious behavior. If the signatures are expressed in a well-defined language, it is possible to analyze the attack signatures and automatically generate events or series of events that conform to the attack descriptions. This approach has been used in tools whose goal is to force intrusion detection systems to generate a large number of detection alerts. The resulting “alert storm” is used to desensitize intrusion detection system administrators and hide attacks in the event stream. We apply a similar technique to perform testing of intrusion detection systems. Signatures from one intrusion detection system are used as input to an event stream generator that produces randomized synthetic events that match the input signatures. The resulting event stream is then fed to a number of different intrusion detection systems and the results are analyzed. This paper presents the general testing approach and describes the first prototype of a tool, called Mucus, that automatically generates network traffic using the signatures of the Snort network-based intrusion detection system. The paper describes preliminary cross-testing experiments with both an open-source and a commercial tool and reports the results. An evasion attack that was discovered as a result of analyzing the test results is also presented.",
"title": ""
},
{
"docid": "0222814440107fe89c13a790a6a3833e",
"text": "This paper presents a third method of generation and detection of a single-sideband signal. The method is basically different from either the conventional filter or phasing method in that no sharp cutoff filters or wide-band 90° phase-difference networks are needed. This system is especially suited to keeping the signal energy confined to the desired bandwidth. Any unwanted sideband occupies the same band as the desired sideband, and the unwanted sideband in the usual sense is not present.",
"title": ""
},
{
"docid": "e91c18f5509e05471d20d4e28e03b014",
"text": "This paper describes the design of a broadside circularly polarized uniform circular array based on curved planar inverted F-antenna elements. Circular polarization (CP) is obtained by exploiting the sequential rotation technique and implementing it with a series feed network. The proposed structure is first introduced, and some geometrical considerations are derived. Second, the array radiation body is designed taking into account the mutual coupling among antenna elements. Third, the series feed network usually employed for four-antenna element arrays is analyzed and extended to three and more than four antennas exploiting the special case of equal power distribution. The array is designed with three-, four-, five-, and six-antenna elements, and dimensions, impedance bandwidth (defined for <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}\\leq -10$ </tex-math></inline-formula> dB), axial ratio (AR) bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\text {AR}\\leq 3$ </tex-math></inline-formula> dB), gain, beamwidth, front-to-back ratio, and cross-polarization level are compared. Arrays with three and five elements are also prototyped to benchmark the numerical analysis results, finding good correspondence.",
"title": ""
}
] |
scidocsrr
|
fd6aaf8004e09273035614855bae2869
|
Combining Words and Speech Prosody for Automatic Topic Segmentation
|
[
{
"docid": "7e9dbc7f1c3855972dbe014e2223424c",
"text": "Speech disfluencies (filled pauses, repe titions, repairs, and false starts) are pervasive in spontaneous speech. The ab ility to detect and correct disfluencies automatically is important for effective natural language understanding, as well as to improve speech models in general. Previous approaches to disfluency detection have relied heavily on lexical information, which makes them less applicable when word recognition is unreliable. We have developed a disfluency detection method using decision tree classifiers that use only local and automatically extracted prosodic features. Because the model doesn’t rely on lexical information, it is widely applicable even when word recognition is unreliable. The model performed significantly better than chance at detecting four disfluency types. It also outperformed a language model in the detection of false starts, given the correct transcription. Combining the prosody model with a specialized language model improved accuracy over either model alone for the detection of false starts. Results suggest that a prosody-only model can aid the automatic detection of disfluencies in spontaneous speech.",
"title": ""
},
{
"docid": "e1315cfdc9c1a33b7b871c130f34d6ce",
"text": "TextTiling is a technique for subdividing texts into multi-paragraph units that represent passages, or subtopics. The discourse cues for identifying major subtopic shifts are patterns of lexical co-occurrence and distribution. The algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts. Multi-paragraph subtopic segmentation should be useful for many text analysis tasks, including information retrieval and summarization.",
"title": ""
}
] |
[
{
"docid": "61fd52ce6d91dcde173ee65e80167814",
"text": "We present a simple nearest-neighbor (NN) approach that synthesizes highfrequency photorealistic images from an “incomplete” signal such as a lowresolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem. (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to a (overly-smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. Importantly, pixel-wise matching allows our method to compose novel high-frequency content by cutting-and-pasting pixels from different training exemplars. We demonstrate our approach for various input modalities, and for various domains ranging from human faces, pets, shoes, and handbags. 12x12 Input (x8) Our Approach (a) Low-Resolution to High-Resolution Surface Normal Map Our Approach (b) Normals-to-RGB Edges Our Approach (c) Edges-to-RGB (d) Edges-to-RGB (Multiple Outputs) (e) Normals-to-RGB (Multiple Outputs) (d) Edges-to-Shoes (Multiple Outputs) (e) Edges-to-Handbags (Multiple Outputs) Figure 1: Our approach generates photorealistic output for various “incomplete” signals such as a low resolution image, a surface normal map, and edges/boundaries for human faces, cats, dogs, shoes, and handbags. Importantly, our approach can easily generate multiple outputs for a given input which was not possible in previous approaches (Isola et al., 2016) due to mode-collapse problem. Best viewed in electronic format.",
"title": ""
},
{
"docid": "1c8a3500d9fbd7e6c10dfffc06157d74",
"text": "The issue of privacy protection in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we put forward a framework to assess the capacity of privacy protection solutions to hide distinguishing facial information and to conceal identity. We then conduct rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by privacy protection techniques. Results show the ineffectiveness of naïve privacy protection techniques such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.",
"title": ""
},
{
"docid": "fd9411cfa035139010be0935d9e52865",
"text": "This paper presents a robotic manipulation system capable of autonomously positioning a multi-segment soft fluidic elastomer robot in three dimensions. Specifically, we present an extremely soft robotic manipulator morphology that is composed entirely from low durometer elastomer, powered by pressurized air, and designed to be both modular and durable. To understand the deformation of a single arm segment, we develop and experimentally validate a static deformation model. Then, to kinematically model the multi-segment manipulator, we use a piece-wise constant curvature assumption consistent with more traditional continuum manipulators. In addition, we define a complete fabrication process for this new manipulator and use this process to make multiple functional prototypes. In order to power the robot’s spatial actuation, a high capacity fluidic drive cylinder array is implemented, providing continuously variable, closed-circuit gas delivery. Next, using real-time data from a vision system, we develop a processing and control algorithm that generates realizable kinematic curvature trajectories and controls the manipulator’s configuration along these trajectories. Lastly, we experimentally demonstrate new capabilities offered by this soft fluidic elastomer manipulation system such as entering and advancing through confined three-dimensional environments as well as conforming to goal shape-configurations within a sagittal plane under closed-loop control.",
"title": ""
},
{
"docid": "7317ba76ddba2933cdf01d8284fd687e",
"text": "In most of the cases, scientists depend on previous literature which is relevant to their research fields for developing new ideas. However, it is not wise, nor possible, to track all existed publications because the volume of literature collection grows extremely fast. Therefore, researchers generally follow, or cite merely a small proportion of publications which they are interested in. For such a large collection, it is rather interesting to forecast which kind of literature is more likely to attract scientists' response. In this paper, we use the citations as a measurement for the popularity among researchers and study the interesting problem of Citation Count Prediction (CCP) to examine the characteristics for popularity. Estimation of possible popularity is of great significance and is quite challenging. We have utilized several features of fundamental characteristics for those papers that are highly cited and have predicted the popularity degree of each literature in the future. We have implemented a system which takes a series of features of a particular publication as input and produces as output the estimated citation counts of that article after a given time period. We consider several regression models to formulate the learning process and evaluate their performance based on the coefficient of determination (R-square). Experimental results on a real-large data set show that the best predictive model achieves a mean average predictive performance of 0.740 measured in R-square, which significantly outperforms several alternative algorithms.",
"title": ""
},
{
"docid": "09d1fa9a1f9af3e9560030502be1d976",
"text": "Academic Center for Computing and Media Studies, Kyoto University Graduate School of Informatics, Kyoto University Yoshidahonmachi, Sakyo-ku, Kyoto, Japan forest@i.kyoto-u.ac.jp, maeta@ar.media.kyoto-u.ac.jp, yamakata@dl.kuis.kyoto-u.ac.jp, sasada@ar.media.kyoto-u.ac.jp Abstract In this paper, we present our attempt at annotating procedural texts with a flow graph as a representation of understanding. The domain we focus on is cooking recipe. The flow graphs are directed acyclic graphs with a special root node corresponding to the final dish. The vertex labels are recipe named entities, such as foods, tools, cooking actions, etc. The arc labels denote relationships among them. We converted 266 Japanese recipe texts into flow graphs manually. 200 recipes are randomly selected from a web site and 66 are of the same dish. We detail the annotation framework and report some statistics on our corpus. The most typical usage of our corpus may be automatic conversion from texts to flow graphs which can be seen as an entire understanding of procedural texts. With our corpus, one can also try word segmentation, named entity recognition, predicate-argument structure analysis, and coreference resolution.",
"title": ""
},
{
"docid": "0a3feaa346f4fd6bfc0bbda6ba92efc6",
"text": "We present Magic Finger, a small device worn on the fingertip, which supports always-available input. Magic Finger inverts the typical relationship between the finger and an interactive surface: with Magic Finger, we instrument the user's finger itself, rather than the surface it is touching. Magic Finger senses touch through an optical mouse sensor, enabling any surface to act as a touch screen. Magic Finger also senses texture through a micro RGB camera, allowing contextual actions to be carried out based on the particular surface being touched. A technical evaluation shows that Magic Finger can accurately sense 22 textures with an accuracy of 98.9%. We explore the interaction design space enabled by Magic Finger, and implement a number of novel interaction techniques that leverage its unique capabilities.",
"title": ""
},
{
"docid": "a7beddd461e9eba954e947d5c71debe8",
"text": "This paper presents an approach to the problem of paraphrase identification in English and Indian languages using Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN). Traditional machine learning approaches used features that involved using resources such as POS taggers, dependency parsers, etc. for English. The lack of similar resources for Indian languages has been a deterrent to the advancement of paraphrase detection task in Indian languages. Deep learning helps in overcoming the shortcomings of traditional machine Learning techniques. In this paper, three approaches have been proposed, a simple CNN that uses word embeddings as input, a CNN that uses WordNet scores as input and RNN based approach with both LSTM and bi-directional LSTM.",
"title": ""
},
{
"docid": "8767787aaa4590acda7812411135c168",
"text": "Automatic annotation of images is one of the fundamental problems in computer vision applications. With the increasing amount of freely available images, it is quite possible that the training data used to learn a classifier has different distribution from the data which is used for testing. This results in degradation of the classifier performance and highlights the problem known as domain adaptation. Framework for domain adaptation typically requires a classification model which can utilize several classifiers by combining their results to get the desired accuracy. This work proposes depth-based and iterative depth-based fusion methods which are basically rank-based fusion methods and utilize rank of the predicted labels from different classifiers. Two frameworks are also proposed for domain adaptation. The first framework uses traditional machine learning algorithms, while the other works with metric learning as well as transfer learning algorithm. Motivated from ImageCLEF’s 2014 domain adaptation task, these frameworks with the proposed fusion methods are validated and verified by conducting experiments on the images from five domains having varied distributions. Bing, Caltech, ImageNet, and PASCAL are used as source domains and the target domain is SUN. Twelve object categories are chosen from these domains. The experimental results show the performance improvement not only over the baseline system, but also over the winner of the ImageCLEF’s 2014 domain adaptation challenge.",
"title": ""
},
{
"docid": "ba75caedb1c9e65f14c2764157682bdf",
"text": "Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, the effect of regular data augmentation, such as random image crop, is limited since it might introduce much uncontrolled background noise. In this paper, we propose WeaklySupervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object’s discriminative parts by weakly supervised Learning. Next, we randomly choose one attention map to augment this image, including attention crop and attention drop. Weakly-supervised data augmentation network improves the classification accuracy in two folds. On the one hand, images can be seen better since multiple object parts can be activated. On the other hand, attention regions provide spatial information of objects, which can make images be looked closer to further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our method surpasses the state-of-the-art methods by a large margin, which demonstrated the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "64306a76b61bbc754e124da7f61a4fbe",
"text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.",
"title": ""
},
{
"docid": "7550ec8917588a6adb629e3d1beabd76",
"text": "This paper describes the algorithm for deriving the total column ozone from spectral radiances and irradiances measured by the Ozone Monitoring Instrument (OMI) on the Earth Observing System Aura satellite. The algorithm is based on the differential optical absorption spectroscopy technique. The main characteristics of the algorithm as well as an error analysis are described. The algorithm has been successfully applied to the first available OMI data. First comparisons with ground-based instruments are very encouraging and clearly show the potential of the method.",
"title": ""
},
{
"docid": "848dd074e4615ea5ecb164c96fac6c63",
"text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.",
"title": ""
},
{
"docid": "0ef4cf0b46b43670a3d9554aba6e2d89",
"text": "lthough banks’ lending activities draw the attention of supervisors, lawmakers, researchers, and the press, a very substantial and growing portion of the industry’s total revenue is received in the form of fee income. The amount of fee, or noninterest, income earned by the banking sector suggests that the significance of payments services has been understated or overlooked. A lack of good information about the payments area may partly explain the failure to gauge the size of this business line correctly. In reports to supervisory agencies, banking organizations provide data relating primarily to their safety and soundness. By the design of the reports, banks transmit information on profitability, capital, and the size and condition of the loan portfolio. Limited information can be extracted from regulatory reports on individual business lines; in fact, these reports imply that banks receive just 7 percent of their net revenue from payments services. A narrow definition of payments, or transactions, services may also contribute to a poor appreciation of this banking function. While checking accounts are universally recognized as a payments service, credit cards, corporate trust accounts, and securities processing should also be treated as parts of a bank’s payments business. The common but limited definition of the payments area reflects the tight focus of banking research on lending and deposit taking. In theoretical studies, economists explain the prominence of commercial banks in the financial sector in terms of these two functions. First, by developing their skills in screening applicants, monitoring borrowers, and obtaining repayment, commercial banks became the dominant lender to relatively small-sized borrowers. Second, because investors demand protection against the risk that they may need liquidity earlier than anticipated, bank deposits are a special and highly useful financial instrument. While insightful, neither rationale explains why A",
"title": ""
},
{
"docid": "8ef51eeb7705a1369103a36f60268414",
"text": "Cloud computing is a new way of delivering computing resources, not a new technology. Computing services ranging from data storage and processing to software, such as email handling, are now available instantly, commitment-free and on-demand. Since we are in a time of belt-tightening, this new economic model for computing has found fertile ground and is seeing massive global investment. According to IDC’s analysis, the worldwide forecast for cloud services in 2009 will be in the order of $17.4bn. The estimation for 2013 amounts to $44.2bn, with the European market ranging from €971m in 2008 to €6,005m in 2013 .",
"title": ""
},
{
"docid": "0b024671e04090051292b5e76a4690ae",
"text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.",
"title": ""
},
{
"docid": "f7ce06365e2c74ccbf8dcc04277cfb9d",
"text": "In this paper, we propose an enhanced method for detecting light blobs (LBs) for intelligent headlight control (IHC). The main function of the IHC system is to automatically convert high-beam headlights to low beam when vehicles are found in the vicinity. Thus, to implement the IHC, it is necessary to detect preceding or oncoming vehicles. Generally, this process of detecting vehicles is done by detecting LBs in the images. Previous works regarding LB detection can largely be categorized into two approaches by the image type they use: low-exposure (LE) images or autoexposure (AE) images. While they each have their own strengths and weaknesses, the proposed method combines them by integrating the use of the partial region of the AE image confined by the lane detection information and the LE image. Consequently, the proposed method detects headlights at various distances and taillights at close distances using LE images while handling taillights at distant locations by exploiting the confined AE images. This approach enhances the performance of detecting the distant LBs while maintaining low false detections.",
"title": ""
},
{
"docid": "1a747f8474841b6b99184487994ad6a2",
"text": "This paper discusses the effects of multivariate correlation analysis on the DDoS detection and proposes an example, a covariance analysis model for detecting SYN flooding attacks. The simulation results show that this method is highly accurate in detecting malicious network traffic in DDoS attacks of different intensities. This method can effectively differentiate between normal and attack traffic. Indeed, this method can detect even very subtle attacks only slightly different from the normal behaviors. The linear complexity of the method makes its real time detection practical. The covariance model in this paper to some extent verifies the effectiveness of multivariate correlation analysis for DDoS detection. Some open issues still exist in this model for further research.",
"title": ""
},
{
"docid": "a329c114a101a7968b67c3cd179b27f6",
"text": "The detection of text lines, as a first processing step, is critical in all text recognition systems. State-of-the-art methods to locate lines of text are based on handcrafted heuristics fine-tuned by the image processing community's experience. They succeed under certain constraints; for instance the background has to be roughly uniform. We propose to use more “agnostic” Machine Learning-based approaches to address text line location. The main motivation is to be able to process either damaged documents, or flows of documents with a high variety of layouts and other characteristics. A new method is presented in this work, inspired by the latest generation of optical models used for text recognition, namely Recurrent Neural Networks. As these models are sequential, a column of text lines in our application plays here the same role as a line of characters in more traditional text recognition settings. A key advantage of the proposed method over other data-driven approaches is that compiling a training dataset does not require labeling line boundaries: only the number of lines are required for each paragraph. Experimental results show that our approach gives similar or better results than traditional handcrafted approaches, with little engineering efforts and less hyper-parameter tuning.",
"title": ""
},
{
"docid": "919dc4727575e2ce0419d31b03ddfbf3",
"text": "In wireless ad hoc networks, although defense strategies such as intrusion detection systems (IDSs) can be deployed at each mobile node, significant constraints are imposed in terms of the energy expenditure of such systems. In this paper, we propose a game theoretic framework to analyze the interactions between pairs of attacking/defending nodes using a Bayesian formulation. We study the achievable Nash equilibrium for the attacker/defender game in both static and dynamic scenarios. The dynamic Bayesian game is a more realistic model, since it allows the defender to consistently update his belief on his opponent's maliciousness as the game evolves. A new Bayesian hybrid detection approach is suggested for the defender, in which a lightweight monitoring system is used to estimate his opponent's actions, and a heavyweight monitoring system acts as a last resort of defense. We show that the dynamic game produces energy-efficient monitoring strategies for the defender, while improving the overall hybrid detection power.",
"title": ""
},
{
"docid": "39ab78b58f6ace0fc29f18a1c4ed8ebc",
"text": "We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.",
"title": ""
}
] |
scidocsrr
|
21e0f18f34267496c1b1f96dcdc63e8b
|
Review of Image Processing Technique for Glaucoma Detection
|
[
{
"docid": "830a585529981bd5b61ac5af3055d933",
"text": "Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09/0.08 (mean/standard deviation) while for cup-to-disk area ratio it is 0.12/0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.",
"title": ""
},
{
"docid": "8a9b118ba8e3546ef70670ea45e8988f",
"text": "The retinal fundus photograph is widely used in the diagnosis and treatment of various eye diseases such as diabetic retinopathy and glaucoma. Medical image analysis and processing has great significance in the field of medicine, especially in non-invasive treatment and clinical study. Normally fundus images are manually graded by specially trained clinicians in a time-consuming and resource-intensive process. A computer-aided fundus image analysis could provide an immediate detection and characterisation of retinal features prior to specialist inspection. This paper describes a novel method to automatically localise one such feature: the optic disk. The proposed method consists of two steps: in the first step, a circular region of interest is found by first isolating the brightest area in the image by means of morphological processing, and in the second step, the Hough transform is used to detect the main circular feature (corresponding to the optical disk) within the positive horizontal gradient image within this region of interest. Initial results on a database of fundus images show that the proposed method is effective and favourable in relation to comparable techniques.",
"title": ""
}
] |
[
{
"docid": "60ea2144687d867bb4f6b21e792a8441",
"text": "Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.",
"title": ""
},
{
"docid": "a5ac7aa3606ebb683d4d9de5dcd89856",
"text": "Advanced persistent threats (APTs) pose a significant risk to nearly every infrastructure. Due to the sophistication of these attacks, they are able to bypass existing security systems and largely infiltrate the target network. The prevention and detection of APT campaigns is also challenging, because of the fact that the attackers constantly change and evolve their advanced techniques and methods to stay undetected. In this paper we analyze 22 different APT reports and give an overview of the used techniques and methods. The analysis is focused on the three main phases of APT campaigns that allow to identify the relevant characteristics of such attacks. For each phase we describe the most commonly used techniques and methods. Through this analysis we could reveal different relevant characteristics of APT campaigns, for example that the usage of 0-day exploit is not common for APT attacks. Furthermore, the analysis shows that the dumping of credentials is a relevant step in the lateral movement phase for most APT campaigns. Based on the identified characteristics, we also propose concrete prevention and detection approaches that make it possible to identify crucial malicious activities that are performed during APT campaigns.",
"title": ""
},
{
"docid": "df02dafb455e2b68035cf8c150e28a0a",
"text": "Blueberry, raspberry and strawberry may have evolved strategies for survival due to the different soil conditions available in their natural environment. Since this might be reflected in their response to rhizosphere pH and N form supplied, investigations were carried out in order to compare effects of nitrate and ammonium nutrition (the latter at two different pH regimes) on growth, CO2 gas exchange, and on the activity of key enzymes of the nitrogen metabolism of these plant species. Highbush blueberry (Vaccinium corymbosum L. cv. 13–16–A), raspberry (Rubus idaeus L. cv. Zeva II) and strawberry (Fragaria × ananassa Duch. cv. Senga Sengana) were grown in 10 L black polyethylene pots in quartz sand with and without 1% CaCO3 (w: v), respectively. Nutrient solutions supplied contained nitrate (6 mM) or ammonium (6 mM) as the sole nitrogen source. Compared with strawberries fed with nitrate nitrogen, supply of ammonium nitrogen caused a decrease in net photosynthesis and dry matter production when plants were grown in quartz sand without added CaCO3. In contrast, net photosynthesis and dry matter production increased in blueberries fed with ammonium nitrogen, while dry matter production of raspberries was not affected by the N form supplied. In quartz sand with CaCO3, ammonium nutrition caused less deleterious effects on strawberries, and net photosynthesis in raspberries increased as compared to plants grown in quartz sand without CaCO3 addition. Activity of nitrate reductase (NR) was low in blueberries and could only be detected in the roots of plants supplied with nitrate nitrogen. In contrast, NR activity was high in leaves, but low in roots of raspberry and strawberry plants. Ammonium nutrition caused a decrease in NR level in leaves. Activity of glutamine synthetase (GS) was high in leaves but lower in roots of blueberry, raspberry and strawberry plants. The GS level was not significantly affected by the nitrogen source supplied. The effects of nitrate or ammonium nitrogen on net photosynthesis, growth, and activity of enzymes in blueberry, raspberry and strawberry cultivars appear to reflect their different adaptability to soil pH and N form due to the conditions of their natural environment.",
"title": ""
},
{
"docid": "fc77be5db198932d6cb34e334a4cdb4b",
"text": "This thesis investigates how data mining algorithms can be used to predict Bodily Injury Liability Insurance claim payments based on the characteristics of the insured customer’s vehicle. The algorithms are tested on real data provided by the organizer of the competition. The data present a number of challenges such as high dimensionality, heterogeneity and missing variables. The problem is addressed using a combination of regression, dimensionality reduction, and classification techniques. Questa tesi si propone di esaminare come alcune tecniche di data mining possano essere usate per predirre l’ammontare dei danni che un’ assicurazione dovrebbe risarcire alle persone lesionate a partire dalle caratteristiche del veicolo del cliente assicurato. I dati utilizzati sono reali e la loro analisi presenta diversi ostacoli dati dalle loro grandi dimensioni, dalla loro eterogeneitá e da alcune variabili mancanti. ll problema é stato affrontato utilizzando una combinazione di tecniche di regressione, di riduzione di dimensionalitá e di classificazione.",
"title": ""
},
{
"docid": "cb00e564a81ace6b75e776f1fe41fb8f",
"text": "INDIVIDUAL PROCESSES IN INTERGROUP BEHAVIOR ................................ 3 From Individual to Group Impressions ...................................................................... 3 GROUP MEMBERSHIP AND INTERGROUP BEHAVIOR .................................. 7 The Scope and Range of Ethnocentrism .................................................................... 8 The Development of Ethnocentrism .......................................................................... 9 Intergroup Conflict and Competition ........................................................................ 12 Interpersonal and intergroup behavior ........................................................................ 13 Intergroup conflict and group cohesion ........................................................................ 15 Power and status in intergroup behavior ...................................................................... 16 Social Categorization and Intergroup Behavior ........................................................ 20 Social categorization: cognitions, values, and groups ...................................................... 20 Social categorization a d intergroup discrimination ...................................................... 23 Social identity and social comparison .......................................................................... 24 THE REDUCTION FINTERGROUP DISCRIMINATION ................................ 27 Intergroup Cooperation and Superordinate Goals \" 28 Intergroup Contact. .... ................................................................................................ 28 Multigroup Membership and \"lndividualizat~’on\" of the Outgroup .......................... 29 SUMMARY .................................................................................................................... 30",
"title": ""
},
{
"docid": "5bbd4675eb1b408895f29340c3cd074a",
"text": "We performed underground real-time tests to obtain alpha particle-induced soft error rates (α-SER) with high accuracies for SRAMs with 180 nm – 90 nm technologies and studied the scaling trend of α-SERs. In order to estimate the maximum permissive rate of alpha emission from package resin, the α-SER was compared to the neutron-induced soft error rate (n-SER) obtained from accelerated tests. We found that as devices are scaled down, the α-SER increased while the n-SER slightly decreased, and that the α-SER could be greater than the n-SER in 90 nm technology even when the ultra-low-alpha (ULA) grade, with the alpha emission rate ≫ 1 × 10<sup>−3</sup> cm<sup>−2</sup>h<sup>−1</sup>, was used for package resin. We also performed computer simulations to estimate scaling trends of both α-SER and n-SER up to 45 nm technologies, and noticed that the α-SER decreased from 65 nm technology while the n-SER increased from 45 nm technology due to direct ionization from the protons generated in the n + Si nuclear reaction.",
"title": ""
},
{
"docid": "1a8df1f14f66c0ff09679ea5bbfc2c36",
"text": "Making strategic decision on new manufacturing technology investments is difficult. New technologies are usually costly, affected by numerous factors, and the potential benefits are often hard to justify prior to implementation. Traditionally, decisions are made based upon intuition and past experience, sometimes with the support of multicriteria decision support tools. However, these approaches do not retain and reuse knowledge, thus managers are not able to make effective use of their knowledge and experience of previously completed projects to help with the prioritisation of future projects. In this paper, a hybrid intelligent system integrating case-based reasoning (CBR) and the fuzzy ARTMAP (FAM) neural network model is proposed to support managers in making timely and optimal manufacturing technology investment decisions. The system comprises a case library that holds the details of past technology investment projects. Each project proposal is characterised by a set of features determined by human experts. The FAM network is then employed to match the features of a new proposal with those from historical cases. Similar cases are retrieved and adapted, and information on these cases can be utilised as an input to prioritisation of new projects. A case study is conducted to illustrate the applicability and effectiveness of the approach, with the results presented and analysed. Implications of the proposed approach are discussed, and suggestions for further work are outlined. r 2005 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "4f2e9ff72d6e273877a978600e6fbd40",
"text": "Fraud isn't new, but in the eyes of many experts, phishing and crimeware threaten to topple society's overall stability because they erode trust in its underlying computational infrastructure. Most people agree that phishing and crimeware must be fought, but to do so effectively, we must fully understand both types of threat; that starts by quantifying how and when people fall for deceit. In this article, we look closer at how to perform fraud experiments. Researchers typically use three approaches to quantify fraud: surveys, in-lab experiments, and naturalistic experiments.",
"title": ""
},
{
"docid": "926db14af35f9682c28a64e855fb76e5",
"text": "This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",
"title": ""
},
{
"docid": "c88c4097b0cf90031bbf3778d25bb87a",
"text": "In this paper we introduce a new data set consisting of user comments posted to the website of a German-language Austrian newspaper. Professional forum moderators have annotated 11,773 posts according to seven categories they considered crucial for the efficient moderation of online discussions in the context of news articles. In addition to this taxonomy and annotated posts, the data set contains one million unlabeled posts. Our experimental results using six methods establish a first baseline for predicting these categories. The data and our code are available for research purposes from https://ofai.github.io/million-post-corpus.",
"title": ""
},
{
"docid": "df4883ac490f3a27b2dbc310867a3534",
"text": "We present OpenLambda, a new, open-source platform for building next-generation web services and applications in the burgeoning model of serverless computation. We describe the key aspects of serverless computation, and present numerous research challenges that must be addressed in the design and implementation of such systems. We also include a brief study of current web applications, so as to better motivate some aspects of serverless application construction.",
"title": ""
},
{
"docid": "b13d4d5253a116153778d0f343bf76d7",
"text": "OBJECTIVES\nThe purpose of this study was to investigate the effect of dynamic soft tissue mobilisation (STM) on hamstring flexibility in healthy male subjects.\n\n\nMETHODS\nForty five males volunteered to participate in a randomised, controlled single blind design study. Volunteers were randomised to either control, classic STM, or dynamic STM intervention. The control group was positioned prone for 5 min. The classic STM group received standard STM techniques performed in a neutral prone position for 5 min. The dynamic STM group received all elements of classic STM followed by distal to proximal longitudinal strokes performed during passive, active, and eccentric loading of the hamstring. Only specific areas of tissue tightness were treated during the dynamic phase. Hamstring flexibility was quantified as hip flexion angle (HFA) which was the difference between the total range of straight leg raise and the range of pelvic rotation. Pre- and post-testing was conducted for the subjects in each group. A one-way ANCOVA followed by pairwise post-hoc comparisons was used to determine whether change in HFA differed between groups. The alpha level was set at 0.05.\n\n\nRESULTS\nIncrease in hamstring flexibility was significantly greater in the dynamic STM group than either the control or classic STM groups with mean (standard deviation) increase in degrees in the HFA measures of 4.7 (4.8), -0.04 (4.8), and 1.3 (3.8), respectively.\n\n\nCONCLUSIONS\nDynamic soft tissue mobilisation (STM) significantly increased hamstring flexibility in healthy male subjects.",
"title": ""
},
{
"docid": "7d0105cace2150b0e76ef4b5585772ad",
"text": "Peer-to-peer (P2P) accommodation rentals continue to grow at a phenomenal rate. Examining how this business model affects the competitive landscape of accommodation services is of strategic importance to hotels and tourism destinations. This study explores the competitive edge of P2P accommodation in comparison to hotels by extracting key content and themes from online reviews to explain the key service attributes sought by guests. The results from text analytics using terminology extraction and word co-occurrence networks indicate that even though guests expect similar core services such as clean rooms and comfortable beds, different attributes support the competitive advantage of hotels and P2P rentals. While conveniences offered by hotels are unparalleled by P2P accommodation, the latter appeal to consumers driven by experiential and social motivations. Managerial implications for hotels and P2P accommodation",
"title": ""
},
{
"docid": "7808ed17e6e7fa189e6b33922573af56",
"text": "The communication needs of Earth observation satellites is steadily increasing. Within a few years, the data rate of such satellites will exceed 1 Gbps, the angular resolution of sensors will be less than 1 μrad, and the memory size of onboard data recorders will be beyond 1 Tbytes. Compared to radio frequency links, optical communications in space offer various advantages such as smaller and lighter equipment, higher data rates, limited risk of interference with other communications systems, and the effective use of frequency resources. This paper describes and compares the major features of radio and optical frequency communications systems in space and predicts the needs of future satellite communications.",
"title": ""
},
{
"docid": "2e6af4ea3a375f67ce5df110a31aeb85",
"text": "Controlled power system separation, which separates the transmission system into islands in a controlled manner, is considered the final resort against a blackout under severe disturbances, e.g., cascading events. Three critical problems of controlled separation are where and when to separate and what to do after separation, which are rarely studied together. They are addressed in this paper by a proposed unified controlled separation scheme based on synchrophasors. The scheme decouples the three problems by partitioning them into sub-problems handled strategically in three time stages: the Offline Analysis stage determines elementary generator groups, optimizes potential separation points in between, and designs post-separation control strategies; the Online Monitoring stage predicts separation boundaries by modal analysis on synchrophasor data; the Real-time Control stage calculates a synchrophasor-based separation risk index for each boundary to predict the time to perform separation. The proposed scheme is demonstrated on a 179-bus power system by case studies.",
"title": ""
},
{
"docid": "0c5ebaaf0fd85312428b5d6b7479bfb6",
"text": "BACKGROUND\nPovidone-iodine solution is an antiseptic that is used worldwide as surgical paint and is considered to have a low irritant potential. Post-surgical severe irritant dermatitis has been described after the misuse of this antiseptic in the surgical setting.\n\n\nMETHODS\nBetween January 2011 and June 2013, 27 consecutive patients with post-surgical contact dermatitis localized outside of the surgical incision area were evaluated. Thirteen patients were also available for patch testing.\n\n\nRESULTS\nAll patients developed dermatitis the day after the surgical procedure. Povidone-iodine solution was the only liquid in contact with the skin of our patients. Most typical lesions were distributed in a double lumbar parallel pattern, but they were also found in a random pattern or in areas where a protective pad or an occlusive medical device was glued to the skin. The patch test results with povidone-iodine were negative.\n\n\nCONCLUSIONS\nPovidone-iodine-induced post-surgical dermatitis may be a severe complication after prolonged surgical procedures. As stated in the literature and based on the observation that povidone-iodine-induced contact irritant dermatitis occurred in areas of pooling or occlusion, we speculate that povidone-iodine together with occlusion were the causes of the dermatitis epidemic that occurred in our surgical setting. Povidone-iodine dermatitis is a problem that is easily preventable through the implementation of minimal routine changes to adequately dry the solution in contact with the skin.",
"title": ""
},
{
"docid": "8822138c493df786296c02315bea5802",
"text": "Photodefinable Polyimides (PI) and polybenz-oxazoles (PBO) which have been widely used for various electronic applications such as buffer coating, interlayer dielectric and protection layer usually need high temperature cure condition over 300 °C to complete the cyclization and achieve good film properties. In addition, PI and PBO are also utilized recently for re-distribution layer of wafer level package. In this application, lower temperature curability is strongly required in order to prevent the thermal damage of the semi-conductor device and the other packaging material. Then, to meet this requirement, we focused on pre-cyclized polyimide with phenolic hydroxyl groups since this polymer showed the good solubility to aqueous TMAH and there was no need to apply high temperature cure condition. As a result of our study, the positive-tone photodefinable material could be obtained by using DNQ and combination of epoxy cross-linker enabled to enhance the chemical and PCT resistance of the cured film made even at 170 °C. Furthermore, the adhesion to copper was improved probably due to secondary hydroxyl groups which were generated from reacted epoxide groups. In this report, we introduce our concept of novel photodefinable positive-tone polyimide for low temperature cure.",
"title": ""
},
{
"docid": "4abdc5883ccd6b4b218ce2d86da0784d",
"text": "Crowd-based events, such as football matches, are considered generators of crime. Criminological research on the influence of football matches has consistently uncovered differences in spatial crime patterns, particularly in the areas around stadia. At the same time, social media data mining research on football matches shows a high volume of data created during football events. This study seeks to build on these two research streams by exploring the spatial relationship between crime events and nearby Twitter activity around a football stadium, and estimating the possible influence of tweets for explaining the presence or absence of crime in the area around a football stadium on match days. Aggregated hourly crime data and geotagged tweets for the same area around the stadium are analysed using exploratory and inferential methods. Spatial clustering, spatial statistics, text mining as well as a hurdle negative binomial logistic regression for spatiotemporal explanations are utilized in our analysis. Findings indicate a statistically significant spatial relationship between three crime types (criminal damage, theft and handling, and violence against the person) and tweet patterns, and that such a relationship can be used to explain future incidents of crime.",
"title": ""
},
{
"docid": "2acc2ab831aa2bc7ebe7047223ba1a30",
"text": "The seemingly unshakeable accuracy of Moore's law - which states that the speed of computers; as measured by the number of transistors that can be placed on a single chip, will double every year or two - has been credited with being the engine of the electronics revolution, and is regarded as the premier example of a self-fulfilling prophecy and technological trajectory in both the academic and popular press. Although many factors have kept Moore's law as an industry benchmark, it is the entry of foreign competition that seems to have played a critical role in maintaining the pace of Moore's law in the early VLSI transition. Many different kinds of chips used many competing logic families. DRAMs and microprocessors became critical to the semiconductor industry, yet were unknown during the original formulation of Moore's law",
"title": ""
},
{
"docid": "c273cdd1dc3e1ab52fa48d033d0c3dd4",
"text": "This paper discusses concurrent design and analysis of the first 8.5 kV electrostatic discharge (ESD) protected single-pole ten-throw (SP10T) transmit/receive (T/R) switch for quad-band (0.85/0.9/1.8/1.9 GHz) GSM and multiple-band WCDMA smartphones. Implemented in a 0.18 μm SOI CMOS, this SP10T employs a series-shunt topology for the time-division duplex (TDD) transmitting (Tx) and receiving (Rx), and frequency-division duplex (FDD) transmitting/receiving (TRx) branches to handle the high GSM transmitter power. The measured P0.1 dB, insertion loss and Tx-Rx isolation in the lower/upper bands are 36.4/34.2 dBm, 0.48/0.81 dB and 43/40 dB, respectively, comparable to commercial products with no/little ESD protection in high-cost SOS and GaAs technologies. Feed-forward capacitor (FFC) and AC-floating bias techniques are used to further improve the linearity. An ESD-switch co-design technique is developed that enables simultaneous whole-chip design optimization for both ESD protection and SP10T circuits.",
"title": ""
}
] |
scidocsrr
|
525f62c1cada29f217b073884f2e88a4
|
Aliasing Detection and Reduction in Plenoptic Imaging
|
[
{
"docid": "59c83aa2f97662c168316f1a4525fd4d",
"text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.",
"title": ""
}
] |
[
{
"docid": "7ea55980a5cd5fce415a24170b027d38",
"text": "We propose a mathematical model to assess the effects of irradiated (or transgenic) male insects introduction in a previously infested region. The release of sterile male insects aims to displace gradually the natural (wild) insect from the habitat. We discuss the suitability of this release technique when applied to peri-domestically adapted Aedes aegypti mosquitoes which are transmissors of Yellow Fever and Dengue disease.",
"title": ""
},
{
"docid": "24c6f0454bad7506a600483434914be0",
"text": "Query answers from on-line databases can easily be corrupted by hackers or malicious database publishers. Thus it is important to provide mechanisms which allow clients to trust the results from on-line queries. Authentic publication allows untrusted publishers to answer securely queries from clients on behalf of trusted off-line data owners. Publishers validate answers using hard-to-forge verification objects VOs), which clients can check efficiently. This approach provides greater scalability, by making it easy to add more publishers, and better security, since on-line publishers do not need to be trusted. To make authentic publication attractive, it is important for the VOs to be small, efficient to compute, and efficient to verify. This has lead researchers to develop independently several different schemes for efficient VO computation based on specific data structures. Our goal is to develop a unifying framework for these disparate results, leading to a generalized security result. In this paper we characterize a broad class of data structures which we call Search DAGs, and we develop a generalized algorithm for the construction of VOs for Search DAGs. We prove that the VOs thus constructed are secure, and that they are efficient to compute and verify. We demonstrate how this approach easily captures existing work on simple structures such as binary trees, multi-dimensional range trees, tries, and skip lists. Once these are shown to be Search DAGs, the requisite security and efficiency results immediately follow from our general theorems. Going further, we also use Search DAGs to produce and prove the security of authenticated versions of two complex data models for efficient multi-dimensional range searches. This allows efficient VOs to be computed (size O(log N + T)) for typical one- and two-dimensional range queries, where the query answer is of size T and the database is of size N. We also show I/O-efficient schemes to construct the VOs. For a system with disk blocks of size B, we answer one-dimensional and three-sided range queries and compute the VOs with O(logB N + T/B) I/O operations using linear size data structures.",
"title": ""
},
{
"docid": "1c058d6a648b2190500340f762eeff78",
"text": "An ever-increasing number of computer vision and image/video processing challenges are being approached using deep convolutional neural networks, obtaining state-of-the-art results in object recognition and detection, semantic segmentation, action recognition, optical flow, and super resolution. Hardware acceleration of these algorithms is essential to adopt these improvements in embedded and mobile computer vision systems. We present a new architecture, design, and implementation, as well as the first reported silicon measurements of such an accelerator, outperforming previous work in terms of power, area, and I/O efficiency. The manufactured device provides up to 196 GOp/s on 3.09 $\\text {mm}^{2}$ of silicon in UMC 65-nm technology and can achieve a power efficiency of 803 GOp/s/W. The massively reduced bandwidth requirements make it the first architecture scalable to TOp/s performance.",
"title": ""
},
{
"docid": "8de530a30b8352e36b72f3436f47ffb2",
"text": "This paper presents a Bayesian optimization method with exponential convergencewithout the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [ 1] requires access to the δ-cover sampling, which was considered to be impractical [ 1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.",
"title": ""
},
{
"docid": "4f5b76f7954779bf48da0ecf458d093f",
"text": "A probabilistic framework is presented that enables image registration, tissue classification, and bias correction to be combined within the same generative model. A derivation of a log-likelihood objective function for the unified model is provided. The model is based on a mixture of Gaussians and is extended to incorporate a smooth intensity variation and nonlinear registration with tissue probability maps. A strategy for optimising the model parameters is described, along with the requisite partial derivatives of the objective function.",
"title": ""
},
{
"docid": "b96836da7518ceccace39347f06067c6",
"text": "A number of visual question answering approaches have been proposed recently, aiming at understanding the visual scenes by answering the natural language questions. While the image question answering has drawn significant attention, video question answering is largely unexplored. Video-QA is different from Image-QA since the information and the events are scattered among multiple frames. In order to better utilize the temporal structure of the videos and the phrasal structures of the answers, we propose two mechanisms: the re-watching and the re-reading mechanisms and combine them into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.",
"title": ""
},
{
"docid": "e264903ee2759f638dcd60a715cbb994",
"text": "Bioinspired hardware holds the promise of low-energy, intelligent, and highly adaptable computing systems. Applications span from automatic classification for big data management, through unmanned vehicle control, to control for biomedical prosthesis. However, one of the major challenges of fabricating bioinspired hardware is building ultrahigh-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions (MTJs) are well suited for this purpose because of their multiple tunable functionalities. One such functionality, nonvolatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von-Neumann bottleneck arising when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bioinspired computing include tunable fast nonlinear dynamics, controlled stochasticity, and the ability of single devices to change functions in different operating conditions. Large networks of interacting spintronic nanodevices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bioinspired architectures that include one or several types of spintronic nanodevices. In this paper, we show how spintronics can be used for bioinspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronics complementary metal-oxide-semiconductor (CMOS) bioinspired hardware.",
"title": ""
},
{
"docid": "a41bb1fe5670cc865bf540b34848f45f",
"text": "The general idea of discovering knowledge in large amounts of data is both appealing and intuitive. Typically we focus our attention on learning algorithms, which provide the core capability of generalizing from large numbers of small, very specific facts to useful high-level rules; these learning techniques seem to hold the most excitement and perhaps the most substantive scientific content in the knowledge discovery in databases (KDD) enterprise. However, when we engage in real-world discovery tasks, we find that they can be extremely complex, and that induction of rules is only one small part of the overall process. While others have written overviews of \"the concept of KDD, and even provided block diagrams for \"knowledge discovery systems,\" no one has begun to identify all of the building blocks in a realistic KDD process. This is what we attempt to do here. Besides bringing into the discussion several parts of the process that have received inadequate attention in the KDD community, a careful elucidation of the steps in a realistic knowledge discovery process can provide a framework for comparison of different technologies and tools that are almost impossible to compare without a clean model.",
"title": ""
},
{
"docid": "ceebc0d380be2b2f5e76da5f9f006530",
"text": "This paper addresses the issue of motion estimation on image sequences. The standard motion equation used to compute the apparent motion of image irradiance patterns is an invariance brightness based hypothesis called the optical flow constraint. Other equations can be used, in particular the extended optical flow constraint, which is a variant of the optical flow constraint, inspired by the fluid mechanic mass conservation principle. In this paper, we propose a physical interpretation of this extended optical flow equation and a new model unifying the optical flow and the extended optical flow constraints. We present results obtained for synthetic and meteorological images.",
"title": ""
},
{
"docid": "af0328c3a271859d31c0e3993db7105e",
"text": "The increasing bandwidth demand in data centers and telecommunication infrastructures had prompted new electrical interface standards capable of operating up to 56Gb/s per-lane. The CEI-56G-VSR-PAM4 standard [1] defines PAM-4 signaling at 56Gb/s targeting chip-to-module interconnect. Figure 6.3.1 shows the measured S21 of a channel resembling such interconnects and the corresponding single-pulse response after TX-FIR and RX CTLE. Although the S21 is merely ∼10dB at 14GHz, the single-pulse response exhibits significant reflections from impedance discontinuities, mainly between package and PCB traces. These reflections are detrimental to PAM-4 signaling and cannot be equalized effectively by RX CTLE and/or a few taps of TX feed-forward equalization. This paper presents the design of a PAM-4 receiver using 10-tap direct decision-feedback equalization (DFE) targeting such VSR channels.",
"title": ""
},
{
"docid": "d8d95a9bccc8234fd444e14c96a4cfa5",
"text": "This paper presents a highly integrated, high performance four channel linear transimpedance amplifier (TIA) RFIC with a footprint of 2mmx3.5mm towards next generation 100G/400G miniaturized coherent receivers. A TIA of such form may become indispensable as the size, complexity and cost of receivers continue to reduce. The design has been realized in a 130nm SiGe BiCMOS process for a low cost, high performance solution towards long- haul/metro applications. The TIA is capable of providing control functions either digitally through an on-chip 4-wire serial-peripheral interface (SPI) or in analog mode. Analog mode is provided as an alternative control for real-time control and monitoring. To provide high input dynamic range, a variable gain control block is integrated for each channel, which can be used in automatic or manual mode. The TIA has a differential input, differential output configuration that exhibits state-of-the-art THD of <;0.9% up to 500mVpp output voltage swing for input currents up to 2mApp and high isolation > 40dB between adjacent channels. A high transimpedance gain (Zt) up to ~7KΩ with a large dynamic range up to 37dB and variable bandwidth up to 34GHz together with low average input noise density of 20pA/√Hz has been achieved. To the authors' knowledge, these metrics combined with diverse functionality and high integration have not been exhibited so far. This paper intends to report a state-of-the-art high-baud rate TIA and provide insight into possibilities for further integration.",
"title": ""
},
{
"docid": "2bfd884e92a26d017a7854be3dfb02e8",
"text": "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.",
"title": ""
},
{
"docid": "4cf2c80fe55f2b41816f23895b64a29c",
"text": "Visual question answering is fundamentally compositional in nature—a question like where is the dog? shares substructure with questions like what color is the dog? and where is the cat? This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural “modules” into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.",
"title": ""
},
{
"docid": "73015dbfed8e1ed03965779a93e14190",
"text": "The DataMiningGrid system has been designed to meet the requirements of modern and distributed data mining scenarios. Based on the Globus Toolkit and other open technology and standards, the DataMiningGrid system provides tools and services facilitating the grid-enabling of data mining applications without any intervention on the application side. Critical features of the system include flexibility, extensibility, scalability, efficiency, conceptual simplicity and ease of use. The system has been developed and evaluated on the basis of a diverse set of use cases from different sectors in science and technology. The DataMiningGrid software is freely available under Apache License 2.0. c © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b29e611c608a824009cf4ffea8892aa9",
"text": "The purpose of this study was to analyze characteristics of individuals working in the profession of neuropsychology in Latin America in order to understand their background, professional training, current work situation, assessment and diagnostic procedures used, rehabilitation techniques employed, population targeted, teaching responsibilities, and research activities. A total of 808 professionals working in neuropsychology from 17 countries in Latin America completed an online survey between July 2013 and January 2014. The majority of participants were female and the mean age was 36.76 years (range 21-74 years). The majority of professionals working in neuropsychology in Latin America have a background in psychology, with some additional specialized training and supervised clinical practice. Over half work in private practice, universities, or private clinics and are quite satisfied with their work. Those who identify themselves as clinicians primarily work with individuals with learning problems, ADHD, mental retardation, TBI, dementia, and stroke. The majority respondents cite the top barrier in the use of neuropsychological instruments to be the lack of normative data for their countries. The top perceived barriers to the field include: lack of academic training programs, lack of clinical training opportunities, lack of willingness to collaborate between professionals, and lack of access to neuropsychological instruments. There is a need in Latin America to increase regulation, improve graduate curriculums, enhance existing clinical training, develop professional certification programs, validate existing neuropsychological tests, and create new, culturally-relevant instruments.",
"title": ""
},
{
"docid": "3dee885a896e9864ff06b546d64f6df1",
"text": "BACKGROUND\nThe 12-item Short Form Health Survey (SF-12) as a shorter alternative of the SF-36 is largely used in health outcomes surveys. The aim of this study was to validate the SF-12 in Iran.\n\n\nMETHODS\nA random sample of the general population aged 15 years and over living in Tehran, Iran completed the SF-12. Reliability was estimated using internal consistency and validity was assessed using known groups comparison and convergent validity. In addition, the factor structure of the questionnaire was extracted by performing both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).\n\n\nRESULTS\nIn all, 5587 individuals were studied (2721 male and 2866 female). The mean age and formal education of the respondents were 35.1 (SD = 15.4) and 10.2 (SD = 4.4) years respectively. The results showed satisfactory internal consistency for both summary measures, that are the Physical Component Summary (PCS) and the Mental Component Summary (MCS); Cronbach's alpha for PCS-12 and MCS-12 was 0.73 and 0.72, respectively. Known-groups comparison showed that the SF-12 discriminated well between men and women and those who differed in age and educational status (P < 0.001). In addition, correlations between the SF-12 scales and single items showed that the physical functioning, role physical, bodily pain and general health subscales correlated higher with the PCS-12 score, while the vitality, social functioning, role emotional and mental health subscales more correlated with the MCS-12 score lending support to its good convergent validity. Finally the principal component analysis indicated a two-factor structure (physical and mental health) that jointly accounted for 57.8% of the variance. The confirmatory factory analysis also indicated a good fit to the data for the two-latent structure (physical and mental health).\n\n\nCONCLUSION\nIn general the findings suggest that the SF-12 is a reliable and valid measure of health related quality of life among Iranian population. However, further studies are needed to establish stronger psychometric properties for this alternative form of the SF-36 Health Survey in Iran.",
"title": ""
},
{
"docid": "d55b50d30542099f8f55cfeb1aafd4dc",
"text": "Many avian species persist in human-dominated landscapes; however, little is known about the demographic consequences of urbanization in these populations. Given that urban habitats introduce novel benefits (e.g., anthropogenic resources) and pressures (e.g., mortality risks), conflicting mechanisms have been hypothesized to drive the dynamics of urban bird populations. Top-down processes such as predation predict reduced survivorship in suburban and urban habitats, whereas bottom-up processes, such as increased resource availability, predict peak survival in suburban habitats. In this study, we use mark–recapture data of seven focal species encountered between 2000 and 2012 to test hypotheses about the processes that regulate avian survival along an urbanization gradient in greater Washington, D.C., USA. American Robin, Gray Catbird, Northern Cardinal, and Song Sparrow exhibited peak survival at intermediate and upper portions of the rural-to-urban gradient; this pattern supports the hypothesis that bottom-up processes (e.g., resource availability) can drive patterns of avian survival in some species. In contrast, Carolina Chickadee showed no response and Carolina and House Wren showed a slightly negative response to urban land cover. These contrasting results underscore the need for comparative studies documenting the mechanisms that drive demography and how those factors differentially affect urban adapted and urban avoiding species.",
"title": ""
},
{
"docid": "2ed57c4430810b2b72a64f2315bf1160",
"text": "This study was an attempt to identify the interlingual strategies employed to translate English subtitles into Persian and to determine their frequency, as well. Contrary to many countries, subtitling is a new field in Iran. The study, a corpus-based, comparative, descriptive, non-judgmental analysis of an English-Persian parallel corpus, comprised English audio scripts of five movies of different genres, with Persian subtitles. The study’s theoretical framework was based on Gottlieb’s (1992) classification of subtitling translation strategies. The results indicated that all Gottlieb’s proposed strategies were applicable to the corpus with some degree of variation of distribution among different film genres. The most frequently used strategy was “transfer” at 54.06%; the least frequently used strategies were “transcription” and “decimation” both at 0.81%. It was concluded that the film genre plays a crucial role in using different strategies.",
"title": ""
},
{
"docid": "fe383fbca6d67d968807fb3b23489ad1",
"text": "In this project, we attempt to apply machine-learning algorithms to predict Bitcoin price. For the first phase of our investigation, we aimed to understand and better identify daily trends in the Bitcoin market while gaining insight into optimal features surrounding Bitcoin price. Our data set consists of over 25 features relating to the Bitcoin price and payment network over the course of five years, recorded daily. Using this information we were able to predict the sign of the daily price change with an accuracy of 98.7%. For the second phase of our investigation, we focused on the Bitcoin price data alone and leveraged data at 10-minute and 10-second interval timepoints, as we saw an opportunity to evaluate price predictions at varying levels of granularity and noisiness. By predicting the sign of the future change in price, we are modeling the price prediction problem as a binomial classification task, experimenting with a custom algorithm that leverages both random forests and generalized linear models. These results had 50-55% accuracy in predicting the sign of future price change using 10 minute time intervals.",
"title": ""
},
{
"docid": "149de84d7cbc9ea891b4b1297957ade7",
"text": "Deep convolutional neural networks (CNNs) have had a major impact in most areas of image understanding, including object category detection. In object detection, methods such as R-CNN have obtained excellent results by integrating CNNs with region proposal generation algorithms such as selective search. In this paper, we investigate the role of proposal generation in CNN-based detectors in order to determine whether it is a necessary modelling component, carrying essential geometric information not contained in the CNN, or whether it is merely a way of accelerating detection. We do so by designing and evaluating a detector that uses a trivial region generation scheme, constant for each image. Combined with SPP, this results in an excellent and fast detector that does not require to process an image with algorithms other than the CNN itself. We also streamline and simplify the training of CNN-based detectors by integrating several learning steps in a single algorithm, as well as by proposing a number of improvements that accelerate detection.",
"title": ""
}
] |
scidocsrr
|
2f421be3d10cc8988a5c134cf0852ec9
|
Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
|
[
{
"docid": "fdf1b2f49540d5d815f2d052f2570afe",
"text": "It has been recently shown that Generative Adversarial Networks (GANs) can produce synthetic images of exceptional visual fidelity. In this work, we propose the first GAN-based method for automatic face aging. Contrary to previous works employing GANs for altering of facial attributes, we make a particular emphasize on preserving the original person's identity in the aged version of his/her face. To this end, we introduce a novel approach for “Identity-Preserving” optimization of GAN's latent vectors. The objective evaluation of the resulting aged and rejuvenated face images by the state-of-the-art face recognition and age estimation solutions demonstrate the high potential of the proposed method.",
"title": ""
},
{
"docid": "7c799fdfde40289ba4e0ce549f02a5ad",
"text": "In this paper, we design a benchmark task and provide the associated datasets for recognizing face images and link them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, by using all the possibly collected face images of this individual on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve the recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide concrete measurement set, evaluation protocol, as well as training data. We also present in details our experiment setup and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.",
"title": ""
}
] |
[
{
"docid": "21bd6f42c74930c8e9876ff4f5ef1ee2",
"text": "Dynamic channel allocation (DCA) is the key technology to efficiently utilize the spectrum resources and decrease the co-channel interference for multibeam satellite systems. Most works allocate the channel on the basis of the beam traffic load or the user terminal distribution of the current moment. These greedy-like algorithms neglect the intrinsic temporal correlation among the sequential channel allocation decisions, resulting in the spectrum resources underutilization. To solve this problem, a novel deep reinforcement learning (DRL)-based DCA (DRL-DCA) algorithm is proposed. Specifically, the DCA optimization problem, which aims at minimizing the service blocking probability, is formulated in the multibeam satellite systems. Due to the temporal correlation property, the DCA optimization problem is modeled as the Markov decision process (MDP) which is the dominant analytical approach in DRL. In modeled MDP, the system state is reformulated into an image-like fashion, and then, convolutional neural network is used to extract useful features. Simulation results show that the DRL-DCA algorithm can decrease the blocking probability and improve the carried traffic and spectrum efficiency compared with other channel allocation algorithms.",
"title": ""
},
{
"docid": "3614bf0a54290ea80a2d6f061e830c91",
"text": "0749-5978/$ see front matter 2008 Elsevier Inc. A doi:10.1016/j.obhdp.2008.04.002 * Corresponding author. Fax: +1 407 823 3725. E-mail address: dmayer@bus.ucf.edu (D.M. Mayer) This research examines the relationships between top management and supervisory ethical leadership and group-level outcomes (e.g., deviance, OCB) and suggests that ethical leadership flows from one organizational level to the next. Drawing on social learning theory [Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.; Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall.] and social exchange theory [Blau, p. (1964). Exchange and power in social life. New York: John Wiley.], the results support our theoretical model using a sample of 904 employees and 195 managers in 195 departments. We find a direct negative relationship between both top management and supervisory ethical leadership and group-level deviance, and a positive relationship with group-level OCB. Finally, consistent with the proposed trickle-down model, the effects of top management ethical leadership on group-level deviance and OCB are mediated by supervisory ethical leadership. 2008 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "39861e2759b709883f3d37a65d13834b",
"text": "BACKGROUND\nDeveloping countries account for 99 percent of maternal deaths annually. While increasing service availability and maintaining acceptable quality standards, it is important to assess maternal satisfaction with care in order to make it more responsive and culturally acceptable, ultimately leading to enhanced utilization and improved outcomes. At a time when global efforts to reduce maternal mortality have been stepped up, maternal satisfaction and its determinants also need to be addressed by developing country governments. This review seeks to identify determinants of women's satisfaction with maternity care in developing countries.\n\n\nMETHODS\nThe review followed the methodology of systematic reviews. Public health and social science databases were searched. English articles covering antenatal, intrapartum or postpartum care, for either home or institutional deliveries, reporting maternal satisfaction from developing countries (World Bank list) were included, with no year limit. Out of 154 shortlisted abstracts, 54 were included and 100 excluded. Studies were extracted onto structured formats and analyzed using the narrative synthesis approach.\n\n\nRESULTS\nDeterminants of maternal satisfaction covered all dimensions of care across structure, process and outcome. Structural elements included good physical environment, cleanliness, and availability of adequate human resources, medicines and supplies. Process determinants included interpersonal behavior, privacy, promptness, cognitive care, perceived provider competency and emotional support. Outcome related determinants were health status of the mother and newborn. Access, cost, socio-economic status and reproductive history also influenced perceived maternal satisfaction. Process of care dominated the determinants of maternal satisfaction in developing countries. Interpersonal behavior was the most widely reported determinant, with the largest body of evidence generated around provider behavior in terms of courtesy and non-abuse. Other aspects of interpersonal behavior included therapeutic communication, staff confidence and competence and encouragement to laboring women.\n\n\nCONCLUSIONS\nQuality improvement efforts in developing countries could focus on strengthening the process of care. Special attention is needed to improve interpersonal behavior, as evidence from the review points to the importance women attach to being treated respectfully, irrespective of socio-cultural or economic context. Further research on maternal satisfaction is required on home deliveries and relative strength of various determinants in influencing maternal satisfaction.",
"title": ""
},
{
"docid": "a2df7bbce7247125ef18a17d7dbb2166",
"text": "Few studies have evaluated the effectiveness of cyberbullying prevention/intervention programs. The goals of the present study were to develop a Theory of Reasoned Action (TRA)-based video program to increase cyberbullying knowledge (1) and empathy toward cyberbullying victims (2), reduce favorable attitudes toward cyberbullying (3), decrease positive injunctive (4) and descriptive norms about cyberbullying (5), and reduce cyberbullying intentions (6) and cyberbullying behavior (7). One hundred sixty-seven college students were randomly assigned to an online video cyberbullying prevention program or an assessment-only control group. Immediately following the program, attitudes and injunctive norms for all four types of cyberbullying behavior (i.e., unwanted contact, malice, deception, and public humiliation), descriptive norms for malice and public humiliation, empathy toward victims of malice and deception, and cyberbullying knowledge significantly improved in the experimental group. At one-month follow-up, malice and public humiliation behavior, favorable attitudes toward unwanted contact, deception, and public humiliation, and injunctive norms for public humiliation were significantly lower in the experimental than the control group. Cyberbullying knowledge was significantly higher in the experimental than the control group. These findings demonstrate a brief cyberbullying video is capable of improving, at one-month follow-up, cyberbullying knowledge, cyberbullying perpetration behavior, and TRA constructs known to predict cyberbullying perpetration. Considering the low cost and ease with which a video-based prevention/intervention program can be delivered, this type of approach should be considered to reduce cyberbullying.",
"title": ""
},
{
"docid": "6d0259e1c4047964bdba90dc1ecb0a68",
"text": "In order to further understand what physiological characteristics make a human hand irreplaceable for many dexterous tasks, it is necessary to develop artificial joints that are anatomically correct while sharing similar dynamic features. In this paper, we address the problem of designing a two degree of freedom metacarpophalangeal (MCP) joint of an index finger. The artificial MCP joint is composed of a ball joint, crocheted ligaments, and a silicon rubber sleeve which as a whole provides the functions required of a human finger joint. We quantitatively validate the efficacy of the artificial joint by comparing its dynamic characteristics with that of two human subjects' index fingers by analyzing their impulse response with linear regression. Design parameters of the artificial joint are varied to highlight their effect on the joint's dynamics. A modified, second-order model is fit which accounts for non-linear stiffness and damping, and a higher order model is considered. Good fits are observed both in the human (R2 = 0.97) and the artificial joint of the index finger (R2 = 0.95). Parameter estimates of stiffness and damping for the artificial joint are found to be similar to those in the literature, indicating our new joint is a good approximation for an index finger's MCP joint.",
"title": ""
},
{
"docid": "80aa839635765902dc7631d8f9a6934c",
"text": "3D volumetric object generation/prediction from single 2D image is a quite challenging but meaningful task in 3D visual computing. In this paper, we propose a novel neural network architecture, named \"3DensiNet\", which uses density heat-map as an intermediate supervision tool for 2D-to-3D transformation. Specifically, we firstly present a 2D density heat-map to 3D volumetric object encoding-decoding network, which outperforms classical 3D autoencoder. Then we show that using 2D image to predict its density heat-map via a 2D to 2D encoding-decoding network is feasible. In addition, we leverage adversarial loss to fine tune our network, which improves the generated/predicted 3D voxel objects to be more similar to the ground truth voxel object. Experimental results on 3D volumetric prediction from 2D images demonstrates superior performance of 3DensiNet over other state-of-the-art techniques in handling 3D volumetric object generation/prediction from single 2D image.",
"title": ""
},
{
"docid": "c38a6685895c23620afb6570be4c646b",
"text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. And as the emerging field of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) which contains complex computational logic. To achieve high accuracy, researchers always build large-scale LSTM networks which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than ARM Cortex-A9 processor.",
"title": ""
},
{
"docid": "009d79972bd748d7cf5206bb188aba00",
"text": "Quasi-Newton methods are widely used in practise for convex loss minimization problems. These methods exhibit good empirical performanc e o a wide variety of tasks and enjoy super-linear convergence to the optimal s olution. For largescale learning problems, stochastic Quasi-Newton methods ave been recently proposed. However, these typically only achieve sub-linea r convergence rates and have not been shown to consistently perform well in practice s nce noisy Hessian approximations can exacerbate the effect of high-variance stochastic gradient estimates. In this work we propose V ITE, a novel stochastic Quasi-Newton algorithm that uses an existing first-order technique to reduce this va r ance. Without exploiting the specific form of the approximate Hessian, we show that V ITE reaches the optimum at a geometric rate with a constant step-size when de aling with smooth strongly convex functions. Empirically, we demonstrate im provements over existing stochastic Quasi-Newton and variance reduced stochast i gradient methods.",
"title": ""
},
{
"docid": "405a1e8badfb85dcd1d5cc9b4a0026d2",
"text": "It is of great practical importance to improve yield and quality of vegetables in soilless cultures. This study investigated the effects of iron-nutrition management on yield and quality of hydroponic-cultivated spinach (Spinacia oleracea L.). The results showed that mild Fe-deficient treatment (1 μM FeEDTA) yielded a greater biomass of edible parts than Fe-omitted treatment (0 μM FeEDTA) or Fe-sufficient treatments (10 and 50 μM FeEDTA). Conversely, mild Fe-deficient treatment had the lowest nitrate concentration in the edible parts out of all the Fe treatments. Interestingly, all the concentrations of soluble sugar, soluble protein and ascorbate in mild Fe-deficient treatments were higher than Fe-sufficient treatments. In addition, both phenolic concentration and DPPH scavenging activity in mild Fe-deficient treatments were comparable with those in Fe-sufficient treatments, but were higher than those in Fe-omitted treatments. Therefore, we concluded that using a mild Fe-deficient nutrition solution to cultivate spinach not only would increase yield, but also would improve quality.",
"title": ""
},
{
"docid": "e5d474fc8c0d2c97cc798eda4f9c52dd",
"text": "Gesture typing is an efficient input method for phones and tablets using continuous traces created by a pointed object (e.g., finger or stylus). Translating such continuous gestures into textual input is a challenging task as gesture inputs exhibit many features found in speech and handwriting such as high variability, co-articulation and elision. In this work, we address these challenges with a hybrid approach, combining a variant of recurrent networks, namely Long Short Term Memories [1] with conventional Finite State Transducer decoding [2]. Results using our approach show considerable improvement relative to a baseline shape-matching-based system, amounting to 4% and 22% absolute improvement respectively for small and large lexicon decoding on real datasets and 2% on a synthetic large scale dataset.",
"title": ""
},
{
"docid": "eab86ab18bd47e883b184dcd85f366cd",
"text": "We study corporate bond default rates using an extensive new data set spanning the 1866–2008 period. We find that the corporate bond market has repeatedly suffered clustered default events much worse than those experienced during the Great Depression. For example, during the railroad crisis of 1873–1875, total defaults amounted to 36% of the par value of the entire corporate bond market. Using a regime-switching model, we examine the extent to which default rates can be forecast by financial and macroeconomic variables. We find that stock returns, stock return volatility, and changes in GDP are strong predictors of default rates. Surprisingly, however, credit spreads are not. Over the long term, credit spreads are roughly twice as large as default losses, resulting in an average credit risk premium of about 80 basis points. We also find that credit spreads do not adjust in response to realized default rates. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2cb298a8fc8102d61964a884c20e7d78",
"text": "In this paper, the concept of data mining was summarized and its significance towards its methodologies was illustrated. The data mining based on Neural Network and Genetic Algorithm is researched in detail and the key technology and ways to achieve the data mining on Neural Network and Genetic Algorithm are also surveyed. This paper also conducts a formal review of the area of rule extraction from ANN and GA.",
"title": ""
},
{
"docid": "ec673efa5f837ba4c997ee7ccd845ce1",
"text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations which add changes slightly and are crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. Second, existing methods depend on the output of a particular layer in a specific learning structure. In this paper, we propose a powerful method for adversarial samples using Large Margin Cosine Estimate(LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and is available to detect adversarial samples by training using a simple machine learning algorithm. Comparing it with the way in which adversar- ial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. Classifier gets better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR −10, outperforming previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.",
"title": ""
},
{
"docid": "8fa34eb8d0ab6b1248a98936ddad7c5c",
"text": "Planning with temporally extended goals and uncontrollable events has recently been introduced as a formal model for system reconfiguration problems. An important application is to automatically reconfigure a real-life system in such a way that its subsequent internal evolution is consistent with a temporal goal formula. In this paper we introduce an incremental search algorithm and a search-guidance heuristic, two generic planning enhancements. An initial problem is decomposed into a series of subproblems, providing two main ways of speeding up a search. Firstly, a subproblem focuses on a part of the initial goal. Secondly, a notion of action relevance allows to explore with higher priority actions that are heuristically considered to be more relevant to the subproblem at hand. Even though our techniques are more generally applicable, we restrict our attention to planning with temporally extended goals and uncontrollable events. Our ideas are implemented on top of a successful previous system that performs online learning to better guide planning and to safely avoid potentially expensive searches. In experiments, the system speed performance is further improved by a convincing margin.",
"title": ""
},
{
"docid": "8c54780de6c8d8c3fa71b31015ad044e",
"text": "Integrins are cell surface receptors for extracellular matrix proteins and play a key role in cell survival, proliferation, migration and gene expression. Integrin signaling has been shown to be deregulated in several types of cancer, including prostate cancer. This review is focused on integrin signaling pathways known to be deregulated in prostate cancer and known to promote prostate cancer progression.",
"title": ""
},
{
"docid": "3510bcd9d52729766e2abe2111f8be95",
"text": "Metaphors are common elements of language that allow us to creatively stretch the limits of word meaning. However, metaphors vary in their degree of novelty, which determines whether people must create new meanings on-line or retrieve previously known metaphorical meanings from memory. Such variations affect the degree to which general cognitive capacities such as executive control are required for successful comprehension. We investigated whether individual differences in executive control relate to metaphor processing using eye movement measures of reading. Thirty-nine participants read sentences including metaphors or idioms, another form of figurative language that is more likely to rely on meaning retrieval. They also completed the AX-CPT, a domain-general executive control task. In Experiment 1, we examined sentences containing metaphorical or literal uses of verbs, presented with or without prior context. In Experiment 2, we examined sentences containing idioms or literal phrases for the same participants to determine whether the link to executive control was qualitatively similar or different to Experiment 1. When metaphors were low familiar, all people read verbs used as metaphors more slowly than verbs used literally (this difference was smaller for high familiar metaphors). Executive control capacity modulated this pattern in that high executive control readers spent more time reading verbs when a prior context forced a particular interpretation (metaphorical or literal), and they had faster total metaphor reading times when there was a prior context. Interestingly, executive control did not relate to idiom processing for the same readers. Here, all readers had faster total reading times for high familiar idioms than literal phrases. Thus, executive control relates to metaphor but not idiom processing for these readers, and for the particular metaphor and idiom reading manipulations presented.",
"title": ""
},
{
"docid": "1d6b58df486d618341cea965724a7da9",
"text": "The focus on human capital as a driver of economic growth for developing countries has led to undue attention on school attainment. Developing countries have made considerable progress in closing the gap with developed countries in terms of school attainment, but recent research has underscored the importance of cognitive skills for economic growth. This result shifts attention to issues of school quality, and there developing countries have been much less successful in closing the gaps with developed countries. Without improving school quality, developing countries will find it difficult to improve their long run economic performance. JEL Classification: I2, O4, H4 Highlights: ! ! Improvements in long run growth are closely related to the level of cognitive skills of the population. ! ! Development policy has inappropriately emphasized school attainment as opposed to educational achievement, or cognitive skills. ! ! Developing countries, while improving in school attainment, have not improved in quality terms. ! ! School policy in developing countries should consider enhancing both basic and advanced skills.",
"title": ""
},
{
"docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a",
"text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.",
"title": ""
},
{
"docid": "c84d41e54b12cca847135dfc2e9e13f8",
"text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.",
"title": ""
},
{
"docid": "57e71550633cdb4a37d3fa270f0ad3a7",
"text": "Classifiers based on sparse representations have recently been shown to provide excellent results in many visual recognition and classification tasks. However, the high cost of computing sparse representations at test time is a major obstacle that limits the applicability of these methods in large-scale problems, or in scenarios where computational power is restricted. We consider in this paper a simple yet efficient alternative to sparse coding for feature extraction. We study a classification scheme that applies the soft-thresholding nonlinear mapping in a dictionary, followed by a linear classifier. A novel supervised dictionary learning algorithm tailored for this low complexity classification architecture is proposed. The dictionary learning problem, which jointly learns the dictionary and linear classifier, is cast as a difference of convex (DC) program and solved efficiently with an iterative DC solver. We conduct experiments on several datasets, and show that our learning algorithm that leverages the structure of the classification problem outperforms generic learning procedures. Our simple classifier based on soft-thresholding also competes with the recent sparse coding classifiers, when the dictionary is learned appropriately. The adopted classification scheme further requires less computational time at the testing stage, compared to other classifiers. The proposed scheme shows the potential of the adequately trained soft-thresholding mapping for classification and paves the way towards the development of very efficient classification methods for vision problems.",
"title": ""
}
] |
scidocsrr
|
e36b448d2407944c4f7bccb9bd28f791
|
Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis.
|
[
{
"docid": "b51fcfa32dbcdcbcc49f1635b44601ed",
"text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.",
"title": ""
}
] |
[
{
"docid": "ff1f503123ce012b478a3772fa9568b5",
"text": "Cementoblastoma is a rare odontogenic tumor that has distinct clinical and radiographical features normally suggesting the correct diagnosis. The clinicians and oral pathologists must have in mind several possible differential diagnoses that can lead to a misdiagnosed lesion, especially when unusual clinical features are present. A 21-year-old male presented with dull pain in lower jaw on right side. The clinical inspection of the region was non-contributory to the diagnosis but the lesion could be appreciated on palpation. A swelling was felt in the alveolar region of mandibular premolar-molar on right side. Radiographic examination was suggestive of benign cementoblastoma and the tumor was removed surgically along with tooth. The diagnosis was confirmed by histopathologic study. Although this neoplasm is rare, the dental practitioner should be aware of the clinical, radiographical and histopathological features that will lead to its early diagnosis and treatment.",
"title": ""
},
{
"docid": "3732f96144d7f28c88670dd63aff63a1",
"text": "The problem of defining and classifying power system stability has been addressed by several previous CIGRE and IEEE Task Force reports. These earlier efforts, however, do not completely reflect current industry needs, experiences and understanding. In particular, the definitions are not precise and the classifications do not encompass all practical instability scenarios. This report developed by a Task Force, set up jointly by the CIGRE Study Committee 38 and the IEEE Power System Dynamic Performance Committee, addresses the issue of stability definition and classification in power systems from a fundamental viewpoint and closely examines the practical ramifications. The report aims to define power system stability more precisely, provide a systematic basis for its classification, and discuss linkages to related issues such as power system reliability and security.",
"title": ""
},
{
"docid": "876dd0a985f00bb8145e016cc8593a84",
"text": "This paper presents how to synthesize a texture in a procedural way that preserves the features of the input exemplar. The exemplar is analyzed in both spatial and frequency domains to be decomposed into feature and non-feature parts. Then, the non-feature parts are reproduced as a procedural noise, whereas the features are independently synthesized. They are combined to output a non-repetitive texture that also preserves the exemplar’s features. The proposed method allows the user to control the extent of extracted features and also enables a texture to edited quite effectively.",
"title": ""
},
{
"docid": "8016e80e506dcbae5c85fdabf1304719",
"text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.",
"title": ""
},
{
"docid": "c8d4fad2d3f5c7c2402ca60bb4f6dcca",
"text": "The Pix2pix [17] and CycleGAN [40] losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We are able to demonstrate superior quantitative output on the Cityscapes and Maps datasets at near constant memory budget.",
"title": ""
},
{
"docid": "aa83aa0a030e14449504ad77dd498b90",
"text": "An organization has to make the right decisions in time depending on demand information to enhance the commercial competitive advantage in a constantly fluctuating business environment. Therefore, estimating the demand quantity for the next period most likely appears to be crucial. This work presents a comparative forecasting methodology regarding to uncertain customer demands in a multi-level supply chain (SC) structure via neural techniques. The objective of the paper is to propose a new forecasting mechanism which is modeled by artificial intelligence approaches including the comparison of both artificial neural networks and adaptive network-based fuzzy inference system techniques to manage the fuzzy demand with incomplete information. The effectiveness of the proposed approach to the demand forecasting issue is demonstrated using real-world data from a company which is active in durable consumer goods industry in Istanbul, Turkey. Crown Copyright 2008 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cd877197b06304b379d5caf9b5b89d30",
"text": "Research is now required on factors influencing adults' sedentary behaviors, and effective approaches to behavioral-change intervention must be identified. The strategies for influencing sedentary behavior will need to be informed by evidence on the most important modifiable behavioral determinants. However, much of the available evidence relevant to understanding the determinants of sedentary behaviors is from cross-sectional studies, which are limited in that they identify only behavioral \"correlates.\" As is the case for physical activity, a behavior- and context-specific approach is needed to understand the multiple determinants operating in the different settings within which these behaviors are most prevalent. To this end, an ecologic model of sedentary behaviors is described, highlighting the behavior settings construct. The behaviors and contexts of primary concern are TV viewing and other screen-focused behaviors in domestic environments, prolonged sitting in the workplace, and time spent sitting in automobiles. Research is needed to clarify the multiple levels of determinants of prolonged sitting time, which are likely to operate in distinct ways in these different contexts. Controlled trials on the feasibility and efficacy of interventions to reduce and break up sedentary behaviors among adults in domestic, workplace, and transportation environments are particularly required. It would be informative for the field to have evidence on the outcomes of \"natural experiments,\" such as the introduction of nonseated working options in occupational environments or new transportation infrastructure in communities.",
"title": ""
},
{
"docid": "6b70a42b41de6831604e14904f682b69",
"text": "A large proportion of the Indian population is excluded from basic banking services. Just one in two Indians has access to a savings bank account and just one in seven Indians has access to bank credit (Business Standard, June 28 2013). There are merely 684 million savings bank accounts in the country with a population of 1.2 billion. Branch per 100,000 adult ratio in India stands at 747 compared to 1,065 for Brazil and 2,063 for Malaysia (World Bank Financial Access Report 2010). As more people, especially the poor, gain access to financial services, they will be able to save better and get access to funding in a more structured manner. This will reduce income inequality, help the poor up the ladder, and contribute to economic development. There is a need for transactions and savings accounts for the under-served in the population. Mobile banking has been evolved in last couple of years with the help of Mobile penetration, which has shown phenomenal growth in rural areas of India. The rural subscription increased from 398.68 million at the end of December 2014 to 404.16 million at the end of January 2015, said in a statement by the Telecom Regulatory Authority of India. Banks in India are already investing in mobile technology and security from last couple of years. They are adding value in services such as developing smartphone apps, mobile wallets and educating consumers about the benefits of using the mobile banking resulting in adoption of mobile banking faster among consumers as compared to internet banking.\n The objective of this study is:\n 1. To understand the scope of mobile banking to reach unbanked population in India.\n 2. To analyze the learnings of M-PESA and Payments Bank Opportunity.\n 3. To evaluate the upcoming challenges for the payments bank success in India.",
"title": ""
},
{
"docid": "a67f7593ea049be1e2785108b6181f7d",
"text": "This paper describes torque characteristics of the interior permanent magnet synchronous motor (IPMSM) using the inexpensive ferrite magnets. IPMSM model used in this study has the spoke and the axial type magnets in the rotor, and torque characteristics are analyzed by the three-dimensional finite element method (3D-FEM). As a result, torque characteristics can be improved by using both the spoke type magnets and the axial type magnets in the rotor.",
"title": ""
},
{
"docid": "4247314290ffa50098775e2bbc41b002",
"text": "Heterogeneous integration enables the construction of silicon (Si) photonic systems, which are fully integrated with a range of passive and active elements including lasers and detectors. Numerous advancements in recent years have shown that heterogeneous Si platforms can be extended beyond near-infrared telecommunication wavelengths to the mid-infrared (MIR) (2–20 μm) regime. These wavelengths hold potential for an extensive range of sensing applications and the necessary components for fully integrated heterogeneous MIR Si photonic technologies have now been demonstrated. However, due to the broad wavelength range and the diverse assortment of MIR technologies, the optimal platform for each specific application is unclear. Here, we overview Si photonic waveguide platforms and lasers at the MIR, including quantum cascade lasers on Si. We also discuss progress toward building an integrated multispectral source, which can be constructed by wavelength beam combining the outputs from multiple lasers with arrayed waveguide gratings and duplexing adiabatic couplers.",
"title": ""
},
{
"docid": "af6c98814dbd1301b16afb562c524842",
"text": "Online anomaly detection (AD) is an important technique for monitoring wireless sensor networks (WSNs), which protects WSNs from cyberattacks and random faults. As a scalable and parameter-free unsupervised AD technique, k-nearest neighbor (kNN) algorithm has attracted a lot of attention for its applications in computer networks and WSNs. However, the nature of lazy-learning makes the kNN-based AD schemes difficult to be used in an online manner, especially when communication cost is constrained. In this paper, a new kNN-based AD scheme based on hypergrid intuition is proposed for WSN applications to overcome the lazy-learning problem. Through redefining anomaly from a hypersphere detection region (DR) to a hypercube DR, the computational complexity is reduced significantly. At the same time, an attached coefficient is used to convert a hypergrid structure into a positive coordinate space in order to retain the redundancy for online update and tailor for bit operation. In addition, distributed computing is taken into account, and position of the hypercube is encoded by a few bits only using the bit operation. As a result, the new scheme is able to work successfully in any environment without human interventions. Finally, the experiments with a real WSN data set demonstrate that the proposed scheme is effective and robust.",
"title": ""
},
{
"docid": "d114f37ccb079106a728ad8fe1461919",
"text": "This paper describes a stochastic hill climbing algorithm named SHCLVND to optimize arbitrary vectorial < n ! < functions. It needs less parameters. It uses normal (Gaussian) distributions to represent probabilities which are used for generating more and more better argument vectors. The-parameters of the normal distributions are changed by a kind of Hebbian learning. Kvasnicka et al. KPP95] used algorithm Stochastic Hill Climbing with Learning (HCwL) to optimize a highly multimodal vectorial function on real numbers. We have tested proposed algorithm by optimizations of the same and a similar function and show the results in comparison to HCwL. In opposite to it algorithm SHCLVND desribed here works directly on vectors of numbers instead their bit-vector representations and uses normal distributions instead of numbers to represent probabilities. 1 Overview In Section 2 we give an introduction with the way to the algorithm. Then we describe it exactly in Section 3. There is also given a compact notation in pseudo PASCAL-code, see Section 3.4. After that we give an example: we optimize highly multimodal functions with the proposed algorithm and give some visualisations of the progress in Section 4. In Section 5 there are a short summary and some ideas for future works. At last in Section 6 we give some hints for practical use of the algorithm. 2 Introduction This paper describes a hill climbing algorithm to optimize vectorial functions on real numbers. 2.1 Motivation Flexible algorithms for optimizing any vectorial function are interesting if there is no or only a very diicult mathematical solution known, e.g. parameter adjustments to optimize with respect to some relevant property the recalling behavior of a (trained) neuronal net HKP91, Roj93], or the resulting image of some image-processing lter.",
"title": ""
},
{
"docid": "f93ee5c9de994fa07e7c3c1fe6e336d1",
"text": "Sleep bruxism (SB) is characterized by repetitive and coordinated mandible movements and non-functional teeth contacts during sleep time. Although the etiology of SB is controversial, the literature converges on its multifactorial origin. Occlusal factors, smoking, alcoholism, drug usage, stress, and anxiety have been described as SB trigger factors. Recent studies on this topic discussed the role of neurotransmitters on the development of SB. Thus, the purpose of this study was to detect and quantify the urinary levels of catecholamines, specifically of adrenaline, noradrenaline and dopamine, in subjects with SB and in control individuals. Urine from individuals with SB (n = 20) and without SB (n = 20) was subjected to liquid chromatography. The catecholamine data were compared by Mann–Whitney’s test (p ≤ 0.05). Our analysis showed higher levels of catecholamines in subjects with SB (adrenaline = 111.4 µg/24 h; noradrenaline = 261,5 µg/24 h; dopamine = 479.5 µg/24 h) than in control subjects (adrenaline = 35,0 µg/24 h; noradrenaline = 148,7 µg/24 h; dopamine = 201,7 µg/24 h). Statistical differences were found for the three catecholamines tested. It was concluded that individuals with SB have higher levels of urinary catecholamines.",
"title": ""
},
{
"docid": "e0b7efd5d3bba071ada037fc5b05a622",
"text": "Social exclusion can thwart people's powerful need for social belonging. Whereas prior studies have focused primarily on how social exclusion influences complex and cognitively downstream social outcomes (e.g., memory, overt social judgments and behavior), the current research examined basic, early-in-the-cognitive-stream consequences of exclusion. Across 4 experiments, the threat of exclusion increased selective attention to smiling faces, reflecting an attunement to signs of social acceptance. Compared with nonexcluded participants, participants who experienced the threat of exclusion were faster to identify smiling faces within a \"crowd\" of discrepant faces (Experiment 1), fixated more of their attention on smiling faces in eye-tracking tasks (Experiments 2 and 3), and were slower to disengage their attention from smiling faces in a visual cueing experiment (Experiment 4). These attentional attunements were specific to positive, social targets. Excluded participants did not show heightened attention to faces conveying social disapproval or to positive nonsocial images. The threat of social exclusion motivates people to connect with sources of acceptance, which is manifested not only in \"downstream\" choices and behaviors but also at the level of basic, early-stage perceptual processing.",
"title": ""
},
{
"docid": "aad2d6385cb8c698a521caea00fe56d2",
"text": "With respect to the \" influence on the development and practice of science and engineering in the 20th century \" , Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton's method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached. The iterative methods that are today applied for solving large-scale linear systems are mostly preconditioned Krylov (sub)space solvers. Classical methods that do not belong to this class, like the successive overrelaxation (SOR) method, are no longer competitive. However, some of the classical matrix splittings, e.g. the one of SSOR (the symmetric version of SOR), are still used for preconditioning. Multigrid is in theory a very effective iterative method, but normally it is now applied as an inner iteration with a Krylov space solver as outer iteration; then, it can also be considered as a preconditioner. In the past, Krylov space solvers were referred to also by other names such as semi-iterative methods and polynomial acceleration methods. Some",
"title": ""
},
{
"docid": "7a56ca5ad5483aef5b886836c24bbb3b",
"text": "Recent extensions to the standard Difference-of-Gaussians (DoG) edge detection operator have rendered it less susceptible to noise and increased its aesthetic appeal for stylistic depiction applications. Despite these advances, the technical subtleties and stylistic potential of the DoG operator are often overlooked. This paper reviews the DoG operator, including recent improvements, and offers many new results spanning a variety of styles, including pencil-shading, pastel, hatching, and binary black-and-white images. Additionally, we demonstrate a range of subtle artistic effects, such as ghosting, speed-lines, negative edges, indication, and abstraction, and we explain how all of these are obtained without, or only with slight modifications to an extended DoG formulation. In all cases, the visual quality achieved by the extended DoG operator is comparable to or better than those of systems dedicated to a single style.",
"title": ""
},
{
"docid": "d93dbf04604d9e60a554f39b0f7e3122",
"text": "BACKGROUND\nThe World Health Organization (WHO) estimates that 1.9 million deaths worldwide are attributable to physical inactivity and at least 2.6 million deaths are a result of being overweight or obese. In addition, WHO estimates that physical inactivity causes 10% to 16% of cases each of breast cancer, colon, and rectal cancers as well as type 2 diabetes, and 22% of coronary heart disease and the burden of these and other chronic diseases has rapidly increased in recent decades.\n\n\nOBJECTIVES\nThe purpose of this systematic review was to summarize the evidence of the effectiveness of school-based interventions in promoting physical activity and fitness in children and adolescents.\n\n\nSEARCH METHODS\nThe search strategy included searching several databases to October 2011. In addition, reference lists of included articles and background papers were reviewed for potentially relevant studies, as well as references from relevant Cochrane reviews. Primary authors of included studies were contacted as needed for additional information.\n\n\nSELECTION CRITERIA\nTo be included, the intervention had to be relevant to public health practice (focused on health promotion activities), not conducted by physicians, implemented, facilitated, or promoted by staff in local public health units, implemented in a school setting and aimed at increasing physical activity, included all school-attending children, and be implemented for a minimum of 12 weeks. In addition, the review was limited to randomized controlled trials and those that reported on outcomes for children and adolescents (aged 6 to 18 years). Primary outcomes included: rates of moderate to vigorous physical activity during the school day, time engaged in moderate to vigorous physical activity during the school day, and time spent watching television. Secondary outcomes related to physical health status measures including: systolic and diastolic blood pressure, blood cholesterol, body mass index (BMI), maximal oxygen uptake (VO2max), and pulse rate.\n\n\nDATA COLLECTION AND ANALYSIS\nStandardized tools were used by two independent reviewers to assess each study for relevance and for data extraction. In addition, each study was assessed for risk of bias as specified in the Cochrane Handbook for Systematic Reviews of Interventions. Where discrepancies existed, discussion occurred until consensus was reached. The results were summarized narratively due to wide variations in the populations, interventions evaluated, and outcomes measured.\n\n\nMAIN RESULTS\nIn the original review, 13,841 records were identified and screened, 302 studies were assessed for eligibility, and 26 studies were included in the review. There was some evidence that school-based physical activity interventions had a positive impact on four of the nine outcome measures. Specifically positive effects were observed for duration of physical activity, television viewing, VO2 max, and blood cholesterol. Generally, school-based interventions had little effect on physical activity rates, systolic and diastolic blood pressure, BMI, and pulse rate. At a minimum, a combination of printed educational materials and changes to the school curriculum that promote physical activity resulted in positive effects.In this update, given the addition of three new inclusion criteria (randomized design, all school-attending children invited to participate, minimum 12-week intervention) 12 of the original 26 studies were excluded. 
In addition, studies published between July 2007 and October 2011 evaluating the effectiveness of school-based physical interventions were identified and if relevant included. In total an additional 2378 titles were screened of which 285 unique studies were deemed potentially relevant. Of those 30 met all relevance criteria and have been included in this update. This update includes 44 studies and represents complete data for 36,593 study participants. Duration of interventions ranged from 12 weeks to six years.Generally, the majority of studies included in this update, despite being randomized controlled trials, are, at a minimum, at moderate risk of bias. The results therefore must be interpreted with caution. Few changes in outcomes were observed in this update with the exception of blood cholesterol and physical activity rates. For example blood cholesterol was no longer positively impacted upon by school-based physical activity interventions. However, there was some evidence to suggest that school-based physical activity interventions led to an improvement in the proportion of children who engaged in moderate to vigorous physical activity during school hours (odds ratio (OR) 2.74, 95% confidence interval (CI), 2.01 to 3.75). Improvements in physical activity rates were not observed in the original review. Children and adolescents exposed to the intervention also spent more time engaged in moderate to vigorous physical activity (with results across studies ranging from five to 45 min more), spent less time watching television (results range from five to 60 min less per day), and had improved VO2max (results across studies ranged from 1.6 to 3.7 mL/kg per min). However, the overall conclusions of this update do not differ significantly from those reported in the original review.\n\n\nAUTHORS' CONCLUSIONS\nThe evidence suggests the ongoing implementation of school-based physical activity interventions at this time, given the positive effects on behavior and one physical health status measure. However, given these studies are at a minimum of moderate risk of bias, and the magnitude of effect is generally small, these results should be interpreted cautiously. Additional research on the long-term impact of these interventions is needed.",
"title": ""
},
{
"docid": "88fb71e503e0d0af7515dd8489061e25",
"text": "The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype to reality. SH is the major building block for Smart Cities and have long been a dream for decades, hobbyists in the late 1970smade Home Automation (HA) possible when personal computers started invading home spaces. While SH can share most of the IoT technologies, there are unique characteristics that make SH special. From the result of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific quality of the SH building blocks. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e61e7d5ade8946c74d288d75aca93961",
"text": "The ill-posed nature of the MEG (or related EEG) source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian approaches are useful in this capacity because they allow these assumptions to be explicitly quantified using postulated prior distributions. However, the means by which these priors are chosen, as well as the estimation and inference procedures that are subsequently adopted to affect localization, have led to a daunting array of algorithms with seemingly very different properties and assumptions. From the vantage point of a simple Gaussian scale mixture model with flexible covariance components, this paper analyzes and extends several broad categories of Bayesian inference directly applicable to source localization including empirical Bayesian approaches, standard MAP estimation, and multiple variational Bayesian (VB) approximations. Theoretical properties related to convergence, global and local minima, and localization bias are analyzed and fast algorithms are derived that improve upon existing methods. This perspective leads to explicit connections between many established algorithms and suggests natural extensions for handling unknown dipole orientations, extended source configurations, correlated sources, temporal smoothness, and computational expediency. Specific imaging methods elucidated under this paradigm include the weighted minimum l(2)-norm, FOCUSS, minimum current estimation, VESTAL, sLORETA, restricted maximum likelihood, covariance component estimation, beamforming, variational Bayes, the Laplace approximation, and automatic relevance determination, as well as many others. Perhaps surprisingly, all of these methods can be formulated as particular cases of covariance component estimation using different concave regularization terms and optimization rules, making general theoretical analyses and algorithmic extensions/improvements particularly relevant.",
"title": ""
},
{
"docid": "16f5686c1675d0cf2025cf812247ab45",
"text": "This paper presents the system analysis and implementation of a soft switching Sepic-Cuk converter to achieve zero voltage switching (ZVS). In the proposed converter, the Sepic and Cuk topologies are combined together in the output side. The features of the proposed converter are to reduce the circuit components (share the power components in the transformer primary side) and to share the load current. Active snubber is connected in parallel with the primary side of transformer to release the energy stored in the leakage inductor of transformer and to limit the peak voltage stress of switching devices when the main switch is turned off. The active snubber can achieve ZVS turn-on for power switches. Experimental results, taken from a laboratory prototype rated at 300W, are presented to verify the effectiveness of the proposed converter. I. Introduction Modern",
"title": ""
}
] |
scidocsrr
|
b735a5acf90500cf0a0a049380468b19
|
Bunny Ear Combline Antennas for Compact Wide-Band Dual-Polarized Aperture Array
|
[
{
"docid": "c0600c577850c8286f816396ead9649f",
"text": "A parameter study of dual-polarized tapered slot antenna (TSA) arrays shows the key features that affect the wide-band and widescan performance of these arrays. The overall performance can be optimized by judiciously choosing a combination of parameters. In particular, it is found that smaller circular slot cavities terminating the bilateral slotline improve the performance near the low end of the operating band, especially when scanning in the -plane. The opening rate of the tapered slotline mainly determines the mid-band performance and it is possible to choose an opening rate to obtain balanced overall performance in the mid-band. Longer tapered slotline is shown to increase the bandwidth, especially in the lower end of the operating band. Finally, it is shown that the -plane anomalies are affected by the array element spacing. A design example demonstrates that the results from the parameter study can be used to design a dual-polarized TSA array with about 4.5 : 1 bandwidth for a scan volume of not less than = 45 from broadside in all planes.",
"title": ""
}
] |
[
{
"docid": "74c7ffaf4064218920f503a31a0f97b0",
"text": "In this paper, we present a new method for the control of soft robots with elastic behavior, piloted by several actuators. The central contribution of this work is the use of the Finite Element Method (FEM), computed in real-time, in the control algorithm. The FEM based simulation computes the nonlinear deformations of the robots at interactive rates. The model is completed by Lagrange multipliers at the actuation zones and at the end-effector position. A reduced compliance matrix is built in order to deal with the necessary inversion of the model. Then, an iterative algorithm uses this compliance matrix to find the contribution of the actuators (force and/or position) that will deform the structure so that the terminal end of the robot follows a given position. Additional constraints, like rigid or deformable obstacles, or the internal characteristics of the actuators are integrated in the control algorithm. We illustrate our method using simulated examples of both serial and parallel structures and we validate it on a real 3D soft robot made of silicone.",
"title": ""
},
{
"docid": "002a86f6e0611a7b705a166e05ef3988",
"text": "Due to a wide range of potential applications, research on mobile commerce has received a lot of interests from both of the industry and academia. Among them, one of the active topic areas is the mining and prediction of users' mobile commerce behaviors such as their movements and purchase transactions. In this paper, we propose a novel framework, called Mobile Commerce Explorer (MCE), for mining and prediction of mobile users' movements and purchase transactions under the context of mobile commerce. The MCE framework consists of three major components: 1) Similarity Inference Model (SIM) for measuring the similarities among stores and items, which are two basic mobile commerce entities considered in this paper; 2) Personal Mobile Commerce Pattern Mine (PMCP-Mine) algorithm for efficient discovery of mobile users' Personal Mobile Commerce Patterns (PMCPs); and 3) Mobile Commerce Behavior Predictor (MCBP) for prediction of possible mobile user behaviors. To our best knowledge, this is the first work that facilitates mining and prediction of mobile users' commerce behaviors in order to recommend stores and items previously unknown to a user. We perform an extensive experimental evaluation by simulation and show that our proposals produce excellent results.",
"title": ""
},
{
"docid": "b10c2eb2d074054721959ce5b1a35dbc",
"text": "With the coming of the era of big data, it is most urgent to establish the knowledge computational engine for the purpose of discovering implicit and valuable knowledge from the huge, rapidly dynamic, and complex network data. In this paper, we first survey the mainstream knowledge computational engines from four aspects and point out their deficiency. To cover these shortages, we propose the open knowledge network (OpenKN), which is a self-adaptive and evolutionable knowledge computational engine for network big data. To the best of our knowledge, this is the first work of designing the end-to-end and holistic knowledge processing pipeline in regard with the network big data. Moreover, to capture the evolutionable computing capability of OpenKN, we present the evolutionable knowledge network for knowledge representation. A case study demonstrates the effectiveness of the evolutionable computing of OpenKN.",
"title": ""
},
{
"docid": "3f86b345cc6b566957f8480bd89a4b59",
"text": "The concept of ecosystems services has become an important model for linking the functioning of ecosystems to human welfare benefits. Understanding this link is critical in decision-making contexts. While there have been several attempts to come up with a classification scheme for ecosystem services, there has not been an agreed upon, meaningful and consistent definition for ecosystem services. In this paper we offer a definition of ecosystem services that is likely to be operational for ecosystem service research and several classification schemes. We argue that any attempt at classifying ecosystem services should be based on both the characteristics of interest and a decisioncontext. Because of this there is not one classification scheme that will be adequate for the many context in which ecosystem service research may be utilized. We discuss several examples of how classification schemes will be a function of both ecosystem and ecosystem service characteristics and the decision-making context.",
"title": ""
},
{
"docid": "365a402b992bf06ab50d0ea2f591f74e",
"text": "In this paper, the ability to determine the wellness of an elderly living alone in a smart home using a lowcost, robust, flexible and data driven intelligent system is presented. A framework integrating temporal and spatial contextual information for determining the wellness of an elderly has been modeled. A novel behavior detection process based on the observed sensor data in performing essential daily activities has been designed and developed. The developed prototype is used to forecast the behavior and wellness of the elderly by monitoring the daily usages of appliances in a smart home. Wellness models are tested at various elderly houses, and the experimental results are encouraging. The wellness models are updated based on the time series analysis. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5f77218388ee927565a993a8e8c48ef3",
"text": "The paper presents an idea of Lexical Platform proposed as a means for a lightweight integration of various lexical resources into one complex (from the perspective of non-technical users). All LRs will be represented as software web components implementing a minimal set of predefined programming interfaces providing functionality for querying and generating simple common presentation format. A common data format for the resources will not be required. Users will be able to search, browse and navigate via resources on the basis of anchor elements of a limited set of types. Lexical resources linked to the platform via components will preserve their identity.",
"title": ""
},
{
"docid": "dba24c6bf3e04fc6d8b99a64b66cb464",
"text": "Recommender systems have to serve in online environments which can be highly non-stationary.1. Traditional recommender algorithmsmay periodically rebuild their models, but they cannot adjust to quick changes in trends caused by timely information. In our experiments, we observe that even a simple, but online trained recommender model can perform significantly better than its batch version. We investigate online learning based recommender algorithms that can efficiently handle non-stationary data sets. We evaluate our models over seven publicly available data sets. Our experiments are available as an open source project2.",
"title": ""
},
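The abstract above argues for online-trained recommenders. The sketch below is a generic online matrix-factorization model updated one interaction at a time with SGD; it is not the specific set of algorithms or datasets evaluated in that work, and all dimensions and hyperparameters are illustrative.

```python
# A minimal online matrix-factorization recommender updated one event at a time.
import numpy as np

class OnlineMF:
    def __init__(self, n_users, n_items, k=16, lr=0.05, reg=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.P = 0.1 * rng.standard_normal((n_users, k))  # user factors
        self.Q = 0.1 * rng.standard_normal((n_items, k))  # item factors
        self.lr, self.reg = lr, reg

    def predict(self, u, i):
        return float(self.P[u] @ self.Q[i])

    def update(self, u, i, r):
        """Single SGD step on one (user, item, feedback) event."""
        err = r - self.predict(u, i)
        p, q = self.P[u].copy(), self.Q[i].copy()
        self.P[u] += self.lr * (err * q - self.reg * p)
        self.Q[i] += self.lr * (err * p - self.reg * q)

# Streaming usage: update immediately after each observed interaction.
model = OnlineMF(n_users=1000, n_items=500)
for u, i, r in [(3, 7, 1.0), (3, 9, 0.0), (42, 7, 1.0)]:  # toy event stream
    model.update(u, i, r)
print(model.predict(3, 7))
```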
{
"docid": "3849284adb68f41831434afbf23be9ed",
"text": "Automatic estrus detection techniques in dairy cows have been present by different traits. Pedometers and accelerators are the most common sensor equipment. Most of the detection methods are associated with the supervised classification technique, which the training set becomes a crucial reference. The training set obtained by visual observation is subjective and time consuming. Another limitation of this approach is that it usually does not consider the factors affecting successful alerts, such as the discriminative figure, activity type of cows, the location and direction of the sensor node placed on the neck collar of a cow. This paper presents a novel estrus detection method that uses k-means clustering algorithm to create the training set online for each cow. And the training set is finally used to build an activity classification model by SVM. The activity index counted by the classification results in each sampling period can measure cow’s activity variation for assessing the onset of estrus. The experimental results indicate that the peak of estrus time are higher than that of non-estrus time at least twice in the activity index curve, and it can enhance the sensitivity and significantly reduce the error rate.",
"title": ""
},
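To make the two-stage idea above concrete, here is a hedged sketch: k-means builds pseudo-labels online (replacing manual visual labelling), and an SVM is then trained on them to produce an activity index per sampling period. The toy accelerometer features, window sizes, and thresholds are assumptions, not the paper's actual pipeline.

```python
# Stage 1: unsupervised labelling with k-means; Stage 2: SVM activity classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy features per sampling window: [mean magnitude, variance of magnitude]
features = np.vstack([rng.normal([1.0, 0.2], 0.1, size=(200, 2)),   # resting-like
                      rng.normal([2.5, 1.0], 0.3, size=(200, 2))])  # active-like

# Stage 1: cluster labels stand in for manual visual observation.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
pseudo_labels = kmeans.labels_

# Stage 2: supervised activity model trained on the pseudo-labels.
clf = SVC(kernel="rbf", gamma="scale").fit(features, pseudo_labels)

# Activity index: fraction of windows classified as the "active" cluster
# within each sampling period (here, one period = 50 windows).
preds = clf.predict(features)
active_cluster = int(kmeans.cluster_centers_[:, 0].argmax())
activity_index = (preds.reshape(-1, 50) == active_cluster).mean(axis=1)
print(activity_index)
```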
{
"docid": "feb57c831158e03530d59725ae23af00",
"text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine the success of MTL. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary task configurations, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, because significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.",
"title": ""
},
{
"docid": "2125930409d54f6770f03a76f5ecdc59",
"text": "Why do certain combinations of words such as “disadvantageous peace” or “metal to the petal” appeal to our minds as interesting expressions with a sense of creativity, while other phrases such as “quiet teenager”, or “geometrical base” not as much? We present statistical explorations to understand the characteristics of lexical compositions that give rise to the perception of being original, interesting, and at times even artistic. We first examine various correlates of perceived creativity based on information theoretic measures and the connotation of words, then present experiments based on supervised learning that give us further insights on how different aspects of lexical composition collectively contribute to the perceived creativity.",
"title": ""
},
{
"docid": "7b0e63115a7d085a180e047ae1ab2139",
"text": "We describe a set of tools for retail analytics based on a combination of video understanding and transaction-log. Tools are provided for loss prevention (returns fraud and cashier fraud), store operations (customer counting) and merchandising (display effectiveness). Results are presented on returns fraud and customer counting.",
"title": ""
},
{
"docid": "9bdddbd6b3619aa4c23566eea33b4ff7",
"text": "This was a prospective controlled study to compare the beneficial effects of office microlaparoscopic ovarian drilling (OMLOD) under augmented local anesthesia, as a new modality treatment option, compared to those following ovarian drilling with the conventional traditional 10-mm laparoscope (laparoscopic ovarian drilling, LOD) under general anesthesia. The study included 60 anovulatory women with polycystic ovary syndrome (PCOS) who underwent OMLOD (study group) and 60 anovulatory PCOS women, in whom conventional LOD using 10-mm laparoscope under general anesthesia was performed (comparison group). Transvaginal ultrasound scan and blood sampling to measure the serum concentrations of LH, FSH, testosterone and androstenedione were performed before and after the procedure. Intraoperative and postoperative pain scores in candidate women were evaluated during the office microlaparoscopic procedure, in addition to the number of candidates who needed extra analgesia. Women undergoing OMLOD showed good intraoperative and postoperative pain scores. The number of patients discharged within 2 h after the office procedure was significantly higher, without the need for postoperative analgesia in most patients. The LH:FSH ratio, mean serum concentrations of LH and testosterone and free androgen index decreased significantly after both OMLOD and LOD. The mean ovarian volume decreased significantly (P < 0.05) a year after both OMLOD and LOD. There were no significant differences in those results after both procedures. Intra- and postoperatively augmented local anesthesia allows outpatient bilateral ovarian drilling by microlaparoscopy without general anesthesia. The high pregnancy rate, the simplicity of the method and the faster discharge time offer a new option for patients with PCOS who are resistant to clomiphene citrate. Moreover, ovarian drilling could be performed simultaneously during the routine diagnostic microlaparoscopy and integrated into the fertility workup of these patients.",
"title": ""
},
{
"docid": "3d52248b140f516b82abc452336fa40c",
"text": "Requirements engineering is a creative process in which stakeholders and designers work together to create ideas for new systems that are eventually expressed as requirements. This paper describes RESCUE, a scenario-driven requirements engineering process that includes workshops that integrate creativity techniques with different types of use case and system context modelling. It reports a case study in which RESCUE creativity workshops were used to discover stakeholder and system requirements for DMAN, a future air traffic management system for managing departures from major European airports. The workshop was successful in that it provided new and important outputs for subsequent requirements processes. The paper describes the workshop structure and wider RESCUE process, important results and key lessons learned.",
"title": ""
},
{
"docid": "9539b057f14a48cec48468cb97a4a9c1",
"text": "Fuzzy-match repair (FMR), which combines a human-generated translation memory (TM) with the flexibility of machine translation (MT), is one way of using MT to augment resources available to translators. We evaluate rule-based, phrase-based, and neural MT systems as black-box sources of bilingual information for FMR. We show that FMR success varies based on both the quality of the MT system and the type of MT system being used.",
"title": ""
},
{
"docid": "9a5ef746c96a82311e3ebe8a3476a5f4",
"text": "A magnetic-tip steerable needle is presented with application to aiding deep brain stimulation electrode placement. The magnetic needle is 1.3mm in diameter at the tip with a 0.7mm diameter shaft, which is selected to match the size of a deep brain stimulation electrode. The tip orientation is controlled by applying torques to the embedded neodymium-iron-boron permanent magnets with a clinically-sized magnetic-manipulation system. The prototype design is capable of following trajectories under human-in-the-loop control with minimum bend radii of 100mm without inducing tissue damage and down to 30mm if some tissue damage is tolerable. The device can be retracted and redirected to reach multiple targets with a single insertion point.",
"title": ""
},
{
"docid": "62501a588824f70daaf4c2dbc49223da",
"text": "ORB-SLAM2 is one of the better-known open source SLAM implementations available. However, the dependence of visual features causes it to fail in featureless environments. With the present work, we propose a new technique to improve visual odometry results given by ORB-SLAM2 using a tightly Sensor Fusion approach to integrate camera and odometer data. In this work, we use odometer readings to improve the tracking results by adding graph constraints between frames and introduce a new method for preventing the tracking loss. We test our method using three different datasets, and show an improvement in the estimated trajectory, allowing a continuous tracking without losses.",
"title": ""
},
{
"docid": "125b3b5ad3855bfb3206793657661e7d",
"text": "Dependency parsers are among the most crucial tools in natural language processing as they have many important applications in downstream tasks such as information retrieval, machine translation and knowledge acquisition. We introduce the Yara Parser, a fast and accurate open-source dependency parser based on the arc-eager algorithm and beam search. It achieves an unlabeled accuracy of 93.32 on the standard WSJ test set which ranks it among the top dependency parsers. At its fastest, Yara can parse about 4000 sentences per second when in greedy mode (1 beam). When optimizing for accuracy (using 64 beams and Brown cluster features), Yara can parse 45 sentences per second. The parser can be trained on any syntactic dependency treebank and different options are provided in order to make it more flexible and tunable for specific tasks. It is released with the Apache version 2.0 license and can be used for both commercial and academic purposes. The parser can be found at https: //github.com/yahoo/YaraParser.",
"title": ""
},
{
"docid": "dd47b07c8233fe069b5d6999da3af0b2",
"text": "Many students play (computer) games in their leisure time, thus acquiring skills which can easily be utilized when it comes to teaching more sophisticated knowledge. Nevertheless many educators today are wasting this opportunity. Some have evaluated gaming scenarios and methods for teaching students and have created the term “gamification”. This paper describes the history of this new term and explains the possible impact on teaching. It will take well-researched facts into consideration to discuss the potential of games. Moreover, scenarios will be illustrated and evaluated for educators to adopt and use on their own.",
"title": ""
},
{
"docid": "7a720c34f461728bab4905716f925ace",
"text": "We introduce the concept of Graspable User Interfaces that allow direct control of electronic or virtual objects through physical handles for control. These physical artifacts, which we call \"bricks,\" are essentially new input devices that can be tightly coupled or \"attached\" to virtual objects for manipulation or for expressing action (e.g., to set parameters or for initiating processes). Our bricks operate on top of a large horizontal display surface known as the \"ActiveDesk.\" We present four stages in the development of Graspable UIs: (1) a series of exploratory studies on hand gestures and grasping; (2) interaction simulations using mock-ups and rapid prototyping tools; (3) a working prototype and sample application called GraspDraw; and (4) the initial integrating of the Graspable UI concepts into a commercial application. Finally, we conclude by presenting a design space for Bricks which lay the foundation for further exploring and developing Graspable User Interfaces.",
"title": ""
},
{
"docid": "b7e42b4dbcd34d57c25c184f72ed413e",
"text": "How smart can a micron-sized bag of chemicals be? How can an artificial or real cell make inferences about its environment? From which kinds of probability distributions can chemical reaction networks sample? We begin tackling these questions by showing four ways in which a stochastic chemical reaction network can implement a Boltzmann machine, a stochastic neural network model that can generate a wide range of probability distributions and compute conditional probabilities. The resulting models, and the associated theorems, provide a road map for constructing chemical reaction networks that exploit their native stochasticity as a computational resource. Finally, to show the potential of our models, we simulate a chemical Boltzmann machine to classify and generate MNIST digits in-silico.",
"title": ""
}
] |
scidocsrr
|
37c29a17b493e1ce267ec285962f06c3
|
ChronoStream: Elastic stateful stream computation in the cloud
|
[
{
"docid": "7add673c4f72e6a7586109ac3bdab2ec",
"text": "Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this article, we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.",
"title": ""
},
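As a way to picture the data model described above, the toy class below stores values in a sparse map keyed by (row, column, timestamp), keeps multiple timestamped versions per cell, and supports prefix scans over lexicographically ordered row keys. It is a conceptual sketch only, unrelated to Google's actual implementation.

```python
# Toy illustration of a Bigtable-style sparse, sorted, multidimensional map.
import bisect
from collections import defaultdict

class ToyBigtable:
    def __init__(self):
        # row key -> column key -> sorted list of (-timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row, column, timestamp, value):
        cells = self.rows[row][column]
        bisect.insort(cells, (-timestamp, value))  # keep newest first

    def get(self, row, column, n_versions=1):
        cells = self.rows[row][column][:n_versions]
        return [(-ts, v) for ts, v in cells]

    def scan(self, row_prefix):
        """Rows are kept lexicographically ordered; scan a prefix range."""
        for row in sorted(self.rows):
            if row.startswith(row_prefix):
                yield row, {c: self.get(row, c) for c in self.rows[row]}

t = ToyBigtable()
t.put("com.cnn.www", "contents:html", 3, "<html>v3</html>")
t.put("com.cnn.www", "contents:html", 5, "<html>v5</html>")
t.put("com.cnn.www", "anchor:cnnsi.com", 9, "CNN")
print(t.get("com.cnn.www", "contents:html"))   # newest version only
print(list(t.scan("com.cnn")))                  # prefix scan over row keys
```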
{
"docid": "60e06e3eebafa9070eecf1ab1e9654f8",
"text": "In most enterprises, databases are deployed on dedicated database servers. Often, these servers are underutilized much of the time. For example, in traces from almost 200 production servers from different organizations, we see an average CPU utilization of less than 4%. This unused capacity can be potentially harnessed to consolidate multiple databases on fewer machines, reducing hardware and operational costs. Virtual machine (VM) technology is one popular way to approach this problem. However, as we demonstrate in this paper, VMs fail to adequately support database consolidation, because databases place a unique and challenging set of demands on hardware resources, which are not well-suited to the assumptions made by VM-based consolidation.\n Instead, our system for database consolidation, named Kairos, uses novel techniques to measure the hardware requirements of database workloads, as well as models to predict the combined resource utilization of those workloads. We formalize the consolidation problem as a non-linear optimization program, aiming to minimize the number of servers and balance load, while achieving near-zero performance degradation. We compare Kairos against virtual machines, showing up to a factor of 12× higher throughput on a TPC-C-like benchmark. We also tested the effectiveness of our approach on real-world data collected from production servers at Wikia.com, Wikipedia, Second Life, and MIT CSAIL, showing absolute consolidation ratios ranging between 5.5:1 and 17:1.",
"title": ""
}
] |
[
{
"docid": "8d3c4598b7d6be5894a1098bea3ed81a",
"text": "Retrieval enhances long-term retention. However, reactivation of a memory also renders it susceptible to modifications as shown by studies on memory reconsolidation. The present study explored whether retrieval diminishes or enhances subsequent retroactive interference (RI) and intrusions. Participants learned a list of objects. Two days later, they were either asked to recall the objects, given a subtle reminder, or were not reminded of the first learning session. Then, participants learned a second list of objects or performed a distractor task. After another two days, retention of List 1 was tested. Although retrieval enhanced List 1 memory, learning a second list impaired memory in all conditions. This shows that testing did not protect memory from RI. While a subtle reminder before List 2 learning caused List 2 items to later intrude into List 1 recall, very few such intrusions were observed in the testing and the no reminder conditions. The findings are discussed in reference to the reconsolidation account and the testing effect literature, and implications for educational practice are outlined. © 2015 Elsevier Inc. All rights reserved. Retrieval practice or testing is one of the most powerful memory enhancers. Testing that follows shortly after learning benefits long-term retention more than studying the to-be-remembered material again (Roediger & Karpicke, 2006a, 2006b). This effect has been shown using a variety of materials and paradigms, such as text passages (e.g., Roediger & Karpicke, 2006a), paired associates (Allen, Mahler, & Estes, 1969), general knowledge questions (McDaniel & Fisher, 1991), and word and picture lists (e.g., McDaniel & Masson, 1985; Wheeler & Roediger, 1992; Wheeler, Ewers, & Buonanno, 2003). Testing effects have been observed in traditional lab as well as educational settings (Grimaldi & Karpicke, 2015; Larsen, Butler, & Roediger, 2008; McDaniel, Anderson, Derbish, & Morrisette, 2007). Testing not only improves long-term retention, it also enhances subsequent encoding (Pastötter, Schicker, Niedernhuber, & Bäuml, 2011), protects memories from the buildup of proactive interference (PI; Nunes & Weinstein, 2012; Wahlheim, 2014), and reduces the probability that the tested items intrude into subsequently studied lists (Szpunar, McDermott, & Roediger, 2008; Weinstein, McDermott, & Szpunar, 2011). The reduced PI and intrusion rates are assumed to reflect enhanced list discriminability or improved within-list organization. Enhanced list discriminability in turn helps participants distinguish different sets or sources of information and allows them to circumscribe the search set during retrieval to the relevant list (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). ∗ Correspondence to: Department of Psychology, Lehigh University, 17 Memorial Drive East, Bethlehem, PA 18015, USA. E-mail address: hupbach@lehigh.edu http://dx.doi.org/10.1016/j.lmot.2015.01.004 0023-9690/© 2015 Elsevier Inc. All rights reserved. 24 A. Hupbach / Learning and Motivation 49 (2015) 23–30 If testing increases list discriminability, then it should also protect the tested list(s) from RI and intrusions from material that is encoded after retrieval practice. 
However, testing also necessarily reactivates a memory, and according to the reconsolidation account reactivation re-introduces plasticity into the memory trace, making it especially vulnerable to modifications (e.g., Dudai, 2004; Nader, Schafe, & LeDoux, 2000; for a recent review, see e.g., Hupbach, Gomez, & Nadel, 2013). Increased vulnerability to modification would suggest increased rather than reduced RI and intrusions. The few studies addressing this issue have yielded mixed results, with some suggesting that retrieval practice diminishes RI (Halamish & Bjork, 2011; Potts & Shanks, 2012), and others showing that retrieval practice can exacerbate the potential negative effects of post-retrieval learning (e.g., Chan & LaPaglia, 2013; Chan, Thomas, & Bulevich, 2009; Walker, Brakefield, Hobson, & Stickgold, 2003). Chan and colleagues (Chan & Langley, 2011; Chan et al., 2009; Thomas, Bulevich, & Chan, 2010) assessed the effects of testing on suggestibility in a misinformation paradigm. After watching a television episode, participants answered cuedrecall questions about it (retrieval practice) or performed an unrelated distractor task. Then, all participants read a narrative, which summarized the video but also contained some misleading information. A final cued-recall test revealed that participants in the retrieval practice condition recalled more misleading details and fewer correct details than participants in the distractor condition; that is, retrieval increased the misinformation effect (retrieval-enhanced suggestibility, RES). Chan et al. (2009) discuss two mechanisms that can explain this finding. First, since testing can potentiate subsequent new learning (e.g., Izawa, 1967; Tulving & Watkins, 1974), initial testing might have improved encoding of the misinformation. Indeed, when a modified final test was used, which encouraged the recall of both the correct information and the misinformation, participants in the retrieval practice condition recalled more misinformation than participants in the distractor condition (Chan et al., 2009). Second, retrieval might have rendered the memory more susceptible to interference by misinformation, an explanation that is in line with the reconsolidation account. Indeed, Chan and LaPaglia (2013) found reduced recognition of the correct information when retrieval preceded the presentation of misinformation (cf. Walker et al., 2003 for a similar effect in procedural memory). In contrast to Chan and colleagues’ findings, a study by Potts and Shanks (2012) suggests that testing protects memories from the negative influences of post-retrieval encoding of related material. Potts and Shanks asked participants to learn English–Swahili word pairs (List 1, A–B). One day later, one group of participants took a cued recall test of List 1 (testing condition) immediately before learning English–Finnish word pairs with the same English cues as were used in List 1 (List 2, A–C). Additionally, several control groups were implemented: one group was tested on List 1 without learning a second list, one group learned List 2 without prior retrieval practice, and one group did not participate in this session at all. On the third day, all participants took a final cued-recall test of List 1. Although retrieval practice per se did not enhance List 1 memory (i.e., no testing effect in the groups that did not learn List 2), it protected memory from RI (see Halamish & Bjork, 2011 for a similar result in a one-session study). 
Crucial for assessing the reconsolidation account is the comparison between the groups that learned List 2 either after List 1 recall or without prior List 1 recall. Contrary to the predictions derived from the reconsolidation account, final List 1 recall was enhanced when retrieval of List 1 preceded learning of List 2.1 While this clearly shows that testing counteracts RI, it would be premature to conclude that testing prevented the disruption of memory reconsolidation, because (a) retrieval practice without List 2 learning led to minimal forgetting between Day 2 and 3, while retrieval practice followed by List 2 learning led to significant memory decline, and (b) a reactivation condition that is independent from retrieval practice is missing. One could argue that repeating the cue words in List 2 likely reactivated memory for the original associations. It has been shown that the strength of reactivation (Detre, Natarajan, Gershman, & Norman, 2013) and the specific reminder structure (Forcato, Argibay, Pedreira, & Maldonado, 2009) determine whether or not a memory will be affected by post-reactivation procedures. The current study re-evaluates the question of how testing affects RI and intrusions. It uses a reconsolidation paradigm (Hupbach, Gomez, Hardt, & Nadel, 2007; Hupbach, Hardt, Gomez, & Nadel, 2008; Hupbach, Gomez, & Nadel, 2009; Hupbach, Gomez, & Nadel, 2011) to assess how testing in comparison to other reactivation procedures affects declarative memory. This paradigm will allow for a direct evaluation of the hypotheses that testing makes declarative memories vulnerable to interference, or that testing protects memories from the potential negative effects of subsequently learned material, as suggested by the list-separation hypothesis (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). This question has important practical implications. For instance, when students test their memory while preparing for an exam, will such testing increase or reduce interference and intrusions from information that is learned afterwards?",
"title": ""
},
{
"docid": "02a3b3034bb6c58eee37b462236a9e7d",
"text": "Short Message Service (SMS) is a text messaging service component of phone, web, or mobile communication systems, using standardized communications protocols that allow the exchange of short text messages between fixed line or mobile phone devices. Security of SMS’s is still an open challenging task. Various Cryptographic algorithms have been applied to secure the mobile SMS. The success of any cryptography technique depends on various factors like complexity, time, memory requirement, cost etc. In this paper we survey the most common and widely used SMS Encryption techniques. Each has its own advantages and disadvantages. Recent trends on Cryptography on android message applications have also been discussed. The latest cryptographic algorithm is based on lookup table and dynamic key which is easy to implement and to use and improve the efficiency. In this paper, an improvement in lookup table and dynamic algorithm is proposed. Rather than using the Static Lookup Table, Dynamic Lookup Table may be used which will improve the overall efficiency. KeywordsSMS, AES, DES, Blowfish, RSA, 3DES, LZW.",
"title": ""
},
{
"docid": "2b7d91c38a140628199cbdbee65c008a",
"text": "Edges in man-made environments, grouped according to vanishing point directions, provide single-view constraints that have been exploited before as a precursor to both scene understanding and camera calibration. A Bayesian approach to edge grouping was proposed in the \"Manhattan World\" paper by Coughlan and Yuille, where they assume the existence of three mutually orthogonal vanishing directions in the scene. We extend the thread of work spawned by Coughlan and Yuille in several significant ways. We propose to use the expectation maximization (EM) algorithm to perform the search over all continuous parameters that influence the location of the vanishing points in a scene. Because EM behaves well in high-dimensional spaces, our method can optimize over many more parameters than the exhaustive and stochastic algorithms used previously for this task. Among other things, this lets us optimize over multiple groups of orthogonal vanishing directions, each of which induces one additional degree of freedom. EM is also well suited to recursive estimation of the kind needed for image sequences and/or in mobile robotics. We present experimental results on images of \"Atlanta worlds\", complex urban scenes with multiple orthogonal edge-groups, that validate our approach. We also show results for continuous relative orientation estimation on a mobile robot.",
"title": ""
},
{
"docid": "91dd10428713ab2bbf1d07bf543cd2da",
"text": "Recent findings showed that users on Facebook tend to select information that adhere to their system of beliefs and to form polarized groups - i.e., echo chambers. Such a tendency dominates information cascades and might affect public debates on social relevant issues. In this work we explore the structural evolution of communities of interest by accounting for users emotions and engagement. Focusing on the Facebook pages reporting on scientific and conspiracy content, we characterize the evolution of the size of the two communities by fitting daily resolution data with three growth models - i.e. the Gompertz model, the Logistic model, and the Log-logistic model. Although all the models appropriately describe the data structure, the Logistic one shows the best fit. Then, we explore the interplay between emotional state and engagement of users in the group dynamics. Our findings show that communities' emotional behavior is affected by the users' involvement inside the echo chamber. Indeed, to an higher involvement corresponds a more negative approach. Moreover, we observe that, on average, more active users show a faster shift towards the negativity than less active ones.",
"title": ""
},
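The growth-model fitting mentioned above can be illustrated with a short non-linear least-squares fit of a logistic curve; the synthetic daily series below stands in for the real community-size data used in that study.

```python
# Fit a logistic growth model to a daily community-size series.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K: carrying capacity, r: growth rate, t0: inflection time."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(0, 200)                            # days
true = logistic(t, K=50000, r=0.08, t0=100)
obs = true + np.random.default_rng(1).normal(0, 500, size=t.size)

params, _ = curve_fit(logistic, t, obs, p0=[obs.max(), 0.05, t.mean()])
K_hat, r_hat, t0_hat = params
print(f"K={K_hat:.0f}, r={r_hat:.3f}, t0={t0_hat:.1f}")
```

The Gompertz and log-logistic alternatives mentioned in the abstract can be fitted the same way by swapping the model function and comparing goodness of fit.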
{
"docid": "325772543e172b1a5bd08d20092b1069",
"text": "Despite considerable research on passwords, empirical studies of password strength have been limited by lack of access to plaintext passwords, small data sets, and password sets specifically collected for a research study or from low-value accounts. Properties of passwords used for high-value accounts thus remain poorly understood.\n We fill this gap by studying the single-sign-on passwords used by over 25,000 faculty, staff, and students at a research university with a complex password policy. Key aspects of our contributions rest on our (indirect) access to plaintext passwords. We describe our data collection methodology, particularly the many precautions we took to minimize risks to users. We then analyze how guessable the collected passwords would be during an offline attack by subjecting them to a state-of-the-art password cracking algorithm. We discover significant correlations between a number of demographic and behavioral factors and password strength. For example, we find that users associated with the computer science school make passwords more than 1.5 times as strong as those of users associated with the business school. while users associated with computer science make strong ones. In addition, we find that stronger passwords are correlated with a higher rate of errors entering them.\n We also compare the guessability and other characteristics of the passwords we analyzed to sets previously collected in controlled experiments or leaked from low-value accounts. We find more consistent similarities between the university passwords and passwords collected for research studies under similar composition policies than we do between the university passwords and subsets of passwords leaked from low-value accounts that happen to comply with the same policies.",
"title": ""
},
{
"docid": "d22f3bbb7af0ce2a221a17a12381de25",
"text": "Ambient occlusion is a technique that computes the amount of light reaching a point on a diffuse surface based on its directly visible occluders. It gives perceptual clues of depth, curvature, and spatial proximity and thus is important for realistic rendering. Traditionally, ambient occlusion is calculated by integrating the visibility function over the normal-oriented hemisphere around any given surface point. In this paper we show this hemisphere can be partitioned into two regions by a horizon line defined by the surface in a local neighborhood of such point. We introduce an image-space algorithm for finding an approximation of this horizon and, furthermore, we provide an analytical closed form solution for the occlusion below the horizon, while the rest of the occlusion is computed by sampling based on a distribution to improve the convergence. The proposed ambient occlusion algorithm operates on the depth buffer of the scene being rendered and the associated per-pixel normal buffer. It can be implemented on graphics hardware in a pixel shader, independently of the scene geometry. We introduce heuristics to reduce artifacts due to the incompleteness of the input data and we include parameters to make the algorithm easy to customize for quality or performance purposes. We show that our technique can render high-quality ambient occlusion at interactive frame rates on current GPUs. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—;",
"title": ""
},
{
"docid": "95a36969ad22c9ad42639cf0e4a824d6",
"text": "We consider the problems of kinematic and dynamic constraints, with actuator saturation and wheel slippage avoidance, for motion planning of a holonomic three-wheeled omni-directional robot. That is, the motion planner must not demand more velocity and acceleration at each time instant than the robot can provide. A new coupled non-linear dynamics model is derived. The novel concepts of Velocity and Acceleration Cones are proposed for determining the kinematic and dynamic constraints. The Velocity Cone is based on kinematics; we propose two Acceleration Cones, one for avoiding actuator saturation and the other for avoiding wheel slippage. The wheel slippage Acceleration Cone was found to dominate. In practical motion, all commanded velocities and accelerations from the motion planner must lie within these cones for successful motion. Case studies, simulations, and experimental validations are presented for our dynamic model and controller, plus the Velocity and Acceleration Cones.",
"title": ""
},
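A minimal sketch of the kinematic side of the Velocity Cone idea above: a commanded body twist is accepted only if the wheel speeds it implies stay within the actuator limit. The robot geometry, wheel radius, and speed limit are invented values, and the dynamic (acceleration and slippage) cones are not modeled here.

```python
# Kinematic "velocity cone" check for a three-wheeled omnidirectional robot.
import numpy as np

L = 0.2                      # wheel distance from robot centre [m] (assumed)
R = 0.05                     # wheel radius [m] (assumed)
WHEEL_MAX = 20.0             # max wheel angular speed [rad/s] (assumed)
angles = np.deg2rad([0.0, 120.0, 240.0])   # wheel mounting angles

# Inverse kinematics: wheel angular speeds = (1/R) * J @ [vx, vy, omega]
J = np.stack([-np.sin(angles), np.cos(angles), L * np.ones(3)], axis=1)

def inside_velocity_cone(vx, vy, omega):
    wheel_speeds = (J @ np.array([vx, vy, omega])) / R
    return bool(np.all(np.abs(wheel_speeds) <= WHEEL_MAX)), wheel_speeds

print(inside_velocity_cone(0.5, 0.0, 0.0))   # feasible command
print(inside_velocity_cone(2.0, 0.0, 3.0))   # violates the wheel-speed limit
```

A motion planner would apply this test (and its acceleration counterparts) to every commanded velocity and acceleration along a trajectory before execution.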
{
"docid": "0bcff493580d763dbc1dd85421546201",
"text": "The development of powerful imaging tools, editing images for changing their data content is becoming a mark to undertake. Tempering image contents by adding, removing, or copying/moving without leaving a trace or unable to be discovered by the investigation is an issue in the computer forensic world. The protection of information shared on the Internet like images and any other con?dential information is very signi?cant. Nowadays, forensic image investigation tools and techniques objective is to reveal the tempering strategies and restore the firm belief in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyze these challenges. The experimental investigation focuses on using steganography applications that use same algorithms to hide information exclusively within an image. The research finding denotes that, if a certain steganography tool A is used to hide some information within a picture, and then tool B which uses the same procedure would not be able to recover the embedded image.",
"title": ""
},
{
"docid": "3763da6b72ee0a010f3803a901c9eeb2",
"text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.",
"title": ""
},
{
"docid": "363a465d626fec38555563722ae92bb1",
"text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.",
"title": ""
},
{
"docid": "0f2023682deaf2eb70c7becd8b3375dd",
"text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.",
"title": ""
},
{
"docid": "9dab38b961f4be434c95ca6696ba52bd",
"text": "The widespread use and increasing capabilities of mobiles devices are making them a viable platform for offering mobile services. However, the increasing resource demands of mobile services and the inherent constraints of mobile devices limit the quality and type of functionality that can be offered, preventing mobile devices from exploiting their full potential as reliable service providers. Computation offloading offers mobile devices the opportunity to transfer resource-intensive computations to more resourcefulcomputing infrastructures. We present a framework for cloud-assisted mobile service provisioning to assist mobile devices in delivering reliable services. The framework supports dynamic offloading based on the resource status of mobile systems and current network conditions, while satisfying the user-defined energy constraints. It also enables the mobile provider to delegate the cloud infrastructure to forward the service response directly to the user when no further processing is required by the provider. Performance evaluation shows up to 6x latency improvement for computation-intensive services that do not require large data transfer. Experiments show that the operation of the cloud-assisted service provisioning framework does not pose significant overhead on mobile resources, yet it offers robust and efficient computation offloading.",
"title": ""
},
{
"docid": "82bfc1bc10247a23f45e30481db82245",
"text": "The performance of automatic speech recognition (ASR) has improved tremendously due to the application of deep neural networks (DNNs). Despite this progress, building a new ASR system remains a challenging task, requiring various resources, multiple training stages and significant expertise. This paper presents our Eesen framework which drastically simplifies the existing pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen involves learning a single recurrent neural network (RNN) predicting context-independent targets (phonemes or characters). To remove the need for pre-generated frame labels, we adopt the connectionist temporal classification (CTC) objective function to infer the alignments between speech and label sequences. A distinctive feature of Eesen is a generalized decoding approach based on weighted finite-state transducers (WFSTs), which enables the efficient incorporation of lexicons and language models into CTC decoding. Experiments show that compared with the standard hybrid DNN systems, Eesen achieves comparable word error rates (WERs), while at the same time speeding up decoding significantly.",
"title": ""
},
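To illustrate the CTC decoding step mentioned above in its simplest form, the snippet below performs greedy best-path decoding (pick the most likely label per frame, collapse repeats, drop blanks); the WFST-based lexicon and language-model composition that Eesen actually uses is out of scope for this sketch.

```python
# Minimal CTC greedy (best-path) decoding.
import numpy as np

def ctc_greedy_decode(log_probs, blank=0):
    """log_probs: (T, V) array of per-frame label log-probabilities."""
    best_path = log_probs.argmax(axis=1)
    decoded, prev = [], None
    for label in best_path:
        if label != prev and label != blank:
            decoded.append(int(label))
        prev = label
    return decoded

# Toy example: 6 frames, vocabulary {0: blank, 1: 'a', 2: 'b'}
log_probs = np.log(np.array([
    [0.1, 0.8, 0.1],
    [0.1, 0.8, 0.1],   # repeated 'a' collapses to one symbol
    [0.8, 0.1, 0.1],   # blank separates symbols
    [0.1, 0.1, 0.8],
    [0.1, 0.1, 0.8],
    [0.8, 0.1, 0.1],
]))
print(ctc_greedy_decode(log_probs))   # -> [1, 2]
```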
{
"docid": "028eb05afad2183bdf695b4268c438ed",
"text": "OBJECTIVE\nChoosing an appropriate method for regression analyses of cost data is problematic because it must focus on population means while taking into account the typically skewed distribution of the data. In this paper we illustrate the use of generalised linear models for regression analysis of cost data.\n\n\nMETHODS\nWe consider generalised linear models with either an identity link function (providing additive covariate effects) or log link function (providing multiplicative effects), and with gaussian (normal), overdispersed poisson, gamma, or inverse gaussian distributions. These are applied to estimate the treatment effects in two randomised trials adjusted for baseline covariates. Criteria for choosing an appropriate model are presented.\n\n\nRESULTS\nIn both examples considered, the gaussian model fits poorly and other distributions are to be preferred. When there are variables of prognostic importance in the model, using different distributions can materially affect the estimates obtained; it may also be possible to discriminate between additive and multiplicative covariate effects.\n\n\nCONCLUSIONS\nGeneralised linear models are attractive for the regression of cost data because they provide parametric methods of analysis where a variety of non-normal distributions can be specified and the way covariates act can be altered. Unlike the use of data transformation in ordinary least-squares regression, generalised linear models make inferences about the mean cost directly.",
"title": ""
},
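A small worked example of the approach above: fitting a Gamma GLM with a log link to synthetic, right-skewed cost data, so covariate effects are multiplicative on the mean cost. The data frame is simulated, and note that the statsmodels link-class name differs across versions (older releases spell it links.log()).

```python
# Gamma GLM with a log link for skewed cost data (synthetic example).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "baseline": rng.normal(50, 10, n),
})
mu = np.exp(4.0 + 0.3 * df["treat"] + 0.01 * df["baseline"])
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)      # right-skewed costs

# Older statsmodels versions use sm.families.links.log() instead of Log().
model = smf.glm("cost ~ treat + baseline", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log()))
res = model.fit()
print(res.summary())
print("multiplicative treatment effect:", np.exp(res.params["treat"]))
```

Swapping the family (e.g., Gaussian or Poisson with overdispersion) or the link (identity for additive effects) reproduces the other model choices compared in the abstract.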
{
"docid": "324e67e78d8786448106b25871c91ed6",
"text": "Interpretation of image contents is one of the objectives in computer vision specifically in image processing. In this era it has received much awareness of researchers. In image interpretation the partition of the image into object and background is a severe step. Segmentation separates an image into its component regions or objects. Image segmentation t needs to segment the object from the background to read the image properly and identify the content of the image carefully. In this context, edge detection is a fundamental tool for image segmentation. In this paper an attempt is made to study the performance of most commonly used edge detection techniques for image segmentation and also the comparison of these techniques is carried out with an experiment by using MATLAB software.",
"title": ""
},
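As a concrete instance of the gradient-based operators such studies compare, here is a dependency-light Sobel edge detector; the paper's own experiments were run in MATLAB, so this NumPy version is only illustrative, and the threshold value is an assumption.

```python
# Simple Sobel edge detection on a 2-D grayscale image.
import numpy as np

def sobel_edges(img, threshold=0.2):
    """img: 2-D float array in [0, 1]; returns a binary edge map."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12
    return magnitude > threshold

# Toy image: a bright square on a dark background yields edges on its border.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
print(sobel_edges(img).sum(), "edge pixels found")
```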
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
},
{
"docid": "389a8e74f6573bd5e71b7c725ec3a4a7",
"text": "Paucity of large curated hand-labeled training data forms a major bottleneck in the deployment of machine learning models in computer vision and other fields. Recent work (Data Programming) has shown how distant supervision signals in the form of labeling functions can be used to obtain labels for given data in near-constant time. In this work, we present Adversarial Data Programming (ADP), which presents an adversarial methodology to generate data as well as a curated aggregated label, given a set of weak labeling functions. We validated our method on the MNIST, Fashion MNIST, CIFAR 10 and SVHN datasets, and it outperformed many state-of-the-art models. We conducted extensive experiments to study its usefulness, as well as showed how the proposed ADP framework can be used for transfer learning as well as multi-task learning, where data from two domains are generated simultaneously using the framework along with the label information. Our future work will involve understanding the theoretical implications of this new framework from a game-theoretic perspective, as well as explore the performance of the method on more complex datasets.",
"title": ""
},
{
"docid": "ae85cf24c079ff446b76f0ba81146369",
"text": "Subgraph Isomorphism is a fundamental problem in graph data processing. Most existing subgraph isomorphism algorithms are based on a backtracking framework which computes the solutions by incrementally matching all query vertices to candidate data vertices. However, we observe that extensive duplicate computation exists in these algorithms, and such duplicate computation can be avoided by exploiting relationships between data vertices. Motivated by this, we propose a novel approach, BoostIso, to reduce duplicate computation. Our extensive experiments with real datasets show that, after integrating our approach, most existing subgraph isomorphism algorithms can be speeded up significantly, especially for some graphs with intensive vertex relationships, where the improvement can be up to several orders of magnitude.",
"title": ""
}
] |
scidocsrr
|
e8872a10a902f508cb71148612dc6224
|
Bucket Elimination: A Unifying Framework for Reasoning
|
[
{
"docid": "34b3c5ee3ea466c23f5c7662f5ce5b33",
"text": "A hstruct -The concept of a super value node is developed to estend the theor? of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessa? to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by reprewnting value function separability in the structure of the graph of the influence diagram. formulation is simplified and operations on the model can take advantage of the wparability. Froni the decision analysis perspective. this allows simple exploitation of separabilih in the value function of a decision problem which can significantly reduce memory and computation requirements. Importantly. this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunih for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They a h allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.",
"title": ""
}
] |
[
{
"docid": "8923cd83f3283ef27fca8dd0ecf2a08f",
"text": "This paper investigates when users create profiles in different social networks, whether they are redundant expressions of the same persona, or they are adapted to each platform. Using the personal webpages of 116,998 users on About.me, we identify and extract matched user profiles on several major social networks including Facebook, Twitter, LinkedIn, and Instagram. We find evidence for distinct site-specific norms, such as differences in the language used in the text of the profile self-description, and the kind of picture used as profile image. By learning a model that robustly identifies the platform given a user’s profile image (0.657–0.829 AUC) or self-description (0.608–0.847 AUC), we confirm that users do adapt their behaviour to individual platforms in an identifiable and learnable manner. However, different genders and age groups adapt their behaviour differently from each other, and these differences are, in general, consistent across different platforms. We show that differences in social profile construction correspond to differences in how formal or informal",
"title": ""
},
{
"docid": "79c2623b0e1b51a216fffbc6bbecd9ec",
"text": "Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.",
"title": ""
},
{
"docid": "a6d3a8fcf10ee1fed6e3a933987db365",
"text": "This interdisciplinary conference explores exoticism, understood as a highly contested discourse on cultural difference as well as an alluring form of alterity that promotes a sense of cosmopolitan connectivity. Presentations and discussions will revolve around the question how the collapsed distances of globalisation and the transnational flows of media and people have transformed exoticism, which is no longer exclusively the projection of Orientalist fantasies of the Other from one centre, the West, but which emanates from multiple localities and is multidirectional in perspective.",
"title": ""
},
{
"docid": "dfbf284e97000e884281e4f25e7b615e",
"text": "Due to its popularity and open-source nature, Android is the mobile platform that has been targeted the most by malware that aim to steal personal information or to control the users' devices. More specifically, mobile botnets are malware that allow an attacker to remotely control the victims' devices through different channels like HTTP, thus creating malicious networks of bots. In this paper, we show how it is possible to effectively group mobile botnets families by analyzing the HTTP traffic they generate. To do so, we create malware clusters by looking at specific statistical information that are related to the HTTP traffic. This approach also allows us to extract signatures with which it is possible to precisely detect new malware that belong to the clustered families. Contrarily to x86 malware, we show that using fine-grained HTTP structural features do not increase detection performances. Finally, we point out how the HTTP information flow among mobile bots contains more information when compared to the one generated by desktop ones, allowing for a more precise detection of mobile threats.",
"title": ""
},
{
"docid": "610629d3891c10442fe5065e07d33736",
"text": "We investigate in this paper deep learning (DL) solutions for prediction of driver's cognitive states (drowsy or alert) using EEG data. We discussed the novel channel-wise convolutional neural network (CCNN) and CCNN-R which is a CCNN variation that uses Restricted Boltzmann Machine in order to replace the convolutional filter. We also consider bagging classifiers based on DL hidden units as an alternative to the conventional DL solutions. To test the performance of the proposed methods, a large EEG dataset from 3 studies of driver's fatigue that includes 70 sessions from 37 subjects is assembled. All proposed methods are tested on both raw EEG and Independent Component Analysis (ICA)-transformed data for cross-session predictions. The results show that CCNN and CCNN-R outperform deep neural networks (DNN) and convolutional neural networks (CNN) as well as other non-DL algorithms and DL with raw EEG inputs achieves better performance than ICA features.",
"title": ""
},
{
"docid": "2c8dc61a5dbdfcf8f086a5e6a0d920c1",
"text": "This work achieves a two-and-a-half-dimensional (2.5D) wafer-level radio frequency (RF) energy harvesting rectenna module with a compact size and high power conversion efficiency (PCE) that integrates a 2.45 GHz antenna in an integrated passive device (IPD) and a rectifier in a tsmcTM 0.18 μm CMOS process. The proposed rectifier provides a master-slave voltage doubling full-wave topology which can reach relatively high PCE by means of a relatively simple circuitry. The IPD antenna was stacked on top of the CMOS rectifier. The rectenna (including an antenna and rectifier) achieves an output voltage of 1.2 V and PCE of 47 % when the operation frequency is 2.45 GHz, with −12 dBm input power. The peak efficiency of the circuit is 83 % with −4 dBm input power. The die size of the RF harvesting module is less than 1 cm2. The performance of this module makes it possible to energy mobile device and it is also very suitable for wearable and implantable wireless sensor networks (WSN).",
"title": ""
},
{
"docid": "a82a658a8200285cf5a6eab8035a3fce",
"text": "This paper examines the magnitude of informational problems associated with the implementation and interpretation of simple monetary policy rules. Using Taylor’s rule as an example, I demonstrate that real-time policy recommendations differ considerably from those obtained with ex post revised data. Further, estimated policy reaction functions based on ex post revised data provide misleading descriptions of historical policy and obscure the behavior suggested by information available to the Federal Reserve in real time. These results indicate that reliance on the information actually available to policy makers in real time is essential for the analysis of monetary policy rules. (JEL E52, E58)",
"title": ""
},
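The point above can be made concrete with a few lines: the same Taylor-type rule yields different prescriptions depending on whether real-time or later-revised inflation and output-gap estimates are plugged in. The numbers are invented for illustration.

```python
# Taylor-type rule evaluated on real-time vs. revised data (illustrative values).
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Classic Taylor (1993) rule with 0.5 weights, in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Hypothetical quarter: real-time estimates vs. ex post revised values.
realtime = {"inflation": 3.0, "output_gap": -1.5}
revised  = {"inflation": 3.4, "output_gap":  0.5}

print("real-time recommendation:", taylor_rule(**realtime))    # 4.75
print("revised-data recommendation:", taylor_rule(**revised))  # 6.35
```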
{
"docid": "01b35a491b36f9c90f37237ef3975e33",
"text": "Wide bandgap semiconductors show superior material properties enabling potential power device operation at higher temperatures, voltages, and switching speeds than current Si technology. As a result, a new generation of power devices is being developed for power converter applications in which traditional Si power devices show limited operation. The use of these new power semiconductor devices will allow both an important improvement in the performance of existing power converters and the development of new power converters, accounting for an increase in the efficiency of the electric energy transformations and a more rational use of the electric energy. At present, SiC and GaN are the more promising semiconductor materials for these new power devices as a consequence of their outstanding properties, commercial availability of starting material, and maturity of their technological processes. This paper presents a review of recent progresses in the development of SiC- and GaN-based power semiconductor devices together with an overall view of the state of the art of this new device generation.",
"title": ""
},
{
"docid": "a055a3799dbf1f1cf1c389262a882d65",
"text": "This paper constitutes a first study of the Particle Swarm Optimization (PSO) method in Multiobjective Optimization (MO) problems. The ability of PSO to detect Pareto Optimal points and capture the shape of the Pareto Front is studied through experiments on well-known non-trivial test functions. The Weighted Aggregation technique with fixed or adaptive weights is considered. Furthermore, critical aspects of the VEGA approach for Multiobjective Optimization using Genetic Algorithms are adapted to the PSO framework in order to develop a multi-swarm PSO that can cope effectively with MO problems. Conclusions are derived and ideas for further research are proposed.",
"title": ""
},
{
"docid": "18c9eb47a76d2320f3d42bcf0129d5fe",
"text": "In his article Open Problems in the Philosophy of Information (Metaphilosophy 2004, 35 (4)), Luciano Floridi presented a Philosophy of Information research programme in the form of eighteen open problems, covering the following fundamental areas: information definition, information semantics, intelligence/cognition, informational universe/nature and values/ethics. We revisit Floridi’s programme, highlighting some of the major advances, commenting on unsolved problems and rendering the new landscape of the Philosophy of Information (PI) emerging at present. As we analyze the progress of PI we try to situate Floridi’s programme in the context of scientific and technological development that have been made last ten years. We emphasize that Philosophy of Information is a huge and vibrant research field, with its origins dating before Open Problems, and its domains extending even outside their scope. In this paper, we have been able only to sketch some of the developments during the past ten years. Our hope is that, even if fragmentary, this review may serve as a contribution to the effort of understanding the present state of the art and the paths of development of Philosophy of Information as seen through the lens of Open Problems.",
"title": ""
},
{
"docid": "009d79972bd748d7cf5206bb188aba00",
"text": "Quasi-Newton methods are widely used in practise for convex loss minimization problems. These methods exhibit good empirical performanc e o a wide variety of tasks and enjoy super-linear convergence to the optimal s olution. For largescale learning problems, stochastic Quasi-Newton methods ave been recently proposed. However, these typically only achieve sub-linea r convergence rates and have not been shown to consistently perform well in practice s nce noisy Hessian approximations can exacerbate the effect of high-variance stochastic gradient estimates. In this work we propose V ITE, a novel stochastic Quasi-Newton algorithm that uses an existing first-order technique to reduce this va r ance. Without exploiting the specific form of the approximate Hessian, we show that V ITE reaches the optimum at a geometric rate with a constant step-size when de aling with smooth strongly convex functions. Empirically, we demonstrate im provements over existing stochastic Quasi-Newton and variance reduced stochast i gradient methods.",
"title": ""
},
{
"docid": "ed3b4ace00c68e9ad2abe6d4dbdadfcb",
"text": "With decreasing costs of high-quality surveillance systems, human activity detection and tracking has become increasingly practical. Accordingly, automated systems have been designed for numerous detection tasks, but the task of detecting illegally parked vehicles has been left largely to the human operators of surveillance systems. We propose a methodology for detecting this event in real time by applying a novel image projection that reduces the dimensionality of the data and, thus, reduces the computational complexity of the segmentation and tracking processes. After event detection, we invert the transformation to recover the original appearance of the vehicle and to allow for further processing that may require 2-D data. We evaluate the performance of our algorithm using the i-LIDS vehicle detection challenge datasets as well as videos we have taken ourselves. These videos test the algorithm in a variety of outdoor conditions, including nighttime video and instances of sudden changes in weather.",
"title": ""
},
{
"docid": "d2d3c47010566662eeaa2df01c768d5f",
"text": "To be rational is to be able to reason. Thirty years ago psychologists believed that human reasoning depended on formal rules of inference akin to those of a logical calculus. This hypothesis ran into difficulties, which led to an alternative view: reasoning depends on envisaging the possibilities consistent with the starting point--a perception of the world, a set of assertions, a memory, or some mixture of them. We construct mental models of each distinct possibility and derive a conclusion from them. The theory predicts systematic errors in our reasoning, and the evidence corroborates this prediction. Yet, our ability to use counterexamples to refute invalid inferences provides a foundation for rationality. On this account, reasoning is a simulation of the world fleshed out with our knowledge, not a formal rearrangement of the logical skeletons of sentences.",
"title": ""
},
{
"docid": "2ce90f045706cf98f3a0d624828b99b8",
"text": "A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson’s trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.",
"title": ""
},
{
"docid": "b46498351a95cbb9ce21b34b58eb3d94",
"text": "Under normal circumstances, mammalian adult skeletal muscle is a stable tissue with very little turnover of nuclei. However, upon injury, skeletal muscle has the remarkable ability to initiate a rapid and extensive repair process preventing the loss of muscle mass. Skeletal muscle repair is a highly synchronized process involving the activation of various cellular responses. The initial phase of muscle repair is characterized by necrosis of the damaged tissue and activation of an inflammatory response. This phase is rapidly followed by activation of myogenic cells to proliferate, differentiate, and fuse leading to new myofiber formation and reconstitution of a functional contractile apparatus. Activation of adult muscle satellite cells is a key element in this process. Muscle satellite cell activation resembles embryonic myogenesis in several ways including the de novo induction of the myogenic regulatory factors. Signaling factors released during the regenerating process have been identified, but their functions remain to be fully defined. In addition, recent evidence supports the possible contribution of adult stem cells in the muscle regeneration process. In particular, bone marrow-derived and muscle-derived stem cells contribute to new myofiber formation and to the satellite cell pool after injury.",
"title": ""
},
{
"docid": "e21f4c327c0006196fde4cf53ed710a7",
"text": "To focus the efforts of security experts, the goals of this empirical study are to analyze which security vulnerabilities can be discovered by code review, identify characteristics of vulnerable code changes, and identify characteristics of developers likely to introduce vulnerabilities. Using a three-stage manual and automated process, we analyzed 267,046 code review requests from 10 open source projects and identified 413 Vulnerable Code Changes (VCC). Some key results include: (1) code review can identify common types of vulnerabilities; (2) while more experienced contributors authored the majority of the VCCs, the less experienced contributors' changes were 1.8 to 24 times more likely to be vulnerable; (3) the likelihood of a vulnerability increases with the number of lines changed, and (4) modified files are more likely to contain vulnerabilities than new files. Knowing which code changes are more prone to contain vulnerabilities may allow a security expert to concentrate on a smaller subset of submitted code changes. Moreover, we recommend that projects should: (a) create or adapt secure coding guidelines, (b) create a dedicated security review team, (c) ensure detailed comments during review to help knowledge dissemination, and (d) encourage developers to make small, incremental changes rather than large changes.",
"title": ""
},
{
"docid": "f77495366909b9713463bebf2b4ff2fc",
"text": "This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 % for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.",
"title": ""
},
{
"docid": "e7ba504d2d9a80c0a10bfa4830a1fc54",
"text": "BACKGROUND\nGlobal and regional prevalence estimates for blindness and vision impairment are important for the development of public health policies. We aimed to provide global estimates, trends, and projections of global blindness and vision impairment.\n\n\nMETHODS\nWe did a systematic review and meta-analysis of population-based datasets relevant to global vision impairment and blindness that were published between 1980 and 2015. We fitted hierarchical models to estimate the prevalence (by age, country, and sex), in 2015, of mild visual impairment (presenting visual acuity worse than 6/12 to 6/18 inclusive), moderate to severe visual impairment (presenting visual acuity worse than 6/18 to 3/60 inclusive), blindness (presenting visual acuity worse than 3/60), and functional presbyopia (defined as presenting near vision worse than N6 or N8 at 40 cm when best-corrected distance visual acuity was better than 6/12).\n\n\nFINDINGS\nGlobally, of the 7·33 billion people alive in 2015, an estimated 36·0 million (80% uncertainty interval [UI] 12·9-65·4) were blind (crude prevalence 0·48%; 80% UI 0·17-0·87; 56% female), 216·6 million (80% UI 98·5-359·1) people had moderate to severe visual impairment (2·95%, 80% UI 1·34-4·89; 55% female), and 188·5 million (80% UI 64·5-350·2) had mild visual impairment (2·57%, 80% UI 0·88-4·77; 54% female). Functional presbyopia affected an estimated 1094·7 million (80% UI 581·1-1686·5) people aged 35 years and older, with 666·7 million (80% UI 364·9-997·6) being aged 50 years or older. The estimated number of blind people increased by 17·6%, from 30·6 million (80% UI 9·9-57·3) in 1990 to 36·0 million (80% UI 12·9-65·4) in 2015. This change was attributable to three factors, namely an increase because of population growth (38·4%), population ageing after accounting for population growth (34·6%), and reduction in age-specific prevalence (-36·7%). The number of people with moderate and severe visual impairment also increased, from 159·9 million (80% UI 68·3-270·0) in 1990 to 216·6 million (80% UI 98·5-359·1) in 2015.\n\n\nINTERPRETATION\nThere is an ongoing reduction in the age-standardised prevalence of blindness and visual impairment, yet the growth and ageing of the world's population is causing a substantial increase in number of people affected. These observations, plus a very large contribution from uncorrected presbyopia, highlight the need to scale up vision impairment alleviation efforts at all levels.\n\n\nFUNDING\nBrien Holden Vision Institute.",
"title": ""
},
{
"docid": "18288c42186b7fec24a5884454e69989",
"text": "This article addresses the problem of multichannel audio source separation. We propose a framework where deep neural networks (DNNs) are used to model the source spectra and combined with the classical multichannel Gaussian model to exploit the spatial information. The parameters are estimated in an iterative expectation-maximization (EM) fashion and used to derive a multichannel Wiener filter. We present an extensive experimental study to show the impact of different design choices on the performance of the proposed technique. We consider different cost functions for the training of DNNs, namely the probabilistically motivated Itakura-Saito divergence, and also Kullback-Leibler, Cauchy, mean squared error, and phase-sensitive cost functions. We also study the number of EM iterations and the use of multiple DNNs, where each DNN aims to improve the spectra estimated by the preceding EM iteration. Finally, we present its application to a speech enhancement problem. The experimental results show the benefit of the proposed multichannel approach over a single-channel DNN-based approach and the conventional multichannel nonnegative matrix factorization-based iterative EM algorithm.",
"title": ""
},
{
"docid": "f438c1b133441cd46039922c8a7d5a7d",
"text": "This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, network architecture and the complexity of properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst case violation of the specification being verified. Our approach is anytime i.e. it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.",
"title": ""
}
] |
scidocsrr
|
486882875a939d90011d3c367eed9e06
|
An Exhaustive DPLL Algorithm for Model Counting
|
[
{
"docid": "3a0ce7b1e1b1e599954a467cd780ec4f",
"text": "Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks such as computing the marginals given evidence and learning from (partial) interpretations have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on a conversion of the program and the queries and evidence to a weighted Boolean formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning from interpretations setting. The algorithm employs Expectation Maximization, and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state-of-the-art in probabilistic logic programming and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.",
"title": ""
}
] |
[
{
"docid": "3e5fd66795e92999aacf6e39cc668aed",
"text": "A couple of popular methods are presented with their benefits and drawbacks. Commonly used methods are using wrapped phase and impulse response. With real time FFT analysis, magnitude and time domain can be analyzed simultaneously. Filtered impulse response and Cepstrum analysis are helpful tools when the spectral content differs and make it hard to analyse the impulse response. To make a successful time alignment the measurements must be anechoic. Methods such as multiple time windowing and averaging in frequency domain are presented. Group-delay and wavelets analysis are used to evaluate the measurements.",
"title": ""
},
{
"docid": "c29f8aeed7f7ccfe3687d300da310c25",
"text": "Global investment in ICT to improve teaching and learning in schools have been initiated by many governments. Despite all these investments on ICT infrastructure, equipments and professional development to improve education in many countries, ICT adoption and integration in teaching and learning have been limited. This article reviews personal, institutional and technological factors that encourage teachers’ use of computer technology in teaching and learning processes. Also teacher-level, school-level and system-level factors that prevent teachers from ICT use are reviewed. These barriers include lack of teacher ICT skills; lack of teacher confidence; lack of pedagogical teacher training; l lack of suitable educational software; limited access to ICT; rigid structure of traditional education systems; restrictive curricula, etc. The article concluded that knowing the extent to which these barriers affect individuals and institutions may help in taking a decision on how to tackle them.",
"title": ""
},
{
"docid": "8d6a33661e281516433df5caa1f35c3a",
"text": "The main contribution of this work is the comparison of three user modeling strategies based on job titles, educational fields and skills in LinkedIn profiles, for personalized MOOC recommendations in a cold start situation. Results show that the skill-based user modeling strategy performs best, followed by the job- and edu-based strategies.",
"title": ""
},
{
"docid": "98c4f94eb35489d452cbd16c817e2bec",
"text": "Many defect prediction techniques are proposed to improve software reliability. Change classification predicts defects at the change level, where a change is the modifications to one file in a commit. In this paper, we conduct the first study of applying change classification in practice.\n We identify two issues in the prediction process, both of which contribute to the low prediction performance. First, the data are imbalanced---there are much fewer buggy changes than clean changes. Second, the commonly used cross-validation approach is inappropriate for evaluating the performance of change classification. To address these challenges, we apply and adapt online change classification, resampling, and updatable classification techniques to improve the classification performance.\n We perform the improved change classification techniques on one proprietary and six open source projects. Our results show that these techniques improve the precision of change classification by 12.2-89.5% or 6.4--34.8 percentage points (pp.) on the seven projects. In addition, we integrate change classification in the development process of the proprietary project. We have learned the following lessons: 1) new solutions are needed to convince developers to use and believe prediction results, and prediction results need to be actionable, 2) new and improved classification algorithms are needed to explain the prediction results, and insensible and unactionable explanations need to be filtered or refined, and 3) new techniques are needed to improve the relatively low precision.",
"title": ""
},
{
"docid": "a0306096725c0d4b6bdd648bfa396f13",
"text": "Graph coloring—also known as vertex coloring—considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of colors used. In this paper we deal with the problem of finding valid graphs colorings in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located near each other desynchronize their calls. This is because female frogs are only able to correctly localize male frogs when their calls are not too close in time. The proposed algorithm makes use of this desynchronization behavior for the assignment of different colors to neighboring nodes. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature.",
"title": ""
},
{
"docid": "f296b374b635de4f4c6fc9c6f415bf3e",
"text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.",
"title": ""
},
{
"docid": "1af3be5ed92448095c8a82738e003855",
"text": "OBJECTIVE\nThe aim of this review is to identify, critically evaluate, and summarize the laughter literature across a number of fields related to medicine and health care to assess to what extent laughter health-related benefits are currently supported by empirical evidence.\n\n\nDATA SOURCES AND STUDY SELECTION\nA comprehensive laughter literature search was performed. A thorough search of the gray literature was also undertaken. A list of inclusion and exclusion criteria was identified.\n\n\nDATA EXTRACTION\nIt was necessary to distinguish between humor and laughter to assess health-related outcomes elicited by laughter only.\n\n\nDATA SYNTHESIS\nThematic analysis was applied to summarize laughter health-related outcomes, relationships, and general robustness.\n\n\nCONCLUSIONS\nLaughter has shown physiological, psychological, social, spiritual, and quality-of-life benefits. Adverse effects are very limited, and laughter is practically lacking in contraindications. Therapeutic efficacy of laughter is mainly derived from spontaneous laughter (triggered by external stimuli or positive emotions) and self-induced laughter (triggered by oneself at will), both occurring with or without humor. The brain is not able to distinguish between these types; therefore, it is assumed that similar benefits may be achieved with one or the other. Although there is not enough data to demonstrate that laughter is an all-around healing agent, this review concludes that there exists sufficient evidence to suggest that laughter has some positive, quantifiable effects on certain aspects of health. In this era of evidence-based medicine, it would be appropriate for laughter to be used as a complementary/alternative medicine in the prevention and treatment of illnesses, although further well-designed research is warranted.",
"title": ""
},
{
"docid": "e35f6f4e7b6589e992ceeccb4d25c9f1",
"text": "One of the key success factors of lending organizations in general and banks in particular is the assessment of borrower credit worthiness in advance during the credit evaluation process. Credit scoring models have been applied by many researchers to improve the process of assessing credit worthiness by differentiating between prospective loans on the basis of the likelihood of repayment. Thus, credit scoring is a very typical Data Mining (DM) classification problem. Many traditional statistical and modern computational intelligence techniques have been presented in the literature to tackle this problem. The main objective of this paper is to describe an experiment of building suitable Credit Scoring Models (CSMs) for the Sudanese banks. Two commonly discussed data mining classification techniques are chosen in this paper namely: Decision Tree (DT) and Artificial Neural Networks (ANN). In addition Genetic Algorithms (GA) and Principal Component Analysis (PCA) are also applied as feature selection techniques. In addition to a Sudanese credit dataset, German credit dataset is also used to evaluate these techniques. The results reveal that ANN models outperform DT models in most cases. Using GA as a feature selection is more effective than PCA technique. The highest accuracy of German data set (80.67%) and Sudanese credit scoring models (69.74%) are achieved by a hybrid GA-ANN model. Although DT and its hybrid models (PCA-DT, GA-DT) are outperformed by ANN and its hybrid models (PCA-ANN, GA-ANN) in most cases, they produced interpretable loan granting decisions.",
"title": ""
},
{
"docid": "b01c62a4593254df75c1e390487982fa",
"text": "This paper addresses the question \"why and how is it that we say the same thing differently to different people, or even to the same person in different circumstances?\" We vary the content and form of our text in order to convey more information than is contained in the literal meanings of our words. This information expresses the speaker's interpersonal goals toward the hearer and, in general, his or her perception of the pragmatic aspects of the conversation. This paper discusses two insights that arise when one studies this question: the existence of a level of organization that mediates between communicative goals and generator decisions, and the interleaved planningrealization regime and associated monitoring required for generation. To illustrate these ideas, a computer program is described which contains plans and strategies to produce stylistically appropriate texts from a single representation under various settings that model pragmatic circumstances.",
"title": ""
},
{
"docid": "a1fffeaf5f28fe5795ba207ae926d32b",
"text": "This paper presents mathematical models, design and experimental validation, and calibration of a model-based diagnostic algorithm for an electric-power generation and storage automotive system, including a battery and an alternator with a rectifier and a voltage regulator. Mathematical models of these subsystems are derived, based on the physics of processes involved as characterized by time-varying nonlinear ordinary differential equations. The diagnostic problem focuses on detection and isolation of a specific set of alternator faults, including belt slipping, rectifier fault, and voltage regulator fault. The proposed diagnostic approach is based on the generation of residuals obtained using system models and comparing predicted and measured value of selected variables, including alternator output current, field voltage, and battery voltage. An equivalent input-output alternator model, which is used in the diagnostic scheme, is also formulated and parameterized. The test bench used for calibration of thresholds of the diagnostic algorithm and overall validation process are discussed. The effectiveness of the fault diagnosis algorithm and threshold selection is experimentally demonstrated.",
"title": ""
},
{
"docid": "9f9910c9b51c6da269dd2eb0279bb6a1",
"text": "The distribution between sediments and water plays a key role in the food-chain transfer of hydrophobic organic chemicals. Current models and assessment methods of sediment-water distribution predominantly rely on chemical equilibrium partitioning despite several observations reporting an \"enrichment\" of chemical concentrations in suspended sediments. In this study we propose and derive a fugacity based model of chemical magnification due to organic carbon decomposition throughout the process of sediment diagenesis. We compare the behavior of the model to observations of bottom sediment-water, suspended sediments-water, and plankton-water distribution coefficients of a range of hydrophobic organic chemicals in five Great Lakes. We observe that (i) sediment-water distribution coefficients of organic chemicals between bottom sediments and water and between suspended sediments and water are considerably greaterthan expected from chemical partitioning and that the degree sediment-water disequilibrium appears to follow a relationship with the depth of the lake; (ii) concentrations increase from plankton to suspended sediments to bottom sediments and follow an inverse ratherthan a proportional relationship with the organic carbon content and (iii) the degree of disequilibrium between bottom sediment and water, suspended sediments and water, and plankton and water increases when the octanol-water partition coefficient K(ow) drops. We demonstrate that these observations can be explained by a proposed organic carbon mineralization model. Our findings imply that sediment-water distribution is not solely a chemical partitioning process but is to a large degree controlled by lake specific organic carbon mineralization processes.",
"title": ""
},
{
"docid": "82edffdadaee9ac0a5b11eb686e109a1",
"text": "This paper highlights different security threats and vulnerabilities that is being challenged in smart-grid utilizing Distributed Network Protocol (DNP3) as a real time communication protocol. Experimentally, we will demonstrate two scenarios of attacks, unsolicited message attack and data set injection. The experiments were run on a computer virtual environment and then simulated in DETER testbed platform. The use of intrusion detection system will be necessary to identify attackers targeting different part of the smart grid infrastructure. Therefore, mitigation techniques will be used to ensure a healthy check of the network and we will propose the use of host-based intrusion detection agent at each Intelligent Electronic Device (IED) for the purpose of detecting the intrusion and mitigating it. Performing attacks, attack detection, prevention and counter measures will be our primary goal to achieve in this research paper.",
"title": ""
},
{
"docid": "a9e26514ffc78c1018e00c63296b9584",
"text": "When labeled examples are limited and difficult to obtain, transfer learning employs knowledge from a source domain to improve learning accuracy in the target domain. However, the assumption made by existing approaches, that the marginal and conditional probabilities are directly related between source and target domains, has limited applicability in either the original space or its linear transformations. To solve this problem, we propose an adaptive kernel approach that maps the marginal distribution of target-domain and source-domain data into a common kernel space, and utilize a sample selection strategy to draw conditional probabilities between the two domains closer. We formally show that under the kernel-mapping space, the difference in distributions between the two domains is bounded; and the prediction error of the proposed approach can also be bounded. Experimental results demonstrate that the proposed method outperforms both traditional inductive classifiers and the state-of-the-art boosting-based transfer algorithms on most domains, including text categorization and web page ratings. In particular, it can achieve around 10% higher accuracy than other approaches for the text categorization problem. The source code and datasets are available from the authors.",
"title": ""
},
{
"docid": "ed3ce0f0ae0a89fad2242bd2c61217ba",
"text": "We present MegaMIMO, a joint multi-user beamforming system that enables independent access points (APs) to beamform their signals, and communicate with their clients on the same channel as if they were one large MIMO transmitter. The key enabling technology behind MegaMIMO is a new low-overhead technique for synchronizing the phase of multiple transmitters in a distributed manner. The design allows a wireless LAN to scale its throughput by continually adding more APs on the same channel. MegaMIMO is implemented and tested with both software radio clients and off-the-shelf 802.11n cards, and evaluated in a dense congested deployment resembling a conference room. Results from a 10-AP software-radio testbed show a linear increase in network throughput with a median gain of 8.1 to 9.4×. Our results also demonstrate that MegaMIMO’s joint multi-user beamforming can provide throughput gains with unmodified 802.11n cards.",
"title": ""
},
{
"docid": "a2aa3c023f2cf2363bac0b97b3e1e65c",
"text": "Digital data collected for forensics analysis often contain valuable information about the suspects’ social networks. However, most collected records are in the form of unstructured textual data, such as e-mails, chat messages, and text documents. An investigator often has to manually extract the useful information from the text and then enter the important pieces into a structured database for further investigation by using various criminal network analysis tools. Obviously, this information extraction process is tedious and errorprone. Moreover, the quality of the analysis varies by the experience and expertise of the investigator. In this paper, we propose a systematic method to discover criminal networks from a collection of text documents obtained from a suspect’s machine, extract useful information for investigation, and then visualize the suspect’s criminal network. Furthermore, we present a hypothesis generation approach to identify potential indirect relationships among the members in the identified networks. We evaluated the effectiveness and performance of the method on a real-life cybercrimine case and some other datasets. The proposed method, together with the implemented software tool, has received positive feedback from the digital forensics team of a law enforcement unit in Canada. a 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "81459e136452983861ac154f2013dc70",
"text": "Semantic segmentation has been widely investigated for its important role in computer vision. However, some challenges still exist. The first challenge is how to perceive semantic regions with various attributes, which can result in unbalanced distribution of training samples. Another challenge is accurate semantic boundary determination. In this paper, a contour-aware network for semantic segmentation via adaptive depth is proposed which particularly exploits the power of adaptive-depth neural network and contouraware neural network on pixel-level semantic segmentation. Specifically, an adaptive-depth model, which can adaptively determine the feedback and forward procedure of neural network, is constructed. Moreover, a contour-aware neural network is respectively built to enhance the coherence and the localization accuracy of semantic regions. By formulating the contour information and coarse semantic segmentation results in a unified manner, global inference is proposed to obtain the final segmentation results. Three contributions are claimed: (1) semantic segmentation via adaptive depth neural network; (2) contouraware neural network for semantic segmentation; and (3) global inference for final decision. Experiments on three popular datasets are conducted and experimental results have verified the superiority of the proposed method compared with the state-of-the-art methods. © 2018 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "528812aa635d6b9f0b65cc784fb256e1",
"text": "Pointing tasks are commonly studied in HCI research, for example to evaluate and compare different interaction techniques or devices. A recent line of work has modelled user-specific touch behaviour with machine learning methods to reveal spatial targeting error patterns across the screen. These models can also be applied to improve accuracy of touchscreens and keyboards, and to recognise users and hand postures. However, no implementation of these techniques has been made publicly available yet, hindering broader use in research and practical deployments. Therefore, this paper presents a toolkit which implements such touch models for data analysis (Python), mobile applications (Java/Android), and the web (JavaScript). We demonstrate several applications, including hand posture recognition, on touch targeting data collected in a study with 24 participants. We consider different target types and hand postures, changing behaviour over time, and the influence of hand sizes.",
"title": ""
},
{
"docid": "e63a5af56d8b20c9e3eac658940413ce",
"text": "OBJECTIVE\nThis study examined the effects of various backpack loads on elementary schoolchildren's posture and postural compensations as demonstrated by a change in forward head position.\n\n\nSUBJECTS\nA convenience sample of 11 schoolchildren, aged 8-11 years participated.\n\n\nMETHODS\nSagittal digital photographs were taken of each subject standing without a backpack, and then with the loaded backpack before and after walking 6 minutes (6MWT) at free walking speed. This was repeated over three consecutive weeks using backpacks containing randomly assigned weights of 10%, 15%, or 20% body weight of each respective subject. The craniovertebral angle (CVA) was measured using digitizing software, recorded and analyzed.\n\n\nRESULTS\nSubjects demonstrated immediate and statistically significant changes in CVA, indicating increased forward head positions upon donning the backpacks containing 15% and 20% body weight. Following the 6MWT, the CVA demonstrated further statistically significant changes for all backpack loads indicating increased forward head postures. For the 15 & 20%BW conditions, more than 50% of the subjects reported discomfort after walking, with the neck as the primary location of reported pain.\n\n\nCONCLUSIONS\nBackpack loads carried by schoolchildren should be limited to 10% body weight due to increased forward head positions and subjective complaints at 15% and 20% body weight loads.",
"title": ""
},
{
"docid": "bb6314a8e6ec728d09aa37bfffe5c835",
"text": "In recent years, Convolutional Neural Network (CNN) has been extensively applied in the field of computer vision, which has also made remarkable achievements. However, the CNN models are computation-intensive and memory-consuming, which hinders the deployment of CNN-based methods on resource-limited embedded platforms. Therefore, this paper gives insight into low numerical precision Convolutional Neural Networks. At first, an image classification CNN model is quantized into 8-bit dynamic fixed-point with no more than 1% accuracy drop and then the method of conducting inference on low-cost ARM processor has been proposed. Experimental results verified the effectiveness of this method. Besides, our proof-of-concept prototype implementation can obtain a frame rate of 4.54fps when running on single Cortex-A72 core under 1.8GHz working frequency and 6.48 watts of gross power consumption.",
"title": ""
}
] |
scidocsrr
|
ad039ffae4d42ba98915c60f27c3ed0c
|
Adaptive Stochastic Gradient Descent Optimisation for Image Registration
|
[
{
"docid": "607797e37b056dab866d175767343353",
"text": "We propose a new method for the intermodal registration of images using a criterion known as mutual information. Our main contribution is an optimizer that we specifically designed for this criterion. We show that this new optimizer is well adapted to a multiresolution approach because it typically converges in fewer criterion evaluations than other optimizers. We have built a multiresolution image pyramid, along with an interpolation process, an optimizer, and the criterion itself, around the unifying concept of spline-processing. This ensures coherence in the way we model data and yields good performance. We have tested our approach in a variety of experimental conditions and report excellent results. We claim an accuracy of about a hundredth of a pixel under ideal conditions. We are also robust since the accuracy is still about a tenth of a pixel under very noisy conditions. In addition, a blind evaluation of our results compares very favorably to the work of several other researchers.",
"title": ""
},
{
"docid": "990c8e69811a8ebafd6e8c797b36349d",
"text": "Segmentation of pulmonary X-ray computed tomography (CT) images is a precursor to most pulmonary image analysis applications. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3-D) pulmonary X-ray CT images. The method has three main steps. First, the lung region is extracted from the CT images by gray-level thresholding. Then, the left and right lungs are separated by identifying the anterior and posterior junctions by dynamic programming. Finally, a sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with these obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region. The method has been tested by processing 3-D CT data sets from eight normal subjects, each imaged three times at biweekly intervals with lungs at 90% vital capacity. The authors present results by comparing their automatic method to manually traced borders from two image analysts. Averaged over all volumes, the root mean square difference between the computer and human analysis is 0.8 pixels (0.54 mm). The mean intrasubject change in tissue content over the three scans was 2.75%/spl plusmn/2.29% (mean/spl plusmn/standard deviation).",
"title": ""
}
] |
[
{
"docid": "f333bc03686cf85aee0a65d4a81e8b34",
"text": "A large portion of data mining and analytic services use modern machine learning techniques, such as deep learning. The state-of-the-art results by deep learning come at the price of an intensive use of computing resources. The leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end servers in datacenters. On the other end, there is a proliferation of personal devices with possibly free CPU cycles; this can enable services to run in users' homes, embedding machine learning operations. In this paper, we ask the following question: Is distributed deep learning computation on WAN connected devices feasible, in spite of the traffic caused by learning tasks? We show that such a setup rises some important challenges, most notably the ingress traffic that the servers hosting the up-to-date model have to sustain. In order to reduce this stress, we propose AdaComp, a novel algorithm for compressing worker updates to the model on the server. Applicable to stochastic gradient descent based approaches, it combines efficient gradient selection and learning rate modulation. We then experiment and measure the impact of compression, device heterogeneity and reliability on the accuracy of learned models, with an emulator platform that embeds TensorFlow into Linux containers. We report a reduction of the total amount of data sent by workers to the server by two order of magnitude (e.g., 191-fold reduction for a convolutional network on the MNIST dataset), when compared to a standard asynchronous stochastic gradient descent, while preserving model accuracy.",
"title": ""
},
{
"docid": "26282a6d69b021755e5b02f8798bdcb9",
"text": "Recently, extensive research efforts have been dedicated to view-based methods for 3-D object retrieval due to the highly discriminative property of multiviews for 3-D object representation. However, most of state-of-the-art approaches highly depend on their own camera array settings for capturing views of 3-D objects. In order to move toward a general framework for 3-D object retrieval without the limitation of camera array restriction, a camera constraint-free view-based (CCFV) 3-D object retrieval algorithm is proposed in this paper. In this framework, each object is represented by a free set of views, which means that these views can be captured from any direction without camera constraint. For each query object, we first cluster all query views to generate the view clusters, which are then used to build the query models. For a more accurate 3-D object comparison, a positive matching model and a negative matching model are individually trained using positive and negative matched samples, respectively. The CCFV model is generated on the basis of the query Gaussian models by combining the positive matching model and the negative matching model. The CCFV removes the constraint of static camera array settings for view capturing and can be applied to any view-based 3-D object database. We conduct experiments on the National Taiwan University 3-D model database and the ETH 3-D object database. Experimental results show that the proposed scheme can achieve better performance than state-of-the-art methods.",
"title": ""
},
{
"docid": "6db6dccccbdcf77068ae4270a1d6b408",
"text": "In many engineering disciplines, abstract models are used to describe systems on a high level of abstraction. On this abstract level, it is often easier to gain insights about that system that is being described. When models of a system change – for example because the system itself has changed – any analyses based on these models have to be invalidated and thus have to be reevaluated again in order for the results to stay meaningful. In many cases, the time to get updated analysis results is critical. However, as most often only small parts of the model change, large parts of this reevaluation could be saved by using previous results but such an incremental execution is barely done in practice as it is non-trivial and error-prone. The approach of implicit incrementalization o ers a solution by deriving an incremental evaluation strategy implicitly from a batch speci cation of the analysis. This works by deducing a dynamic dependency graph that allows to only reevaluate those parts of an analysis that are a ected by a given model change. Thus advantages of an incremental execution can be gained without changes to the code that would potentially degrade its understandability. However, current approaches to implicit incremental computation only support narrow classes of analysis, are restricted to an incremental derivation at instruction level or require an explicit state management. In addition, changes are only propagated sequentially, meanwhile modern multi-core architectures would allow parallel change propagation. Even with such improvements, it is unclear whether incremental execution in fact brings advantages as changes may easily cause butter y e ects, making a reuse of previous analysis results pointless (i.e. ine cient). This thesis deals with the problems of implicit incremental model analyses by proposing multiple approaches that mostly can be combined. Further, the",
"title": ""
},
{
"docid": "8250046c31b18d9c4996e8f285949e1f",
"text": "This article models the detection and prediction of managerial fraud in the financial statements of Tunisian banks. The methodology used consist of examining a battery of financial ratios used by the Federal Deposit Insurance Corporation (FDIC) as indicators of the financial situation of a bank. We test the predictive power of these ratios using logistic regression. The results show that we can detect managerial fraud in the financial statements of Tunisian banks using performance ratios three years before its occurrence with a classification rate of 71.1%. JEL: M41, M42, C23, C25, G21",
"title": ""
},
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "aad742025eba642c23533d34337a6255",
"text": "Obtaining good probability estimates is imperative for many applications. The increased uncertainty and typically asymmetric costs surrounding rare events increase this need. Experts (and classification systems) often rely on probabilities to inform decisions. However, we demonstrate that class probability estimates obtained via supervised learning in imbalanced scenarios systematically underestimate the probabilities for minority class instances, despite ostensibly good overall calibration. To our knowledge, this problem has not previously been explored. We propose a new metric, the stratified Brier score, to capture class-specific calibration, analogous to the per-class metrics widely used to assess the discriminative performance of classifiers in imbalanced scenarios. We propose a simple, effective method to mitigate the bias of probability estimates for imbalanced data that bags estimators independently calibrated over balanced bootstrap samples. This approach drastically improves performance on the minority instances without greatly affecting overall calibration. We extend our previous work in this direction by providing ample additional empirical evidence for the utility of this strategy, using both support vector machines and boosted decision trees as base learners. Finally, we show that additional uncertainty can be exploited via a Bayesian approach by considering posterior distributions over bagged probability estimates.",
"title": ""
},
{
"docid": "1fadb803baf3593fef6628d841532a9b",
"text": "Three studies examined the impact of sexual-aggressive song lyrics on aggressive thoughts, emotions, and behavior toward the same and the opposite sex. In Study 1, the authors directly manipulated whether male or female participants listened to misogynous or neutral song lyrics and measured actual aggressive behavior. Male participants who were exposed to misogynous song lyrics administered more hot chili sauce to a female than to a male confederate. Study 2 shed some light on the underlying psychological processes: Male participants who heard misogynous song lyrics recalled more negative attributes of women and reported more feelings of vengeance than when they heard neutral song lyrics. In addition, men-hating song lyrics had a similar effect on aggression-related responses of female participants toward men. Finally, Study 3 replicated the findings of the previous two studies with an alternative measure of aggressive behavior as well as a more subtle measure of aggressive cognitions. The results are discussed in the framework of the General Aggression Model.",
"title": ""
},
{
"docid": "dc2f4cbd2c18e4f893750a0a1a40002b",
"text": "A microstrip half-grid array antenna (HGA) based on low temperature co-fired ceramic (LTCC) technology is presented in this paper. The antenna is designed for the 77-81 GHz radar frequency band and uses a high permittivity material (εr = 7.3). The traditional single-grid array antenna (SGA) uses two radiating elements in the H-plane. For applications using digital beam forming, the focusing of an SGA in the scanning plane (H-plane) limits the field of view (FoV) of the radar system and the width of the SGA enlarges the minimal spacing between the adjacent channels. To overcome this, an array antenna using only half of the grid as radiating element was designed. As feeding network, a laminated waveguide with a vertically arranged power divider was adopted. For comparison, both an SGA and an HGA were fabricated. The measured results show: using an HGA, an HPBW increment in the H-plane can be achieved and their beam patterns in the E-plane remain similar. This compact LTCC antenna is suitable for radar application with a large FoV requirement.",
"title": ""
},
{
"docid": "bef119e43fcc9f2f0b50fdf521026680",
"text": "Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet, the lack of a standardized evaluation platform tailored to the needs of AIA, has hindered effective evaluation of its methods, especially for region-based AIA. Therefore in this paper, we introduce the segmented and annotated IAPR TC-12 benchmark; an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution. 2009 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "07425e53be0f6314d52e3b4de4d1b601",
"text": "Delay discounting was investigated in opioid-dependent and non-drug-using control participants. The latter participants were matched to the former on age, gender, education, and IQ. Participants in both groups chose between hypothetical monetary rewards available either immediately or after a delay. Delayed rewards were $1,000, and the immediate-reward amount was adjusted until choices reflected indifference. This procedure was repeated at each of 7 delays (1 week to 25 years). Opioid-dependent participants were given a second series of choices between immediate and delayed heroin, using the same procedures (i.e., the amount of delayed heroin was that which could be purchased with $1,000). Opioid-dependent participants discounted delayed monetary rewards significantly more than did non-drug-using participants. Furthermore opioid-dependent participants discounted delayed heroin significantly more than delayed money.",
"title": ""
},
{
"docid": "c049f188b31bbc482e16d22a8061abfa",
"text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.",
"title": ""
},
{
"docid": "136a2f401b3af00f0f79b991ab65658f",
"text": "Usage of online social business networks like LinkedIn and XING have become commonplace in today’s workplace. This research addresses the question of what factors drive the intention to use online social business networks. Theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders 321 finished the questionnaire. Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. Core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter effecting both directly and indirectly over perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users. The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peerto-peer marketing techniques while it’s also strong indirect effect implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.",
"title": ""
},
{
"docid": "cd058902ed470efc022c328765a40b34",
"text": "Secure signal authentication is arguably one of the most challenging problems in the Internet of Things (IoT), due to the large-scale nature of the system and its susceptibility to man-in-the-middle and data-injection attacks. In this paper, a novel watermarking algorithm is proposed for dynamic authentication of IoT signals to detect cyber-attacks. The proposed watermarking algorithm, based on a deep learning long short-term memory structure, enables the IoT devices (IoTDs) to extract a set of stochastic features from their generated signal and dynamically watermark these features into the signal. This method enables the IoT gateway, which collects signals from the IoTDs, to effectively authenticate the reliability of the signals. Moreover, in massive IoT scenarios, since the gateway cannot authenticate all of the IoTDs simultaneously due to computational limitations, a game-theoretic framework is proposed to improve the gateway’s decision making process by predicting vulnerable IoTDs. The mixed-strategy Nash equilibrium (MSNE) for this game is derived, and the uniqueness of the expected utility at the equilibrium is proven. In the massive IoT system, due to the large set of available actions for the gateway, the MSNE is shown to be analytically challenging to derive, and thus, a learning algorithm that converges to the MSNE is proposed. Moreover, in order to handle incomplete information scenarios, in which the gateway cannot access the state of the unauthenticated IoTDs, a deep reinforcement learning algorithm is proposed to dynamically predict the state of unauthenticated IoTDs and allow the gateway to decide on which IoTDs to authenticate. Simulation results show that with an attack detection delay of under 1 s, the messages can be transmitted from IoTDs with an almost 100% reliability. The results also show that by optimally predicting the set of vulnerable IoTDs, the proposed deep reinforcement learning algorithm reduces the number of compromised IoTDs by up to 30%, compared to an equal probability baseline.",
"title": ""
},
{
"docid": "4f40700ccdc1b6a8a306389f1d7ea107",
"text": "Skin cancer exists in different forms like Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most dangerous and unpredictable. In this paper, we implement an image processing technique for the detection of Melanoma Skin Cancer using the software MATLAB which is easy for implementation as well as detection of Melanoma skin cancer. The input to the system is the skin lesion image. This image proceeds with the image pre-processing methods such as conversion of RGB image to Grayscale image, noise removal and so on. Further Otsu thresholding is used to segment the images followed by feature extraction that includes parameters like Asymmetry, Border Irregularity, Color and Diameter (ABCD) and then Total Dermatoscopy Score (TDS) is calculated. The calculation of TDS determines the presence of Melanoma skin cancer by classifying it as benign, suspicious or highly suspicious skin lesion.",
"title": ""
},
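The pipeline in the abstract above (grayscale conversion, Otsu segmentation, ABCD features, TDS) can be outlined compactly with OpenCV. The sketch below assumes the commonly cited ABCD-rule weights (1.3, 0.1, 0.5, 0.5) and cut-offs (4.75 and 5.45), which may differ from the exact values used in the paper; the feature-extraction step that produces the A, B, C and D scores is not shown, and the file path and example scores are placeholders.

```python
import cv2

def segment_lesion(path):
    """Grayscale conversion, denoising and Otsu thresholding of a lesion image."""
    img = cv2.imread(path)                       # BGR image from disk (path is a placeholder)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)     # simple noise removal
    # Lesions are usually darker than skin, so invert to make the lesion foreground.
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

def tds_score(a, b, c, d):
    """Total Dermatoscopy Score with the commonly cited ABCD-rule weights (assumed)."""
    return 1.3 * a + 0.1 * b + 0.5 * c + 0.5 * d

def classify(tds):
    # Usual ABCD-rule cut-offs (assumed, not necessarily the paper's exact values).
    if tds < 4.75:
        return "benign"
    if tds <= 5.45:
        return "suspicious"
    return "highly suspicious"

# Example with hypothetical feature scores: asymmetry 2, border 5, colors 4, diameter 3.
score = tds_score(2, 5, 4, 3)
print(score, classify(score))    # 6.6 highly suspicious
```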
{
"docid": "7f479783ccab6c705bc1d76533f0b1c6",
"text": "The purpose of this research, computerized hotel management system with Satellite Motel Ilorin, Nigeria as the case study is to understand and make use of the computer to solve some of the problems which are usually encountered during manual operations of the hotel management. Finding an accommodation or a hotel after having reached a particular destination is quite time consuming as well as expensive. Here comes the importance of online hotel booking facility. Online hotel booking is one of the latest techniques in the arena of internet that allows travelers to book a hotel located anywhere in the world and that too according to your tastes and preferences. In other words, online hotel booking is one of the awesome facilities of the internet. Booking a hotel online is not only fast as well as convenient but also very cheap. Nowadays, many of the hotel providers have their sites on the web, which in turn allows the users to visit these sites and view the facilities and amenities offered by each of them. So, the proposed computerized of an online hotel management system is set to find a more convenient, well organized, faster, reliable and accurate means of processing the current manual system of the hotel for both near and far customer.",
"title": ""
},
{
"docid": "cdaa99f010b20906fee87d8de08e1106",
"text": "We propose a novel hierarchical clustering algorithm for data-sets in which only pairwise distances between the points are provided. The classical Hungarian method is an efficient algorithm for solving the problem of minimal-weight cycle cover. We utilize the Hungarian method as the basic building block of our clustering algorithm. The disjoint cycles, produced by the Hungarian method, are viewed as a partition of the data-set. The clustering algorithm is formed by hierarchical merging. The proposed algorithm can handle data that is arranged in non-convex sets. The number of the clusters is automatically found as part of the clustering process. We report an improved performance of our algorithm in a variety of examples and compare it to the spectral clustering algorithm.",
"title": ""
},
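The first stage described above, a minimal-weight cycle cover computed with an assignment solver whose disjoint cycles become an initial partition, can be sketched with SciPy's linear_sum_assignment. The hierarchical merging stage and any tie-breaking details of the published algorithm are omitted, so this is only a sketch of the building block, not the full method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cycle_cover_partition(dist):
    """Initial partition from a minimal-weight cycle cover of the distance matrix.

    Solving the assignment problem on the pairwise distances (with self-assignment
    made prohibitively expensive) yields a permutation whose disjoint cycles group
    the points; the paper then merges these groups hierarchically.
    """
    cost = np.asarray(dist, dtype=float).copy()
    np.fill_diagonal(cost, cost.sum() + 1.0)   # forbid assigning a point to itself
    _, succ = linear_sum_assignment(cost)      # succ[i] = successor of point i in its cycle

    labels = np.full(len(succ), -1, dtype=int)
    cluster = 0
    for start in range(len(succ)):
        if labels[start] >= 0:
            continue
        i = start
        while labels[i] < 0:                   # walk the cycle containing `start`
            labels[i] = cluster
            i = succ[i]
        cluster += 1
    return labels

# Toy example: two well-separated groups of points on a line.
pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
dist = np.abs(pts - pts.T)
print(cycle_cover_partition(dist))             # -> [0 0 0 1 1]
```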
{
"docid": "c7106bb2ec2c41979ebacdba7dd55217",
"text": "Till recently, the application of the detailed combustion chemistry approach as a predictive tool for engine modeling has been a sort of a ”taboo” motivated by different reasons, but, mainly, by an exaggerated rigor to the chemistry/turbulence interaction modeling. The situation has drastically changed only recently, when STAR-CD and Reaction Design declared in the Newsletter of Compuatational Dynamics (2000/1) the aim to combine multi-dimensional flow solver with the detailed chemistry analysis based on CHEMKIN and SURFACE CHEMKIN packages. Relying on their future developments, we present here the methodology based on the KIVA code. The basic novelty of the proposed methodology is the coupling of a generalized partially stirred reactor, PaSR, model with a high efficiency numerics based on a sparse matrix algebra technique to treat detailed oxidation kinetics of hydrocarbon fuels assuming that chemical processes proceed in two successive steps: the reaction act follows after the micro-mixing resolved on a sub-grid scale. In a completed form, the technique represents detailed chemistry extension of the classic EDCturbulent combustion model. The model application is illustrated by results of numerical simulation of spray combustion and emission formation in the Volvo D12C DI Diesel engine. The results of the 3-D engine modeling on a sector mesh are in reasonable agreement with video data obtained using an endoscopic technique. INTRODUCTION As pollutant emission regulations are becoming more stringent, it turns increasingly more difficult to reconcile emission requirements with the engine economy and thermal efficiency. Soot formation in DI Diesel engines is the key environmental problem whose solution will define the future of these engines: will they survive or are they doomed to disappear? To achieve the design goals, the understanding of the salient features of spray combustion and emission formation processes is required. Diesel spray combustion is nonstationary, three-dimensional, multi-phase process that proAddress all correspondence to this author. ceeds in a high-pressure and high-temperature environment. Recent attempts to develop a ”conceptual model” of diesel spray combustion, see Dec (1997), represent it as a relatively well organized process in which events take place in a logical sequence as the fuel evolves along the jet, undergoing the various stages: spray atomization, droplet ballistics and evaporation, reactant mixing, (macroand micromixing), and, finally, heat release and emissions formation. This opens new perspectives for the modeling based on realization of idealized patterns well confirmed by optical diagnostics data. The success of engine CFD simulations depends on submodels of the physical processes incorporated into the main solver. The KIVA-3v computer code developed by Amsden (1993, July 1997) has been selected for the reason that the code source is available, thus, representing an ideal platform for modification, validation and evaluation. For Diesel engine applications, the KIVA codes solve the conservation equations for evaporating fuel sprays coupled with the three-dimensional turbulent fluid dynamics of compressible, multicomponent, reactive gases in engine cylinders with arbitrary shaped piston geometries. 
The code treats in different ways ”fast” chemical reactions, which are assumed to be in equilibrium, and ”slow” reactions proceeding kinetically, albeit the general trimolecular processes with different third body efficiencies are not incorporated in the mechanism. The turbulent combustion is realized in the form of Magnussen-Hjertager approach not accounting for chemistry/turbulence interaction. This is why the chemical routines in the original code were replaced with our specialized sub-models. The code fuel library has been also updated using modern property data compiled in Daubert and Danner (1989-1994). The detailed mechanism integrating the n-heptane oxidation chemistry with the kinetics of aromatics (up to four aromatic rings) formation for rich acetylene flames developed by Wang and Frenklach (1997) consisting of 117 species and 602 reactions has been validated in conventional kinetic analysis, and a reduced mechanism (60 species, including soot forming agents and N2O and NOx species, 237 reactions) has been incorporated into the KIVA-3v code. This extends capabilities of the code to predict spray combustion of hydrocarbon fuels with particulate emission.",
"title": ""
},
{
"docid": "bde03a5d90507314ce5f034b9b764417",
"text": "Autonomous household robots are supposed to accomplish complex tasks like cleaning the dishes which involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way how they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.",
"title": ""
},
{
"docid": "85693811a951a191d573adfe434e9b18",
"text": "Diagnosing problems in data centers has always been a challenging problem due to their complexity and heterogeneity. Among recent proposals for addressing this challenge, one promising approach leverages provenance, which provides the fundamental functionality that is needed for performing fault diagnosis and debugging—a way to track direct and indirect causal relationships between system states and their changes. This information is valuable, since it permits system operators to tie observed symptoms of a faults to their potential root causes. However, capturing provenance in a data center is challenging because, at high data rates, it would impose a substantial cost. In this paper, we introduce techniques that can help with this: We show how to reduce the cost of maintaining provenance by leveraging structural similarities for compression, and by offloading expensive but highly parallel operations to hardware. We also discuss our progress towards transforming provenance into compact actionable diagnostic decisions to repair problems caused by misconfigurations and program bugs.",
"title": ""
},
{
"docid": "be5b0dd659434e77ce47034a51fd2767",
"text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks",
"title": ""
}
] |
scidocsrr
|
0c8107a1605a54c2e1f35f31ca34932a
|
Learning Disentangled Multimodal Representations for the Fashion Domain
|
[
{
"docid": "e77dc44a5b42d513bdbf4972d62a74f9",
"text": "Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.",
"title": ""
},
{
"docid": "88033862d9fac08702977f1232c91f3a",
"text": "Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.",
"title": ""
},
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
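The method summarized above reduces to two exponentially decaying accumulators, one for squared gradients and one for squared updates, whose ratio sets a per-dimension step size with no global learning rate. A minimal sketch of that update rule follows, using the typical values rho = 0.95 and eps = 1e-6; the badly scaled quadratic objective is only a toy illustration.

```python
import numpy as np

def adadelta(grad_fn, x0, rho=0.95, eps=1e-6, n_steps=5000):
    """ADADELTA: per-dimension steps from running averages of g^2 and dx^2."""
    x = np.asarray(x0, dtype=float).copy()
    eg2 = np.zeros_like(x)     # E[g^2], decaying average of squared gradients
    edx2 = np.zeros_like(x)    # E[dx^2], decaying average of squared updates
    for _ in range(n_steps):
        g = grad_fn(x)
        eg2 = rho * eg2 + (1 - rho) * g**2
        dx = -np.sqrt(edx2 + eps) / np.sqrt(eg2 + eps) * g   # update has parameter units
        edx2 = rho * edx2 + (1 - rho) * dx**2
        x += dx
    return x

# Toy objective f(x) = 0.5 * x^T A x with gradient A x.
A = np.diag([1.0, 100.0])                          # badly scaled dimensions
print(adadelta(lambda x: A @ x, x0=[3.0, -2.0]))   # both coordinates are driven toward 0
```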
{
"docid": "26884c49c5ada3fc80dbc2f2d1e5660b",
"text": "We introduce a complete pipeline for recognizing and classifying people’s clothing in natural scenes. This has several interesting applications, including e-commerce, event and activity recognition, online advertising, etc. The stages of the pipeline combine a number of state-of-the-art building blocks such as upper body detectors, various feature channels and visual attributes. The core of our method consists of a multi-class learner based on a Random Forest that uses strong discriminative learners as decision nodes. To make the pipeline as automatic as possible we also integrate automatically crawled training data from the web in the learning process. Typically, multi-class learning benefits from more labeled data. Because the crawled data may be noisy and contain images unrelated to our task, we extend Random Forests to be capable of transfer learning from different domains. For evaluation, we define 15 clothing classes and introduce a benchmark data set for the clothing classification task consisting of over 80, 000 images, which we make publicly available. We report experimental results, where our classifier outperforms an SVM baseline with 41.38 % vs 35.07 % average accuracy on challenging benchmark data.",
"title": ""
},
{
"docid": "527d7c091cfc63c8e9d36afdd6b7bdfe",
"text": "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.",
"title": ""
}
] |
[
{
"docid": "01ebd4b68fb94fc5defaff25c2d294b0",
"text": "High data rate E-band (71 GHz- 76 GHz, 81 GHz - 86 GHz, 92 GHz - 95 GHz) communication systems will benefit from power amplifiers that are more than twice as powerful than commercially available GaAs pHEMT MMICs. We report development of three stage GaN MMIC power amplifiers for E-band radio applications that produce 500 mW of saturated output power in CW mode and have > 12 dB of associated power gain. The output power density from 300 mum output gate width GaN MMICs is seven times higher than the power density of commercially available GaAs pHEMT MMICs in this frequency range.",
"title": ""
},
{
"docid": "c96fc4b6f28c1832c6e150dc62101f5e",
"text": "BACKGROUND AND OBJECTIVES\nNerve blocks and radiofrequency neurotomy of the nerves supplying the cervical zygapophyseal joints are validated tools for diagnosis and treatment of chronic neck pain, respectively. Unlike fluoroscopy, ultrasound may allow visualization of the target nerves, thereby potentially improving diagnostic accuracy and therapeutic efficacy of the procedures. The aims of this exploratory study were to determine the ultrasound visibility of the target nerves in chronic neck pain patients and to describe the variability of their course in relation to the fluoroscopically used bony landmarks.\n\n\nMETHODS\nFifty patients with chronic neck pain were studied. Sonographic visibility of the nerves and the bony target of fluoroscopically guided blocks were determined. The craniocaudal distance between the nerves and their corresponding fluoroscopic targets was measured.\n\n\nRESULTS\nSuccessful visualization of the nerves varied from 96% for the third occipital nerve to 84% for the medial branch of C6. The great exception was the medial branch of C7, which was visualized in 32%. The bony targets could be identified in all patients, with exception of C7, which was identified in 92%. The craniocaudal distance of each nerve to the corresponding bony target varied, the upper limit of the range being 2.2 mm at C4, the lower limit 1.0 mm at C7.\n\n\nCONCLUSIONS\nThe medial branches and their relation to the fluoroscopically used bony targets were mostly visualized by ultrasound, with the exception of the medial branch of C7 and, to a lesser extent, the bony target of C7. The nerve location may be distant from the fluoroscope's target. These findings justify further studies to investigate the validity of ultrasound guided blocks for invasive diagnosis/treatment of cervical zygapophyseal joint pain.",
"title": ""
},
{
"docid": "dcacbed90f45b76e9d40c427e16e89d6",
"text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.",
"title": ""
},
{
"docid": "95efc564448b3ec74842d047f94cb779",
"text": "Over the past 25 years or so there has been much interest in the use of digital pre-distortion (DPD) techniques for the linearization of RF and microwave power amplifiers. In this paper, we describe the important system and hardware requirements for the four main subsystems found in the DPD linearized transmitter: RF/analog, data converters, digital signal processing, and the DPD architecture and algorithms, and illustrate how the overall DPD system architecture is influenced by the design choices that may be made in each of these subsystems. We shall also consider the challenges presented to future applications of DPD systems for wireless communications, such as higher operating frequencies, wider signal bandwidths, greater spectral efficiency signals, resulting in higher peak-to-average power ratios, multiband and multimode operation, lower power consumption requirements, faster adaption, and how these affect the system design choices.",
"title": ""
},
{
"docid": "c3f25271d25590bf76b36fee4043d227",
"text": "Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, this does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme to be effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.",
"title": ""
},
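The single design parameter referred to above is the GRNN's kernel bandwidth sigma: a prediction is simply a Gaussian-weighted average of the training targets. The sketch below shows that core computation on a toy lagged time series; the fusion of multiple GRNNs and the design strategies used for the competition model are not reproduced, and the lag length and sigma value are arbitrary choices for illustration.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    # Squared Euclidean distances between every query point and every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma**2))                            # pattern-layer activations
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)    # summation/output layer

# One-step-ahead forecasting of a toy time series using lagged values as inputs.
rng = np.random.default_rng(0)
series = np.sin(np.arange(200) / 8.0) + 0.1 * rng.standard_normal(200)
lags = 4
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
pred = grnn_predict(X[:-20], y[:-20], X[-20:], sigma=0.5)
print(np.mean((pred - y[-20:]) ** 2))                             # small out-of-sample MSE
```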
{
"docid": "24e2efc78dc8ffd57f25744ac7532807",
"text": "In this paper, we address the problem of outdoor, appearance-based topological localization, particularly over long periods of time where seasonal changes alter the appearance of the environment. We investigate a straight-forward method that relies on local image features to compare single image pairs. We first look into which of the dominating image feature algorithms, SIFT or the more recent SURF, that is most suitable for this task. We then fine-tune our localization algorithm in terms of accuracy, and also introduce the epipolar constraint to further improve the result. The final localization algorithm is applied on multiple data sets, each consisting of a large number of panoramic images, which have been acquired over a period of nine months with large seasonal changes. The final localization rate in the single-image matching, cross-seasonal case is between 80 to 95%.",
"title": ""
},
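A minimal version of the single-image-pair comparison described above can be written with OpenCV: extract SIFT features from both panoramic images, keep ratio-test matches, and optionally filter them with the epipolar constraint before using the surviving match count as a similarity score. The file names and the ratio threshold are placeholders, and this sketch does not reproduce the paper's full localization procedure.

```python
import cv2
import numpy as np

def match_score(path_a, path_b, ratio=0.8):
    """Similarity of two place images: number of geometrically consistent matches."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]          # Lowe's ratio test

    if len(good) >= 8:                                   # enough matches to fit epipolar geometry
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
        _, inliers = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC)
        if inliers is not None:
            return int(inliers.sum())                    # keep epipolar-consistent matches only
    return len(good)

# Topological localization: compare the query against every database image and
# pick the place whose image gives the highest score (paths here are hypothetical).
# best_place = max(database_paths, key=lambda p: match_score("query.png", p))
```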
{
"docid": "1b7efa9ffda9aa23187ae7028ea5d966",
"text": "Tools for clinical assessment and escalation of observation and treatment are insufficiently established in the newborn population. We aimed to provide an overview over early warning- and track and trigger systems for newborn infants and performed a nonsystematic review based on a search in Medline and Cinahl until November 2015. Search terms included 'infant, newborn', 'early warning score', and 'track and trigger'. Experts in the field were contacted for identification of unpublished systems. Outcome measures included reference values for physiological parameters including respiratory rate and heart rate, and ways of quantifying the extent of deviations from the reference. Only four neonatal early warning scores were published in full detail, and one system for infants with cardiac disease was considered as having a more general applicability. Temperature, respiratory rate, heart rate, SpO2, capillary refill time, and level of consciousness were parameters commonly included, but the definition and quantification of 'abnormal' varied slightly. The available scoring systems were designed for term and near-term infants in postpartum wards, not neonatal intensive care units. In conclusion, there is a limited availability of neonatal early warning scores. Scoring systems for high-risk neonates in neonatal intensive care units and preterm infants were not identified.",
"title": ""
},
{
"docid": "db09043f9491381140febff04b2bb212",
"text": "In this book Professor Holmes discusses some of the evidence relating to one of the most baffling problems yet recognized by biologists-the factors involved in the regulation of growth and form. A wide range of possible influences, from enzymes to cellular competition, is considered. Numerous experiments and theories are described, with or without bibliographic citation. There is a list of references for each chapter and an index. The subject from a scientific standpoint is an exceedingly difficult one, for the reason that very little indeed is understood regarding such phenomena as differentiation. It follows that the problem offers fine opportunities for intellectual jousting by mechanists and vitalists, that hypotheses and theories must often be the weapons of choice, and philosophy the armor. Professor Holmes gives us a good seat from which to watch the combats, explains clearly what is going on, and occasionally slips away to enter the lists himself. This stereoscopic atlas of anatomy was designed as an aid in teaching neuro-anatomy for beginning medical students and as a review for physicians taking Board examinations in Psychiatry and Neurology. Each plate consists of a pair of stereoscopic photographs and a labelled diagram of the important parts seen in the photograph. Perhaps in this day of scarcity of materials, particularly of human brains hardened for dissection, photographs of this kind conceivably can be used as a substitute. Successive stages of dissection are presented in such a fashion that, used in conjunction with the dissecting manual, a student should be able to identify most of the important components of the nervous system without much outside help. The area covered is limited to the gross features of the brain and brain stem and perhaps necessarily does not deal with any of the microscopic structure. So much more can be learned from the dissection of the actual brain that it is doubtful if this atlas would be useful except where brains are not available. A good deal of effort has been spent on the preparation of this atlas, with moderately successful results.",
"title": ""
},
{
"docid": "353bbc5e68ec1d53b3cd0f7c352ee699",
"text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
},
{
"docid": "68abef37fe49bb675d7a2ce22f7bf3a7",
"text": "Objective: The case for exercise and health has primarily been made on its impact on diseases such coronary heart disease, obesity and diabetes. However, there is a very high cost attributed to mental disorders and illness and in the last 15 years there has been increasing research into the role of exercise a) in the treatment of mental health, and b) in improving mental well-being in the general population. There are now several hundred studies and over 30 narrative or meta-analytic reviews of research in this field. These have summarised the potential for exercise as a therapy for clinical or subclinical depression or anxiety, and the use of physical activity as a means of upgrading life quality through enhanced self-esteem, improved mood states, reduced state and trait anxiety, resilience to stress, or improved sleep. The purpose of this paper is to a) provide an updated view of this literature within the context of public health promotion and b) investigate evidence for physical activity and dietary interactions affecting mental well-being. Design: Narrative review and summary. Conclusions: Sufficient evidence now exists for the effectiveness of exercise in the treatment of clinical depression. Additionally, exercise has a moderate reducing effect on state and trait anxiety and can improve physical self-perceptions and in some cases global self-esteem. Also there is now good evidence that aerobic and resistance exercise enhances mood states, and weaker evidence that exercise can improve cognitive function (primarily assessed by reaction time) in older adults. Conversely, there is little evidence to suggest that exercise addiction is identifiable in no more than a very small percentage of exercisers. Together, this body of research suggests that moderate regular exercise should be considered as a viable means of treating depression and anxiety and improving mental well-being in the general public.",
"title": ""
},
{
"docid": "0eaee4f37754d0137de78cf1b4d8d950",
"text": "Outlier detection is an important task in data mining with numerous applications, including credit card fraud detection, video surveillance, etc. Outlier detection has been widely focused and studied in recent years. The concept about outlier factor of object is extended to the case of cluster. Although many outlier detection algorithms have been proposed, most of them face the top-n problem, i.e., it is difficult to know how many points in a database are outliers. In this paper we propose a novel outlier cluster detection algorithm called ROCF based on the concept of mutual neighbor graph and on the idea that the size of outlier clusters is usually much smaller than the normal clusters. ROCF can automatically figure out the outlier rate of a database and effectively detect the outliers and outlier clusters without top-n parameter. The formal analysis and experiments show that this method can achieve good performance in outlier detection. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
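ROCF's exact scoring is not reproduced here; the sketch below only illustrates the underlying idea stated in the abstract, namely that points falling in comparatively small groups of a mutual-neighbor graph are outlier-cluster candidates. The neighborhood size k, the relative-size threshold and the use of connected components are assumptions made for the illustration, not the paper's procedure.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import kneighbors_graph

def small_component_outliers(X, k=5, size_ratio=0.1):
    """Flag points in comparatively small components of the mutual k-NN graph."""
    knn = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    mutual = knn.multiply(knn.T)                     # keep an edge only if both points agree
    n_comp, labels = connected_components(csr_matrix(mutual), directed=False)

    sizes = np.bincount(labels, minlength=n_comp)
    outlier_components = np.where(sizes < size_ratio * sizes.max())[0]
    return np.isin(labels, outlier_components)       # True = candidate outlier

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(200, 2))
outliers = rng.normal(8, 0.2, size=(5, 2))           # a small, distant clump
X = np.vstack([normal, outliers])
# Flags the distant clump (indices 200-204), possibly plus a few isolated normal points.
print(small_component_outliers(X).nonzero()[0])
```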
{
"docid": "23d6c80c6cb04dcfd6f29037548d2f05",
"text": "In recent years, the chaos based cryptographic algorithms have suggested some new and efficient ways to develop secure image encryption techniques. In this communication,wepropose a newapproach for image encryption basedon chaotic logisticmaps in order tomeet the requirements of the secure image transfer. In the proposed image encryption scheme, an external secret key of 80-bit and two chaotic logistic maps are employed. The initial conditions for the both logistic maps are derived using the external secret key by providing different weightage to all its bits. Further, in the proposed encryption process, eight different types of operations are used to encrypt the pixels of an image and which one of them will be used for a particular pixel is decided by the outcome of the logistic map. To make the cipher more robust against any attack, the secret key is modified after encrypting each block of sixteen pixels of the image. The results of several experimental, statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time image encryption and transmission. q 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
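A heavily simplified sketch of the mechanism described above follows: the 80-bit external key is mapped to an initial condition by weighting its bits, and the logistic-map iterates drive a keystream applied to the pixel bytes. Using a single XOR operation instead of the paper's eight operation types, a single map instead of two, and omitting the per-16-pixel key modification are all simplifications; this is not the published cipher and is for illustration only.

```python
import numpy as np

def logistic_keystream(x0, n, r=3.99):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and map each iterate to a byte."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def init_from_key(key_hex_80bit):
    """Derive an initial condition in (0, 1) by weighting the 80 key bits."""
    bits = [int(b) for b in bin(int(key_hex_80bit, 16))[2:].zfill(80)]
    x0 = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))
    return min(max(x0, 1e-6), 1 - 1e-6)          # keep strictly inside (0, 1)

def encrypt(pixels, key_hex):
    ks = logistic_keystream(init_from_key(key_hex), pixels.size)
    return pixels.reshape(-1) ^ ks               # XOR keystream; the same call decrypts

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # stand-in for image pixels
key = "0123456789abcdef0123"                        # 20 hex digits = 80 bits
cipher = encrypt(img, key).reshape(img.shape)
assert np.array_equal(encrypt(cipher, key).reshape(img.shape), img)
```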
{
"docid": "5aeb8a7daa383259340ac7e27113f783",
"text": "This paper reports on the design, implementation and characterization of wafer-level packaging technology for a wide range of microelectromechanical system (MEMS) devices. The encapsulation technique is based on thermal decomposition of a sacrificial polymer through a polymer overcoat to form a released thin-film organic membrane with scalable height on top of the active part of the MEMS. Hermiticity and vacuum operation are obtained by thin-film deposition of a metal such as chromium, aluminum or gold. The thickness of the overcoat can be optimized according to the size of the device and differential pressure to package a wide variety of MEMS such as resonators, accelerometers and gyroscopes. The key performance metrics of several batches of packaged devices do not degrade as a result of residues from the sacrificial polymer. A Q factor of 5000 at a resonant frequency of 2.5 MHz for the packaged resonator, and a static sensitivity of 2 pF g−1 for the packaged accelerometer were obtained. Cavities as small as 0.000 15 mm3 for the resonator and as large as 1 mm3 for the accelerometer have been made by this method. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "217e3b6bc1ed6a1ef8860efff285f4ab",
"text": "Currently, salvage is considered as an effective way for protecting ecosystems of inland water from toxin-producing algal blooms. Yet, the magnitude of algal blooms, which is the essential information required for dispatching salvage boats, cannot be estimated accurately with low cost in real time. In this paper, a data-driven soft sensor is proposed for algal blooms monitoring, which estimates the magnitude of algal blooms using data collected by inexpensive water quality sensors as input. The modeling of the soft sensor consists of two steps: 1) magnitude calculation and 2) regression model training. In the first step, we propose an active learning strategy to construct high-accuracy image classification model with ~50 % less labeled data. Based on this model, we design a new algorithm that recognizes algal blooms and calculates the magnitude using water surface pictures. In the second step, we propose to use Gaussian process to train the regression model that maps the multiparameter water quality sensor data to the calculated magnitude of algal blooms and learn the parameters of the model automatically from the training data. We conduct extensive experiments to evaluate our modeling method, AlgaeSense, based on over 200 000 heterogeneous sensor data records collected in four months from our field-deployed sensor system. The results indicate that the soft sensor can accurately estimate the magnitude of algal blooms in real time using data collected by just three kinds of inexpensive water quality sensors.",
"title": ""
},
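The regression stage described above, mapping multi-parameter water-quality readings to a bloom magnitude with a Gaussian process, can be prototyped with scikit-learn. The synthetic readings, the three chosen input parameters and the RBF-plus-white-noise kernel are assumptions for illustration; the active-learning image-classification stage that produces the training magnitudes in the paper is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

# Synthetic training data: [dissolved oxygen, pH, turbidity] -> bloom magnitude in [0, 1].
X = rng.uniform([4.0, 6.5, 1.0], [12.0, 9.5, 50.0], size=(120, 3))
y = 0.02 * X[:, 2] + 0.05 * (X[:, 1] - 7.0) + 0.03 * rng.standard_normal(120)
y = np.clip(y, 0.0, 1.0)

kernel = RBF(length_scale=[1.0, 1.0, 10.0]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Real-time use: each new sensor record yields a magnitude estimate plus uncertainty,
# which could be thresholded to decide when to dispatch salvage boats.
new_reading = np.array([[8.2, 8.1, 35.0]])
mean, std = gp.predict(new_reading, return_std=True)
print(f"estimated bloom magnitude: {mean[0]:.2f} +/- {std[0]:.2f}")
```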
{
"docid": "bfe5c10940d4cccfb071598ed04020ac",
"text": "BACKGROUND\nKnowledge about quality of life and sexual health in patients with genital psoriasis is limited.\n\n\nOBJECTIVES\nWe studied quality of life and sexual function in a large group of patients with genital psoriasis by means of validated questionnaires. In addition, we evaluated whether sufficient attention is given by healthcare professionals to sexual problems in patients with psoriasis, as perceived by the patients.\n\n\nMETHODS\nA self-administered questionnaire was sent to 1579 members of the Dutch Psoriasis Association. Sociodemographic patient characteristics, medical data and scores of several validated questionnaires regarding quality of life (Dermatology Life Quality Index) and sexual health (Sexual Quality of Life Questionnaire for use in Men, International Index of Erectile Function, Female Sexual Distress Scale and Female Sexual Function Index) were collected and analysed.\n\n\nRESULTS\nThis study (n = 487) shows that psoriasis has a detrimental effect on quality of life and sexual health. Patients with genital lesions reported even significantly worse quality of life than patients without genital lesions (mean ± SD quality of life scores 8·5 ± 6·5 vs. 5·5 ± 4·6, respectively, P < 0·0001). Sexual distress and dysfunction are particularly prominent in women (reported by 37·7% and 48·7% of the female patients, respectively). Sexual distress is especially high when genital skin is affected (mean ± SD sexual distress score in patients with genital lesions 16·1 ± 12·1 vs. 10·1 ± 9·7 in patients without genital lesions, P = 0·001). The attention given to possible sexual problems in the psoriasis population by healthcare professionals is perceived as insufficient by patients.\n\n\nCONCLUSIONS\nIn addition to quality of life, sexual health is diminished in a considerable number of patients with psoriasis and particularly women with genital lesions have on average high levels of sexual distress. We underscore the need for physicians to pay attention to the impact of psoriasis on psychosocial and sexual health when treating patients for this skin disease.",
"title": ""
},
{
"docid": "ad57044935e65f144a5d718844672b2c",
"text": "DeLone and McLean’s (1992) model of information systems success has received much attention amongst researchers. This study provides the first empirical test of the entire DeLone and McLean model in the user developed application domain. Overall, the model was not supported by the data. Of the nine hypothesised relationships tested four were found to be significant and the remainder not significant. The model provided strong support for the relationships between perceived system quality and user satisfaction, perceived information quality and user satisfaction, user satisfaction and intended use, and user satisfaction and perceived individual impact.",
"title": ""
},
{
"docid": "01165a990d16000ac28b0796e462147a",
"text": "Esthesioneuroblastoma is a rare malignant tumor of sinonasal origin. These tumors typically present with unilateral nasal obstruction and epistaxis, and diagnosis is confirmed on biopsy. Over the past 15 years, significant advances have been made in endoscopic technology and techniques that have made this tumor amenable to expanded endonasal resection. There is growing evidence supporting the feasibility of safe and effective resection of esthesioneuroblastoma via an expanded endonasal approach. This article outlines a technique for endoscopic resection of esthesioneuroblastoma and reviews the current literature on esthesioneuroblastoma with emphasis on outcomes after endoscopic resection of these malignant tumors.",
"title": ""
},
{
"docid": "b958af84a3f977ea4c3efd854bd7de48",
"text": "This paper presents the novel development of an embedded system that aims at digital TV content recommendation based on descriptive metadata collected from versatile sources. The described system comprises a user profiling subsystem identifying user preferences and a user agent subsystem performing content rating. TV content items are ranked using a combined multimodal approach integrating classification-based and keyword-based similarity predictions so that a user is presented with a limited subset of relevant content. Observable user behaviors are discussed as instrumental in user profiling and a formula is provided for implicitly estimating the degree of user appreciation of content. A new relation-based similarity measure is suggested to improve categorized content rating precision. Experimental results show that our system can recommend desired content to users with significant amount of accuracy.",
"title": ""
},
{
"docid": "26d1678ccbd8f1453dccc4fa2eacd3aa",
"text": "A standard assumption in machine learning is the exchangeability of data, which is equivalent to assuming that the examples are generated from the same probability distribution independently. This paper is devoted to testing the assumption of exchangeability on-line: the examples arrive one by one, and after receiving each example we would like to have a valid measure of the degree to which the assumption of exchangeability has been falsified. Such measures are provided by exchangeability martingales. We extend known techniques for constructing exchangeability martingales and show that our new method is competitive with the martingales introduced before. Finally we investigate the performance of our testing method on two benchmark datasets, USPS and Statlog Satellite data; for the former, the known techniques give satisfactory results, but for the latter our new more flexible method becomes necessary.",
"title": ""
},
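One concrete exchangeability martingale is the power martingale built on smoothed conformal p-values; the sketch below implements that classical construction with a deliberately simple nonconformity score (distance to the mean of the examples seen so far). It only illustrates the mechanics of online testing; the nonconformity measures and betting functions studied in the paper are more elaborate, and the epsilon value here is an arbitrary choice.

```python
import numpy as np

def conformal_p_values(x, rng):
    """Smoothed conformal p-value of each new example, processed one at a time."""
    p = []
    for n in range((1), len(x) + 1):
        bag = x[:n]
        alpha = np.abs(bag - bag.mean())          # nonconformity w.r.t. the current bag
        gt = np.sum(alpha > alpha[-1])
        eq = np.sum(alpha == alpha[-1])
        p.append((gt + rng.uniform() * eq) / n)
    return np.array(p)

def power_martingale(p_values, epsilon=0.92):
    """M_n = prod_i epsilon * p_i**(epsilon - 1); a large M_n falsifies exchangeability."""
    return np.cumprod(epsilon * p_values ** (epsilon - 1.0))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])   # change point at 500
M = power_martingale(conformal_p_values(x, rng))
print(M[499], M[-1])   # modest before the change point, much larger afterwards
```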
{
"docid": "a1f930147ad3c3ef48b6352e83d645d0",
"text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.",
"title": ""
}
] |
scidocsrr
|
2a6609f28ccd04f9de7c4e9b02837b33
|
A Tale of Two Kernels: Towards Ending Kernel Hardening Wars with Split Kernel
|
[
{
"docid": "7c05ef9ac0123a99dd5d47c585be391c",
"text": "Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware. AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software.",
"title": ""
},
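The core of the instrumentation described above is the shadow-memory check: each 8-byte granule of application memory maps to one shadow byte via (addr >> 3) + offset, and that byte records how many leading bytes of the granule are addressable. The pure-Python model below illustrates only the check logic; it is not how the tool is implemented, and the offset constant is just a typical value used for illustration.

```python
SHADOW_SCALE = 3                 # one shadow byte covers 2**3 = 8 application bytes
SHADOW_OFFSET = 0x7FFF8000       # typical 32-bit offset; real values are platform-specific

shadow = {}                      # toy stand-in for the shadow region

def shadow_addr(addr):
    return (addr >> SHADOW_SCALE) + SHADOW_OFFSET

def poison(addr, shadow_value):
    """Mark the 8-byte granule containing addr: 0 = fully addressable,
    1..7 = only the first k bytes addressable, negative = redzone/freed memory."""
    shadow[shadow_addr(addr)] = shadow_value

def check_access(addr, size=1):
    """Return True if a `size`-byte access at addr would be allowed (simplified check)."""
    for a in (addr, addr + size - 1):            # check the first and last byte touched
        k = shadow.get(shadow_addr(a), 0)
        if k != 0 and (k < 0 or (a & 7) >= k):
            return False
    return True

heap_obj = 0x1000                # pretend a 12-byte heap object starts here
poison(heap_obj, 0)              # first granule: all 8 bytes addressable
poison(heap_obj + 8, 4)          # second granule: only the first 4 bytes addressable
poison(heap_obj + 16, -1)        # redzone guarding the end of the allocation

print(check_access(heap_obj + 10))   # True: within the first 4 bytes of its granule
print(check_access(heap_obj + 13))   # False: past the end of the 12-byte object
print(check_access(heap_obj + 16))   # False: redzone hit, as in a heap-buffer-overflow
```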
{
"docid": "16186ff81d241ecaea28dcf5e78eb106",
"text": "Different kinds of people use computers now than several decades ago, but operating systems have not fully kept pace with this change. It is true that we have point-and-click GUIs now instead of command line interfaces, but the expectation of the average user is different from what it used to be, because the user is different. Thirty or 40 years ago, when operating systems began to solidify into their current form, almost all computer users were programmers, scientists, engineers, or similar professionals doing heavy-duty computation, and they cared a great deal about speed. Few teenagers and even fewer grandmothers spent hours a day behind their terminal. Early users expected the computer to crash often; reboots came as naturally as waiting for the neighborhood TV repairman to come replace the picture tube on their home TVs. All that has changed and operating systems need to change with the times.",
"title": ""
}
] |
[
{
"docid": "3475d98ae13c4bab3424103f009f3fb1",
"text": "According to a small, lightweight, low-cost high performance inertial Measurement Units(IMU), an effective calibration method is implemented to evaluate the performance of Micro-Electro-Mechanical Systems(MEMS) sensors suffering from various errors to get acceptable navigation results. A prototype development board based on FPGA, dual core processor's configuration for INS/GPS integrated navigation system is designed for experimental testing. The significant error sources of IMU such as bias, scale factor, and misalignment are estimated in virtue of static tests, rate tests, thermal tests. Moreover, an effective intelligent calibration method combining with Kalman Filter is proposed to estimate parameters and compensate errors. The proposed approach has been developed and its efficiency is demonstrated by various experimental scenarios with real MEMS data.",
"title": ""
},
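One small piece of the calibration pipeline sketched above is estimating a constant sensor bias from a static test, which a scalar Kalman filter does online; the same machinery extends to the other error parameters. The noise levels and simulated gyro readings below are assumptions for illustration, not data from the referenced board.

```python
import numpy as np

def estimate_constant_bias(measurements, meas_var, init_var=1.0):
    """Scalar Kalman filter for a constant-bias state observed in white noise."""
    b_hat, p = 0.0, init_var                  # state estimate and its variance
    for z in measurements:
        # Prediction step is trivial: a constant bias does not change between samples.
        k = p / (p + meas_var)                # Kalman gain
        b_hat += k * (z - b_hat)              # measurement update
        p *= (1.0 - k)                        # posterior variance shrinks with each sample
    return b_hat, p

rng = np.random.default_rng(42)
true_bias_dps = 0.7                                     # deg/s gyro bias during a static test
z = true_bias_dps + 0.05 * rng.standard_normal(2000)    # stationary, so signal = bias + noise
b_hat, p = estimate_constant_bias(z, meas_var=0.05**2)
print(f"estimated bias: {b_hat:.4f} deg/s (true {true_bias_dps})")

# Compensation afterwards: corrected_rate = raw_rate - b_hat
```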
{
"docid": "41c317b0e275592ea9009f3035d11a64",
"text": "We introduce a distribution based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually in form of dense real-valued lowdimensional vector and therefore the distribution of them can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with that in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in two languages respectively and minimize the dissimilarity between them through standard back propagation algorithm. Our learned bilingual word embeddings allow to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related languages and substantially different languages.",
"title": ""
},
{
"docid": "363cc184a6cae8b7a81744676e339a80",
"text": "Dismissing-avoidant adults are characterized by expressing relatively low levels of attachment-related distress. However, it is unclear whether this reflects a relative absence of covert distress or an attempt to conceal covert distress. Two experiments were conducted to distinguish between these competing explanations. In Experiment 1, participants were instructed to suppression resulted in a decrease in the accessibility of abandonment-related thoughts for dismissing-avoidant adults. Experiment 2 demonstrated that attempts to suppress the attachment system resulted in decreases in physiological arousal for dismissing-avoidant adults. These experiments indicate that dismissing-avoidant adults are capable of suppressing the latent activation of their attachment system and are not simply concealing latent distress. The discussion focuses on development, cognitive, and social factors that may promote detachment.",
"title": ""
},
{
"docid": "329ab44195e7c20e696e5d7edc8b65a8",
"text": "In this work, we consider challenges relating to security for Industrial Control Systems (ICS) in the context of ICS security education and research targeted both to academia and industry. We propose to address those challenges through gamified attack training and countermeasure evaluation. We tested our proposed ICS security gamification idea in the context of the (to the best of our knowledge) first Capture-The-Flag (CTF) event targeted to ICS security called SWaT Security Showdown (S3). Six teams acted as attackers in a security competition leveraging an ICS testbed, with several academic defense systems attempting to detect the ongoing attacks. The event was conducted in two phases. The online phase (a jeopardy-style CTF) served as a training session. The live phase was structured as an attack-defense CTF. We acted as judges and we assigned points to the attacker teams according to a scoring system that we developed internally based on multiple factors, including realistic attacker models. We conclude the paper with an evaluation and discussion of the S3, including statistics derived from the data collected in each phase of S3.",
"title": ""
},
{
"docid": "6825c5294da2dfe7a26b6ac89ba8f515",
"text": "Restoring natural walking for amputees has been increasingly investigated because of demographic evolution, leading to increased number of amputations, and increasing demand for independence. The energetic disadvantages of passive pros-theses are clear, and active prostheses are limited in autonomy. This paper presents the simulation, design and development of an actuated knee-ankle prosthesis based on a variable stiffness actuator with energy transfer from the knee to the ankle. This approach allows a good approximation of the joint torques and the kinematics of the human gait cycle while maintaining compliant joints and reducing energy consumption during level walking. This first prototype consists of a passive knee and an active ankle, which are energetically coupled to reduce the power consumption.",
"title": ""
},
{
"docid": "fed23432144a6929c4f3442b10157771",
"text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. Early thinkers such as Plato and Aristotle where followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). Can IT at all be used to handle 0-7695-1435-9/02 $ knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assump tion that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi’s original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that is omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontologywise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. 
Here, my primary interest is on the group and organisational levels. However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, though be it from a macro perspective. 17.00 (c) 2002 IEEE 1 Proceedings of the 35th Hawaii International Conference on System Sciences 2002 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view of or the objective approach to knowledge as some absolute and universal truth has since long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960’s, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed. According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences. Maybe it is a little bit Author(s) Data Informa",
"title": ""
},
{
"docid": "85c4c0ffb224606af6bc3af5411d31ca",
"text": "Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-tofine attention models lag behind state-ofthe-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.",
"title": ""
},
{
"docid": "404fce3f101d0a1d22bc9afdf854b1e0",
"text": "The intimate connection between the brain and the heart was enunciated by Claude Bernard over 150 years ago. In our neurovisceral integration model we have tried to build on this pioneering work. In the present paper we further elaborate our model. Specifically we review recent neuroanatomical studies that implicate inhibitory GABAergic pathways from the prefrontal cortex to the amygdala and additional inhibitory pathways between the amygdala and the sympathetic and parasympathetic medullary output neurons that modulate heart rate and thus heart rate variability. We propose that the default response to uncertainty is the threat response and may be related to the well known negativity bias. We next review the evidence on the role of vagally mediated heart rate variability (HRV) in the regulation of physiological, affective, and cognitive processes. Low HRV is a risk factor for pathophysiology and psychopathology. Finally we review recent work on the genetics of HRV and suggest that low HRV may be an endophenotype for a broad range of dysfunctions.",
"title": ""
},
{
"docid": "6ce3156307df03190737ee7c0ae24c75",
"text": "Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit any kind of external information, such as visual and linguistic information corresponding to the KG entities. In this paper, we propose a multimodal translation-based approach that defines the energy of a KG triple as the sum of sub-energy functions that leverage both multimodal (visual and linguistic) and structural KG representations. Next, a ranking-based loss is minimized using a simple neural network architecture. Moreover, we introduce a new large-scale dataset for multimodal KG representation learning. We compared the performance of our approach to other baselines on two standard tasks, namely knowledge graph completion and triple classification, using our as well as the WN9-IMG dataset.1 The results demonstrate that our approach outperforms all baselines on both tasks and datasets.",
"title": ""
},
{
"docid": "f153ee3853f40018ed0ae8b289b1efcf",
"text": "In this paper, the common mode (CM) EMI noise characteristic of three popular topologies of resonant converter (LLC, CLL and LCL) is analyzed. The comparison of their EMI performance is provided. A state-of-art LLC resonant converter with matrix transformer is used as an example to further illustrate the CM noise problem of resonant converters. The CM noise model of LLC resonant converter is provided. A novel method of shielding is provided for matrix transformer to reduce common mode noise. The CM noise of LLC converter has a significantly reduction with shielding. The loss of shielding is analyzed by finite element analysis (FEA) tool. Then the method to reduce the loss of shielding is discussed. There is very little efficiency sacrifice for LLC converter with shielding according to the experiment result.",
"title": ""
},
{
"docid": "308622daf5f4005045f3d002f5251f8c",
"text": "The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.",
"title": ""
},
{
"docid": "9d2f569d1105bdac64071541eb01c591",
"text": "1. Outline the principles of the diagnostic tests used to confirm brain death. . 2. The patient has been certified brain dead and her relatives agree with her previously stated wishes to donate her organs for transplantation. Outline the supportive measures which should be instituted to maintain this patient’s organs in an optimal state for subsequent transplantation of the heart, lungs, liver and kidneys.",
"title": ""
},
{
"docid": "01a649c8115810c8318e572742d9bd00",
"text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a twodimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projectionbased approaches.",
"title": ""
},
{
"docid": "1f20204533ade658723cc56b429d5792",
"text": "ILQUA first participated in TREC QA main task in 2003. This year we have made modifications to the system by removing some components with poor performance and enhanced the system with new methods and new components. The newly built ILQUA is an IE-driven QA system. To answer “Factoid” and “List” questions, we apply our answer extraction methods on NE-tagged passages. The answer extraction methods adopted here are surface text pattern matching, n-gram proximity search and syntactic dependency matching. Surface text pattern matching has been applied in some previous TREC QA systems. However, the patterns used in ILQUA are automatically generated by a supervised learning system and represented in a format of regular expressions which can handle up to 4 question terms. N-gram proximity search and syntactic dependency matching are two steps of one component. N-grams of question terms are matched around every named entity in the candidate passages and a list of named entities are generated as answer candidate. These named entities go through a multi-level syntactic dependency matching until a final answer is generated. To answer “Other” questions, we parse the answer sentences of “Other” questions in 2004 main task and built syntactic patterns combined with semantic features. These patterns are applied to the parsed candidate sentences to extract answers of “Other” questions. The evaluation results showed ILQUA has reached an accuracy of 30.9% for factoid questions. ILQUA is an IE-driven QA system without any pre-compiled knowledge base of facts and it doesn’t get reference from any other external search engine such as Google. The disadvantage of an IE-driven QA system is that there are some types of questions that can’t be answered because the answer in the passages can’t be tagged as appropriate NE types. Figure 1 shows the diagram of the ILQUA architecture.",
"title": ""
},
{
"docid": "73333ad599c6bbe353e46d7fd4f51768",
"text": "The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research–brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.",
"title": ""
},
{
"docid": "0c9bbeaa783b2d6270c735f004ecc47f",
"text": "This paper pulls together existing theory and evidence to assess whether international financial liberalization, by improving the functioning of domestic financial markets and banks, accelerates economic growth. The analysis suggests that the answer is yes. First, liberalizing restrictions on international portfolio flows tends to enhance stock market liquidity. In turn, enhanced stock market liquidity accelerates economic growth primarily by boosting productivity growth. Second, allowing greater foreign bank presence tends to enhance the efficiency of the domestic banking system. In turn, better-developed banks spur economic growth primarily by accelerating productivity growth. Thus, international financial integration can promote economic development by encouraging improvements in the domestic financial system. *Levine: Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu. I thank, without implicating, Maria Carkovic and two anonymous referees for very helpful comments. JEL Classification Numbers: F3, G2, O4 Abbreviations: GDP, TFP Number of Figures: 0 Number of Tables: 2 Date: September 5, 2000 Address of Contact Author: Ross Levine, Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu.",
"title": ""
},
{
"docid": "f4edb4f6bc0d0e9b31242cf860f6692d",
"text": "Search on the web is a delay process and it can be hard task especially for beginners when they attempt to use a keyword query language. Beginner (inexpert) searchers commonly attempt to find information with ambiguous queries. These ambiguous queries make the search engine returns irrelevant results. This work aims to get more relevant pages to query through query reformulation and expanding search space. The proposed system has three basic parts WordNet, Google search engine and Genetic Algorithm. Every part has a special task. The system uses WordNet to remove ambiguity from queries by displaying the meaning of every keyword in user query and selecting the proper meaning for keywords. The system obtains synonym for every keyword from WordNet and generates query list. Genetic algorithm is used to create generation for every query in query list. Every query in system is navigated using Google search engine to obtain results from group of documents on the Web. The system has been tested on number of ambiguous queries and it has obtained more relevant URL to user query especially when the query has one keyword. The results are promising and therefore open further research directions.",
"title": ""
},
{
"docid": "29d2a613f7da6b99e35eb890d590f4ca",
"text": "Recent work has focused on generating synthetic imagery and augmenting real imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling the variation in the sensor domain. Unfortunately, varying sensor effects can degrade performance and generalizability of results for visual tasks trained on human annotated datasets. This paper proposes an efficient, automated physicallybased augmentation pipeline to vary sensor effects – specifically, chromatic aberration, blur, exposure, noise, and color cast – across both real and synthetic imagery. In particular, this paper illustrates that augmenting training datasets with the proposed pipeline improves the robustness and generalizability of object detection on a variety of benchmark vehicle datasets.",
"title": ""
},
{
"docid": "5873204bba0bd16262274d4961d3d5f9",
"text": "The analysis of the adaptive behaviour of many different kinds of systems such as humans, animals and machines, requires more general ways of assessing their cognitive abilities. This need is strengthened by increasingly more tasks being analysed for and completed by a wider diversity of systems, including swarms and hybrids. The notion of universal test has recently emerged in the context of machine intelligence evaluation as a way to define and use the same cognitive test for a variety of systems, using some principled tasks and adapting the interface to each particular subject. However, how far can universal tests be taken? This paper analyses this question in terms of subjects, environments, space-time resolution, rewards and interfaces. This leads to a number of findings, insights and caveats, according to several levels where universal tests may be progressively more difficult to conceive, implement and administer. One of the most significant contributions is given by the realisation that more universal tests are defined as maximisations of less universal tests for a variety of configurations. This means that universal tests must be necessarily adaptive.",
"title": ""
}
] |
scidocsrr
|
dfe84773f14b5e5f43ff495ec2509a45
|
Dense visual SLAM
|
[
{
"docid": "75c2b1565c61136bf014d5e67eb52daf",
"text": "This paper describes a system for dense depth estimation for multiple images in real-time. The algorithm runs almost entirely on standard graphics hardware, leaving the main CPU free for other tasks as image capture, compression and storage during scene capture. We follow a plain-sweep approach extended by truncated SSD scores, shiftable windows and best camera selection. We do not need specialized hardware and exploit the computational power of freely programmable PC graphics hardware. Dense depth maps are computed with up to 20 fps.",
"title": ""
}
] |
[
{
"docid": "f3641cacf284444ac45f0e085c7214bf",
"text": "Recognition that the entire central nervous system (CNS) is highly plastic, and that it changes continually throughout life, is a relatively new development. Until very recently, neuroscience has been dominated by the belief that the nervous system is hardwired and changes at only a few selected sites and by only a few mechanisms. Thus, it is particularly remarkable that Sir John Eccles, almost from the start of his long career nearly 80 years ago, focused repeatedly and productively on plasticity of many different kinds and in many different locations. He began with muscles, exploring their developmental plasticity and the functional effects of the level of motor unit activity and of cross-reinnervation. He moved into the spinal cord to study the effects of axotomy on motoneuron properties and the immediate and persistent functional effects of repetitive afferent stimulation. In work that combined these two areas, Eccles explored the influences of motoneurons and their muscle fibers on one another. He studied extensively simple spinal reflexes, especially stretch reflexes, exploring plasticity in these reflex pathways during development and in response to experimental manipulations of activity and innervation. In subsequent decades, Eccles focused on plasticity at central synapses in hippocampus, cerebellum, and neocortex. His endeavors extended from the plasticity associated with CNS lesions to the mechanisms responsible for the most complex and as yet mysterious products of neuronal plasticity, the substrates underlying learning and memory. At multiple levels, Eccles' work anticipated and helped shape present-day hypotheses and experiments. He provided novel observations that introduced new problems, and he produced insights that continue to be the foundation of ongoing basic and clinical research. This article reviews Eccles' experimental and theoretical contributions and their relationships to current endeavors and concepts. It emphasizes aspects of his contributions that are less well known at present and yet are directly relevant to contemporary issues.",
"title": ""
},
{
"docid": "7ae8e724297985e0531f90b3da3424f4",
"text": "This study examined the extent to which previous experience with duration in first language (L1) vowel distinctions affects the use of duration when perceiving vowels in a second language (L2). Native speakers of Greek (where duration is not used to differentiate vowels) and Japanese (where vowels are distinguished by duration) first identified and rated the eleven English monophthongs, embedded in /bVb/ and /bVp/ contexts, in terms of their L1 categories and then carried out discrimination tests on those English vowels. The results demonstrated that both L2 groups were sensitive to durational cues when perceiving the English vowels. However, listeners were found to temporally assimilate L2 vowels to L1 category/categories. Temporal information was available in discrimination only when the listeners' L1 duration category/categories did not interfere with the target duration categories and hence the use of duration in such cases cannot be attributed to its perceptual salience as has been proposed.",
"title": ""
},
{
"docid": "7a6a1bf378f5bdfc6c373dc55cf0dabd",
"text": "In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared memory multi-core setting. Since our algorithm is fully asynchronous—each core does not need to idle and wait for the other cores—the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers including asynchronous stochastic coordinate descent and multi-core LIBSVM.",
"title": ""
},
{
"docid": "d2421e2458f6f2ce55cb9664542a7ea8",
"text": "Sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field. Gathering sensed information in an energy efficient manner is critical to operating the sensor network for a long period of time. In [12], a data collection problem is defined where, in a round of communication, each sensor node has a packet to be sent to the distant base station. There is some fixed amount of energy cost in the electronics when transmitting or receiving a packet and a variable cost when transmitting a packet which depends on the distance of transmission. If each node transmits its sensed data directly to the base station, then it will deplete its power quickly. The LEACH protocol presented in [12] is an elegant solution where clusters are formed to fuse data before transmitting to the base station. By randomizing the cluster-heads chosen to transmit to the base station, LEACH achieves a factor of 8 improvement compared to direct transmissions, as measured in terms of when nodes die. An improved version of LEACH, called LEACH-C, is presented in [14], where the central base station performs the clustering to improve energy efficiency. In this paper, we present an improved scheme, called PEGASIS (Power-Efficient GAthering in Sensor Information Systems), which is a near-optimal chain-based protocol that minimizes energy. In PEGASIS, each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round. Simulation results show that PEGASIS performs better than LEACH by about 100 to 200 percent when 1 percent, 25 percent, 50 percent, and 100 percent of nodes die for different network sizes and topologies. For many applications, in addition to minimizing energy, it is also important to consider the delay incurred in gathering sensed data. We capture this with the energy delay metric and present schemes that attempt to balance the energy and delay cost for data gathering from sensor networks. Since most of the delay factor is in the transmission time, we measure delay in terms of number of transmissions to accomplish a round of data gathering. Therefore, delay can be reduced by allowing simultaneous transmissions when possible in the network. With CDMA capable sensor nodes [11], simultaneous data transmissions are possible with little interference. In this paper, we present two new schemes to minimize energy delay using CDMA and non-CDMA sensor nodes. If the goal is to minimize only the delay cost, then a binary combining scheme can be used to accomplish this task in about logN units of delay with parallel communications and incurring a slight increase in energy cost. With CDMA capable sensor nodes, a chain-based binary scheme performs best in terms of energy delay. If the sensor nodes are not CDMA capable, then parallel communications are possible only among spatially separated nodes and a chain-based 3-level hierarchy scheme performs well. We compared the performance of direct, LEACH, and our schemes with respect to energy delay using extensive simulations for different network sizes. Results show that our schemes perform 80 or more times better than the direct scheme and also outperform the LEACH protocol.",
"title": ""
},
{
"docid": "182bb07fb7dbbaf17b6c7a084f1c4fb2",
"text": "Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.",
"title": ""
},
{
"docid": "91c9dcfd3428fb79afd8d99722c95b69",
"text": "In this article we describe results of our research on the disambiguation of user queries using ontologies for categorization. We present an approach to cluster search results by using classes or “Sense Folders” ~prototype categories! derived from the concepts of an assigned ontology, in our case WordNet. Using the semantic relations provided from such a resource, we can assign categories to prior, not annotated documents. The disambiguation of query terms in documents with respect to a user-specific ontology is an important issue in order to improve the retrieval performance for the user. Furthermore, we show that a clustering process can enhance the semantic classification of documents, and we discuss how this clustering process can be further enhanced using only the most descriptive classes of the ontology. © 2006 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "526a2bcf3b9a32a27fe5f7dd431cd231",
"text": "This study examined the effects of a waitlist policy for state psychiatric hospitals on length of stay and time to readmission using data from North Carolina for 2004–2010. Cox proportional hazards models tested the hypothesis that patients were discharged “quicker-but-sicker” post-waitlist, as hospitals struggled to manage admission delays and quickly admit waitlisted patients. Results refute this hypothesis, indicating that waitlists were associated with increased length of stay and time to readmission. Further research is needed to evaluate patients’ clinical outcomes directly and to examine the impact of state hospital waitlists in other areas, such as state hospital case mix, local emergency departments, and outpatient mental health agencies.",
"title": ""
},
{
"docid": "125e3793333d94347ec53bea19b4dd56",
"text": "Minutiae are important features in the fingerprints matching. The effective of minutiae extraction depends greatly on the results of fingerprint enhancement. This paper proposes a novel fingerprint enhancement method for direct gray scale extracting minutiae based on combining Gabor filters with the Adaptive Modified Finite Radon Transform (AMFRAT) filters. First, the proposed method uses Gabor filters as band-pass filters for deleting the noise and clarifying ridges. Next, AMFRAT filters are applied for connecting broken ridges together, filling the created holes and clarifying linear symmetry of ridges quickly. AMFRAT is the MFRAT filter, the window size of which is adaptively adjusted according to the coherence values. The small window size is for high curvature ridge areas (small coherence value), and vice versa. As the result, the ridges are the linear symmetry areas, and more suitable for direct gray scale minutiae extraction. Finally, linear symmetry filter is only used for locating minutiae in an inverse model, as \"lack of linear symmetry\" occurs at minutiae points. Experimental results on FVC2004 databases DB4 (set A) shows that the proposed method is capable of improving the goodness index (GI).",
"title": ""
},
{
"docid": "3d1093e183b4e9c656e5dd20efe5a311",
"text": "In the past, tactile displays were of one of two kinds: they were either shape displays, or relied on distributed vibrotactile stimulation. A tactile display device is described in this paper which is distinguished by the fact that it relies exclusively on lateral skin stretch stimulation. It is constructed from an array of 64 closely packed piezoelectric actuators connected to a membrane. The deformations of this membrane cause an array of 112 skin contactors to create programmable lateral stress fields in the skin of the finger pad. Some preliminary observations are reported with respect to the sensations that this kind of display can produce. INTRODUCTION Tactile displays are devices used to provide subjects with the sensation of touching objects directly with the skin. Previously reported tactile displays portray distributed tactile stimulation as a one of two possibilities [1]. One class of displays, termed “shape displays”, typically consists of devices having a dense array of skin contactors which can move orthogonally to the surface of the skin in an attempt to display the shape of objects via its spatially sampled approximation. There exist numerous examples of such displays, for recent designs see [2; 3; 4; 5]. In the interest of brevity, the distinction between “pressure displays” and shape displays is not made here. However, an important distinction with regard to the focus of this paper must be made between displays intended to cause no slip between the contactors and the skin and those intended for the opposite case.1 Displays which are intended to be used without slip can be mounted on a carrier device [6; 2]. 1Braille displays can be found in this later category. Another class of displays takes advantage of vibrotactile stimulation. With this technique, an array of tactilly active sites stimulates the skin using an array of contactors vibrating at a fixed frequency. This frequency is selected to maximize the loudness of the sensation (200–300 Hz). Tactile images are associated, not to the quasi-static depth of indentation, but the amplitude of the vibration [7].2 Figure 1. Typical Tactile Display. Shape displays control the rising movement of the contactors (resp. the force applied to). In a vibrotactile display, the contactors oscillate at a fixed frequency. Devices intended to be used as general purpose tactile displays cause stimulation by independently and simultaneously activated skin contactors according to patterns that depend both on space and on time. Such patterns may be thought of as “tactile images”, but because of the rapid adaptation of the skin mechanoreceptors, the images should more accurately be described as “tactile movies”. It is also accepted that the separation between these contactors needs to be of the order of one millimeter so that the resulting percept fuse into one single continuous image. In addition, when contactors apply vibratory signals to the skin at a frequency, which may range from a few Hertz to a few kiloHertz, a perception is derived which may be described 2The Optacon device is a well known example [8]. Proceedings of the Haptic Interfaces for Virtual Environment and Teleoperator Systems Symposium, ASME International Mechanical Engineering Congress & Exposition 2000, Orlando, Florida, USA . pp. 1309-1314",
"title": ""
},
{
"docid": "dc2c952b5864a167c19b34be6db52389",
"text": "Data mining is popularly used to combat frauds because of its effectiveness. It is a well-defined procedure that takes data as input and produces models or patterns as output. Neural network, a data mining technique was used in this study. The design of the neural network (NN) architecture for the credit card detection system was based on unsupervised method, which was applied to the transactions data to generate four clusters of low, high, risky and high-risk clusters. The self-organizing map neural network (SOMNN) technique was used for solving the problem of carrying out optimal classification of each transaction into its associated group, since a prior output is unknown. The receiver-operating curve (ROC) for credit card fraud (CCF) detection watch detected over 95% of fraud cases without causing false alarms unlike other statistical models and the two-stage clusters. This shows that the performance of CCF detection watch is in agreement with other detection software, but performs better.",
"title": ""
},
{
"docid": "aef85d4f84b56e1355c5a0d7e3354e2e",
"text": "Algorithms based on trust regions have been shown to be robust methods for unconstrained optimization problems. All existing methods, either based on the dogleg strategy or Hebden-More iterations, require solution of system of linear equations. In large scale optimization this may be prohibitively expensive. It is shown in this paper that an approximate solution of the trust region problem may be found by the preconditioned conjugate gradient method. This may be regarded as a generalized dogleg technique where we asymptotically take the inexact quasi-Newton step. We also show that we have the same convergence properties as existing methods based on the dogleg strategy using an approximate Hessian.",
"title": ""
},
{
"docid": "1368a00839a5dd1edc7dbaced35e56f1",
"text": "Nowadays, transfer of the health care from ambulance to patient's home needs higher demand on patient's mobility, comfort and acceptance of the system. Therefore, the goal of this study is to proof the concept of a system which is ultra-wearable, less constraining and more suitable for long term measurements than conventional ECG monitoring systems which use conductive electrolytic gels for low impedance electrical contact with skin. The developed system is based on isolated capacitive coupled electrodes without any galvanic contact to patient's body and does not require the common right leg electrode. Measurements performed under real conditions show that it is possible to acquire well known ECG waveforms without the common electrode when the patient is sitting and even during walking. Results of the validation process demonstrate that the system performance is comparable to the conventional ECG system while the wearability is increased.",
"title": ""
},
{
"docid": "e0b253fc2216e7985ccf9d5631e827f5",
"text": "Solitons, nonlinear self-trapped wavepackets, have been extensively studied in many and diverse branches of physics such as optics, plasmas, condensed matter physics, fluid mechanics, particle physics and even astrophysics. Interestingly, over the past two decades, the field of solitons and related nonlinear phenomena has been substantially advanced and enriched by research and discoveries in nonlinear optics. While optical solitons have been vigorously investigated in both spatial and temporal domains, it is now fair to say that much soliton research has been mainly driven by the work on optical spatial solitons. This is partly due to the fact that although temporal solitons as realized in fiber optic systems are fundamentally one-dimensional entities, the high dimensionality associated with their spatial counterparts has opened up altogether new scientific possibilities in soliton research. Another reason is related to the response time of the nonlinearity. Unlike temporal optical solitons, spatial solitons have been realized by employing a variety of noninstantaneous nonlinearities, ranging from the nonlinearities in photorefractive materials and liquid crystals to the nonlinearities mediated by the thermal effect, thermophoresis and the gradient force in colloidal suspensions. Such a diversity of nonlinear effects has given rise to numerous soliton phenomena that could otherwise not be envisioned, because for decades scientists were of the mindset that solitons must strictly be the exact solutions of the cubic nonlinear Schrödinger equation as established for ideal Kerr nonlinear media. As such, the discoveries of optical spatial solitons in different systems and associated new phenomena have stimulated broad interest in soliton research. In particular, the study of incoherent solitons and discrete spatial solitons in optical periodic media not only led to advances in our understanding of fundamental processes in nonlinear optics and photonics, but also had a very important impact on a variety of other disciplines in nonlinear science. In this paper, we provide a brief overview of optical spatial solitons. This review will cover a variety of issues pertaining to self-trapped waves supported by different types of nonlinearities, as well as various families of spatial solitons such as optical lattice solitons and surface solitons. Recent developments in the area of optical spatial solitons, such as 3D light bullets, subwavelength solitons, self-trapping in soft condensed matter and spatial solitons in systems with parity-time symmetry will also be discussed briefly.",
"title": ""
},
{
"docid": "c57cbe432fdab3f415d2c923bea905ff",
"text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.",
"title": ""
},
{
"docid": "5dcc5026f959b202240befbe56857ac4",
"text": "When a meta-analysis on results from experimental studies is conducted, differences in the study design must be taken into consideration. A method for combining results across independent-groups and repeated measures designs is described, and the conditions under which such an analysis is appropriate are discussed. Combining results across designs requires that (a) all effect sizes be transformed into a common metric, (b) effect sizes from each design estimate the same treatment effect, and (c) meta-analysis procedures use design-specific estimates of sampling variance to reflect the precision of the effect size estimates.",
"title": ""
},
{
"docid": "5ed9fde132f44ff2f2354b5d9f5b14ab",
"text": "An issue in microfabrication of the fluidic channels in glass/poly (dimethyl siloxane) (PDMS) is the absence of a well-defined study of the bonding strength between the surfaces making up these channels. Although most of the research papers mention the use of oxygen plasma for developing chemical (siloxane) bonds between the participating surfaces, yet they only define a certain set of parameters, tailored to a specific setup. An important requirement of all the microfluidics/biosensors industry is the development of a general regime, which defines a systematic method of gauging the bond strength between the participating surfaces in advance by correlation to a common parameter. This enhances the reliability of the devices and also gives a structured approach to its future large-scale manufacturing. In this paper, we explore the possibility of the existence of a common scale, which can be used to gauge bond strength between various surfaces. We find that the changes in wettability of surfaces owing to various levels of plasma exposure can be a useful parameter to gauge the bond strength. We obtained a good correlation between contact angle of deionized water (a direct measure of wettability) on the PDMS and glass surfaces based on various dosages or oxygen plasma treatment. The exposure was done first in an inductively coupled high-density (ICP) plasma system and then in plasma enhanced chemical vapor deposition (PECVD) system. This was followed by the measurement of bond strength by use or the standardized blister test.",
"title": ""
},
{
"docid": "d83ecee8e5f59ee8e6a603c65f952c22",
"text": "PredPatt is a pattern-based framework for predicate-argument extraction. While it works across languages and provides a well-formed syntax-semantics interface for NLP tasks, a large-scale and reproducible evaluation has been lacking, which prevents comparisons between PredPatt and other related systems, and inhibits the updates of the patterns in PredPatt. In this work, we improve and evaluate PredPatt by introducing a large set of high-quality annotations converted from PropBank, which can also be used as a benchmark for other predicate-argument extraction systems. We compare PredPatt with other prominent systems and shows that PredPatt achieves the best precision and recall.",
"title": ""
},
{
"docid": "f2b13b98556a57b0d9486d628409892a",
"text": "Emerging Complex Event Processing (CEP) applications in cyber physical systems like Smart Power Grids present novel challenges for end-to-end analysis over events, flowing from heterogeneous information sources to persistent knowledge repositories. CEP for these applications must support two distinctive features – easy specification patterns over diverse information streams, and integrated pattern detection over realtime and historical events. Existing work on CEP has been limited to relational query patterns, and engines that match events arriving after the query has been registered. We propose SCEPter, a semantic complex event processing framework which uniformly processes queries over continuous and archived events. SCEPteris built around an existing CEP engine with innovative support for semantic event pattern specification and allows their seamless detection over past, present and future events. Specifically, we describe a unified semantic query model that can operate over data flowing through event streams to event repositories. Compile-time and runtime semantic patterns are distinguished and addressed separately for efficiency. Query rewriting is examined and analyzed in the context of temporal boundaries that exist between event streams and their repository to avoid duplicate or missing results. The design and prototype implementation of SCEPterare analyzed using latency and throughput metrics for scenarios from the Smart Grid domain.",
"title": ""
},
{
"docid": "7f81e1d6a6955cec178c1c811810322b",
"text": "The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. In this paper, free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing eternal solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. In fact, learning 3 YALMIP commands is enough for most users to model and solve the optimization problems",
"title": ""
}
] |
scidocsrr
|
500ba7fc08e0f640a5e601be8a24768b
|
Stigmergy as a universal coordination mechanism I: Definition and components
|
[
{
"docid": "5dfd057e7abc9eda57d031fc0f922505",
"text": "Collective behaviour is often characterised by the so-called “coordination paradox” : Looking at individual ants, for example, they do not seem to cooperate or communicate explicitly, but nevertheless at the social level cooperative behaviour, such as nest building, emerges, apparently without any central coordination. In the case of social insects such emergent coordination has been explained by the theory of stigmergy, which describes how individuals can effect the behaviour of others (and their own) through artefacts, i.e. the product of their own activity (e.g., building material in the ants’ case). Artefacts clearly also play a strong role in human collective behaviour, which has been emphasised, for example, by proponents of activity theory and distributed cognition. However, the relation between theories of situated/social cognition and theories of social insect behaviour has so far received relatively li ttle attention in the cognitive science literature. This paper aims to take a step in this direction by comparing three theoretical frameworks for the study of cognition in the context of agent-environment interaction (activity theory, situated action, and distributed cognition) to each other and to the theory of stigmergy as a possible minimal common ground. The comparison focuses on what each of the four theories has to say about the role/nature of (a) the agents involved in collective behaviour, (b) their environment, (c) the collective activities addressed, and (d) the role that artefacts play in the interaction between agents and their environments, and in particular in the coordination",
"title": ""
}
] |
[
{
"docid": "cefa0a3c3a80fa0a170538abdb3f7e46",
"text": "This tutorial introduces the basics of emerging nonvolatile memory (NVM) technologies including spin-transfer-torque magnetic random access memory (STTMRAM), phase-change random access memory (PCRAM), and resistive random access memory (RRAM). Emerging NVM cell characteristics are summarized, and device-level engineering trends are discussed. Emerging NVM array architectures are introduced, including the one-transistor-one-resistor (1T1R) array and the cross-point array with selectors. Design challenges such as scaling the write current and minimizing the sneak path current in cross-point array are analyzed. Recent progress on megabit-to gigabit-level prototype chip demonstrations is summarized. Finally, the prospective applications of emerging NVM are discussed, ranging from the last-level cache to the storage-class memory in the memory hierarchy. Topics of three-dimensional (3D) integration and radiation-hard NVM are discussed. Novel applications beyond the conventional memory applications are also surveyed, including physical unclonable function for hardware security, reconfigurable routing switch for field-programmable gate array (FPGA), logic-in-memory and nonvolatile cache/register/flip-flop for nonvolatile processor, and synaptic device for neuro-inspired computing.",
"title": ""
},
{
"docid": "f6a149131a816989ae246a6de0c50dbc",
"text": "In this paper a comparison of outlier detection algorithms is presented, we present an overview on outlier detection methods and experimental results of six implemented methods. We applied these methods for the prediction of stellar populations parameters as well as on machine learning benchmark data, inserting artificial noise and outliers. We used kernel principal component analysis in order to reduce the dimensionality of the spectral data. Experiments on noisy and noiseless data were performed.",
"title": ""
},
{
"docid": "a38e863016bfcead5fd9af46365d4d5c",
"text": "Social networks generate a large amount of text content over time because of continuous interaction between participants. The mining of such social streams is more challenging than traditional text streams, because of the presence of both text content and implicit network structure within the stream. The problem of event detection is also closely related to clustering, because the events can only be inferred from aggregate trend changes in the stream. In this paper, we will study the two related problems of clustering and event detection in social streams. We will study both the supervised and unsupervised case for the event detection problem. We present experimental results illustrating the effectiveness of incorporating network structure in event discovery over purely content-based",
"title": ""
},
{
"docid": "5f5828952aa0a0a95e348a0c0b2296fb",
"text": "Indoor positioning has grasped great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, there exists no technology that proves its efficacy in various situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as a position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take the environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of the knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter.",
"title": ""
},
{
"docid": "bed080cd023291a70eb88467240c81b6",
"text": "As new data products of research increasingly become the product or output of complex processes, the lineage of the resulting products takes on greater importance as a description of the processes that contributed to the result. Without adequate description of data products, their reuse is lessened. The act of instrumenting an application for provenance capture is burdensome, however. This paper explores the option of deriving provenance from existing log files, an approach that reduces the instrumentation task substantially but raises questions about sifting through huge amounts of information for what may or may not be complete provenance. In this paper we study the tradeoff of ease of capture and provenance completeness, and show that under some circumstances capture through logs can result in high quality provenance.",
"title": ""
},
{
"docid": "38d86817d68a8047fa19ae5948b1c056",
"text": "The crossbar array architecture with resistive synaptic devices is attractive for on-chip implementation of weighted sum and weight update in the neuro-inspired learning algorithms. This paper discusses the design challenges on scaling up the array size due to non-ideal device properties and array parasitics. Circuit-level mitigation strategies have been proposed to minimize the learning accuracy loss in a large array. This paper also discusses the peripheral circuits design considerations for the neuro-inspired architecture. Finally, a circuit-level macro simulator is developed to explore the design trade-offs and evaluate the overhead of the proposed mitigation strategies as well as project the scaling trend of the neuro-inspired architecture.",
"title": ""
},
{
"docid": "910fdcf9e9af05b5d1cb70a9c88e4143",
"text": "We propose NEURAL ENQUIRER — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEURAL ENQUIRER is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.",
"title": ""
},
{
"docid": "93fcbdfe59015b67955246927d67a620",
"text": "The Emotion Recognition in the Wild (EmotiW) Challenge has been held for three years. Previous winner teams primarily focus on designing specific deep neural networks or fusing diverse hand-crafted and deep convolutional features. They all neglect to explore the significance of the latent relations among changing features resulted from facial muscle motions. In this paper, we study this recognition challenge from the perspective of analyzing the relations among expression-specific facial features in an explicit manner. Our method has three key components. First, we propose a pair-wise learning strategy to automatically seek a set of facial image patches which are important for discriminating two particular emotion categories. We found these learnt local patches are in part consistent with the locations of expression-specific Action Units (AUs), thus the features extracted from such kind of facial patches are named AU-aware facial features. Second, in each pair-wise task, we use an undirected graph structure, which takes learnt facial patches as individual vertices, to encode feature relations between any two learnt facial patches. Finally, a robust emotion representation is constructed by concatenating all task-specific graph-structured facial feature relations sequentially. Extensive experiments on the EmotiW 2015 Challenge testify the efficacy of the proposed approach. Without using additional data, our final submissions achieved competitive results on both sub-challenges including the image based static facial expression recognition (we got 55.38% recognition accuracy outperforming the baseline 39.13% with a margin of 16.25%) and the audio-video based emotion recognition (we got 53.80% recognition accuracy outperforming the baseline 39.33% and the 2014 winner team's final result 50.37% with the margins of 14.47% and 3.43%, respectively).",
"title": ""
},
{
"docid": "0f29172ecf0ed3dfd775c3fa43db4127",
"text": "Reusing software through copying and pasting is a continuous plague in software development despite the fact that it creates serious maintenance problems. Various techniques have been proposed to find duplicated redundant code (also known as software clones). A recent study has compared these techniques and shown that token-based clone detection based on suffix trees is extremely fast but yields clone candidates that are often no syntactic units. Current techniques based on abstract syntax trees-on the other hand-find syntactic clones but are considerably less efficient. This paper describes how we can make use of suffix trees to find clones in abstract syntax trees. This new approach is able to find syntactic clones in linear time and space. The paper reports the results of several large case studies in which we empirically compare the new technique to other techniques using the Bellon benchmark for clone detectors",
"title": ""
},
{
"docid": "84e47d33a895afd0fab28784c112d8f4",
"text": "Hybrid analog/digital precoding is a promising technique to reduce the hardware cost of radio-frequency components compared with the conventional full-digital precoding approach in millimeter-wave multiple-input multiple output systems. However, the large antenna dimensions of the hybrid precoder design makes it difficult to acquire an optimal full-digital precoder. Moreover, it also requires matrix inversion, which leads to high complexity in the hybrid precoder design. In this paper, we propose a low-complexity optimal full-digital precoder acquisition algorithm, named beamspace singular value decomposition (SVD) that saves power for the base station and user equipment. We exploit reduced-dimension beamspace channel state information (CSI) given by compressive sensing (CS) based channel estimators. Then, we propose a CS-assisted beamspace hybrid precoding (CS-BHP) algorithm that leverages CS-based CSI. Simulation results show that the proposed beamspace-SVD reduces complexity by 99.4% compared with an optimal full-digital precoder acquisition using full-dimension SVD. Furthermore, the proposed CS-BHP reduces the complexity of the state-of-the-art approach by 99.6% and has less than 5% performance loss compared with an optimal full-digital precoder.",
"title": ""
},
{
"docid": "18ada6a64572d11cf186e4497fd81f43",
"text": "The task of ranking is crucial in information retrieval. With the advent of the Big Data age, new challenges have arisen for the field. Deep neural architectures are capable of learning complex functions, and capture the underlying representation of the data more effectively. In this work, ranking is reduced to a classification problem and deep neural architectures are used for this task. A dynamic, pointwise approach is used to learn a ranking function, which outperforms the existing ranking algorithms. We introduce three architectures for the task, our primary objective being to identify architectures which produce good results, and to provide intuitions behind their usefulness. The inputs to the models are hand-crafted features provided in the datasets. The outputs are relevance levels. Further, we also explore the idea as to whether the semantic grouping of handcrafted features aids deep learning models in our task.",
"title": ""
},
{
"docid": "cf8b7c330ae26f1839682ebf0610dbc8",
"text": "Motivation\nBest performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models.\n\n\nResults\nWe propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages the local contexts based on n-gram character and word embeddings via Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around a word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can be theoretically applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. Those results put GRAM-CNN in the lead of the biological NER methods. To the best of our knowledge, we are the first to apply CNN based structures to BioNER problems.\n\n\nAvailability and implementation\nThe GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN.\n\n\nContact\nandyli@ece.ufl.edu or aconesa@ufl.edu.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "779ca56cf734a3b187095424c79ae554",
"text": "Web crawlers are automated tools that browse the web to retrieve and analyze information. Although crawlers are useful tools that help users to find content on the web, they may also be malicious. Unfortunately, unauthorized (malicious) crawlers are increasingly becoming a threat for service providers because they typically collect information that attackers can abuse for spamming, phishing, or targeted attacks. In particular, social networking sites are frequent targets of malicious crawling, and there were recent cases of scraped data sold on the black market and used for blackmailing. In this paper, we introduce PUBCRAWL, a novel approach for the detection and containment of crawlers. Our detection is based on the observation that crawler traffic significantly differs from user traffic, even when many users are hidden behind a single proxy. Moreover, we present the first technique for crawler campaign attribution that discovers synchronized traffic coming from multiple hosts. Finally, we introduce a containment strategy that leverages our detection results to efficiently block crawlers while minimizing the impact on legitimate users. Our experimental results in a large, wellknown social networking site (receiving tens of millions of requests per day) demonstrate that PUBCRAWL can distinguish between crawlers and users with high accuracy. We have completed our technology transfer, and the social networking site is currently running PUBCRAWL in production.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "9379523ea300bd07d0e26242f692948a",
"text": "There has been a growing interest in recent years in the poten tial use of product differentiation (through eco-type labelling) as a means of promoting and rewarding the sustainable management and exploitation of fish stocks. This interest is marked by the growing literature on the topic, exploring both the concept and the key issues associated with it. It reflects a frustration among certain groups with the supply-side measures currently employed in fisheries management, which on their own have proven insufficient to counter the negative incentive structures characterising open-a ccess fisheries. The potential encapsulated by product differentiation has, however, yet to be tested in the market place. One of the debates that continues to accompany the concept is the nature and extent of the response of consumers to the introduction of labelled seafood products. Though differentiated seafood products are starting to come onto the market, we are still essentially dealing with a hypothetical market situation in terms of analysing consumer behaviour. Moving the debate from theoretical extrapolation to one of empirical evidence, this paper presents the preliminary empirical results of a study undertaken in the UK. The study aimed, amongst other things, to evaluate whether UK consumers are prepared to pay a premium for seafood products that are differentiated on the grounds that the fish is either of (a) high quality or (b) comes from a sustainably managed fishery. It also aimed to establish whether the quantity of fish products purchased would change. The results are presented in this paper.",
"title": ""
},
{
"docid": "b5c8263dd499088ded04c589b5da1d9f",
"text": "User interfaces and information systems have become increasingly social in recent years, aimed at supporting the decentralized, cooperative production and use of content. A theory that predicts the impact of interface and interaction designs on such factors as participation rates and knowledge discovery is likely to be useful. This paper reviews a variety of observed phenomena in social information foraging and sketches a framework extending Information Foraging Theory towards making predictions about the effects of diversity, interference, and cost-of-effort on performance time, participation rates, and utility of discoveries.",
"title": ""
},
{
"docid": "cce107dc268b2388e301f64718de1463",
"text": "The training of convolutional neural networks for image recognition usually requires large image datasets to produce favorable results. Those large datasets can be acquired by web crawlers that accumulate images based on keywords. Due to the nature of data in the web, these image sets display a broad variation of qualities across the contained items. In this work, a filtering approach for noisy datasets is proposed, utilizing a smaller trusted dataset. Hereby a convolutional neural network is trained on the trusted dataset and then used to construct a filtered subset from the noisy datasets. The methods described in this paper were applied to plant image classification and the created models have been submitted to the PlantCLEF 2017 competition.",
"title": ""
},
{
"docid": "e4f26f4ed55e51fb2a9a55fd0f04ccc0",
"text": "Nowadays, the Web has revolutionized our vision as to how deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and looks forward to new aspirations such as MOOCs (Massive Open Online Courses) as a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies and practices, which lead to student success, the massive open online courses, considered as the “linux of education”, are increasingly developed by elite US institutions such MIT, Harvard and Stanford by supplying open/distance learning for large online community without paying any fees, MOOCs have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often is raised about MOOCs is that a very small proportion of learners complete the course while thousands enrol for courses. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is a Hadoop based one whose main objective is to assure Learning Analytics for MOOCs’ communities as a mean to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal any useful information to be used in designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system we developed a method to identify, with low latency, online learners more likely to drop out. Keywords—Cloud Computing; MOOCs; Hadoop; Learning",
"title": ""
},
{
"docid": "84a01029714dfef5d14bc4e2be78921e",
"text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.",
"title": ""
},
{
"docid": "2e0262fce0a7ba51bd5ccf9e1397b0ca",
"text": "We present a topology detection method combining smart meter sensor information and sparse line measurements. The problem is formulated as a spanning tree identification problem over a graph given partial nodal and edge power flow information. In the deterministic case of known nodal power consumption and edge power flow we provide sensor placement criterion which guarantees correct identification of all spanning trees. We then present a detection method which is polynomial in complexity to the size of the graph. In the stochastic case where loads are given by forecasts derived from delayed smart meter data, we provide a combinatorial complexity MAP detector and a polynomial complexity approximate MAP detector which is shown to work near optimum in all numerical cases.",
"title": ""
}
] |
scidocsrr
|
050a1209dcbfe63ab99f79b3cff59762
|
What is Market News ?
|
[
{
"docid": "6a23c39da8a17858964040a06aa30a80",
"text": "Psychological research indicates that people have a cognitive bias that leads them to misinterpret new information as supporting previously held hypotheses. We show in a simple model that such conrmatory bias induces overcondence: given any probabilistic assessment by an agent that one of two hypotheses is true, the appropriate beliefs would deem it less likely to be true. Indeed, the hypothesis that the agent believes in may be more likely to be wrong than right. We also show that the agent may come to believe with near certainty in a false hypothesis despite receiving an innite amount of information.",
"title": ""
}
] |
[
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
},
{
"docid": "45f75c8d642be90e45abff69b4c6fbcf",
"text": "We describe a method for identifying the speakers of quoted speech in natural-language textual stories. We have assembled a corpus of more than 3,000 quotations, whose speakers (if any) are manually identified, from a collection of 19th and 20th century literature by six authors. Using rule-based and statistical learning, our method identifies candidate characters, determines their genders, and attributes each quote to the most likely speaker. We divide the quotes into syntactic classes in order to leverage common discourse patterns, which enable rapid attribution for many quotes. We apply learning algorithms to the remainder and achieve an overall accuracy of 83%.",
"title": ""
},
{
"docid": "f6249304dbd2b275a70b2b12faeb4712",
"text": "This paper describes a system, built and refined over the past five years, that automatically analyzes student programs assigned in a computer organization course. The system tests a student's program, then e-mails immediate feedback to the student to assist and encourage the student to continue testing, debugging, and optimizing his or her program. The automated feedback system improves the students' learning experience by allowing and encouraging them to improve their program iteratively until it is correct. The system has also made it possible to add challenging parts to each project, such as optimization and testing, and it has enabled students to meet these challenges. Finally, the system has reduced the grading load of University of Michigan's large classes significantly and helped the instructors handle the rapidly increasing enrollments of the 1990s. Initial experience with the feedback system showed that students depended too heavily on the feedback system as a substitute for their own testing. This problem was addressed by requiring students to submit a comprehensive test suite along with their program and by applying automated feedback techniques to help students learn how to write good test suites. Quantitative iterative feedback has proven to be extremely helpful in teaching students specific concepts about computer organization and general concepts on computer programming and testing.",
"title": ""
},
{
"docid": "8b3ad3d48da22c529e65c26447265372",
"text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.",
"title": ""
},
{
"docid": "d622cf283f27a32b2846a304c0359c5f",
"text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.",
"title": ""
},
{
"docid": "0b0791d64f67b4df8441215a6c6cd116",
"text": "The offset voltage of the dynamic latched comparator is analyzed in detail, and the dynamic latched comparator design is optimized for the minimal offset voltage based on the analysis in this paper. As a result, 1-sigma offset voltage was reduced from 12.5mV to 6.5mV at the cost of 9% increase of the power dissipation (152µW from 136µW). Using a digitally controlled capacitive offset calibration technique, the offset voltage of the comparator is further reduced from 6.50mV to 1.10mV at 1-sigma at the operating clock frequency of 3 GHz and it consumes 54µW/GHz after the calibration.",
"title": ""
},
{
"docid": "73905bf74f0f66c7a02aeeb9ab231d7b",
"text": "This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all the joints of which are driven by servomotors built into the fingers and the palm. The thumb has four joints with four-degrees-of-freedom (DOF); the other fingers have four joints with 3-DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with six-axes force sensor at each fingertip and a developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and the specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.",
"title": ""
},
{
"docid": "9dc9b5bad3422a6f1c7f33ccb25fdead",
"text": "We present a named entity recognition (NER) system for extracting product attributes and values from listing titles. Information extraction from short listing titles present a unique challenge, with the lack of informative context and grammatical structure. In this work, we combine supervised NER with bootstrapping to expand the seed list, and output normalized results. Focusing on listings from eBay’s clothing and shoes categories, our bootstrapped NER system is able to identify new brands corresponding to spelling variants and typographical errors of the known brands, as well as identifying novel brands. Among the top 300 new brands predicted, our system achieves 90.33% precision. To output normalized attribute values, we explore several string comparison algorithms and found n-gram substring matching to work well in practice.",
"title": ""
},
{
"docid": "3d811c193d489f347119bc911006e2cd",
"text": "The performance of massive multiple input multiple output systems may be limited by inter-cell pilot contamination (PC) unless appropriate PC mitigation or avoidance schemes are employed. In this paper we develop techniques based on existing long term evolution (LTE) measurements - open loop power control (OLPC) and pilot sequence reuse schemes, that avoid PC within a group of cells. We compare the performance of simple least-squares channel estimator with the higher-complexity minimum mean square error estimator, and evaluate the performance of the recently proposed coordinated pilot allocation (CPA) technique (which is appropriate in cooperative systems). The performance measures of interest include the normalized mean square error of channel estimation, the downlink signal-to-interference-plus-noise and spectral efficiency when employing maximum ratio transmission or zero forcing precoding at the base station. We find that for terminals moving at vehicular speeds, PC can be effectively mitigated in an operation and maintenance node using both the OLPC and the pilot reuse schemes. Additionally, greedy CPA provides performance gains only for a fraction of terminals, at the cost of degradation for the rest of the terminals and higher complexity. These results indicate that in practice, PC may be effectively mitigated without the need for second-order channel statistics or inter-cell cooperation.",
"title": ""
},
{
"docid": "ac1d1bf198a178cb5655768392c3d224",
"text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.",
"title": ""
},
{
"docid": "f795ba59b0c2c81953b94ac981ee0b57",
"text": "The Digital Beamforming Synthetic Aperture Radar (DBSAR) is a state-of-the-art L-band radar that employs advanced radar technology and a customized data acquisition and real-time processor in order to enable multimode measurement techniques in a single radar platform. DBSAR serves as a test bed for the development, implementation, and testing of digital beamforming radar techniques applicable to Earth science and planetary measurements. DBSAR flew its first field campaign on board the National Aeronautics and Space Administration P3 aircraft in October 2008, demonstrating enabling techniques for scatterometry, synthetic aperture, and altimetry.",
"title": ""
},
{
"docid": "fc97e17c5c9e1ea43570d799ac1ecd1f",
"text": "OBJECTIVE\nTo determine the clinical course in dogs with aural cholesteatoma.\n\n\nSTUDY DESIGN\nCase series.\n\n\nANIMALS\nDogs (n=20) with aural cholesteatoma.\n\n\nMETHODS\nCase review (1998-2007).\n\n\nRESULTS\nTwenty dogs were identified. Clinical signs other than those of chronic otitis externa included head tilt (6 dogs), unilateral facial palsy (4), pain on opening or inability to open the mouth (4), and ataxia (3). Computed tomography (CT) was performed in 19 dogs, abnormalities included osteoproliferation (13 dogs), lysis of the bulla (12), expansion of the bulla (11), bone lysis in the squamous or petrosal portion of the temporal bone (4) and enlargement of associated lymph nodes (7). Nineteen dogs had total ear canal ablation-lateral bulla osteotomy or ventral bulla osteotomy with the intent to cure; 9 dogs had no further signs of middle ear disease whereas 10 had persistent or recurrent clinical signs. Risk factors for recurrence after surgery were inability to open the mouth or neurologic signs on admission and lysis of any portion of the temporal bone on CT imaging. Dogs admitted with neurologic signs or inability to open the mouth had a median survival of 16 months.\n\n\nCONCLUSIONS\nEarly surgical treatment of aural cholesteatoma may be curative. Recurrence after surgery is associated with advanced disease, typically indicated by inability to open the jaw, neurologic disease, or bone lysis on CT imaging.\n\n\nCLINICAL RELEVANCE\nPresence of aural cholesteatoma may affect the prognosis for successful surgical treatment of middle ear disease.",
"title": ""
},
{
"docid": "e0ec89c103aedb1d04fbc5892df288a8",
"text": "This paper compares the computational performances of four model order reduction methods applied to large-scale electric power RLC networks transfer functions with many resonant peaks. Two of these methods require the state-space or descriptor model of the system, while the third requires only its frequency response data. The fourth method is proposed in this paper, being a combination of two of the previous methods. The methods were assessed for their ability to reduce eight test systems, either of the single-input single-output (SISO) or multiple-input multiple-output (MIMO) type. The results indicate that the reduced models obtained, of much smaller dimension, reproduce the dynamic behaviors of the original test systems over an ample range of frequencies with high accuracy.",
"title": ""
},
{
"docid": "4e9438fede70ff0aa1c87cdcd64f0bac",
"text": "This paper presents a novel formulation for detecting objects with articulated rigid bodies from high-resolution monitoring images, particularly engineering vehicles. There are many pixels in high-resolution monitoring images, and most of them represent the background. Our method first detects object patches from monitoring images using a coarse detection process. In this phase, we build a descriptor based on histograms of oriented gradient, which contain color frequency information. Then we use a linear support vector machine to rapidly detect many image patches that may contain object parts, with a low false negative rate and a high false positive rate. In the second phase, we apply a refinement classification to determine the patches that actually contain objects. In this stage, we increase the size of the image patches so that they include the complete object using models of the object parts. Then an accelerated and improved salient mask is used to improve the performance of the dense scale-invariant feature transform descriptor. The detection process returns the absolute position of positive objects in the original images. We have applied our methods to three datasets to demonstrate their effectiveness.",
"title": ""
},
{
"docid": "c60693035f0f99528a741fe5e3d88219",
"text": "Transmit array design is more challenging for dual-band operation than for single band, due to the independent 360° phase wrapping jumps needed at each band when large electrical length compensation is involved. This happens when aiming at large gains, typically above 25 dBi with beam scanning and $F/D \\le 1$ . No such designs have been reported in the literature. A general method is presented here to reduce the complexity of dual-band transmit array design, valid for arbitrarily large phase error compensation and any band ratio, using a finite number of different unit cells. The procedure is demonstrated for two offset transmit array implementations operating in circular polarization at 20 GHz(Rx) and 30 GHz(Tx) for Ka-band satellite-on-the-move terminals with mechanical beam-steering. An appropriate set of 30 dual-band unit cells is developed with transmission coefficient greater than −0.9 dB. The full-size transmit array is characterized by full-wave simulation enabling elevation beam scanning over 0°–50° with gains reaching 26 dBi at 20 GHz and 29 dBi at 30 GHz. A smaller prototype was fabricated and measured, showing a measured gain of 24 dBi at 20 GHz and 27 dBi at 30 GHz. In both cases, the beam pointing direction is coincident over the two frequency bands, and thus confirming the proposed design procedure.",
"title": ""
},
{
"docid": "4f50f9ed932635614d0f4facbaa80992",
"text": "In this paper we propose an overview of the recent academic literature devoted to the applications of Hawkes processes in finance. Hawkes processes constitute a particular class of multivariate point processes that has become very popular in empirical high frequency finance this last decade. After a reminder of the main definitions and properties that characterize Hawkes processes, we review their main empirical applications to address many different problems in high frequency finance. Because of their great flexibility and versatility, we show that they have been successfully involved in issues as diverse as estimating the volatility at the level of transaction data, estimating the market stability, accounting for systemic risk contagion, devising optimal execution strategies or capturing the dynamics of the full order book.",
"title": ""
},
{
"docid": "0ac7db546c11b9d18897ceeb2e5be70f",
"text": "A backstepping approach is proposed in this paper to cope with the failure of a quadrotor propeller. The presented methodology supposes to turn off also the motor which is opposite to the broken one. In this way, a birotor configuration with fixed propellers is achieved. The birotor is controlled to follow a planned emergency landing trajectory. Theory shows that the birotor can reach any point in the Cartesian space losing the possibility to control the yaw angle. Simulation tests are employed to validate the proposed controller design.",
"title": ""
},
{
"docid": "88fb71e503e0d0af7515dd8489061e25",
"text": "The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype to reality. SH is the major building block for Smart Cities and have long been a dream for decades, hobbyists in the late 1970smade Home Automation (HA) possible when personal computers started invading home spaces. While SH can share most of the IoT technologies, there are unique characteristics that make SH special. From the result of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific quality of the SH building blocks. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "49610c4b28f85faaa333b4845443e121",
"text": "The variety of wound types has resulted in a wide range of wound dressings with new products frequently introduced to target different aspects of the wound healing process. The ideal dressing should achieve rapid healing at reasonable cost with minimal inconvenience to the patient. This article offers a review of the common wound management dressings and emerging technologies for achieving improved wound healing. It also reviews many of the dressings and novel polymers used for the delivery of drugs to acute, chronic and other types of wound. These include hydrocolloids, alginates, hydrogels, polyurethane, collagen, chitosan, pectin and hyaluronic acid. There is also a brief section on the use of biological polymers as tissue engineered scaffolds and skin grafts. Pharmacological agents such as antibiotics, vitamins, minerals, growth factors and other wound healing accelerators that take active part in the healing process are discussed. Direct delivery of these agents to the wound site is desirable, particularly when systemic delivery could cause organ damage due to toxicological concerns associated with the preferred agents. This review concerns the requirement for formulations with improved properties for effective and accurate delivery of the required therapeutic agents. General formulation approaches towards achieving optimum physical properties and controlled delivery characteristics for an active wound healing dosage form are also considered briefly.",
"title": ""
}
] |
scidocsrr
|
b049d9544a7cee820b8df4f4b4fe1adc
|
Compact CPW-Fed Tri-Band Printed Antenna With Meandering Split-Ring Slot for WLAN/WiMAX Applications
|
[
{
"docid": "237a88ea092d56c6511bb84604e6a7c7",
"text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.",
"title": ""
},
{
"docid": "7bc8be5766eeb11b15ea0aa1d91f4969",
"text": "A coplanar waveguide (CPW)-fed planar monopole antenna with triple-band operation for WiMAX and WLAN applications is presented. The antenna, which occupies a small size of 25(L) × 25(W) × 0.8(H) mm3, is simply composed of a pentagonal radiating patch with two bent slots. By carefully selecting the positions and lengths of these slots, good dual stopband rejection characteristic of the antenna can be obtained so that three operating bands covering 2.14-2.85, 3.29-4.08, and 5.02-6.09 GHz can be achieved. The measured results also demonstrate that the proposed antenna has good omnidirectional radiation patterns with appreciable gain across the operating bands and is thus suitable to be integrated within the portable devices for WiMAX/WLAN applications.",
"title": ""
}
] |
[
{
"docid": "0b6f3498022abdf0407221faba72dcf1",
"text": "A broadband coplanar waveguide (CPW) to coplanar strip (CPS) transmission line transition directly integrated with an RF microelectromechanical systems reconfigurable multiband antenna is presented in this paper. This transition design exhibits very good performance up to 55 GHz, and uses a minimum number of dissimilar transmission line sections and wire bonds, achieving a low-loss and low-cost balancing solution to feed planar antenna designs. The transition design methodology that was followed is described and measurement results are presented.",
"title": ""
},
{
"docid": "c31dddbca92e13e84e08cca310329151",
"text": "For the first time, automated Hex solvers have surpassed humans in their ability to solve Hex positions: they can now solve many 9×9 Hex openings. We summarize the methods that attained this milestone, and examine the future of Hex solvers.",
"title": ""
},
{
"docid": "65ed76ddd6f7fd0aea717d2e2643dd16",
"text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "26b67fe7ee89c941d313187672b1d514",
"text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.",
"title": ""
},
{
"docid": "1336b193e4884a024f21a384b265eac6",
"text": "In this proposal, we introduce Bayesian Abductive Logic Programs (BALP), a probabilistic logic that adapts Bayesian Logic Programs (BLPs) for abductive reasoning. Like BLPs, BALPs also combine first-order logic and Bayes nets. However, unlike BLPs, which use deduction to construct Bayes nets, BALPs employ logical abduction. As a result, BALPs are more suited for problems like plan/activity recognition that require abductive reasoning. In order to demonstrate the efficacy of BALPs, we apply it to two abductive reasoning tasks – plan recognition and natural language understanding.",
"title": ""
},
{
"docid": "529929af902100d25e08fe00d17e8c1a",
"text": "Engagement is the holy grail of learning whether it is in a classroom setting or an online learning platform. Studies have shown that engagement of the student while learning can benefit students as well as the teacher if the engagement level of the student is known. It is difficult to keep track of the engagement of each student in a face-to-face learning happening in a large classroom. It is even more difficult in an online learning platform where, the user is accessing the material at different instances. Automatic analysis of the engagement of students can help to better understand the state of the student in a classroom setting as well as online learning platforms and is more scalable. In this paper we propose a framework that uses Temporal Convolutional Network (TCN) to understand the intensity of engagement of students attending video material from Massive Open Online Courses (MOOCs). The input to the TCN network is the statistical features computed on 10 second segments of the video from the gaze, head pose and action unit intensities available in OpenFace library. The ability of the TCN architecture to capture long term dependencies gives it the ability to outperform other sequential models like LSTMs. On the given test set in the EmotiW 2018 sub challenge-\"Engagement in the Wild\", the proposed approach with Dilated-TCN achieved an average mean square error of 0.079.",
"title": ""
},
{
"docid": "ee61181cb9625868526eb608db0c58b4",
"text": "The primary focus of machine learning has traditionally been on learning from data assumed to be sufficient and representative of the underlying fixed, yet unknown, distribution. Such restrictions on the problem domain paved the way for development of elegant algorithms with theoretically provable performance guarantees. As is often the case, however, real-world problems rarely fit neatly into such restricted models. For instance class distributions are often skewed, resulting in the “class imbalance” problem. Data drawn from non-stationary distributions is also common in real-world applications, resulting in the “concept drift” or “non-stationary learning” problem which is often associated with streaming data scenarios. Recently, these problems have independently experienced increased research attention, however, the combined problem of addressing all of the above mentioned issues has enjoyed relatively little research. If the ultimate goal of intelligent machine learning algorithms is to be able to address a wide spectrum of real-world scenarios, then the need for a general framework for learning from, and adapting to, a non-stationary environment that may introduce imbalanced data can be hardly overstated. In this paper, we first present an overview of each of these challenging areas, followed by a comprehensive review of recent research for developing such a general framework.",
"title": ""
},
{
"docid": "54a1257346f9a1ead514bb8077b0e7ca",
"text": "Recent years has witnessed growing interest in hyperspectral image (HSI) processing. In practice, however, HSIs always suffer from huge data size and mass of redundant information, which hinder their application in many cases. HSI compression is a straightforward way of relieving these problems. However, most of the conventional image encoding algorithms mainly focus on the spatial dimensions, and they need not consider the redundancy in the spectral dimension. In this paper, we propose a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD). Instead of processing the HSI separately by spectral channel or by pixel, we represent each local patch of the HSI as a third-order tensor. Then, the similar tensor patches are grouped by clustering to form a fourth-order tensor per cluster. Since the grouped tensor is assumed to be redundant, each cluster can be approximately decomposed to a coefficient tensor and three dictionary matrices, which leads to a low-rank tensor representation of both the spatial and spectral modes. The reconstructed HSI can then be simply obtained by the product of the coefficient tensor and dictionary matrices per cluster. In this way, the proposed PLTD algorithm simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework. The extensive experimental results on various public HSI datasets demonstrate that the proposed method outperforms the traditional image compression approaches and other tensor-based methods.",
"title": ""
},
{
"docid": "5785108e48e62ce2758a7b18559a697e",
"text": "The objective of this article is to create a better understanding of the intersection of the academic fields of entrepreneurship and strategic management, based on an aggregation of the extant literature in these two fields. The article structures and synthesizes the existing scholarly works in the two fields, thereby generating new knowledge. The results can be used to further enhance fruitful integration of these two overlapping but separate academic fields. The article attempts to integrate the two fields by first identifying apparent interrelations, and then by concentrating in more detail on some important intersections, including strategic management in small and medium-sized enterprises and start-ups, acknowledging the central role of the entrepreneur. The content and process sides of strategic management are discussed as well as their important connecting link, the business plan. To conclude, implications and future research directions for the two fields are proposed.",
"title": ""
},
{
"docid": "efde28bc545de68dbb44f85b198d85ff",
"text": "Blockchain technology is regarded as highly disruptive, but there is a lack of formalization and standardization of terminology. Not only because there are several (sometimes propriety) implementation platforms, but also because the academic literature so far is predominantly written from either a purely technical or an economic application perspective. The result of the confusion is an offspring of blockchain solutions, types, roadmaps and interpretations. For blockchain to be accepted as a technology standard in established industries, it is pivotal that ordinary internet users and business executives have a basic yet fundamental understanding of the workings and impact of blockchain. This conceptual paper provides a theoretical contribution and guidance on what blockchain actually is by taking an ontological approach. Enterprise Ontology is used to make a clear distinction between the datalogical, infological and essential level of blockchain transactions and smart contracts.",
"title": ""
},
{
"docid": "5275184686a8453a1922cec7a236b66d",
"text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.",
"title": ""
},
{
"docid": "c75967795041ef900236d71328dd7936",
"text": "In order to investigate the strategies used to plan and control multijoint arm trajectories, two-degrees-of-freedom arm movements performed by normal adult humans were recorded. Only the shoulder and elbow joints were active. When a subject was told simply to move his hand from one visual target to another, the path of the hand was roughly straight, and the hand speed profile of their straight trajectories was bell-shaped. When the subject was required to produce curved hand trajectories, the path usually had a segmented appearance, as if the subject was trying to approximate a curve with low curvature elements. Hand speed profiles associated with curved trajectories contained speed valleys or inflections which were temporally associated with the local maxima in the trajectory curvature. The mean duration of curved movements was longer than the mean for straight movements. These results are discussed in terms of trajectory control theories which have originated in the fields of mechanical manipulator control and biological motor control. Three explanations for the results are offered.",
"title": ""
},
{
"docid": "06c4281aad5e95cac1f4525cbb90e5c7",
"text": "Offering training programs to their employees is one of the necessary tasks that managers must comply with. Training is done mainly to provide upto-date knowledge or to convey to staff the objectives, history, corporate name, functions of the organization’s areas, processes, laws, norms or policies that must be fulfilled. Although there are a lot of methods, models or tools that are useful for this purpose, many companies face with some common problems like employee’s motivation and high costs in terms of money and time. In an effort to solve this problem, new trends have emerged in the last few years, in particular strategies related to games, such as serious games and gamification, whose success has been demonstrated by numerous researchers. According to the above, we present a systematic literature review of the different approaches that have used games or their elements, using the procedure suggested by Cooper, on this matter, ending with about the positive and negative findings.",
"title": ""
},
{
"docid": "24d55c65807e4a90fb0dffb23fc2f7bc",
"text": "This paper presents a comprehensive study of deep correlation features on image style classification. Inspired by that, correlation between feature maps can effectively describe image texture, and we design various correlations and transform them into style vectors, and investigate classification performance brought by different variants. In addition to intralayer correlation, interlayer correlation is proposed as well, and its effectiveness is verified. After showing the effectiveness of deep correlation features, we further propose a learning framework to automatically learn correlations between feature maps. Through extensive experiments on image style classification and artist classification, we demonstrate that the proposed learnt deep correlation features outperform several variants of convolutional neural network features by a large margin, and achieve the state-of-the-art performance.",
"title": ""
},
{
"docid": "283d3f1ff0ca4f9c0a2a6f4beb1f7771",
"text": "As a proof-of-concept for the vision “SSD as SQL Engine” (SaS in short), we demonstrate that SQLite [4], a popular mobile database engine, in its entirety can run inside a real SSD development platform. By turning storage device into database engine, SaS allows applications to directly interact with full SQL database server running inside storage device. In SaS, the SQL language itself, not the traditional dummy block interface, will be provided as new interface between applications and storage device. In addition, since SaS plays the role of the uni ed platform of database computing node and storage node, the host and the storage need not be segregated any more as separate physical computing components.",
"title": ""
},
{
"docid": "62d39d41523bca97939fa6a2cf736b55",
"text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.",
"title": ""
},
{
"docid": "328aad76b94b34bf49719b98ae391cfe",
"text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.",
"title": ""
},
{
"docid": "a9a22c9c57e9ba8c3deefbea689258d5",
"text": "Functional neuroimaging studies have shown that romantic love and maternal love are mediated by regions specific to each, as well as overlapping regions in the brain's reward system. Nothing is known yet regarding the neural underpinnings of unconditional love. The main goal of this functional magnetic resonance imaging study was to identify the brain regions supporting this form of love. Participants were scanned during a control condition and an experimental condition. In the control condition, participants were instructed to simply look at a series of pictures depicting individuals with intellectual disabilities. In the experimental condition, participants were instructed to feel unconditional love towards the individuals depicted in a series of similar pictures. Significant loci of activation were found, in the experimental condition compared with the control condition, in the middle insula, superior parietal lobule, right periaqueductal gray, right globus pallidus (medial), right caudate nucleus (dorsal head), left ventral tegmental area and left rostro-dorsal anterior cingulate cortex. These results suggest that unconditional love is mediated by a distinct neural network relative to that mediating other emotions. This network contains cerebral structures known to be involved in romantic love or maternal love. Some of these structures represent key components of the brain's reward system.",
"title": ""
},
{
"docid": "c5f749c36b3d8af93c96bee59f78efe5",
"text": "INTRODUCTION\nMolecular diagnostics is a key component of laboratory medicine. Here, the authors review key triggers of ever-increasing automation in nucleic acid amplification testing (NAAT) with a focus on specific automated Polymerase Chain Reaction (PCR) testing and platforms such as the recently launched cobas® 6800 and cobas® 8800 Systems. The benefits of such automation for different stakeholders including patients, clinicians, laboratory personnel, hospital administrators, payers, and manufacturers are described. Areas Covered: The authors describe how molecular diagnostics has achieved total laboratory automation over time, rivaling clinical chemistry to significantly improve testing efficiency. Finally, the authors discuss how advances in automation decrease the development time for new tests enabling clinicians to more readily provide test results. Expert Commentary: The advancements described enable complete diagnostic solutions whereby specific test results can be combined with relevant patient data sets to allow healthcare providers to deliver comprehensive clinical recommendations in multiple fields ranging from infectious disease to outbreak management and blood safety solutions.",
"title": ""
},
{
"docid": "87eb54a981fca96475b73b3dfa99b224",
"text": "Cost-Sensitive Learning is a type of learning in data mining that takes the misclassification costs (and possibly other types of cost) into consideration. The goal of this type of learning is to minimize the total cost. The key difference between cost-sensitive learning and cost-insensitive learning is that cost-sensitive learning treats the different misclassifications differently. Costinsensitive learning does not take the misclassification costs into consideration. The goal of this type of learning is to pursue a high accuracy of classifying examples into a set of known classes.",
"title": ""
}
] |
scidocsrr
|
c6b5355d71b9f6a9ce670dea43e2f9d5
|
Software defined environments: An introduction
|
[
{
"docid": "ac8c48688c0dfa60c2b268bfc7aab74a",
"text": "management in software defined environments A. Alba G. Alatorre C. Bolik A. Corrao T. Clark S. Gopisetty R. Haas R. I. Kat B. S. Langston N. S. Mandagere D. Noll S. Padbidri R. Routray Y. Song C.-H. Tan A. Traeger The IT industry is experiencing a disruptive trend for which the entire data center infrastructure is becoming software defined and programmable. IT resources are provisioned and optimized continuously according to a declarative and expressive specification of the workload requirements. The software defined environments facilitate agile IT deployment and responsive data center configurations that enable rapid creation and optimization of value-added services for clients. However, this fundamental shift introduces new challenges to existing data center management solutions. In this paper, we focus on the storage aspect of the IT infrastructure and investigate its unique challenges as well as opportunities in the emerging software defined environments. Current state-of-the-art software defined storage (SDS) solutions are discussed, followed by our novel framework to advance the existing SDS solutions. In addition, we study the interactions among SDS, software defined compute (SDC), and software defined networking (SDN) to demonstrate the necessity of a holistic orchestration and to show that joint optimization can significantly improve the effectiveness and efficiency of the overall software defined environments.",
"title": ""
},
{
"docid": "9683bb5dc70128d3981b10503cf3261a",
"text": "This article describes the historical context, technical challenges, and main implementation techniques used by VMware Workstation to bring virtualization to the x86 architecture in 1999. Although virtual machine monitors (VMMs) had been around for decades, they were traditionally designed as part of monolithic, single-vendor architectures with explicit support for virtualization. In contrast, the x86 architecture lacked virtualization support, and the industry around it had disaggregated into an ecosystem, with different vendors controlling the computers, CPUs, peripherals, operating systems, and applications, none of them asking for virtualization. We chose to build our solution independently of these vendors.\n As a result, VMware Workstation had to deal with new challenges associated with (i) the lack of virtualization support in the x86 architecture, (ii) the daunting complexity of the architecture itself, (iii) the need to support a broad combination of peripherals, and (iv) the need to offer a simple user experience within existing environments. These new challenges led us to a novel combination of well-known virtualization techniques, techniques from other domains, and new techniques.\n VMware Workstation combined a hosted architecture with a VMM. The hosted architecture enabled a simple user experience and offered broad hardware compatibility. Rather than exposing I/O diversity to the virtual machines, VMware Workstation also relied on software emulation of I/O devices. The VMM combined a trap-and-emulate direct execution engine with a system-level dynamic binary translator to efficiently virtualize the x86 architecture and support most commodity operating systems. By relying on x86 hardware segmentation as a protection mechanism, the binary translator could execute translated code at near hardware speeds. The binary translator also relied on partial evaluation and adaptive retranslation to reduce the overall overheads of virtualization.\n Written with the benefit of hindsight, this article shares the key lessons we learned from building the original system and from its later evolution.",
"title": ""
}
] |
[
{
"docid": "b01028ef40b1fda74d0621c430ce9141",
"text": "ETRI Journal, Volume 29, Number 2, April 2007 A novel low-voltage CMOS current feedback operational amplifier (CFOA) is presented. This realization nearly allows rail-to-rail input/output operations. Also, it provides high driving current capabilities. The CFOA operates at supply voltages of ±0.75 V with a total standby current of 304 μA. The circuit exhibits a bandwidth better than 120 MHz and a current drive capability of ±1 mA. An application of the CFOA to realize a new all-pass filter is given. PSpice simulation results using 0.25 μm CMOS technology parameters for the proposed CFOA and its application are given.",
"title": ""
},
{
"docid": "907940110f89714bf20a8395cd8932d5",
"text": "Polyphonic sound event detection (polyphonic SED) is an interesting but challenging task due to the concurrence of multiple sound events. Recently, SED methods based on convolutional neural networks (CNN) and recurrent neural networks (RNN) have shown promising performance. Generally, CNN are designed for local feature extraction while RNN are used to model the temporal dependency among these local features. Despite their success, it is still insufficient for existing deep learning techniques to separate individual sound event from their mixture, largely due to the overlapping characteristic of features. Motivated by the success of Capsule Networks (CapsNet), we propose a more suitable capsule based approach for polyphonic SED. Specifically, several capsule layers are designed to effectively select representative frequency bands for each individual sound event. The temporal dependency of capsule's outputs is then modeled by a RNN. And a dynamic threshold method is proposed for making the final decision based on RNN outputs. Experiments on the TUT-SED Synthetic 2016 dataset show that the proposed approach obtains an F1-score of 68.8% and an error rate of 0.45, outperforming the previous state-of-the-art method of 66.4% and 0.48, respectively.",
"title": ""
},
{
"docid": "e5241f16c4bebf7c87d8dcc99ff38bc4",
"text": "Several techniques for estimating the reliability of estimated error rates and for estimating the signicance of observed dierences in error rates are explored in this paper. Textbook formulas which assume a large test set, i.e., a normal distribution, are commonly used to approximate the condence limits of error rates or as an approximate signicance test for comparing error rates. Expressions for determining more exact limits and signicance levels for small samples are given here, and criteria are also given for determining when these more exact methods should be used. The assumed normal distribution gives a poor approximation to the condence interval in most cases, but is usually useful for signicance tests when the proper mean and variance expressions are used. A commonly used 62 signicance test uses an improper expression for , which is too low and leads to a high likelihood of Type I errors. Common machine learning methods for estimating signicance from observations on a single sample may be unreliable.",
"title": ""
},
{
"docid": "6131fdbfe28aaa303b1ee4c29a65f766",
"text": "Destination prediction is an essential task for many emerging location based applications such as recommending sightseeing places and targeted advertising based on destination. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, existing techniques using this approach suffer from the “data sparsity problem”, i.e., the available historical trajectories is far from being able to cover all possible trajectories. This problem considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) algorithm to address the data sparsity problem. SubSyn algorithm first decomposes historical trajectories into sub-trajectories comprising two neighbouring locations, and then connects the sub-trajectories into “synthesised” trajectories. The number of query trajectories that can have predicted destinations is exponentially increased by this means. Experiments based on real datasets show that SubSyn algorithm can predict destinations for up to ten times more query trajectories than a baseline algorithm while the SubSyn prediction algorithm runs over two orders of magnitude faster than the baseline algorithm. In this paper, we also consider the privacy protection issue in case an adversary uses SubSyn algorithm to derive sensitive location information of users. We propose an efficient algorithm to select a minimum number of locations a user has to hide on her trajectory in order to avoid privacy leak. Experiments also validate the high efficiency of the privacy protection algorithm.",
"title": ""
},
{
"docid": "4b43203c83b46f0637d048c7016cce17",
"text": "Efficient detection of three dimensional (3D) objects in point clouds is a challenging problem. Performing 3D descriptor matching or 3D scanning-window search with detector are both time-consuming due to the 3-dimensional complexity. One solution is to project 3D point cloud into 2D images and thus transform the 3D detection problem into 2D space, but projection at multiple viewpoints and rotations produce a large amount of 2D detection tasks, which limit the performance and complexity of the 2D detection algorithm choice. We propose to use convolutional neural network (CNN) for the 2D detection task, because it can handle all viewpoints and rotations for the same class of object together, as well as predicting multiple classes of objects with the same network, without the need for individual detector for each object class. We further improve the detection efficiency by concatenating two extra levels of early rejection networks with binary outputs before the multi-class detection network. Experiments show that our method has competitive overall performance with at least one-order of magnitude speedup comparing with latest 3D point cloud detection methods.",
"title": ""
},
{
"docid": "3fe9dfb8334111ea56d40010ff7a70fa",
"text": "1 Summary. The paper presents the LINK application, which is a decision-support system dedicated for operational and investigational activities of homeland security services. The paper briefly discusses issues of criminal analysis, possibilities of utilizing spatial (geographical) information together with crime mapping and spatial analyses. LINK – ŚRODOWISKO ANALIZ KRYMINALNYCH WYKORZYSTUJĄCE NARZĘRZIA ANALIZ GEOPRZESTRZENNYCH Streszczenie. Artykuł prezentuje system LINK będący zintegrowanym środowi-skiem wspomagania analizy kryminalnej przeznaczonym do działań operacyjnych i śledczych służb bezpieczeństwa wewnętrznego. W artykule omówiono problemy analizy kryminalnej, możliwość wykorzystania informacji o charakterze przestrzen-nym oraz narzędzia i metody analiz geoprzestrzennych.",
"title": ""
},
{
"docid": "f25bf9cdbe3330dcb450a66ae25d19bd",
"text": "The hypoplastic, weak lateral crus of the nose may cause concave alar rim deformity, and in severe cases, even alar rim collapse. These deformities may lead to both aesthetic disfigurement and functional impairment of the nose. The cephalic part of the lateral crus was folded and fixed to reinforce the lateral crus. The study included 17 women and 15 men with a median age of 24 years. The average follow-up period was 12 months. For 23 patients, the described technique was used to treat concave alar rim deformity, whereas for 5 patients, who had thick and sebaceous skin, it was used to prevent weakness of the alar rim. The remaining 4 patients underwent surgery for correction of a collapsed alar valve. Satisfactory results were achieved without any complications. Turn-in folding of the cephalic portion of lateral crus not only functionally supports the lateral crus, but also provides aesthetic improvement of the nasal tip as successfully as cephalic excision of the lateral crura.",
"title": ""
},
{
"docid": "d03a86459dd461dcfac842ae55ae4ebb",
"text": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.",
"title": ""
},
{
"docid": "2052b47be2b5e4d0c54ab0be6ae1958b",
"text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .",
"title": ""
},
{
"docid": "2a7002f1c3bf4460ca535966698c12b9",
"text": "In recent years considerable research efforts have been devoted to compression techniques of convolutional neural networks (CNNs). Many works so far have focused on CNN connection pruning methods which produce sparse parameter tensors in convolutional or fully-connected layers. It has been demonstrated in several studies that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors just sparser but no smaller, the compression may not transfer directly to acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, where we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, then re-train it to restore the accuracy. In this way, a smaller version of the original network is learned and the floating-point operations (FLOPs) are reduced. By applying this method on several common CNNs, we show that a large portion of the filters can be discarded without obvious accuracy drop, leading to significant reduction of computational burdens. Concretely, we reduce the inference cost of LeNet-5 on MNIST, VGG-16 and ResNet-56 on CIFAR-10 by 95.1%, 79.7% and 60.9%, respectively.",
"title": ""
},
{
"docid": "dbf3a58ffe71e6ef61d6c69e85a3c743",
"text": "A conventional automatic speech recognizer does not perform well in the presence of noise, while human listeners are able to segregate and recognize speech in noisy conditions. We study a novel feature based on an auditory periphery model for robust speech recognition. Specifically, gammatone frequency cepstral coefficients are derived by applying a cepstral analysis on gammatone filterbank responses. Our evaluations show that the proposed feature performs considerably better than conventional acoustic features. We further demonstrate that integrating the proposed feature with a computational auditory scene analysis system yields promising recognition performance.",
"title": ""
},
{
"docid": "f92a7d9451f9d1213e9b1e479a4df006",
"text": "Cet article passe en revue les vingt dernieÁ res anne es de recherche sur la culture et la ne gociation et pre sente les progreÁ s qui ont e te faits, les pieÁ ges dont il faut se de fier et les perspectives pour de futurs travaux. On a remarque que beaucoup de recherches avaient tendance aÁ suivre ces deux modeÁ les implicites: (1) l'influence de la culture sur les strate gies et l'aboutissement de la ne gociation et/ou (2) l'interaction de la culture et d'autres aspects de la situation imme diate sur les re sultats de la ne gociation. Cette recherche a porte sur un grand nombre de cultures et a mis en e vidence plus d'un modeÁ le inte ressant. Nous signalons cependant trois pieÁ ge caracte ristiques de cette litte rature, pieÁ ges qui nous ont handicape s. Tout d'abord, la plupart des travaux se satisfont de de nominations ge ographiques pour de signer les cultures et il est par suite souvent impossible de de terminer les dimensions culturelles qui rendent compte des diffe rences observe es. Ensuite, beaucoup de recherches ignorent les processus psychologiques (c'est-aÁ -dire les motivations et les cognitions) qui sont en jeu dans les ne gociations prenant place dans des cultures diffe rentes si bien que nous apprenons peu de choses aÁ propos de la psychologie de la ne gociation dans des contextes culturels diversifie s. On se heurte ainsi aÁ une « boõà te noire » que les travaux sur la culture et la ne gociation se gardent ge ne ralement d'ouvrir. Enfin, notre travail n'a recense qu'un nombre restreint de variables situationnelles imme diates intervenant dans des ne gociations prenant place dans des cultures diffe rentes; notre compre hension des effets mode rateurs de la culture sur la ne gociation est donc limite e. Nous proposons un troisieÁ me modeÁ le, plus complet, de la culture et de la ne gociation, pre sentons quelques donne es re centes en sa faveur et esquissons quelques perspectives pour l'avenir.",
"title": ""
},
{
"docid": "cf0a4f12c23b42c08b6404fe897ed646",
"text": "By performing computation at the location of data, non-Von Neumann (VN) computing should provide power and speed benefits over conventional (e.g., VN-based) approaches to data-centric workloads such as deep learning. For the on-chip training of largescale deep neural networks using nonvolatile memory (NVM) based synapses, success will require performance levels (e.g., deep neural network classification accuracies) that are competitive with conventional approaches despite the inherent imperfections of such NVM devices, and will also require massively parallel yet low-power read and write access. In this paper, we focus on the latter requirement, and outline the engineering tradeoffs in performing parallel reads and writes to large arrays of NVM devices to implement this acceleration through what is, at least locally, analog computing. We address how the circuit requirements for this new neuromorphic computing approach are somewhat reminiscent of, yet significantly different from, the well-known requirements found in conventional memory applications. We discuss tradeoffs that can influence both the effective acceleration factor (“speed”) and power requirements of such on-chip learning accelerators. P. Narayanan A. Fumarola L. L. Sanches K. Hosokawa S. C. Lewis R. M. Shelby G. W. Burr",
"title": ""
},
{
"docid": "63eaccbbf34bc68cefa119056d488402",
"text": "Interactive Image Generation User edits Generated images User edits Generated images User edits Generated images [1] Zhu et al. Learning a Discriminative Model for the Perception of Realism in Composite Images. ICCV 2015. [2] Goodfellow et al. Generative Adversarial Nets. NIPS 2014 [3] Radford et al. Unsupervised representation learning with deep convolutional generative adversarial networks. ICLR 2016 Reference : Natural images 0, I , Unif 1, 1",
"title": ""
},
{
"docid": "e5ce1ddd50a728fab41043324938a554",
"text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.",
"title": ""
},
{
"docid": "bb240f2e536e5e5cd80fcca8c9d98171",
"text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.",
"title": ""
},
{
"docid": "b75dd43655a70eaf0aaef43826de4337",
"text": "Plagiarism detection has been considered as a classification problem which can be approximated with intrinsic strategies, considering self-based information from a given document, and external strategies, considering comparison techniques between a suspicious document and different sources. In this work, both intrinsic and external approaches for plagiarism detection are presented. First, the main contribution for intrinsic plagiarism detection is associated to the outlier detection approach for detecting changes in the author’s style. Then, the main contribution for the proposed external plagiarism detection is the space reduction technique to reduce the complexity of this plagiarism detection task. Results shows that our approach is highly competitive with respect to the leading research teams in plagiarism detection.",
"title": ""
},
{
"docid": "cc8b0cd938bc6315864925a7a057e211",
"text": "Despite the continuous growth in the number of smartphones around the globe, Short Message Service (SMS) still remains as one of the most popular, cheap and accessible ways of exchanging text messages using mobile phones. Nevertheless, the lack of security in SMS prevents its wide usage in sensitive contexts such as banking and health-related applications. Aiming to tackle this issue, this paper presents SMSCrypto, a framework for securing SMS-based communications in mobile phones. SMSCrypto encloses a tailored selection of lightweight cryptographic algorithms and protocols, providing encryption, authentication and signature services. The proposed framework is implemented both in Java (target at JVM-enabled platforms) and in C (for constrained SIM Card processors) languages, thus being suitable",
"title": ""
},
{
"docid": "0ca588e42d16733bc8eef4e7957e01ab",
"text": "Three-dimensional (3D) finite element (FE) models are commonly used to analyze the mechanical behavior of the bone under different conditions (i.e., before and after arthroplasty). They can provide detailed information but they are numerically expensive and this limits their use in cases where large or numerous simulations are required. On the other hand, 2D models show less computational cost, but the precision of results depends on the approach used for the simplification. Two main questions arise: Are the 3D results adequately represented by a 2D section of the model? Which approach should be used to build a 2D model that provides reliable results compared to the 3D model? In this paper, we first evaluate if the stem symmetry plane used for generating the 2D models of bone-implant systems adequately represents the results of the full 3D model for stair climbing activity. Then, we explore three different approaches that have been used in the past for creating 2D models: (1) without side-plate (WOSP), (2) with variable thickness side-plate and constant cortical thickness (SPCT), and (3) with variable thickness side-plate and variable cortical thickness (SPVT). From the different approaches investigated, a 2D model including a side-plate best represents the results obtained with the full 3D model with much less computational cost. The side-plate needs to have variable thickness, while the cortical bone thickness can be kept constant.",
"title": ""
},
{
"docid": "f94385118e9fca123bae28093b288723",
"text": "One of the major restrictions on the performance of videobased person re-id is partial noise caused by occlusion, blur and illumination. Since different spatial regions of a single frame have various quality, and the quality of the same region also varies across frames in a tracklet, a good way to address the problem is to effectively aggregate complementary information from all frames in a sequence, using better regions from other frames to compensate the influence of an image region with poor quality. To achieve this, we propose a novel Region-based Quality Estimation Network (RQEN), in which an ingenious training mechanism enables the effective learning to extract the complementary region-based information between different frames. Compared with other feature extraction methods, we achieved comparable results of 92.4%, 76.1% and 77.83% on the PRID 2011, iLIDS-VID and MARS, respectively. In addition, to alleviate the lack of clean large-scale person re-id datasets for the community, this paper also contributes a new high-quality dataset, named “Labeled Pedestrian in the Wild (LPW)” which contains 7,694 tracklets with over 590,000 images. Despite its relatively large scale, the annotations also possess high cleanliness. Moreover, it’s more challenging in the following aspects: the age of characters varies from childhood to elderhood; the postures of people are diverse, including running and cycling in addition to the normal walking state.",
"title": ""
}
] |
scidocsrr
|
b6353659632254427774f5450abf6624
|
A competition on generalized software-based face presentation attack detection in mobile scenarios
|
[
{
"docid": "db5865f8f8701e949a9bb2f41eb97244",
"text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.",
"title": ""
},
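The descriptor construction in the preceding passage lends itself to a short sketch: filters are learned as ICA basis vectors from flattened image patches, each pixel's local patch is projected onto the filters and thresholded into a binary code, and a region is summarized by a histogram of the codes. The sketch below assumes scikit-learn's FastICA and uses random data as a stand-in for natural-image patches; the filter size, filter count, and function names are illustrative, not taken from the paper.

```python
# Sketch of an ICA-learned binary code descriptor. Random data stands in for
# natural-image training patches; this is not the authors' exact pipeline.
import numpy as np
from sklearn.decomposition import FastICA

patch_size, n_filters = 7, 8          # 8 filters -> 8-bit codes (256-bin histogram)

def learn_filters(patches, n_filters):
    """Learn linear filters as ICA basis vectors from flattened patches."""
    ica = FastICA(n_components=n_filters, random_state=0)
    ica.fit(patches)                  # rows of components_ act as the filters
    return ica.components_

def describe(image, filters, patch_size):
    """Binary code per pixel (valid region), then a normalized code histogram."""
    h, w = image.shape
    r = patch_size // 2
    codes = np.zeros((h - 2 * r, w - 2 * r), dtype=np.int32)
    powers = 1 << np.arange(filters.shape[0])
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = image[i - r:i + r + 1, j - r:j + r + 1].ravel()
            bits = (filters @ patch) > 0          # threshold each projection
            codes[i - r, j - r] = int(np.dot(bits, powers))
    n_codes = 2 ** filters.shape[0]
    hist, _ = np.histogram(codes, bins=n_codes, range=(0, n_codes))
    return hist / hist.sum()

rng = np.random.default_rng(0)
train_patches = rng.standard_normal((5000, patch_size * patch_size))  # stand-in data
filters = learn_filters(train_patches, n_filters)
region = rng.random((48, 48))
print(describe(region, filters, patch_size)[:10])
```

With eight filters each pixel receives an 8-bit code, so the region descriptor is a 256-bin histogram, mirroring the LBP-style histogram representation the passage compares against.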
{
"docid": "2c9138a706f316a10104f2da9a054e44",
"text": "Research on face spoofing detection has mainly been focused on analyzing the luminance of the face images, hence discarding the chrominance information which can be useful for discriminating fake faces from genuine ones. In this work, we propose a new face anti-spoofing method based on color texture analysis. We analyze the joint color-texture information from the luminance and the chrominance channels using a color local binary pattern descriptor. More specifically, the feature histograms are extracted from each image band separately. Extensive experiments on two benchmark datasets, namely CASIA face anti-spoofing and Replay-Attack databases, showed excellent results compared to the state-of-the-art. Most importantly, our inter-database evaluation depicts that the proposed approach showed very promising generalization capabilities.",
"title": ""
},
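A minimal sketch of the color-texture idea described above: convert the face crop to a luminance-chrominance space, compute a uniform LBP histogram on each channel separately, and concatenate the three histograms into one descriptor. It assumes scikit-image for the color conversion and LBP; the neighbourhood size, color space, and the classifier that would sit on top are illustrative choices rather than the exact setup of the paper.

```python
# Sketch of joint color-texture features for face anti-spoofing: per-channel
# uniform LBP histograms in YCbCr, concatenated. A random image stands in for
# a detected face crop; the classifier on top (e.g., an SVM) is omitted.
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.feature import local_binary_pattern

P, R = 8, 1                      # 8 neighbours, radius 1
N_BINS = P + 2                   # uniform LBP yields P + 2 distinct labels

def color_lbp_histogram(rgb_face):
    ycbcr = rgb2ycbcr(rgb_face).astype(np.uint8)   # luminance + chrominance
    feats = []
    for c in range(3):
        lbp = local_binary_pattern(ycbcr[..., c], P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=N_BINS, range=(0, N_BINS), density=True)
        feats.append(hist)
    return np.concatenate(feats)                   # 3 * (P + 2) dimensional descriptor

face = np.random.default_rng(0).random((64, 64, 3))  # stand-in for a face crop
print(color_lbp_histogram(face).shape)                # (30,)
```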
{
"docid": "2967df08ad0b9987ce2d6cb6006d3e69",
"text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.",
"title": ""
}
] |
[
{
"docid": "b7a4eec912eb32b3b50f1b19822c44a1",
"text": "Mining numerical data is a relatively difficult problem in data mining. Clustering is one of the techniques. We consider a database with numerical attributes, in which each transaction is viewed as a multi-dimensional vector. By studying the clusters formed by these vectors, we can discover certain behaviors hidden in the data. Traditional clustering algorithms find clusters in the full space of the data sets. This results in high dimensional clusters, which are poorly comprehensible to human. One important task in this setting is the ability to discover clusters embedded in the subspaces of a high-dimensional data set. This problem is known as subspace clustering. We follow the basic assumptions of previous work CLIQUE. It is found that the number of subspaces with clustering is very large, and a criterion called the coverage is proposed in CLIQUE for the pruning. In addition to coverage, we identify new useful criteria for this problem and propose an entropybased algorithm called ENCLUS to handle the criteria. Our major contributions are: (1) identify new meaningful criteria of high density and correlation of dimensions for goodness of clustering in subspaces, (2) introduce the use of entropy and provide evidence to support its use, (3) make use of two closure properties based on entropy to prune away uninteresting subspaces efficiently, (4) propose a mechanism to mine non-minimally correlated subspaces which are of interest because of strong clustering, (5) experiments are carried out to show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "b02cfc336a6e1636dbcba46d4ee762e8",
"text": "Peter C. Verhoef a,∗, Katherine N. Lemon b, A. Parasuraman c, Anne Roggeveen d, Michael Tsiros c, Leonard A. Schlesinger d a University of Groningen, Faculty of Economics and Business, P.O. Box 800, NL-9700 AV Groningen, The Netherlands b Boston College, Carroll School of Management, Fulton Hall 510, 140 Commonwealth Avenue, Chestnut Hill, MA 02467 United States c University of Miami, School of Business Administration, P.O. Box 24814, Coral Gables, FL 33124, United States d Babson College, 231 Forest Street, Wellesley, Massachusetts, United States",
"title": ""
},
{
"docid": "34ceb0e84b4e000b721f87bcbec21094",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.",
"title": ""
},
{
"docid": "3bbc633650b9010ef5c76ea1d634a495",
"text": "It is well known that significant metabolic change take place as cells are transformed from normal to malignant. This review focuses on the use of different bioinformatics tools in cancer metabolomics studies. The article begins by describing different metabolomics technologies and data generation techniques. Overview of the data pre-processing techniques is provided and multivariate data analysis techniques are discussed and illustrated with case studies, including principal component analysis, clustering techniques, self-organizing maps, partial least squares, and discriminant function analysis. Also included is a discussion of available software packages.",
"title": ""
},
{
"docid": "2c0b3b58da77cc217e4311142c0aa196",
"text": "In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures.",
"title": ""
},
{
"docid": "0190bdc5eafae72620f7fabbcdcc223c",
"text": "Breast cancer is regarded as one of the most frequent mortality causes among women. As early detection of breast cancer increases the survival chance, creation of a system to diagnose suspicious masses in mammograms is important. In this paper, two automated methods are presented to diagnose mass types of benign and malignant in mammograms. In the first proposed method, segmentation is done using an automated region growing whose threshold is obtained by a trained artificial neural network (ANN). In the second proposed method, segmentation is performed by a cellular neural network (CNN) whose parameters are determined by a genetic algorithm (GA). Intensity, textural, and shape features are extracted from segmented tumors. GA is used to select appropriate features from the set of extracted features. In the next stage, ANNs are used to classify the mammograms as benign or malignant. To evaluate the performance of the proposed methods different classifiers (such as random forest, naïve Bayes, SVM, and KNN) are used. Results of the proposed techniques performed on MIAS and DDSM databases are promising. The obtained sensitivity, specificity, and accuracy rates are 96.87%, 95.94%, and 96.47%, respectively. 2014 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "58925e0088e240f42836f0c5d29f88d3",
"text": "SUMMARY\nDnaSP is a software package for the analysis of DNA polymorphism data. Present version introduces several new modules and features which, among other options allow: (1) handling big data sets (approximately 5 Mb per sequence); (2) conducting a large number of coalescent-based tests by Monte Carlo computer simulations; (3) extensive analyses of the genetic differentiation and gene flow among populations; (4) analysing the evolutionary pattern of preferred and unpreferred codons; (5) generating graphical outputs for an easy visualization of results.\n\n\nAVAILABILITY\nThe software package, including complete documentation and examples, is freely available to academic users from: http://www.ub.es/dnasp",
"title": ""
},
{
"docid": "0e2989631390dc57d0bce81fb7b633c9",
"text": "Among the most powerful tools for knowledge representation, we cite the ontology which allows knowledge structuring and sharing. In order to achieve efficient domain knowledge bases content, the latter has to establish well linked and knowledge between its components. In parallel, data mining techniques are used to discover hidden structures within large databases. In particular, association rules are used to discover co-occurrence relationships from past experiences. In this context, we propose, to develop a method to enrich existing ontologies with the identification of novel semantic relations between concepts in order to have a better coverage of the domain knowledge. The enrichment process is realized through discovered association rules. Nevertheless, this technique generates a large number of rules, where some of them, may be evident or already declared in the knowledge base. To this end, the generated association rules are categorized into three main classes: known knowledge, novel knowledge and unexpected rules. We demonstrate the applicability of this method using an existing mammographic ontology and patient’s records.",
"title": ""
},
{
"docid": "a13a50d552572d08b4d1496ca87ac160",
"text": "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority oversampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.",
"title": ""
},
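A compact sketch of the borderline-SMOTE1 variant summarized above: minority samples whose neighbourhood is dominated, but not completely occupied, by the majority class are treated as borderline, and only those are used to synthesize new points by interpolation towards minority-class neighbours. The code assumes NumPy and scikit-learn's NearestNeighbors; the parameter names and toy data are illustrative, not taken from the paper.

```python
# Minimal borderline-SMOTE1 sketch: over-sample only the "danger" (borderline)
# minority examples by interpolating towards minority-class neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def borderline_smote1(X, y, minority_label, k=5, n_new=100, seed=0):
    rng = np.random.default_rng(seed)
    X_min = X[y == minority_label]

    # 1. Danger set: minority points with k/2 <= (majority neighbours) < k.
    knn_all = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = knn_all.kneighbors(X_min)
    maj_counts = (y[idx[:, 1:]] != minority_label).sum(axis=1)  # column 0 is the point itself
    danger = X_min[(maj_counts >= k / 2) & (maj_counts < k)]
    if len(danger) == 0:
        return X, y

    # 2. Interpolate between danger samples and their minority-class neighbours.
    knn_min = NearestNeighbors(n_neighbors=min(k + 1, len(X_min))).fit(X_min)
    _, min_idx = knn_min.kneighbors(danger)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(danger))
        neighbour = X_min[min_idx[i, rng.integers(1, min_idx.shape[1])]]  # skip self
        gap = rng.random()
        synthetic.append(danger[i] + gap * (neighbour - danger[i]))

    X_new = np.vstack([X, np.asarray(synthetic)])
    y_new = np.concatenate([y, np.full(n_new, minority_label)])
    return X_new, y_new

# Toy usage on a skewed two-class problem.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(1.5, 1, (20, 2))])
y = np.array([0] * 200 + [1] * 20)
X_res, y_res = borderline_smote1(X, y, minority_label=1)
print(X_res.shape, np.bincount(y_res))
```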
{
"docid": "144c11393bef345c67595661b5b20772",
"text": "BACKGROUND\nAppropriate placement of the bispectral index (BIS)-vista montage for frontal approach neurosurgical procedures is a neuromonitoring challenge. The standard bifrontal application interferes with the operative field; yet to date, no other placements have demonstrated good agreement. The purpose of our study was to compare the standard BIS montage with an alternate BIS montage across the nasal dorsum for neuromonitoring.\n\n\nMATERIALS AND METHODS\nThe authors performed a prospective study, enrolling patients and performing neuromonitoring using both the standard and the alternative montage on each patient. Data from the 2 placements were compared and analyzed using a Bland-Altman analysis, a Scatter plot analysis, and a matched-pair analysis.\n\n\nRESULTS\nOverall, 2567 minutes of data from each montage was collected on 28 subjects. Comparing the overall difference in score, the alternate BIS montage score was, on average, 2.0 (6.2) greater than the standard BIS montage score (P<0.0001). The Bland-Altman analysis revealed a difference in score of -2.0 (95% confidence interval, -14.1, 10.1), with 108/2567 (4.2%) of the values lying outside of the limit of agreement. The scatter plot analysis overall produced a trend line with the equation y=0.94x+0.82, with an R coefficient of 0.82.\n\n\nCONCLUSIONS\nWe determined that the nasal montage produces values that have slightly more variability compared with that ideally desired, but the variability is not clinically significant. In cases where the standard BIS-vista montage would interfere with the operative field, an alternative positioning of the BIS montage across the nasal bridge and under the eye can be used.",
"title": ""
},
{
"docid": "7a9419f17bcdfd2f6e361bd97d487d9f",
"text": "2. Relations 4. Dataset and Evaluation Cause-Effect Smoking causes cancer. Instrument-Agency The murderer used an axe. Product-Producer Bees make honey. Content-Container The cat is in the hat. Entity-Origin Vinegar is made from wine. Entity-Destination The car arrived at the station. Component-Whole The laptop has a fast processor. Member-Collection There are ten cows in the herd. Communication-Topic You interrupted a lecture on maths. Each example consists of two (base) NPs marked with tags <e1> and <e2>:",
"title": ""
},
{
"docid": "80c522a65fafb98886d1d3d848605e77",
"text": "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: //github.com/ramprs/grad-cam/ along with a demo on CloudCV [2] and video at youtu.be/COjUB9Izk6E.",
"title": ""
},
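The gradient-weighted localization described above can be sketched in a few lines of PyTorch: hooks capture the activations and gradients of the last convolutional block, the gradients are global-average-pooled into per-channel weights, and a ReLU of the weighted activation sum gives the coarse heatmap. The choice of torchvision's resnet18 and of layer4 as the target layer is illustrative, and untrained weights are used so the sketch runs offline; it is not the authors' reference implementation.

```python
# Minimal Grad-CAM sketch for a torchvision ResNet (illustrative layer/model choice).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()       # untrained weights keep the sketch offline
store = {}

def fwd_hook(module, inputs, output):
    store["act"] = output.detach()          # activations of the target layer

def bwd_hook(module, grad_input, grad_output):
    store["grad"] = grad_output[0].detach() # gradients w.r.t. those activations

target_layer = model.layer4                 # last convolutional block
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    weights = store["grad"].mean(dim=(2, 3), keepdim=True)      # GAP over H, W
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize to [0, 1]
    return cam[0, 0], class_idx

heatmap, cls = grad_cam(torch.randn(1, 3, 224, 224))
print(heatmap.shape, cls)
```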
{
"docid": "e29c6d0c4d5b82d7e968ab48d076a7ba",
"text": "In recent years, a large number of researchers are endeavoring to develop wireless sensing and related applications as Wi-Fi devices become ubiquitous. As a significant research branch, gesture recognition has become one of the research hotspots. In this paper, we propose WiCatch, a novel device free gesture recognition system which utilizes the channel state information to recognize the motion of hands. First of all, with the aim of catching the weak signals reflected from hands, a novel data fusion-based interference elimination algorithm is proposed to diminish the interference caused by signals reflected from stationary objects and the direct signal from transmitter to receiver. Second, the system catches the signals reflected from moving hands and rebuilds the motion locus of the gesture by constructing the virtual antenna array based on signal samples in time domain. Finally, we adopt support vector machines to complete the classification. The extensive experimental results demonstrate that the WiCatch can achieves a recognition accuracy over 0.96. Furthermore, the WiCatch can be applied to two-hand gesture recognition and reach a recognition accuracy of 0.95.",
"title": ""
},
{
"docid": "1ff73fcdeba269bc2bf9f45279cb3e45",
"text": "The Internet of Things has attracted a plenty of research in this decade and imposed fascinating services where large numbers of heterogeneous-features entities socially collaborate together to solve complex scenarios. However, these entities need to trust each other prior to exchanging data or offering services. In this paper, we briefly present our ongoing project called Trust Service Platform, which offers trust assessment of any two entities in the Social Internet of Things to applications and services. We propose a trust model that incorporates both reputation properties as Recommendation and Reputation trust metrics; and knowledge-based property as Knowledge trust metric. For the trust service platform deployment, we propose a reputation system and a functional architecture with Trust Agent, Trust Broker and Trust Analysis and Management modules along with mechanisms and algorithms to deal with the three trust metrics. We also present a utility theory-based mechanism for trust calculation. To clarify our trust service platform, we describe the trust models and mechanisms in accordance with a trust car-sharing service. We believe this study offers the better understanding of the trust as a service in the platform and will impose many trustrelated research challenges as the future work. Keywords—Social Internet of Things; Trust as a Service; TaaS; Trust Model; Trust Metric; Trust Management; Recommendation; Reputation; Knowledge; Fuzzy; Utility Theory;",
"title": ""
},
{
"docid": "fcf0ac3b52a1db116463e7376dae4950",
"text": "Although the ability to perform complex cognitive operations is assumed to be impaired following acute marijuana smoking, complex cognitive performance after acute marijuana use has not been adequately assessed under experimental conditions. In the present study, we used a within-participant double-blind design to evaluate the effects acute marijuana smoking on complex cognitive performance in experienced marijuana smokers. Eighteen healthy research volunteers (8 females, 10 males), averaging 24 marijuana cigarettes per week, completed this three-session outpatient study; sessions were separated by at least 72-hrs. During sessions, participants completed baseline computerized cognitive tasks, smoked a single marijuana cigarette (0%, 1.8%, or 3.9% Δ9-THC w/w), and completed additional cognitive tasks. Blood pressure, heart rate, and subjective effects were also assessed throughout sessions. Marijuana cigarettes were administered in a double-blind fashion and the sequence of Δ9-THC concentration order was balanced across participants. Although marijuana significantly increased the number of premature responses and the time participants required to complete several tasks, it had no effect on accuracy on measures of cognitive flexibility, mental calculation, and reasoning. Additionally, heart rate and several subjective-effect ratings (e.g., “Good Drug Effect,” “High,” “Mellow”) were significantly increased in a Δ9-THC concentration-dependent manner. These data demonstrate that acute marijuana smoking produced minimal effects on complex cognitive task performance in experienced marijuana users.",
"title": ""
},
{
"docid": "7bdbfd11a4aa723d3b5361f689d93698",
"text": "We discuss the characteristics of constructive news comments, and present methods to identify them. First, we define the notion of constructiveness. Second, we annotate a corpus for constructiveness. Third, we explore whether available argumentation corpora can be useful to identify constructiveness in news comments. Our model trained on argumentation corpora achieves a top accuracy of 72.59% (baseline=49.44%) on our crowdannotated test data. Finally, we examine the relation between constructiveness and toxicity. In our crowd-annotated data, 21.42% of the non-constructive comments and 17.89% of the constructive comments are toxic, suggesting that non-constructive comments are not much more toxic than constructive comments.",
"title": ""
},
{
"docid": "4a817638751fdfe46dfccc43eea76cbd",
"text": "In this article we present a classification scheme for quantum computing technologies that is based on the characteristics most relevant to computer systems architecture. The engineering trade-offs of execution speed, decoherence of the quantum states, and size of systems are described. Concurrency, storage capacity, and interconnection network topology influence algorithmic efficiency, while quantum error correction and necessary quantum state measurement are the ultimate drivers of logical clock speed. We discuss several proposed technologies. Finally, we use our taxonomy to explore architectural implications for common arithmetic circuits, examine the implementation of quantum error correction, and discuss cluster-state quantum computation.",
"title": ""
},
{
"docid": "f141bd66dc2a842c21f905e3e01fa93c",
"text": "In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature",
"title": ""
},
{
"docid": "4fbc692a4291a92c6fa77dc78913e587",
"text": "Achieving artificial visual reasoning — the ability to answer image-related questions which require a multi-step, high-level process — is an important step towards artificial general intelligence. This multi-modal task requires learning a questiondependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-ofthe-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.",
"title": ""
}
] |
scidocsrr
|
69ad47bebd6b43816e3e7acba16c3c1b
|
Smartphone addiction and its relationship with social anxiety and loneliness
|
[
{
"docid": "21bb289fb932b23d95fee7d40401d70c",
"text": "Mobile phone use is banned or regulated in some circumstances. Despite recognized safety concerns and legal regulations, some people do not refrain from using mobile phones. Such problematic mobile phone use can be considered to be an addiction-like behavior. To find the potential predictors, we examined the correlation between problematic mobile phone use and personality traits reported in addiction literature, which indicated that problematic mobile phone use was a function of gender, self-monitoring, and approval motivation but not of loneliness. These findings suggest that the measurements of these addictive personality traits would be helpful in the screening and intervention of potential problematic users of mobile phones.",
"title": ""
},
{
"docid": "d72db190e011d0e8260465ce259111df",
"text": "This study developed a Smartphone Addiction Proneness Scale (SAPS) based on the existing internet and cellular phone addiction scales. For the development of this scale, 29 items (1.5 times the final number of items) were initially selected as preliminary items, based on the previous studies on internet/phone addiction as well as the clinical experience of involved experts. The preliminary scale was administered to a nationally representative sample of 795 students in elementary, middle, and high schools across South Korea. Then, final 15 items were selected according to the reliability test results. The final scale consisted of four subdomains: (1) disturbance of adaptive functions, (2) virtual life orientation, (3) withdrawal, and (4) tolerance. The final scale indicated a high reliability with Cronbach's α of .880. Support for the scale's criterion validity has been demonstrated by its relationship to the internet addiction scale, KS-II (r = .49). For the analysis of construct validity, we tested the Structural Equation Model. The results showed the four-factor structure to be valid (NFI = .943, TLI = .902, CFI = .902, RMSEA = .034). Smartphone addiction is gaining a greater spotlight as possibly a new form of addiction along with internet addiction. The SAPS appears to be a reliable and valid diagnostic scale for screening adolescents who may be at risk of smartphone addiction. Further implications and limitations are discussed.",
"title": ""
},
{
"docid": "1ebb827b9baf3307bc20de78538d23e7",
"text": "0747-5632/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2013.07.003 ⇑ Corresponding author. Address: University of North Texas, College of Business, 1155 Union Circle #311160, Denton, TX 76203-5017, USA. E-mail addresses: mohammad.salehan@unt.edu (M. Salehan), arash.negah ban@unt.edu (A. Negahban). 1 These authors contributed equally to the work. Mohammad Salehan 1,⇑, Arash Negahban 1",
"title": ""
}
] |
[
{
"docid": "db79c4fc00f18c3d7822c9f79d1a4a83",
"text": "We propose a new pipeline for optical flow computation, based on Deep Learning techniques. We suggest using a Siamese CNN to independently, and in parallel, compute the descriptors of both images. The learned descriptors are then compared efficiently using the L2 norm and do not require network processing of patch pairs. The success of the method is based on an innovative loss function that computes higher moments of the loss distributions for each training batch. Combined with an Approximate Nearest Neighbor patch matching method and a flow interpolation technique, state of the art performance is obtained on the most challenging and competitive optical flow benchmarks.",
"title": ""
},
{
"docid": "e8fcd0e7e27a4f17f963bbdbd94e6406",
"text": "Visual Interpretation of gestures can be useful in accomplishing natural Human Computer Interactions (HCI). In this paper we proposed a method for recognizing hand gestures. We have designed a system which can identify specific hand gestures and use them to convey information. At any time, a user can exhibit his/her hand doing a specific gesture in front of a web camera linked to a computer. Firstly, we captured the hand gesture of a user and stored it on disk. Then we read those videos captured one by one, converted them to binary images and created 3D Euclidian Space of binary values. We have used supervised feed-forward neural net based training and back propagation algorithm for classifying hand gestures into ten categories: hand pointing up, pointing down, pointing left, pointing right and pointing front and number of fingers user was showing. We could achieve up to 89% correct results on a typical test set.",
"title": ""
},
{
"docid": "98926294ff7f9e13f8187e8f261639e9",
"text": "The resistive cross-point array architecture has been proposed for on-chip implementation of weighted sum and weight update operations in neuro-inspired learning algorithms. However, several limiting factors potentially hamper the learning accuracy, including the nonlinearity and device variations in weight update, and the read noise, limited ON/OFF weight ratio and array parasitics in weighted sum. With unsupervised sparse coding as a case study algorithm, this paper employs device-algorithm co-design methodologies to quantify and mitigate the impact of these non-ideal properties on the accuracy. Our analysis shows that the realistic properties in weight update are tolerable, while those in weighted sum are detrimental to the accuracy. With calibration of realistic synaptic behaviors from experimental data, our study shows that the recognition accuracy of MNIST handwriting digits degrades from ∼96 to ∼30 percent. The strategies to mitigate this accuracy loss include 1) redundant cells to alleviate the impact of device variations; 2) a dummy column to eliminate the off-state current; and 3) selector and larger wire width to reduce IR drop along interconnects. The selector also reduces the leakage power in weight update. With improved properties by these strategies, the accuracy increases back to ∼95 percent, enabling reliable integration of realistic synaptic devices in neuromorphic systems.",
"title": ""
},
{
"docid": "1f02f9dae964a7e326724faa79f5ddc3",
"text": "The purpose of this review was to examine published research on small-group development done in the last ten years that would constitute an empirical test of Tuckman’s (1965) hypothesis that groups go through these stages of “forming,” “storming,” “norming,” and “performing.” Of the twenty-two studies reviewed, only one set out to directly test this hypothesis, although many of the others could be related to it. Following a review of these studies, a fifth stage, “adjourning.” was added to the hypothesis, and more empirical work was recommended.",
"title": ""
},
{
"docid": "bcd8757af7d00d198a1799a3bc145c2c",
"text": "Trust is a critical social process that helps us to cooperate with others and is present to some degree in all human interaction. However, the underlying brain mechanisms of conditional and unconditional trust in social reciprocal exchange are still obscure. Here, we used hyperfunctional magnetic resonance imaging, in which two strangers interacted online with one another in a sequential reciprocal trust game while their brains were simultaneously scanned. By designing a nonanonymous, alternating multiround game, trust became bidirectional, and we were able to quantify partnership building and maintenance. Using within- and between-brain analyses, an examination of functional brain activity supports the hypothesis that the preferential activation of different neuronal systems implements these two trust strategies. We show that the paracingulate cortex is critically involved in building a trust relationship by inferring another person's intentions to predict subsequent behavior. This more recently evolved brain region can be differently engaged to interact with more primitive neural systems in maintaining conditional and unconditional trust in a partnership. Conditional trust selectively activated the ventral tegmental area, a region linked to the evaluation of expected and realized reward, whereas unconditional trust selectively activated the septal area, a region linked to social attachment behavior. The interplay of these neural systems supports reciprocal exchange that operates beyond the immediate spheres of kinship, one of the distinguishing features of the human species.",
"title": ""
},
{
"docid": "477e4a6930d147a598e1e0c453062ed2",
"text": "Stock markets are driven by a multitude of dynamics in which facts and beliefs play a major role in affecting the price of a company’s stock. In today’s information age, news can spread around the globe in some cases faster than they happen. While it can be beneficial for many applications including disaster prevention, our aim in this thesis is to use the timely release of information to model the stock market. We extract facts and beliefs from the population using one of the fastest growing social networking tools on the Internet, namely Twitter. We examine the use of Natural Language Processing techniques with a predictive machine learning approach to analyze millions of Twitter posts from which we draw distinctive features to create a model that enables the prediction of stock prices. We selected several stocks from the NASDAQ stock exchange and collected Intra-Day stock quotes during a period of two weeks. We build different feature representations from the raw Twitter posts and combined them with the stock price in order to build a regression model using the Support Vector Regression algorithm. We were able to build models of the stocks which predicted discrete prices that were close to a strong baseline. We further investigated the prediction of future prices, on average predicting 15 minutes ahead of the actual price, and evaluated the results using a Virtual Stock Trading Engine. These results were in general promising, but contained also some random variations across the different datasets.",
"title": ""
},
{
"docid": "172f105b7b09f19b278742af95a8d9bb",
"text": "50 AI MAGAZINE The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern, 2012) was proposed by Hector Levesque in 2011 as an alternative to the Turing test. Turing (1950) had first introduced the notion of testing a computer system’s intelligence by assessing whether it could fool a human judge into thinking that it was conversing with a human rather a computer. Although intuitively appealing and arbitrarily flexible — in theory, a human can ask the computer system that is being tested wide-ranging questions about any subject desired — in practice, the execution of the Turing test turns out to be highly susceptible to systems that few people would wish to call intelligent. The Loebner Prize Competition (Christian 2011) is in particular associated with the development of chatterbots that are best viewed as successors to ELIZA (Weizenbaum 1966), the program that fooled people into thinking that they were talking to a human psychotherapist by cleverly turning a person’s statements into questions of the sort a therapist would ask. The knowledge and inference that characterize conversations of substance — for example, discussing alternate metaphors in sonnets of Shakespeare — and which Turing presented as examples of the sorts of conversation that an intelligent system should be able to produce, are absent in these chatterbots. The focus is merely on engaging in surfacelevel conversation that can fool some humans who do not delve too deeply into a conversation, for at least a few minutes, into thinking that they are speaking to another person. The widely reported triumph of the chatterbot Eugene Goostman in fooling 10 out of 30 judges to judge, after a fiveminute conversation, that it was human (University of Read-",
"title": ""
},
{
"docid": "a0c240efadc361ea36b441d34fc10a26",
"text": "We describe a single-feed stacked patch antenna design that is capable of simultaneously receiving both right hand circularly polarized (RHCP) satellite signals within the GPS LI frequency band and left hand circularly polarized (LHCP) satellite signals within the SDARS frequency band. In addition, the design provides improved SDARS vertical linear polarization (VLP) gain for terrestrial repeater signal reception at low elevation angles as compared to a current state of the art SDARS patch antenna.",
"title": ""
},
{
"docid": "0a414cd886ebf2a311d27b17c53e535f",
"text": "We consider the problem of classifying documents not by topic, but by overall sentiment. Previous approaches to sentiment classification have favored domain-specific, supervised machine learning (Naive Bayes, maximum entropy classification, and support vector machines). Inherent in these methodologies is the need for annotated training data. Building on previous work, we examine an unsupervised system of iteratively extracting positive and negative sentiment items which can be used to classify documents. Our method is completely unsupervised and only requires linguistic insight into the semantic orientation of sentiment.",
"title": ""
},
{
"docid": "305f0c417d1e6f6189c431078b359793",
"text": "Sentence relation extraction aims to extract relational facts from sentences, which is an important task in natural language processing field. Previous models rely on the manually labeled supervised dataset. However, the human annotation is costly and limits to the number of relation and data size, which is difficult to scale to large domains. In order to conduct largely scaled relation extraction, we utilize an existing knowledge base to heuristically align with texts, which not rely on human annotation and easy to scale. However, using distant supervised data for relation extraction is facing a new challenge: sentences in the distant supervised dataset are not directly labeled and not all sentences that mentioned an entity pair can represent the relation between them. To solve this problem, we propose a novel model with reinforcement learning. The relation of the entity pair is used as distant supervision and guide the training of relation extractor with the help of reinforcement learning method. We conduct two types of experiments on a publicly released dataset. Experiment results demonstrate the effectiveness of the proposed method compared with baseline models, which achieves 13.36% improvement.",
"title": ""
},
{
"docid": "2b30506690acbae9240ef867e961bc6c",
"text": "Background Breast milk can turn pink with Serratia marcescens colonization, this bacterium has been associated with several diseases and even death. It is seen most commonly in the intensive care settings. Discoloration of the breast milk can lead to premature termination of nursing. We describe two cases of pink-colored breast milk in which S. marsescens was isolated from both the expressed breast milk. Antimicrobial treatment was administered to the mothers. Return to breastfeeding was successful in both the cases. Conclusions Pink breast milk is caused by S. marsescens colonization. In such cases,early recognition and treatment before the development of infection is recommended to return to breastfeeding.",
"title": ""
},
{
"docid": "f52073ddb9c4507d11190cd13637b91d",
"text": "The application of fuzzy-based control strategies has recently gained enormous recognition as an approach for the rapid development of effective controllers for nonlinear time-variant systems. This paper describes the preliminary research and implementation of a fuzzy logic based controller to control the wheel slip for electric vehicle antilock braking systems (ABSs). As the dynamics of the braking systems are highly nonlinear and time variant, fuzzy control offers potential as an important tool for development of robust traction control. Simulation studies are employed to derive an initial rule base that is then tested on an experimental test facility representing the dynamics of a braking system. The test facility is composed of an induction machine load operating in the generating region. It is shown that the torque-slip characteristics of an induction motor provides a convenient platform for simulating a variety of tire/road driving conditions, negating the initial requirement for skid-pan trials when developing algorithms. The fuzzy membership functions were subsequently refined by analysis of the data acquired from the test facility while simulating operation at a high coefficient of friction. The robustness of the fuzzy-logic slip regulator is further tested by applying the resulting controller over a wide range of operating conditions. The results indicate that ABS/traction control may substantially improve longitudinal performance and offer significant potential for optimal control of driven wheels, especially under icy conditions where classical ABS/traction control schemes are constrained to operate very conservatively.",
"title": ""
},
{
"docid": "b43c4d5d97120963a3ea84a01d029819",
"text": "Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For SpanishEnglish translation, in particular, most parallel data available exists only in vastly different domains and registers. In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon’s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (information, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.",
"title": ""
},
{
"docid": "16946ba4be3cf8683bee676b5ac5e0de",
"text": "1. The types of perfect Interpretation-wise, several types of perfect expressions have been recognized in the literature (e. To illustrate, a present perfect can have one of at least three interpretations: (1) a. Since 2000, Alexandra has lived in LA. UNIVERSAL b. Alexandra has been in LA (before). EXPERIENTIAL c. Alexandra has (just) arrived in LA. RESULTATIVE The three types of perfect make different claims about the temporal location of the underlying eventuality, i.e., of live in LA in (1a), be in LA in (1b), arrive in LA in (1c), with respect to a reference time. The UNIVERSAL perfect, as in (1a), asserts that the underlying eventuality holds throughout an interval, delimited by the time of utterance and a certain time in the past (in this case, the year 2000). The EXPERIENTIAL perfect, as in (1b), asserts that the underlying eventuality holds at a proper subset of an interval, extending back from the utterance time. The RESULTATIVE perfect makes the same assertion as the Experiential perfect, with the added meaning that the result of the underlying eventuality (be in LA is the result of arrive in LA) holds at the utterance time. The distinction between the Experiential and the Resultative perfects is rather subtle. The two are commonly grouped together as the EXISTENTIAL perfect (McCawley 1971, Mittwoch 1988) and this terminology is adopted here as well. 1 Two related questions arise: (i) Is the distinction between the three types of perfect grammatically based? (ii) If indeed so, then is it still possible to posit a common representation for the perfect – a uniform structure with a single meaning – which, in combination with certain other syntactic components , each with a specialized meaning, results in the three different readings? This paper suggests that the answer to both questions is yes. To start addressing these questions, let us look at some of the known factors behind the various interpretations of the perfect. It has to be noted that the different perfect readings are not a peculiarity of the present perfect despite the fact that they are primarily discussed in relation to that form. The same interpretations are available to the past, future and nonfinite per",
"title": ""
},
{
"docid": "8660613f0c17aef86bffe1107257e316",
"text": "The enumeration and characterization of circulating tumor cells (CTCs) in the peripheral blood and disseminated tumor cells (DTCs) in bone marrow may provide important prognostic information and might help to monitor efficacy of therapy. Since current assays cannot distinguish between apoptotic and viable DTCs/CTCs, it is now possible to apply a novel ELISPOT assay (designated 'EPISPOT') that detects proteins secreted/released/shed from single epithelial cancer cells. Cells are cultured for a short time on a membrane coated with antibodies that capture the secreted/released/shed proteins which are subsequently detected by secondary antibodies labeled with fluorochromes. In breast cancer, we measured the release of cytokeratin-19 (CK19) and mucin-1 (MUC1) and demonstrated that many patients harbored viable DTCs, even in patients with apparently localized tumors (stage M(0): 54%). Preliminary clinical data showed that patients with DTC-releasing CK19 have an unfavorable outcome. We also studied CTCs or CK19-secreting cells in the peripheral blood of M1 breast cancer patients and showed that patients with CK19-SC had a worse clinical outcome. In prostate cancer, we used prostate-specific antigen (PSA) secretion as marker and found that a significant fraction of CTCs secreted fibroblast growth factor-2 (FGF2), a known stem cell growth factor. In conclusion, the EPISPOT assay offers a new opportunity to detect and characterize viable DTCs/CTCs in cancer patients and it can be extended to a multi-parameter analysis revealing a CTC/DTC protein fingerprint.",
"title": ""
},
{
"docid": "45fe8a9188804b222df5f12bc9a486bc",
"text": "There is renewed interest in the application of gypsum to agricultural lands, particularly of gypsum produced during flue gas desulfurization (FGD) at coal-burning power plants. We studied the effects of land application of FGD gypsum to corn ( L.) in watersheds draining to the Great Lakes. The FGD gypsum was surface applied at 11 sites at rates of 0, 1120, 2240, and 4480 kg ha after planting to 3-m by 7.6-m field plots. Approximately 12 wk after application, penetration resistance and hydraulic conductivity were measured in situ, and samples were collected for determination of bulk density and aggregate stability. No treatment effect was detected for penetration resistance or hydraulic conductivity. A positive treatment effect was seen for bulk density at only 2 of 10 sites tested. Aggregate stability reacted similarly across all sites and was decreased with the highest application of FGD gypsum, whereas the lower rates were not different from the control. Overall, there were few beneficial effects of the FGD gypsum to soil physical properties in the year of application.",
"title": ""
},
{
"docid": "8891a6c47a7446bb7597471796900867",
"text": "The component \"thing\" of the Internet of Things does not yet exist in current business process modeling standards. The \"thing\" is the essential and central concept of the Internet of Things, and without its consideration we will not be able to model the business processes of the future, which will be able to measure or change states of objects in our real-world environment. The presented approach focuses on integrating the concept of the Internet of Things into the meta-model of the process modeling standard BPMN 2.0 as standard-conform as possible. By a terminological and conceptual delimitation, three components of the standard are examined and compared towards a possible expansion. By implementing the most appropriate solution, the new thing concept becomes usable for modelers, both as a graphical and machine-readable element.",
"title": ""
},
{
"docid": "1f130c43ca2dd1431923ef1bbe44d049",
"text": "BACKGROUND\nCeaseFire, using an infectious disease approach, addresses violence by partnering hospital resources with the community by providing violence interruption and community-based services for an area roughly composed of a single city zip code (70113). Community-based violence interrupters start in the trauma center from the moment penetrating trauma occurs, through hospital stay, and in the community after release. This study interprets statistics from this pilot program, begun May 2012. We hypothesize a decrease in penetrating trauma rates in the target area compared with others after program implementation.\n\n\nMETHODS\nThis was a 3-year prospective data collection of trauma registry from May 2010 to May 2013. All intentional, target area, penetrating trauma treated at our Level I trauma center received immediate activation of CeaseFire personnel. Incidences of violent trauma and rates of change, by zip code, were compared with the same period for 2 years before implementation.\n\n\nRESULTS\nDuring this period, the yearly incidence of penetrating trauma in Orleans Parish increased. Four of the highest rates were found in adjacent zip codes: 70112, 70113, 70119, and 70125. Average rates per 100,000 were 722.7, 523.6, 286.4, and 248, respectively. These areas represent four of the six zip codes citywide that saw year-to-year increases in violent trauma during this period. Zip 70113 saw a lower rate of rise in trauma compared with 70112 and a higher but comparable rise compared with that of 70119 and 70125.\n\n\nCONCLUSION\nHospital-based intervention programs that partner with culturally appropriate personnel and resources outside the institution walls have potential to have meaningful impact over the long term. While few conclusions of the effect of such a program can be drawn in a 12-month period, we anticipate long-term changes in the numbers of penetrating injuries in the target area and in the rest of the city as this program expands.\n\n\nLEVEL OF EVIDENCE\nTherapeutic study, level IV.",
"title": ""
},
{
"docid": "1509a06ce0b2395466fe462b1c3bd333",
"text": "This paper addresses mechanics, design, estimation and control for aerial grasping. We present the design of several light-weight, low-complexity grippers that allow quadrotors to grasp and perch on branches or beams and pick up and transport payloads. We then show how the robot can use rigid body dynamic models and sensing to verify a grasp, to estimate the the inertial parameters of the grasped object, and to adapt the controller and improve performance during flight. We present experimental results with different grippers and different payloads and show the robot's ability to estimate the mass, the location of the center of mass and the moments of inertia to improve tracking performance.",
"title": ""
},
{
"docid": "488b0adfe43fc4dbd9412d57284fc856",
"text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. ∗This work was supported by the Advanced Research Project Agency and the Office of Naval Research under Arpa Order 8888, Contract N00014-92-C-0153.",
"title": ""
}
] |
scidocsrr
|
009a4972275cc44fe7e4cc46b69d8a05
|
Employees' Information Security Awareness and Behavior: A Literature Review
|
[
{
"docid": "b99b9f80b4f0ca4a8d42132af545be76",
"text": "By: Catherine L. Anderson Decision, Operations, and Information Technologies Department Robert H. Smith School of Business University of Maryland Van Munching Hall College Park, MD 20742-1815 U.S.A. Catherine_Anderson@rhsmith.umd.edu Ritu Agarwal Center for Health Information and Decision Systems University of Maryland 4327 Van Munching Hall College Park, MD 20742-1815 U.S.A. ragarwal@rhsmith.umd.edu",
"title": ""
}
] |
[
{
"docid": "3fcce3664db5812689c121138e2af280",
"text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "de569abb181a993a6da91b7da0baf3cf",
"text": "The field of image denoising is currently dominated by discriminative deep learning methods that are trained on pairs of noisy input and clean target images. Recently it has been shown that such methods can also be trained without clean targets. Instead, independent pairs of noisy images can be used, in an approach known as NOISE2NOISE (N2N). Here, we introduce NOISE2VOID (N2V), a training scheme that takes this idea one step further. It does not require noisy image pairs, nor clean target images. Consequently, N2V allows us to train directly on the body of data to be denoised and can therefore be applied when other methods cannot. Especially interesting is the application to biomedical image data, where the acquisition of training targets, clean or noisy, is frequently not possible. We compare the performance of N2V to approaches that have either clean target images and/or noisy image pairs available. Intuitively, N2V cannot be expected to outperform methods that have more information available during training. Still, we observe that the denoising performance of NOISE2VOID drops in moderation and compares favorably to training-free denoising methods.",
"title": ""
},
{
"docid": "bd8470bab582c3742f5382831431ddb0",
"text": "Roaming users who use untrusted machines to access password protected accounts have few good options. An internet café machine can easily be running a keylogger. The roaming user has no reliable way of determining whether it is safe, and has no alternative to typing the password. We describe a simple trick the user can employ that is entirely effective in concealing the password. We verify its efficacy against the most popular keylogging programs.",
"title": ""
},
{
"docid": "d286afa5ef0e67904d78883080fe073a",
"text": "As the cellular networks continue to progress between generations, the expectations of 5G systems are planned toward high-capacity communication links that can provide users access to numerous types of applications (e.g., augmented reality and holographic multimedia streaming). The demand for higher bandwidth has led the research community to investigate unexplored frequency spectrums, such as the terahertz band for 5G. However, this particular spectrum is strived with numerous challenges, which includes the need for line-of-sight (LoS) links as reflections will deflect the waves as well as molecular absorption that can affect the signal strength. This is further amplified when a high quality of service has to be maintained over infrastructure that supports mobility, as users (or groups of users) migrate between locations, requiring frequent handover for roaming. In this paper, the concept of mirror-assisted wireless coverage is introduced, where smart antennas are utilized with dielectric mirrors that act as reflectors for the terahertz waves. The objective is to utilize information such as the user's location and to direct the reflective beam toward the highest concentration of users. A multiray model is presented in order to develop the propagation models for both indoor and outdoor scenarios in order to validate the proposed use of the reflectors. An office and a pedestrian-walking scenarios are used for indoor and outdoor scenarios, respectively. The results from the simulation work show an improvement with the usage of mirror-assisted wireless coverage, improving the overall capacity, the received power, the path loss, and the probability of LoS.",
"title": ""
},
{
"docid": "df04a11d82e8ccf8ea5af180f77bc5f3",
"text": "More and more cities are looking for service providers able to deliver 3D city models in a short time. Airborne laser scanning techniques make it possible to acquire a three-dimensional point cloud leading almost instantaneously to digital surface models (DSM), but these models are far from a topological 3D model needed by geographers or land surveyors. The aim of this paper is to present the pertinence and advantages of combining simultaneously the point cloud and the normalized DSM (nDSM) in the main steps of a building reconstruction approach. This approach has been implemented in order to exempt any additional data and to automate the process. The proposed workflow firstly extracts the off-terrain mask based on DSM. Then, it combines the point cloud and the DSM for extracting a building mask from the off-terrain. At last, based on the previously extracted building mask, the reconstruction of 3D flat roof models is carried out and analyzed.",
"title": ""
},
{
"docid": "83a8e06926e25b256db367df6df6b3d9",
"text": "The proposed System assists the sensor based mobile robot navigation in an indoor environment using Fuzzy logic controller. Fuzzy logic control is well suited for controlling a mobile robot because it is capable of making inferences even under uncertainty. It assists rules generation and decision-making. It uses set of linguistic Fuzzy rules to implement expert knowledge under various situations. A Fuzzy logic system is designed with two basic behaviors- obstacle avoidance and a target seeking behavior. The inputs to the Fuzzy logic controller are the desired direction of motion and the readings from the sensors. The outputs from the Fuzzy logic controller are the accelerations of robot wheels. Under the proposed Fuzzy model, a mobile robot avoids the obstacles and generates the path towards the target.",
"title": ""
},
{
"docid": "338e037f4ec9f6215f48843b9d03f103",
"text": "Sparse deep neural networks(DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to irregularity in computation of sparse DNNs, their efficiencies are much lower than that of dense DNNs on general purpose hardwares. This leads to poor/no performance benefits for sparse DNNs. Performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to sparse models with suboptimal accuracies. In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of neural networks called HBsNN (Hierarchical Block sparse Neural Networks).",
"title": ""
},
{
"docid": "350d1717a5192873ef9e0ac9ed3efc7b",
"text": "OBJECTIVE\nTo describe the effects of percutaneously implanted valve-in-valve in the tricuspid position for patients with pre-existing transvalvular device leads.\n\n\nMETHODS\nIn this case series, we describe implantation of the Melody valve and SAPIEN XT valve within dysfunctional bioprosthetic tricuspid valves in three patients with transvalvular device leads.\n\n\nRESULTS\nIn all cases, the valve was successfully deployed and device lead function remained unchanged. In 1/3 cases with 6-month follow-up, device lead parameters remain unchanged and transcatheter valve-in-valve function remains satisfactory.\n\n\nCONCLUSIONS\nTranscatheter tricuspid valve-in-valve is feasible in patients with pre-existing transvalvular devices leads. Further study is required to determine the long-term clinical implications of this treatment approach.",
"title": ""
},
{
"docid": "7927dffe38cec1ce2eb27dbda644a670",
"text": "This paper describes our system for SemEval-2010 Task 8 on multi-way classification of semantic relations between nominals. First, the type of semantic relation is classified. Then a relation typespecific classifier determines the relation direction. Classification is performed using SVM classifiers and a number of features that capture the context, semantic role affiliation, and possible pre-existing relations of the nominals. This approach achieved an F1 score of 82.19% and an accuracy of 77.92%.",
"title": ""
},
{
"docid": "522ea0f60a2c010747bb90005f86cb91",
"text": "The first part of this paper intends to give an overview of the maximum power point tracking methods for photovoltaic (PV) inverters presently reported in the literature. The most well-known and popular methods, like the perturb and observe (P&O), the incremental conductance (INC) and the constant voltage (CV), are presented. These methods, especially the P&O, have been treated by many works, which aim to overcome their shortcomings, either by optimizing the methods, or by combining them. In the second part of the paper an improvement for the P&O and INC method is proposed, which prevents these algorithms to get confused during rapidly changing irradiation conditions, and it considerably increases the efficiency of the MPPT",
"title": ""
},
{
"docid": "c17240d9adc3720020adff6d7ab3b59f",
"text": "class LifeCycleBindingMixin { String bindFc (...) { if (getFcState() != STOPPED) throw new Exception(); return _super_bindFc(...); } abstract String _super_bindFc (...);} class XYZ extends BasicBindingController { String bindFc (...) { if (getFcState() != STOPPED) throw new Exception(); return super.bindFc(...); } } Result of of mixins composition depends on the order in which they are composed Controllers are builts as composition of control classes and mixins T. Coupaye, LAFMI Summer School, Puebla, Mexico, August 2004 France Telecom R&D Division 37 Interceptors Interceptors Most control aspects have two parts A generic part (a.k.a. “advice”) A specific part based on interception of interactions between components (a.k.a. « hooks ») Interceptors have to be inserted in functional (applicative) code Interceptor classes are generated in bytecode form by an generator which relies on ASM Interceptor class generator G(class, interface(s), aspect code weaver(s)) -> subclass of class which implement interface(s) and aspect(s) Transformations are composed (in the class) in the order aspects code weavers are given Aspect code weaver An object that can manipulate the bytecode of operations arbitrarily Example: Transformation of void m { return delegate.m } Into void m { // pre code... try {delegate.m();} finally {//post code... }} Configuration Interceptors associated to a component are specified at component creation time Julia comes with a library of code weavers: life cycle, trace, reification of operation names, reification of operation names and arguments T. Coupaye, LAFMI Summer School, Puebla, Mexico, August 2004 France Telecom R&D Division 38 Life Cycle Management Approach based on invocation count Interceptors behind all interfaces increment and decrement a counter in LifeCycle controller LifeCycle controller waits for counters to be nil to stop the component (STARTED->STOPPED) when then component is in sate STOPPED, all activities (includind new incoming ones) are blocked activities (and counter increment) are unblocked when the component is started again Composite components stop recursively the primitive components in their content and primitive client components of these components Because of inter-component optimization (detailed later) Same algorithm with n counters NB: needs to wait for n counters to be nil at the same time with a risk of livelock Limitations Risk of livelock when waiting for n counters to be nil at the same time No state management hence integrity is not fully guaranteed during reconfigurations T. Coupaye, LAFMI Summer School, Puebla, Mexico, August 2004 France Telecom R&D Division 39 Intra-component optimization 3 possibilities for memory optimization Fusion of controller objects (left) Fusion of controller objects and interceptors (middle) if interceptors do all delegate to the same object Fusion of controllers and contents (right) for primitive components Merging is done in bytecode form by generating a class based on lexicographic patterns in concerned controller classes weavableX for a required interface of type X in controller is replaced by this in the generated class weavableOptY for a required interface of type Y is replaced by this or null in the generated class T. 
Coupaye, LAFMI Summer School, Puebla, Mexico, August 2004 France Telecom R&D Division 40 Inter-component Optimization Shortcut algorithm Optimized links for performance (“shortcuts”) subtituted to implementation ( ) and delegate links ( ) in binding chains NB: behaviour is hazardous if components exchange references directly (e.g. this) instead of always using the Fractal API Shorcuts must be recomputed each time a binding is changed Initial path",
"title": ""
},
{
"docid": "94aeb6dad00f174f89b709feab3db21f",
"text": "We present a novel approach to the automatic acquisition of taxonomies or concept hierarchies from a text corpus. The approach is based on Formal Concept Analysis (FCA), a method mainly used for the analysis of data, i.e. for investigating and processing explicitly given information. We follow Harris’ distributional hypothesis and model the context of a certain term as a vector representing syntactic dependencies which are automatically acquired from the text corpus with a linguistic parser. On the basis of this context information, FCA produces a lattice that we convert into a special kind of partial order constituting a concept hierarchy. The approach is evaluated by comparing the resulting concept hierarchies with hand-crafted taxonomies for two domains: tourism and finance. We also directly compare our approach with hierarchical agglomerative clustering as well as with Bi-Section-KMeans as an instance of a divisive clustering algorithm. Furthermore, we investigate the impact of using different measures weighting the contribution of each attribute as well as of applying a particular smoothing technique to cope with data sparseness.",
"title": ""
},
{
"docid": "de721f4b839b0816f551fa8f8ee2065e",
"text": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.",
"title": ""
},
{
"docid": "3eff4654a3bbf9aa3fbfe15033383e67",
"text": "Pizza is a strict superset of Java that incorporates three ideas from the academic community: parametric polymorphism, higher-order functions, and algebraic data types. Pizza is defined by translation into Java and compiles into the Java Virtual Machine, requirements which strongly constrain the design space. Nonetheless, Pizza fits smoothly to Java, with only a few rough edges.",
"title": ""
},
{
"docid": "b71477154243283819d499c381119c2d",
"text": "Indonesia is one of countries well-known as the biggest palm oil producers in the world. In 2015, this country succeeded to produce 32.5 million tons of palm oil, and used 26.4 million of it to export to other countries. The quality of Indonesia's palm oil production has become the reason why Indonesia becomes the famous exporter in a global market. For this reason, many Indonesian palm oil companies are trying to improve their quality through smart farming. One of the ways to improve is by using technology such as Internet of Things (IoT). In order to have the actual and real-time condition of the land, using the IoT concept by connecting some sensors. A previous research has accomplished to create some Application Programming Interfaces (API), which can be used to support the use of technology. However, these APIs have not been integrated to a User Interface (UI), as it can only be used by developers or programmers. These APIs have not been able to be used as a monitoring information system for palm oil plantation, which can be understood by the employees. Based on those problems, this research attempts to develop a monitoring information system, which will be integrated with the APIs from the previous research by using the Progressive Web App (PWA) approach. So, this monitoring information system can be accessed by the employees, either by using smartphone or by using desktop. Even, it can work similar with a native application.",
"title": ""
},
{
"docid": "590931691f16239904733befab24e70a",
"text": "In a neural network, neuron computation is achieved through the summation of input signals fed by synaptic connections. The synaptic activity (weight) is dictated by the synchronous firing of neurons, inducing potentiation/depression of the synaptic connection. This learning function can be supported by the resistive switching memory (RRAM), which changes its resistance depending on the amplitude, the pulse width and the bias polarity of the applied signal. This work shows a new synapse circuit comprising a MOS transistor as a selector and a RRAM as a variable resistance, displaying spike-timing dependent plasticity (STDP) similar to the one originally experienced in biological neural networks. We demonstrate long-term potentiation and long-term depression by simulations with an analytical model of resistive switching. Finally, the experimental demonstration of the new STDP scheme is presented.",
"title": ""
},
{
"docid": "3c6dcd92cbbf0cf4a5175dc61b401aae",
"text": "Increased number of malware samples have created many challenges for Antivirus companies. One of these challenges is clustering the large number of malware samples they receive daily. Malware authors use malware generation kits to create different instances of the same malware. So most of these malicious samples are polymorphic instances of previously known malware family only. Clustering these large number of samples rapidly and accurately without spending much time on processing the sample have become a critical requirement. In this paper we proposed, implemented and evaluated a method, called ByteFreq that can cluster large number of samples using byte frequency. Byte frequency is represented as time series and SAX (Symbolic Aggregation approXimation)[1] is used to convert the time series in symbolic representation. We evaluated proposed system on real world malware samples and achieved 0.92 precision and 0.96 recall accuracy.",
"title": ""
},
{
"docid": "53ab91cdff51925141c43c4bc1c6aade",
"text": "Floods are the most common natural disasters, and cause significant damage to life, agriculture and economy. Research has moved on from mathematical modeling or physical parameter based flood forecasting schemes, to methodologies focused around algorithmic approaches. The Internet of Things (IoT) is a field of applied electronics and computer science where a system of devices collects data in real time and transfers it through a Wireless Sensor Network (WSN) to the computing device for analysis. IoT generally combines embedded system hardware techniques along with data science or machine learning models. In this work, an IoT and machine learning based embedded system is proposed to predict the probability of floods in a river basin. The model uses a modified mesh network connection over ZigBee for the WSN to collect data, and a GPRS module to send the data over the internet. The data sets are evaluated using an artificial neural network model. The results of the analysis which are also appended show a considerable improvement over the currently existing methods.",
"title": ""
},
{
"docid": "b9d25bdbb337a9d16a24fa731b6b479d",
"text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.",
"title": ""
},
{
"docid": "6ec4c9e6b3e2a9fd4da3663a5b21abcd",
"text": "In order to ensure the service quality, modern Internet Service Providers (ISPs) invest tremendously on their network monitoring and measurement infrastructure. Vast amount of network data, including device logs, alarms, and active/passive performance measurement across different network protocols and layers, are collected and stored for analysis. As network measurement grows in scale and sophistication, it becomes increasingly challenging to effectively “search” for the relevant information that best support the needs of network operations. In this paper, we look into techniques that have been widely applied in the information retrieval and search engine domain and explore their applicability in network management domain. We observe that unlike the textural information on the Internet, network data are typically annotated with time and location information, which can be further augmented using information based on network topology, protocol and service dependency. We design NetSearch, a system that pre-processes various network data sources on data ingestion, constructs index that matches both the network spatial hierarchy model and the inherent timing/textual information contained in the data, and efficiently retrieves the relevant information that network operators search for. Through case study, we demonstrate that NetSearch is an important capability for many critical network management functions such as complex impact analysis.",
"title": ""
}
] |
scidocsrr
|
d98a66e5784413ede737ed404d1bb790
|
Wikipedia in the eyes of its beholders: A systematic review of scholarly research on Wikipedia readers and readership
|
[
{
"docid": "ee08bd4b35b875bd9c12b6707406fdde",
"text": "I here give an overview of Wikipedia and wiki research and tools. Well over 1,000 reports have been published in the field and there exist dedicated scientific meetings for Wikipedia research. It is not possible to give a complete review of all material published. This overview serves to describe some key areas of research.",
"title": ""
}
] |
[
{
"docid": "cdeaf14d18c32ca534e8e76b9025db42",
"text": "A broadband dual-polarized base station antenna with sturdy construction is presented in this letter. The antenna mainly contains four parts: main radiator, feeding baluns, bedframe, and reflector. First, two orthogonal dipoles are etched on a substrate as main radiator forming dual polarization. Two baluns are then introduced to excite the printed dipoles. Each balun has four bumps on the edges for electrical connection and fixation. The bedframe is designed to facilitate the installation, and the reflector is finally used to gain unidirectional radiation. Measured results show that the antenna has a 48% impedance bandwidth with reflection coefficient less than –15 dB and port isolation more than 22 dB. A four-element antenna array with 6° ± 2° electrical down tilt is also investigated for wideband base station application. The antenna and its array have the advantages of sturdy construction, high machining accuracy, ease of integration, and low cost. They can be used for broadband base station in the next-generation wireless communication system.",
"title": ""
},
{
"docid": "541055772a5c2bed70649d2ca9a6c584",
"text": "This report discusses methods for forecasting hourly loads of a US utility as part of the load forecasting track of the Global Energy Forecasting Competition 2012 hosted on Kaggle. The methods described (gradient boosting machines and Gaussian processes) are generic machine learning / regression algorithms and few domain specific adjustments were made. Despite this, the algorithms were able to produce highly competitive predictions and hopefully they can inspire more refined techniques to compete with state-of-the-art load forecasting methodologies.",
"title": ""
},
{
"docid": "e0ec22fcdc92abe141aeb3fa67e9e55a",
"text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited, if their battery power is depleted fully, then this result in network partition, so these nodes becomes a critical spot in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. These unbalanced loads turn to increase the chances of nodes failure, network partition and reduce the route lifetime and route reliability of the MANETs. Due to this, energy consumption issue becomes a vital research topic in wireless infrastructure -less networks. The energy efficient routing is a most important design criterion for MANETs. This paper focuses of the routing approaches are based on the minimization of energy consum ption of individual nodes and many other ways. This paper surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks. Also presents detailed comparative study of lager number of energy efficient/power aware routing protocol in MANETs. Aim of this paper to helps the new researchers and application developers to explore an innovative idea for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Eff icient, Power Aware, Protocol Stack",
"title": ""
},
{
"docid": "12524304546ca59b7e8acb2a7f6d6699",
"text": "Multiple-choice items are a mainstay of achievement testing. The need to adequately cover the content domain to certify achievement proficiency by producing meaningful precise scores requires many high-quality items. More 3-option items can be administered than 4or 5-option items per testing time while improving content coverage, without detrimental effects on psychometric quality of test scores. Researchers have endorsed 3-option items for over 80 years with empirical evidence—the results of which have been synthesized in an effort to unify this endorsement and encourage its adoption.",
"title": ""
},
{
"docid": "6f125b0a1f7de3402c1a6e2af72af506",
"text": "The location-based service (LBS) of mobile communication and the personalization of information recommendation are two important trends in the development of electric commerce. However, many previous researches have only emphasized on one of the two trends. In this paper, we integrate the application of LBS with recommendation technologies to present a location-based service recommendation model (LBSRM) and design a prototype system to simulate and measure the validity of LBSRM. Due to the accumulation and variation of preference, in the recommendation model we conduct an adaptive method including long-term and short-term preference adjustment to enhance the result of recommendation. Research results show, with the assessments of relative index, the rate of recommendation precision could be 85.48%. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c74b967ecf7844843ee9389ba591b84e",
"text": "We present an approach to human-robot interaction through gesture-free spoken dialogue. Our approach is based on passive knowledge rarefication through goal disambiguation, a technique that allows a human operator to collaborate with a mobile robot on various tasks through spoken dialogue without making bodily gestures. A key assumption underlying our approach is that the operator and the robot share a common set of goals. Another key idea is that language, vision, and action share common memory structures.We discuss how our approach achieves four types of human-robot interaction: command, goal disambiguation, introspection, and instruction-based learning. We describe the system we developed to implement our approach and present experimental results.",
"title": ""
},
{
"docid": "7af9293fbe12f3e859ee579d0f8739a5",
"text": "We present the findings from a Dutch field study of 30 outsourcing deals totaling to more than 100 million Euro, where both customers and corresponding IT-outsourcing providers participated. The main objective of the study was to examine from a number of well-known factors whether they discriminate between IT-outsourcing success and failure in the early phase of service delivery and to determine their impact on the chance on a successful deal. We investigated controllable factors to increase the odds during sourcing and rigid factors as a warning sign before closing a deal. Based on 250 interviews we collected 28 thousand data points. From the data and the perceived failure or success of the closed deals we investigated the discriminative power of the determinants (ex post). We found three statistically significant controllable factors that discriminated in an early phase between failure and success. They are: working according to the transition plan, demand management and, to our surprise, communication within the supplier organisation (so not between client and supplier). These factors also turned out to be the only significant factors for a (logistic) model predicting the chance of a successful IT-outsourcing. Improving demand management and internal communication at the supplier increases the odds the most. Sticking to the transition plan only modestly. Other controllable factors were not significant in our study. They are managing the business case, transfer of staff or assets, retention of expertise and communication within the client organisation. Of the rigid factors, the motive to outsource, cultural differences, and the type of work were insignificant. The motive of the supplier was significant: internal motivations like increasing profit margins or business volume decreased the chance of success while external motivations like increasing market share or becoming a player increased the success rate. From the data we inferred that the degree of experience with sourcing did not show to be a convincing factor of success. Hiring sourcing consultants worked contra-productive: it lowered chances of success.",
"title": ""
},
{
"docid": "06abf54df209e736ada3a9a951b14300",
"text": "In this paper we present arguments supported by research examples for a fundamental shift of emphasis in education and its relation to technology, in particular AItechnology. No longer the ITS-paradigm dominates the field of AI and Education. New educational and pedagogic paradigms are being proposed and investigated, stressing the importance of learning how to learn instead of merely learning domain facts and rules of application. New uses of technology accompany this shift. We present trends and issues in this area exemplified by research projects and characterise three pedagogical scenarios in order to situate different modelling options for AI & Education.",
"title": ""
},
{
"docid": "0b22284d575fb5674f61529c367bb724",
"text": "The scapula fulfils many roles to facilitate optimal function of the shoulder. Normal function of the shoulder joint requires a scapula that can be properly aligned in multiple planes of motion of the upper extremity. Scapular dyskinesis, meaning abnormal motion of the scapula during shoulder movement, is a clinical finding commonly encountered by shoulder surgeons. It is best considered an impairment of optimal shoulder function. As such, it may be the underlying cause or the accompanying result of many forms of shoulder pain and dysfunction. The present review looks at the causes and treatment options for this indicator of shoulder pathology and aims to provide an overview of the management of disorders of the scapula.",
"title": ""
},
{
"docid": "25e0dfd4ad96bc80050a399f6355bfec",
"text": "Advances in information technology and near ubiquity of the Internet have spawned novel modes of communication and unprecedented insights into human behavior via the digital footprint. Health behavior randomized controlled trials (RCTs), especially technology-based, can leverage these advances to improve the overall clinical trials management process and benefit from improvements at every stage, from recruitment and enrollment to engagement and retention. In this paper, we report the results for recruitment and retention of participants in the SMART study and introduce a new model for clinical trials management that is a result of interdisciplinary team science. The MARKIT model brings together best practices from information technology, marketing, and clinical research into a single framework to maximize efforts for recruitment, enrollment, engagement, and retention of participants into a RCT. These practices may have contributed to the study's on-time recruitment that was within budget, 86% retention at 24 months, and a minimum of 57% engagement with the intervention over the 2-year RCT. Use of technology in combination with marketing practices may enable investigators to reach a larger and more diverse community of participants to take part in technology-based clinical trials, help maximize limited resources, and lead to more cost-effective and efficient clinical trial management of study participants as modes of communication evolve among the target population of participants.",
"title": ""
},
{
"docid": "9a9f54a0c7c561772d56e471cc1ab47d",
"text": "Reliable and timely delivery of periodic V2V (vehicle-to-vehicle) broadcast messages is essential for realizing the benefits of connected vehicles. Existing MAC protocols for ad hoc networks fall short of meeting these requirements. In this paper, we present, CoReCast, the first collision embracing protocol for vehicular networks. CoReCast provides high reliability and low delay by leveraging two unique opportunities: no strict constraint on energy consumption, and availability of GPS clocks to achieve near-perfect time and frequency synchronization.\n Due to low coherence time, the channel changes rapidly in vehicular networks. CoReCast embraces packet collisions and takes advantage of the channel dynamics to decode collided packets. The design of CoReCast is based on a preamble detection scheme that estimates channels from multiple transmitters without any prior information about them. The proposed scheme reduces the space and time requirement exponentially than the existing schemes. The system is evaluated through experiments with USRP N210 and GPS devices placed in vehicles driven on roads in different environments as well as using trace-driven simulations. It provides 15x and 2x lower delay than 802.11p and OCP (Omniscient Clustering Protocol), respectively. Reliability of CoReCast is 8x and 2x better than 802.11p and OCP, respectively.",
"title": ""
},
{
"docid": "2c27fc786dadb6c0d048fcf66b22ed59",
"text": "Changes in DNA copy number contribute to cancer pathogenesis. We now show that high-density single nucleotide polymorphism (SNP) arrays can detect copy number alterations. By hybridizing genomic representations of breast and lung carcinoma cell line and lung tumor DNA to SNP arrays, and measuring locus-specific hybridization intensity, we detected both known and novel genomic amplifications and homozygous deletions in these cancer samples. Moreover, by combining genotyping with SNP quantitation, we could distinguish loss of heterozygosity events caused by hemizygous deletion from those that occur by copy-neutral events. The simultaneous measurement of DNA copy number changes and loss of heterozygosity events by SNP arrays should strengthen our ability to discover cancer-causing genes and to refine cancer diagnosis.",
"title": ""
},
{
"docid": "ced697994e4e8f8c65b4a06dae42ddeb",
"text": "Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea was explored only under certain limitations such as restricting the input data to be a single object or multiple objects representing the same concept. In this work we develop a new class of deep generative model called generative matching networks which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks and the ideas from meta-learning. By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that generative matching networks can significantly improve predictive performance on the fly as more additional data is available to the model and also adapt the latent space which is beneficial in the context of feature extraction.",
"title": ""
},
{
"docid": "c4171bd7b870d26e0b2520fc262e7c88",
"text": "Each year, the treatment decisions for more than 230, 000 breast cancer patients in the U.S. hinge on whether the cancer has metastasized away from the breast. Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labor intensive and error-prone. We present a framework to automatically detect and localize tumors as small as 100×100 pixels in gigapixel microscopy images sized 100, 000×100, 000 pixels. Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumor detection task. At 8 false positives per image, we detect 92.4% of the tumors, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity. We achieve image-level AUC scores above 97% on both the Camelyon16 test set and an independent set of 110 slides. In addition, we discover that two slides in the Camelyon16 training set were erroneously labeled normal. Our approach could considerably reduce false negative rates in metastasis detection.",
"title": ""
},
{
"docid": "d473619f76f81eced041df5bc012c246",
"text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.",
"title": ""
},
{
"docid": "cff459bd217bdbecefeceb70e3be5065",
"text": "In this article we present FLUX-CiM, a novel method for extracting components (e.g., author names, article titles, venues, page numbers) from bibliographic citations. Our method does not rely on patterns encoding specific delimiters used in a particular citation style.This feature yields a high degree of automation and flexibility, and allows FLUX-CiM to extract from citations in any given format. Differently from previous methods that are based on models learned from user-driven training, our method relies on a knowledge base automatically constructed from an existing set of sample metadata records from a given field (e.g., computer science, health sciences, social sciences, etc.). These records are usually available on the Web or other public data repositories. To demonstrate the effectiveness and applicability of our proposed method, we present a series of experiments in which we apply it to extract bibliographic data from citations in articles of different fields. Results of these experiments exhibit precision and recall levels above 94% for all fields, and perfect extraction for the large majority of citations tested. In addition, in a comparison against a stateof-the-art information-extraction method, ours produced superior results without the training phase required by that method. Finally, we present a strategy for using bibliographic data resulting from the extraction process with FLUX-CiM to automatically update and expand the knowledge base of a given domain. We show that this strategy can be used to achieve good extraction results even if only a very small initial sample of bibliographic records is available for building the knowledge base.",
"title": ""
},
{
"docid": "40c90bf58aae856c7c72bac573069173",
"text": "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a “distilled” policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.",
"title": ""
},
{
"docid": "d9c4e90d9538c99206cc80bea2c1f808",
"text": "Practical aspects of a real time auto parking controller are considered. A parking algorithm which can guarantee to find a parking path with any initial positions is proposed. The algorithm is theoretically proved and successfully applied to the OSU-ACT in the DARPA Urban Challenge 2007.",
"title": ""
},
{
"docid": "86c3aefe7ab3fa2178da219f57bedf81",
"text": "We present a model constructed for a large consumer products company to assess their vulnerability to disruption risk and quantify its impact on customer service. Risk profiles for the locations and connections in the supply chain are developed using Monte Carlo simulation, and the flow of material and network interactions are modeled using discrete-event simulation. Capturing both the risk profiles and material flow with simulation allows for a clear view of the impact of disruptions on the system. We also model various strategies for coping with the risk in the system in order to maintain product availability to the customer. We discuss the dynamic nature of risk in the network and the importance of proactive planning to mitigate and recover from disruptions.",
"title": ""
},
{
"docid": "de34cb3489e58366f4aff7f05ba558c9",
"text": "Current initiatives in the field of Business Process Management (BPM) strive for the development of a BPM standard notation by pushing the Business Process Modeling Notation (BPMN). However, such a proposed standard notation needs to be carefully examined. Ontological analysis is an established theoretical approach to evaluating modelling techniques. This paper reports on the outcomes of an ontological analysis of BPMN and explores identified issues by reporting on interviews conducted with BPMN users in Australia. Complementing this analysis we consolidate our findings with previous ontological analyses of process modelling notations to deliver a comprehensive assessment of BPMN.",
"title": ""
}
] |
scidocsrr
|
0271486339a3185615f54cda636d8fbc
|
Semi-Supervised Generation with Cluster-aware Generative Models
|
[
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
},
{
"docid": "ecd8f70442aa40cd2088f4324fe0d247",
"text": "Black box variational inference allows researchers to easily prototype and evaluate an array of models. Recent advances allow such algorithms to scale to high dimensions. However, a central question remains: How to specify an expressive variational distribution that maintains efficient computation? To address this, we develop hierarchical variational models (HVMs). HVMs augment a variational approximation with a prior on its parameters, which allows it to capture complex structure for both discrete and continuous latent variables. The algorithm we develop is black box, can be used for any HVM, and has the same computational efficiency as the original approximation. We study HVMs on a variety of deep discrete latent variable models. HVMs generalize other expressive variational distributions and maintains higher fidelity to the posterior.",
"title": ""
},
{
"docid": "5245cdc023c612de89f36d1573d208fe",
"text": "Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.",
"title": ""
},
{
"docid": "de018dc74dd255cf54d9c5597a1f9f73",
"text": "Smoothness regularization is a popular method to decrease generalization error. We propose a novel regularization technique that rewards local distributional smoothness (LDS), a KLdistance based measure of the model’s robustness against perturbation. The LDS is defined in terms of the direction to which the model distribution is most sensitive in the input space. We call the training with LDS regularization virtual adversarial training (VAT). VAT resembles the adversarial training (Goodfellow et al., 2015), but distinguishes itself in that it determines the adversarial direction from the model distribution alone, and does not use the label information. The technique is therefore applicable even to semi-supervised learning. When we applied our technique to the classification task of the permutation invariant MNIST dataset, it not only eclipsed all the models that are not dependent on generative models and pre-training, but also performed well even in comparison to the state of the art method (Rasmus et al., 2015) that uses a highly advanced generative model.",
"title": ""
}
] |
[
{
"docid": "08473b813d0c9e3441d5293c8d1f1a12",
"text": "We present the design, implementation, and informal evaluation of tactile interfaces for small touch screens used in mobile devices. We embedded a tactile apparatus in a Sony PDA touch screen and enhanced its basic GUI elements with tactile feedback. Instead of observing the response of interface controls, users can feel it with their fingers as they press the screen. In informal evaluations, tactile feedback was greeted with enthusiasm. We believe that tactile feedback will become the next step in touch screen interface design and a standard feature of future mobile devices.",
"title": ""
},
{
"docid": "8df0689ffe5c730f7a6ef6da65bec57e",
"text": "Image-based reconstruction of 3D shapes is inherently biased under the occurrence of interreflections, since the observed intensity at surface concavities consists of direct and global illumination components. This issue is commonly not considered in a Photometric Stereo (PS) framework. Under the usual assumption of only direct reflections, this corrupts the normal estimation process in concave regions and thus leads to inaccurate results. For this reason, global illumination effects need to be considered for the correct reconstruction of surfaces affected by interreflections. While there is ongoing research in the field of inverse lighting (i.e. separation of global and direct illumination components), the interreflection aspect remains oftentimes neglected in the field of 3D shape reconstruction. In this study, we present a computationally driven approach for iteratively solving that problem. Initially, we introduce a photometric stereo approach that roughly reconstructs a surface with at first unknown reflectance properties. Then, we show that the initial surface reconstruction result can be refined iteratively regarding non-distant light sources and, especially, interreflections. The benefit for the reconstruction accuracy is evaluated on real Lambertian surfaces using laser range scanner data as ground truth.",
"title": ""
},
{
"docid": "90033efd960bf121e7041c9b3cd91cbd",
"text": "In this paper, we propose a novel framework for integrating geometrical measurements of monocular visual simultaneous localization and mapping (SLAM) and depth prediction using a convolutional neural network (CNN). In our framework, SLAM-measured sparse features and CNN-predicted dense depth maps are fused to obtain a more accurate dense 3D reconstruction including scale. We continuously update an initial 3D mesh by integrating accurately tracked sparse features points. Compared to prior work on integrating SLAM and CNN estimates [26], there are two main differences: Using a 3D mesh representation allows as-rigid-as-possible update transformations. We further propose a system architecture suitable for mobile devices, where feature tracking and CNN-based depth prediction modules are separated, and only the former is run on the device. We evaluate the framework by comparing the 3D reconstruction result with 3D measurements obtained using an RGBD sensor, showing a reduction in the mean residual error of 38% compared to CNN-based depth map prediction alone.",
"title": ""
},
{
"docid": "c385054322970c86d3f08b298aa811e2",
"text": "Recently, a small number of papers have appeared in which the authors implement stochastic search algorithms, such as evolutionary computation, to generate game content, such as levels, rules and weapons. We propose a taxonomy of such approaches, centring on what sort of content is generated, how the content is represented, and how the quality of the content is evaluated. The relation between search-based and other types of procedural content generation is described, as are some of the main research challenges in this new field. The paper ends with some successful examples of this approach.",
"title": ""
},
{
"docid": "9afd6e40fa049a27876dda7a714cc9db",
"text": "PHP is a server-side scripting programming language that is widely used to develop website services. However, web-based PHP applications are distributed in source code so that the security is vulnerable and weak because the lines of source code can be easily copied, modified, or used in other applications. These research aims to implement obfuscation techniques design in PHP extension code using AES algorithm. The AES algorithm recommended by NIST (National Institute of Standards and Technology) to protect the US government's national information security system. Through obfuscation technique using encryption, it is expected that programmers have an option to protect the PHP source code so that the copyright or intellectual property of the program can be protected",
"title": ""
},
{
"docid": "9fff08cf60bb5f6ec538080719aa8224",
"text": "This research represents the runner BIB number recognition system to develop image processing study which solves problems and increases efficiency about runner image management in running fairs. The runner BIB number recognition system processes runner image to recognize BIB number and time when runner appears in media. The information from processing has collected to applicative later. BIB number position is on BIB tag which attach on runner body. To recognize BIB number, the system detects runner position first. This process emphasize on runner face detection in images following to concept of researcher then find BIB number in body-thigh area of runner. The system recognizes BIB number from BIB tag which represents in media. This processing presents 0.80 in precision value, 0.81 in recall value and F-measure is 0.80. The results display the runner BIB number recognition system has developed with high efficiency and can be applied for runner online communities in actual situation. The runner BIB number recognition system decreases problems about runner image processing and increases comfortable for runners when find images from running fairs. Moreover, the system can be applied in commercial to increase benefits in running business.",
"title": ""
},
{
"docid": "ba5b5732dd7c48874e4f216903bba0b1",
"text": "This article presents a review of the application of insole plantar pressure sensor system in recognition and analysis of the hemiplegic gait in stroke patients. Based on the review, tailor made 3D insoles for plantar pressure measurement were designed and fabricated. The function is to compare with that of conventional flat insoles. Tailor made 3D contour of the insole can improve the contact between insole and foot and enable sampling plantar pressure at a high reproducibility.",
"title": ""
},
{
"docid": "aa29b992a92f958b7ac8ff8e1cb8cd19",
"text": "Physically unclonable functions (PUFs) provide a device-unique challenge-response mapping and are employed for authentication and encryption purposes. Unpredictability and reliability are the core requirements of PUFs: unpredictability implies that an adversary cannot sufficiently predict future responses from previous observations. Reliability is important as it increases the reproducibility of PUF responses and hence allows validation of expected responses. However, advanced machine-learning algorithms have been shown to be a significant threat to the practical validity of PUFs, as they are able to accurately model PUF behavior. The most effective technique was shown to be the XOR-based combination of multiple PUFs, but as this approach drastically reduces reliability, it does not scale well against software-based machine-learning attacks. In this paper, we analyze threats to PUF security and propose PolyPUF, a scalable and secure architecture to introduce polymorphic PUF behavior. This architecture significantly increases model-building resistivity while maintaining reliability. An extensive experimental evaluation and comparison demonstrate that the PolyPUF architecture can secure various PUF configurations and is the only evaluated approach to withstand highly complex neural network machine-learning attacks. Furthermore, we show that PolyPUF consumes less energy and has less implementation overhead in comparison to lightweight reference architectures.",
"title": ""
},
{
"docid": "b1d1571bbb260272e8679cc7a3f92cfe",
"text": "This article overviews the enzymes produced by microorganisms, which have been extensively studied worldwide for their isolation, purification and characterization of their specific properties. Researchers have isolated specific microorganisms from extreme sources under extreme culture conditions, with the objective that such isolated microbes would possess the capability to bio-synthesize special enzymes. Various Bio-industries require enzymes possessing special characteristics for their applications in processing of substrates and raw materials. The microbial enzymes act as bio-catalysts to perform reactions in bio-processes in an economical and environmentally-friendly way as opposed to the use of chemical catalysts. The special characteristics of enzymes are exploited for their commercial interest and industrial applications, which include: thermotolerance, thermophilic nature, tolerance to a varied range of pH, stability of enzyme activity over a range of temperature and pH, and other harsh reaction conditions. Such enzymes have proven their utility in bio-industries such as food, leather, textiles, animal feed, and in bio-conversions and bio-remediations.",
"title": ""
},
{
"docid": "df114396d546abfc9b6f1767e3bab8db",
"text": "I briefly highlight the salient properties of modified-inertia formulations of MOND, contrasting them with those of modified-gravity formulations, which describe practically all theories propounded to date. Future data (e.g. the establishment of the Pioneer anomaly as a new physics phenomenon) may prefer one of these broad classes of theories over the other. I also outline some possible starting ideas for modified inertia. 1 Modified MOND inertia vs. modified MOND gravity MOND is a modification of non-relativistic dynamics involving an acceleration constant a 0. In the formal limit a 0 → 0 standard Newtonian dynamics is restored. In the deep MOND limit, a 0 → ∞, a 0 and G appear in the combination (Ga 0). Much of the NR phenomenology follows from this simple prescription, including the asymptotic flatness of rotation curves, the mass-velocity relations (baryonic Tully-fisher and Faber Jackson relations), mass discrepancies in LSB galaxies, etc.. There are many realizations (theories) that embody the above dictates, relativistic and non-relativistic. The possibly very significant fact that a 0 ∼ cH 0 ∼ c(Λ/3) 1/2 may hint at the origin of MOND, and is most probably telling us that a. MOND is an effective theory having to do with how the universe at large shapes local dynamics, and b. in a Lorentz universe (with H 0 = 0, Λ = 0) a 0 = 0 and standard dynamics holds. We can broadly classify modified theories into two classes (with the boundary not so sharply defined): In modified-gravity (MG) formulations the field equation of the gravitational field (potential, metric) is modified; the equations of motion of other degrees of freedom (DoF) in the field are not. In modified-inertia (MI) theories the opposite it true. More precisely, in theories derived from an action modifying inertia is tantamount to modifying the kinetic (free) actions of the non-gravitational degrees of freedom. Local, relativistic theories in which the kinetic",
"title": ""
},
{
"docid": "fb173d15e079fcdf0cc222f558713f9c",
"text": "Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoderdecoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼ 30%) improvement over the current state-of-the-art.",
"title": ""
},
{
"docid": "2332c8193181b5ad31e9424ca37b0f5a",
"text": "The ability to grasp ordinary and potentially never-seen objects is an important feature in both domestic and industrial robotics. For a system to accomplish this, it must autonomously identify grasping locations by using information from various sensors, such as Microsoft Kinect 3D camera. Despite numerous progress, significant work still remains to be done in this field. To this effect, we propose a dictionary learning and sparse representation (DLSR) framework for representing RGBD images from 3D sensors in the context of determining such good grasping locations. In contrast to previously proposed approaches that relied on sophisticated regularization or very large datasets, the derived perception system has a fast training phase and can work with small datasets. It is also theoretically founded for dealing with masked-out entries, which are common with 3D sensors. We contribute by presenting a comparative study of several DLSR approach combinations for recognizing and detecting grasp candidates on the standard Cornell dataset. Importantly, experimental results show a performance improvement of 1.69% in detection and 3.16% in recognition over current state-of-the-art convolutional neural network (CNN). Even though nowadays most popular vision-based approach is CNN, this suggests that DLSR is also a viable alternative with interesting advantages that CNN has not.",
"title": ""
},
{
"docid": "e8ecb3597e3019691f128cf6a50239d9",
"text": "Unmanned Aerial Vehicle (UAV) platforms are nowadays a valuable source of data for inspection, surveillance, mapping and 3D modeling issues. As UAVs can be considered as a lowcost alternative to the classical manned aerial photogrammetry, new applications in the shortand close-range domain are introduced. Rotary or fixed wing UAVs, capable of performing the photogrammetric data acquisition with amateur or SLR digital cameras, can fly in manual, semiautomated and autonomous modes. Following a typical photogrammetric workflow, 3D results like Digital Surface or Terrain Models (DTM/DSM), contours, textured 3D models, vector information, etc. can be produced, even on large areas. The paper reports the state of the art of UAV for Geomatics applications, giving an overview of different UAV platforms, applications and case studies, showing also the latest developments of UAV image processing. New perspectives are also addressed.",
"title": ""
},
{
"docid": "d9ef259a2a2997a8b447b7c711f7da32",
"text": "Wireless Sensor Networks (WSNs) have attracted much attention in recent years. The potential applications of WSNs are immense. They are used for collecting, storing and sharing sensed data. WSNs have been used for various applications including habitat monitoring, agriculture, nuclear reactor control, security and tactical surveillance. The WSN system developed in this paper is for use in precision agriculture applications, where real time data of climatologically and other environmental properties are sensed and control decisions are taken based on it to modify them. The architecture of a WSN system comprises of a set of sensor nodes and a base station that communicate with each other and gather local information to make global decisions about the physical environment. The sensor network is based on the IEEE 802.15.4 standard and two topologies for this application.",
"title": ""
},
{
"docid": "59b7afc5c2af7de75248c90fdf5c9cd3",
"text": "Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.",
"title": ""
},
{
"docid": "dd270ffa800d633a7a354180eb3d426c",
"text": "I have taken an experimental approach to this question. Freely voluntary acts are pre ceded by a specific electrical change in the brain (the ‘readiness potential’, RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350–400 ms after RP starts, but 200 ms. before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility. But the deeper question still remains: Are freely voluntary acts subject to macro deterministic laws or can they appear without such constraints, non-determined by natural laws and ‘truly free’? I shall present an experimentalist view about these fundamental philosophical opposites.",
"title": ""
},
{
"docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94",
"text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.",
"title": ""
},
{
"docid": "5b763dbb9f06ff67e44b5d38920e92bf",
"text": "With the growing popularity of the internet, everything is available at our doorstep and convenience. The rapid increase in e-commerce applications has resulted in the increased usage of the credit card for offline and online payments. Though there are various benefits of using credit cards such as convenience, instant cash, but when it comes to security credit card holders, banks, and the merchants are affected when the card is being stolen, lost or misused without the knowledge of the cardholder (Fraud activity). Streaming analytics is a time-based processing of data and it is used to enable near real-time decision making by inspecting, correlating and analyzing the data even as it is streaming into applications and database from myriad different sources. We are making use of streaming analytics to detect and prevent the credit card fraud. Rather than singling out specific transactions, our solution analyses the historical transaction data to model a system that can detect fraudulent patterns. This model is then used to analyze transactions in real-time.",
"title": ""
},
{
"docid": "f5128625b3687c971ba3bef98d7c2d2a",
"text": "In three experiments, we investigated the influence of juror, victim, and case factors on mock jurors' decisions in several types of child sexual assault cases (incest, day care, stranger abduction, and teacher-perpetrated abuse). We also validated and tested the ability of several scales measuring empathy for child victims, children's believability, and opposition to adult/child sex, to mediate the effect of jurors' gender on case judgments. Supporting a theoretical model derived from research on the perceived credibility of adult rape victims, women compared to men were more empathic toward child victims, more opposed to adult/child sex, more pro-women, and more inclined to believe children generally. In turn, women (versus men) made more pro-victim judgments in hypothetical abuse cases; that is, attitudes and empathy generally mediated this juror gender effect that is pervasive in this literature. The experiments also revealed that strength of case evidence is a powerful factor in determining judgments, and that teen victims (14 years old) are blamed more for sexual abuse than are younger children (5 years old), but that perceptions of 5 and 10 year olds are largely similar. Our last experiment illustrated that our findings of mediation generalize to a community member sample.",
"title": ""
},
{
"docid": "ff707f7c041a13ff3fcd1efd91c7103a",
"text": "We conceptualize and propose a theoretical model of sellers’ trust in buyers in the cross border ecommerce context. This model is based on by signalling theory, which is further refined by using trust theories and empirical findings from prior e-commerce trust research.",
"title": ""
}
] |
scidocsrr
|
34f1c513c4eaa53c1b3e8a5cf849f62a
|
Crowdsourcing in the cultural heritage domain: opportunities and challenges
|
[
{
"docid": "393d3f3061940f98e5f3e4ed919f7f6d",
"text": "Through online games, people can collectively solve large-scale computational problems. E ach year, people around the world spend billions of hours playing computer games. What if all this time and energy could be channeled into useful work? What if people playing computer games could, without consciously doing so, simultaneously solve large-scale problems? Despite colossal advances over the past 50 years, computers still don't possess the basic conceptual intelligence or perceptual capabilities that most humans take for granted. If we treat human brains as processors in a distributed system, each can perform a small part of a massive computation. Such a \" human computation \" paradigm has enormous potential to address problems that computers can't yet tackle on their own and eventually teach computers many of these human talents. Unlike computer processors, humans require some incentive to become part of a collective computation. Online games are a seductive method for encouraging people to participate in the process. Such games constitute a general mechanism for using brain power to solve open problems. In fact, designing such a game is much like designing an algorithm—it must be proven correct, its efficiency can be analyzed, a more efficient version can supersede a less efficient one, and so on. Instead of using a silicon processor, these \" algorithms \" run on a processor consisting of ordinary humans interacting with computers over the Internet. \" Games with a purpose \" have a vast range of applications in areas as diverse as security, computer vision, Internet accessibility, adult content filtering , and Internet search. Two such games under development at Carnegie Mellon University, the ESP Game and Peekaboom, demonstrate how humans , as they play, can solve problems that computers can't yet solve. Several important online applications such as search engines and accessibility programs for the visually impaired require accurate image descriptions. However, there are no guidelines about providing appropriate textual descriptions for the millions of images on the Web, and computer vision can't yet accurately determine their content. Current techniques used to categorize images for these applications are inadequate, largely because they assume that image content on a Web page is related to adjacent text. Unfortunately, the text near an image is often scarce or misleading and can be hard to process. Manual labeling is traditionally the only method for obtaining precise image descriptions, but this tedious and labor-intensive process is extremely costly. The ESP Game …",
"title": ""
}
] |
[
{
"docid": "262d91525f42ead887c8f8d50a5782fd",
"text": "Over the past decade, machine learning techniques especially predictive modeling and pattern recognition in biomedical sciences from drug delivery system [7] to medical imaging has become one of the important methods which are assisting researchers to have deeper understanding of entire issue and to solve complex medical problems. Deep learning is power learning machine learning algorithm in classification while extracting high-level features. In this paper, we used convolutional neural network to classify Alzheimer’s brain from normal healthy brain. The importance of classifying this kind of medical data is to potentially develop a predict model or system in order to recognize the type disease from normal subjects or to estimate the stage of the disease. Classification of clinical data such as Alzheimer’s disease has been always challenging and most problematic part has been always selecting the most discriminative features. Using Convolutional Neural Network (CNN) and the famous architecture LeNet-5, we successfully classified functional MRI data of Alzheimer’s subjects from normal controls where the accuracy of test data on trained data reached 96.85%. This experiment suggests us the shift and scale invariant features extracted by CNN followed by deep learning classification is most powerful method to distinguish clinical data from healthy data in fMRI. This approach also enables us to expand our methodology to predict more complicated systems.",
"title": ""
},
{
"docid": "90d06c97cdf3b67a81345f284d839c25",
"text": "Open information extraction is an important task in Biomedical domain. The goal of the OpenIE is to automatically extract structured information from unstructured text with no or little supervision. It aims to extract all the relation tuples from the corpus without requiring pre-specified relation types. The existing tools may extract ill-structured or incomplete information, or fail on the Biomedical literature due to the long and complicated sentences. In this paper, we propose a novel pattern-based information extraction method for the wide-window entities (WW-PIE). WW-PIE utilizes dependency parsing to break down the long sentences first and then utilizes frequent textual patterns to extract the high-quality information. The pattern hierarchical grouping organize and structure the extractions to be straightforward and precise. Consequently, comparing with the existing OpenIE tools, WW-PIE produces structured output that can be directly used for downstream applications. The proposed WW-PIE is also capable in extracting n-ary and nested relation structures, which is less studied in the existing methods. Extensive experiments on real-world biomedical corpus from PubMed abstracts demonstrate the power of WW-PIE at extracting precise and well-structured information.",
"title": ""
},
{
"docid": "f2a9d15d9b38738d563f9d9f9fa5d245",
"text": "Mobile cellular networks have become both the generators and carriers of massive data. Big data analytics can improve the performance of mobile cellular networks and maximize the revenue of operators. In this paper, we introduce a unified data model based on the random matrix theory and machine learning. Then, we present an architectural framework for applying the big data analytics in the mobile cellular networks. Moreover, we describe several illustrative examples, including big signaling data, big traffic data, big location data, big radio waveforms data, and big heterogeneous data, in mobile cellular networks. Finally, we discuss a number of open research challenges of the big data analytics in the mobile cellular networks.",
"title": ""
},
{
"docid": "252f5488232f7437ff886b79e2e7014e",
"text": "Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame. Our approach consists of three parts: automatic exposure control during capture, HDR stitching across neighboring frames, and tonemapping for viewing. HDR stitching requires accurately registering neighboring frames and choosing appropriate pixels for computing the radiance map. We show examples for a variety of dynamic scenes. We also show how we can compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.",
"title": ""
},
{
"docid": "467637b1f55d4673d0ddd5322a130979",
"text": "In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator (<inline-formula> <tex-math notation=\"LaTeX\">$H^{*}H$ </tex-math></inline-formula>, where <inline-formula> <tex-math notation=\"LaTeX\">$H^{*}$ </tex-math></inline-formula> is the adjoint of the forward imaging operator, <inline-formula> <tex-math notation=\"LaTeX\">$H$ </tex-math></inline-formula>) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a <inline-formula> <tex-math notation=\"LaTeX\">$512\\times 512$ </tex-math></inline-formula> image on the GPU.",
"title": ""
},
{
"docid": "47ea90e34fc95a941bc127ad8ccd2ca9",
"text": "The ever increasing number of cyber attacks requires the cyber security and forensic specialists to detect, analyze and defend against the cyber threats in almost real-time. In practice, timely dealing with such a large number of attacks is not possible without deeply perusing the attack features and taking corresponding intelligent defensive actions—this in essence defines cyber threat intelligence notion. However, such an intelligence would not be possible without the aid of artificial intelligence, machine learning and advanced data mining techniques to collect, analyse, and interpret cyber attack evidences. In this introductory chapter we first discuss the notion of cyber threat intelligence and its main challenges and opportunities, and then briefly introduce the chapters of the book which either address the identified challenges or present opportunistic solutions to provide threat intelligence.",
"title": ""
},
{
"docid": "1db42d9d65737129fa08a6ad4d52d27e",
"text": "This study introduces a unique prototype system for structural health monitoring (SHM), SmartSync, which uses the building’s existing Internet backbone as a system of virtual instrumentation cables to permit modular and largely plug-and-play deployments. Within this framework, data streams from distributed heterogeneous sensors are pushed through network interfaces in real time and seamlessly synchronized and aggregated by a centralized server, which performs basic data acquisition, event triggering, and database management while also providing an interface for data visualization and analysis that can be securely accessed. The system enables a scalable approach to monitoring tall and complex structures that can readily interface a variety of sensors and data formats (analog and digital) and can even accommodate variable sampling rates. This study overviews the SmartSync system, its installation/operation in theworld’s tallest building, Burj Khalifa, and proof-of-concept in triggering under dual excitations (wind and earthquake).DOI: 10.1061/(ASCE)ST.1943-541X.0000560. © 2013 American Society of Civil Engineers. CE Database subject headings: High-rise buildings; Structural health monitoring; Wind loads; Earthquakes. Author keywords: Tall buildings; Structural health monitoring; System identification.",
"title": ""
},
{
"docid": "4ee6894fade929db82af9cb62fecc0f9",
"text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.",
"title": ""
},
{
"docid": "e660a3407d3ae46995054764549adc35",
"text": "The factors predicting stress, anxiety and depression in the parents of children with autism remain poorly understood. In this study, a cohort of 250 mothers and 229 fathers of one or more children with autism completed a questionnaire assessing reported parental mental health problems, locus of control, social support, perceived parent-child attachment, as well as autism symptom severity and perceived externalizing behaviours in the child with autism. Variables assessing parental cognitions and socioeconomic support were found to be more significant predictors of parental mental health problems than child-centric variables. A path model, describing the relationship between the dependent and independent variables, was found to be a good fit with the observed data for both mothers and fathers.",
"title": ""
},
{
"docid": "db7bc8bbfd7dd778b2900973f2cfc18d",
"text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.",
"title": ""
},
{
"docid": "6e8d30f3eaaf6c88dddb203c7b703a92",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "79b74db73c30fa38239f3d6b84ee5443",
"text": "Optimizing an interactive system against a predefined online metric is particularly challenging, especially when the metric is computed from user feedback such as clicks and payments. The key challenge is the counterfactual nature: in the case of Web search, any change to a component of the search engine may result in a different search result page for the same query, but we normally cannot infer reliably from search log how users would react to the new result page. Consequently, it appears impossible to accurately estimate online metrics that depend on user feedback, unless the new engine is actually run to serve live users and compared with a baseline in a controlled experiment. This approach, while valid and successful, is unfortunately expensive and time-consuming. In this paper, we propose to address this problem using causal inference techniques, under the contextual-bandit framework. This approach effectively allows one to run potentially many online experiments offline from search log, making it possible to estimate and optimize online metrics quickly and inexpensively. Focusing on an important component in a commercial search engine, we show how these ideas can be instantiated and applied, and obtain very promising results that suggest the wide applicability of these techniques.",
"title": ""
},
{
"docid": "ead461ea8f716f6fab42c08bb7b54728",
"text": "Despite the increasing importance of data quality and the rich theoretical and practical contributions in all aspects of data cleaning, there is no single end-to-end off-the-shelf solution to (semi-)automate the detection and the repairing of violations w.r.t. a set of heterogeneous and ad-hoc quality constraints. In short, there is no commodity platform similar to general purpose DBMSs that can be easily customized and deployed to solve application-specific data quality problems. In this paper, we present NADEEF, an extensible, generalized and easy-to-deploy data cleaning platform. NADEEF distinguishes between a programming interface and a core to achieve generality and extensibility. The programming interface allows the users to specify multiple types of data quality rules, which uniformly define what is wrong with the data and (possibly) how to repair it through writing code that implements predefined classes. We show that the programming interface can be used to express many types of data quality rules beyond the well known CFDs (FDs), MDs and ETL rules. Treating user implemented interfaces as black-boxes, the core provides algorithms to detect errors and to clean data. The core is designed in a way to allow cleaning algorithms to cope with multiple rules holistically, i.e. detecting and repairing data errors without differentiating between various types of rules. We showcase two implementations for core repairing algorithms. These two implementations demonstrate the extensibility of our core, which can also be replaced by other user-provided algorithms. Using real-life data, we experimentally verify the generality, extensibility, and effectiveness of our system.",
"title": ""
},
{
"docid": "422b5a17be6923df4b90eaadf3ed0748",
"text": "Hate speech is currently of broad and current interest in the domain of social media. The anonymity and flexibility afforded by the Internet has made it easy for users to communicate in an aggressive manner. And as the amount of online hate speech is increasing, methods that automatically detect hate speech is very much required. Moreover, these problems have also been attracting the Natural Language Processing and Machine Learning communities a lot. Therefore, the goal of this paper is to look at how Natural Language Processing applies in detecting hate speech. Furthermore, this paper also applies a current technique in this field on a dataset. As neural network approaches outperforms existing methods for text classification problems, a deep learning model has been introduced, namely the Convolutional Neural Network. This classifier assigns each tweet to one of the categories of a Twitter dataset: hate, offensive language, and neither. The performance of this model has been tested using the accuracy, as well as looking at the precision, recall and F-score. The final model resulted in an accuracy of 91%, precision of 91%, recall of 90% and a F-measure of 90%. However, when looking at each class separately, it should be noted that a lot of hate tweets have been misclassified. Therefore, it is recommended to further analyze the predictions and errors, such that more insight is gained on the misclassification.",
"title": ""
},
{
"docid": "97382e18c9ca7c42d8b6c908cde761f2",
"text": "In recent years, heatmap regression based models have shown their effectiveness in face alignment and pose estimation. However, Conventional Heatmap Regression (CHR) is not accurate nor stable when dealing with high-resolution facial videos, since it finds the maximum activated location in heatmaps which are generated from rounding coordinates, and thus leads to quantization errors when scaling back to the original high-resolution space. In this paper, we propose a Fractional Heatmap Regression (FHR) for high-resolution video-based face alignment. The proposed FHR can accurately estimate the fractional part according to the 2D Gaussian function by sampling three points in heatmaps. To further stabilize the landmarks among continuous video frames while maintaining the precise at the same time, we propose a novel stabilization loss that contains two terms to address time delay and non-smooth issues, respectively. Experiments on 300W, 300VW and Talking Face datasets clearly demonstrate that the proposed method is more accurate and stable than the state-ofthe-art models. Introduction Face alignment aims to estimate a set of facial landmarks given a face image or video sequence. It is a classic computer vision problem that has attributed to many advanced machine learning algorithms Fan et al. (2018); Bulat and Tzimiropoulos (2017); Trigeorgis et al. (2016); Peng et al. (2015, 2016); Kowalski, Naruniec, and Trzcinski (2017); Chen et al. (2017); Liu et al. (2017); Hu et al. (2018). Nowadays, with the rapid development of consumer hardwares (e.g., mobile phones, digital cameras), High-Resolution (HR) video sequences can be easily collected. Estimating facial landmarks on such highresolution facial data has tremendous applications, e.g., face makeup Chen, Shen, and Jia (2017), editing with special effects Korshunova et al. (2017) in live broadcast videos. However, most existing face alinement methods work on faces with medium image resolutions Chen et al. (2017); Bulat and Tzimiropoulos (2017); Peng et al. (2016); Liu et al. (2017). Therefore, developing face alignment algorithms for high-resolution videos is at the core of this paper. To this end, we propose an accurate and stable algorithm for high-resolution video-based face alignment, named Fractional Heatmap Regression (FHR). It is well known that ∗ indicates equal contributions. Conventional Heatmap Regression (CHR) Loss Fractional Heatmap Regression (FHR) Loss 930 744 411",
"title": ""
},
{
"docid": "22eb9b1de056d03d15c0a3774a898cfd",
"text": "Massive volumes of big RDF data are growing beyond the performance capacity of conventional RDF data management systems operating on a single node. Applications using large RDF data demand efficient data partitioning solutions for supporting RDF data access on a cluster of compute nodes. In this paper we present a novel semantic hash partitioning approach and implement a Semantic HAsh Partitioning-Enabled distributed RDF data management system, called Shape. This paper makes three original contributions. First, the semantic hash partitioning approach we propose extends the simple hash partitioning method through direction-based triple groups and direction-based triple replications. The latter enhances the former by controlled data replication through intelligent utilization of data access locality, such that queries over big RDF graphs can be processed with zero or very small amount of inter-machine communication cost. Second, we generate locality-optimized query execution plans that are more efficient than popular multi-node RDF data management systems by effectively minimizing the inter-machine communication cost for query processing. Third but not the least, we provide a suite of locality-aware optimization techniques to further reduce the partition size and cut down on the inter-machine communication cost during distributed query processing. Experimental results show that our system scales well and can process big RDF datasets more efficiently than existing approaches.",
"title": ""
},
{
"docid": "462256d2d428f8c77269e4593518d675",
"text": "This paper is devoted to the modeling of real textured images by functional minimization and partial differential equations. Following the ideas of Yves Meyer in a total variation minimization framework of L. Rudin, S. Osher, and E. Fatemi, we decompose a given (possible textured) image f into a sum of two functions u+v, where u ¥ BV is a function of bounded variation (a cartoon or sketchy approximation of f), while v is a function representing the texture or noise. To model v we use the space of oscillating functions introduced by Yves Meyer, which is in some sense the dual of the BV space. The new algorithm is very simple, making use of differential equations and is easily solved in practice. Finally, we implement the method by finite differences, and we present various numerical results on real textured images, showing the obtained decomposition u+v, but we also show how the method can be used for texture discrimination and texture segmentation.",
"title": ""
},
{
"docid": "459a3bc8f54b8f7ece09d5800af7c37b",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.",
"title": ""
},
{
"docid": "9a10716e1d7e24b790fb5dd48ad254ab",
"text": "Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks. We then describe three types of information processing operations-inference, parameter learning, and structure learning-in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website.",
"title": ""
},
{
"docid": "b53e5d6054b684990e9c5c1e5d2b6b7d",
"text": "Automatic Dependent Surveillance-Broadcast (ADS-B) is one of the key technologies for future “e-Enabled” aircrafts. ADS-B uses avionics in the e-Enabled aircrafts to broadcast essential flight data such as call sign, altitude, heading, and other extra positioning information. On the one hand, ADS-B brings significant benefits to the aviation industry, but, on the other hand, it could pose security concerns as channels between ground controllers and aircrafts for the ADS-B communication are not secured, and ADS-B messages could be captured by random individuals who own ADS-B receivers. In certain situations, ADS-B messages contain sensitive information, particularly when communications occur among mission-critical civil airplanes. These messages need to be protected from any interruption and eavesdropping. The challenge here is to construct an encryption scheme that is fast enough for very frequent encryption and that is flexible enough for effective key management. In this paper, we propose a Staged Identity-Based Encryption (SIBE) scheme, which modifies Boneh and Franklin's original IBE scheme to address those challenges, that is, to construct an efficient and functional encryption scheme for ADS-B system. Based on the proposed SIBE scheme, we provide a confidentiality framework for future e-Enabled aircraft with ADS-B capability.",
"title": ""
}
] |
scidocsrr
|
91755439aad358564ff668278390cb45
|
Radio Frequency Energy Harvesting and Management for Wireless Sensor Networks
|
[
{
"docid": "3c4e1c7fd5dbdf5ea50eeed1afe23ff9",
"text": "Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work.",
"title": ""
}
] |
[
{
"docid": "600673953f89f29f2f9c3fe73cac1d13",
"text": "The multivariate regression model is considered with p regressors. A latent vector with p binary entries serves to identify one of two types of regression coef®cients: those close to 0 and those not. Specializing our general distributional setting to the linear model with Gaussian errors and using natural conjugate prior distributions, we derive the marginal posterior distribution of the binary latent vector. Fast algorithms aid its direct computation, and in high dimensions these are supplemented by a Markov chain Monte Carlo approach to sampling from the known posterior distribution. Problems with hundreds of regressor variables become quite feasible. We give a simple method of assigning the hyperparameters of the prior distribution. The posterior predictive distribution is derived and the approach illustrated on compositional analysis of data involving three sugars with 160 near infra-red absorbances as regressors.",
"title": ""
},
{
"docid": "4c0557527bb445c7d641028e2d88005f",
"text": "Small printed antennas will replace the commonly used normal-mode helical antennas of mobile handsets and systems in the future. This paper presents a novel small planar inverted-F antenna (PIFA) which is a common PIFA in which a U-shaped slot is etched to form a dual band operation for wearable and ubiquitous computing equipment. Health issues are considered in selecting suitable antenna topology and the placement of the antenna. Various applications are presented while the paper mainly discusses about the GSM applications.",
"title": ""
},
{
"docid": "2d4fd6da60cad3b6a427bd406f16d6fa",
"text": "BACKGROUND\nVarious cutaneous side-effects, including, exanthema, pruritus, urticaria and Lyell or Stevens-Johnson syndrome, have been reported with meropenem (carbapenem), a rarely-prescribed antibiotic. Levofloxacin (fluoroquinolone), a more frequently prescribed antibiotic, has similar cutaneous side-effects, as well as photosensitivity. We report a case of cutaneous hyperpigmentation induced by meropenem and levofloxacin.\n\n\nPATIENTS AND METHODS\nA 67-year-old male was treated with meropenem (1g×4 daily), levofloxacin (500mg twice daily) and amikacin (500mg daily) for 2 weeks, followed by meropenem, levofloxacin and rifampicin (600mg twice daily) for 4 weeks for osteitis of the fifth metatarsal. Three weeks after initiation of antibiotic therapy, dark hyperpigmentation appeared on the lower limbs, predominantly on the anterior aspects of the legs. Histology revealed dark, perivascular and interstitial deposits throughout the dermis, which stained with both Fontana-Masson and Perls stains. Infrared microspectroscopy revealed meropenem in the dermis of involved skin. After withdrawal of the antibiotics, the pigmentation subsided slowly.\n\n\nDISCUSSION\nSimilar cases of cutaneous hyperpigmentation have been reported after use of minocycline. In these cases, histological examination also showed iron and/or melanin deposits within the dermis, but the nature of the causative pigment remains unclear. In our case, infrared spectroscopy enabled us to identify meropenem in the dermis. Two cases of cutaneous hyperpigmentation have been reported following use of levofloxacin, and the results of histological examination were similar. This is the first case of cutaneous hyperpigmentation induced by meropenem.",
"title": ""
},
{
"docid": "6f0a2dee696eab0fb42113af2c8a2ad7",
"text": "OBJECTIVES\nTo evaluate whether the overgrowth of costal cartilage may cause pectus carinatum using three-dimensional (3D) computed tomography (CT).\n\n\nMETHODS\nTwenty-two patients with asymmetric pectus carinatum were included. The fourth, fifth and sixth ribs and costal cartilages were semi-automatically traced, and their full lengths were measured on three-dimensional CT images using curved multi-planar reformatted (MPR) techniques. The rib length and costal cartilage length, the total combined length of the rib and costal cartilage and the ratio of the cartilage and rib lengths (C/R ratio) in each patient were compared between the protruding side and the opposite side at the levels of the fourth, fifth and sixth ribs.\n\n\nRESULTS\nThe length of the costal cartilage was not different between the more protruded side and the contralateral side (55.8 ± 9.8 mm vs 55.9 ± 9.3 mm at the fourth, 70 ± 10.8 mm vs 71.6 ± 10.8 mm at the fifth and 97.8 ± 13.2 mm vs 99.8 ± 15.5 mm at the sixth; P > 0.05). There were also no significant differences between the lengths of ribs. (265.8 ± 34.9 mm vs 266.3 ± 32.9 mm at the fourth, 279.7 ± 32.7 mm vs 280.6 ± 32.4 mm at the fifth and 283.8 ± 33.9 mm vs 283.9 ± 32.3 mm at the sixth; P > 0.05). There was no statistically significant difference in either the total length of rib and costal cartilage or the C/R ratio according to side of the chest (P > 0.05).\n\n\nCONCLUSIONS\nIn patients with asymmetric pectus carinatum, the lengths of the fourth, fifth and sixth costal cartilage on the more protruded side were not different from those on the contralateral side. These findings suggest that overgrowth of costal cartilage cannot explain the asymmetric protrusion of anterior chest wall and may not be the main cause of pectus carinatum.",
"title": ""
},
{
"docid": "ba3636b17e9a5d1cb3d8755afb1b3500",
"text": "Anabolic-androgenic steroids (AAS) are used as ergogenic aids by athletes and non-athletes to enhance performance by augmenting muscular development and strength. AAS administration is often associated with various adverse effects that are generally dose related. High and multi-doses of AAS used for athletic enhancement can lead to serious and irreversible organ damage. Among the most common adverse effects of AAS are some degree of reduced fertility and gynecomastia in males and masculinization in women and children. Other adverse effects include hypertension and atherosclerosis, blood clotting, jaundice, hepatic neoplasms and carcinoma, tendon damage, psychiatric and behavioral disorders. More specifically, this article reviews the reproductive, hepatic, cardiovascular, hematological, cerebrovascular, musculoskeletal, endocrine, renal, immunologic and psychologic effects. Drug-prevention counseling to athletes is highlighted and the use of anabolic steroids is must be avoided, emphasizing that sports goals may be met within the framework of honest competition, free of doping substances.",
"title": ""
},
{
"docid": "a36fae7ccd3105b58a4977b5a2366ee8",
"text": "As the number of big data management systems continues to grow, users increasingly seek to leverage multiple systems in the context of a single data analysis task. To efficiently support such hybrid analytics, we develop a tool called PipeGen for efficient data transfer between database management systems (DBMSs). PipeGen automatically generates data pipes between DBMSs by leveraging their functionality to transfer data via disk files using common data formats such as CSV. PipeGen creates data pipes by extending such functionality with efficient binary data transfer capabilities that avoid file system materialization, include multiple important format optimizations, and transfer data in parallel when possible. We evaluate our PipeGen prototype by generating 20 data pipes automatically between five different DBMSs. The results show that PipeGen speeds up data transfer by up to 3.8× as compared to transferring using disk files.",
"title": ""
},
{
"docid": "b8fa50df3c76c2192c67cda7ae4d05f5",
"text": "Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP.\n Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied.",
"title": ""
},
{
"docid": "971398019db2fb255769727964f1e38a",
"text": "Scaling down to deep submicrometer (DSM) technology has made noise a metric of equal importance as compared to power, speed, and area. Smaller feature size, lower supply voltage, and higher frequency are some of the characteristics for DSM circuits that make them more vulnerable to noise. New designs and circuit techniques are required in order to achieve robustness in presence of noise. Novel methodologies for designing energy-efficient noise-tolerant exclusive-OR-exclusive- NOR circuits that can operate at low-supply voltages with good signal integrity and driving capability are proposed. The circuits designed, after applying the proposed methodologies, are characterized and compared with previously published circuits for reliability, speed and energy efficiency. To test the driving capability of the proposed circuits, they are embedded in an existing 5-2 compressor design. The average noise threshold energy (ANTE) is used for quantifying the noise immunity of the proposed circuits. Simulation results show that, compared with the best available circuit in literature, the proposed circuits exhibit better noise-immunity, lower power-delay product (PDP) and good driving capability. All of the proposed circuits prove to be faster and successfully work at all ranges of supply voltage starting from 3.3 V down to 0.6 V. The savings in the PDP range from 94% to 21% for the given supply voltage range respectively and the average improvement in the ANTE is 2.67X.",
"title": ""
},
{
"docid": "87fe73a5bc0b80fd0af1d0e65d1039c1",
"text": "Reactive programming improves the design of reactive applications by relocating the logic for managing dependencies between dependent values away from the application logic to the language implementation. Many distributed applications are reactive. Yet, existing change propagation algorithms are not suitable in a distributed setting.\n We propose Distributed REScala, a reactive language with a change propagation algorithm that works without centralized knowledge about the topology of the dependency structure among reactive values and avoids unnecessary propagation of changes, while retaining safety guarantees (glitch freedom). Distributed REScala enables distributed reactive programming, bringing the benefits of reactive programming to distributed applications. We demonstrate the enabled design improvements by a case study. We also empirically evaluate the performance of our algorithm in comparison to other algorithms in a simulated distributed setting.",
"title": ""
},
{
"docid": "f12749ba8911e8577fbde2327c9dc150",
"text": "Regardless of successful applications of the convolutional neural networks (CNNs) in different fields, its application to seismic waveform classification and first-break (FB) picking has not been explored yet. This letter investigates the application of CNNs for classifying time-space waveforms from seismic shot gathers and picking FBs of both direct wave and refracted wave. We use representative subimage samples with two types of labeled waveform classification to supervise CNNs training. The goal is to obtain the optimal weights and biases in CNNs, which are solved by minimizing the error between predicted and target label classification. The trained CNNs can be utilized to automatically extract a set of time-space attributes or features from any subimage in shot gathers. These attributes are subsequently inputted to the trained fully connected layer of CNNs to output two values between 0 and 1. Based on the two-element outputs, a discriminant score function is defined to provide a single indication for classifying input waveforms. The FB is then located from the calculated score maps by sequentially using a threshold, the first local minimum rule of every trace and a median filter. Finally, we adopt synthetic and real shot data examples to demonstrate the effectiveness of CNNs-based waveform classification and FB picking. The results illustrate that CNN is an efficient automatic data-driven classifier and picker.",
"title": ""
},
{
"docid": "11c117d839be466c369274f021caba13",
"text": "Android smartphones are becoming increasingly popular. The open nature of Android allows users to install miscellaneous applications, including the malicious ones, from third-party marketplaces without rigorous sanity checks. A large portion of existing malwares perform stealthy operations such as sending short messages, making phone calls and HTTP connections, and installing additional malicious components. In this paper, we propose a novel technique to detect such stealthy behavior. We model stealthy behavior as the program behavior that mismatches with user interface, which denotes the user's expectation of program behavior. We use static program analysis to attribute a top level function that is usually a user interaction function with the behavior it performs. Then we analyze the text extracted from the user interface component associated with the top level function. Semantic mismatch of the two indicates stealthy behavior. To evaluate AsDroid, we download a pool of 182 apps that are potentially problematic by looking at their permissions. Among the 182 apps, AsDroid reports stealthy behaviors in 113 apps, with 28 false positives and 11 false negatives.",
"title": ""
},
{
"docid": "f672df401b24571f81648066b3181890",
"text": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models’ operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.",
"title": ""
},
{
"docid": "0403bb8e2b96e3ad1ebfbbc0fa9434a7",
"text": "Sarcasm detection from text has gained increasing attention. While one thread of research has emphasized the importance of affective content in sarcasm detection, another avenue of research has explored the effectiveness of word representations. In this paper, we introduce a novel model for automated sarcasm detection in text, called Affective Word Embeddings for Sarcasm (AWES), which incorporates affective information into word representations. Extensive evaluation on sarcasm detection on six datasets across three domains of text (tweets, reviews and forum posts) demonstrates the effectiveness of the proposed model. The experimental results indicate that while sentiment affective representations yield best results on datasets comprising of short length text such as tweets, richer representations derived from fine-grained emotions are more suitable for detecting sarcasm from longer length documents such as product reviews and discussion forum posts.",
"title": ""
},
{
"docid": "107b95c3bb00c918c73d82dd678e46c0",
"text": "Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).",
"title": ""
},
{
"docid": "473f80115b7fa9979d6d6ffa2995c721",
"text": "Context Olive oil, the main fat in the Mediterranean diet, contains polyphenols, which have antioxidant properties and may affect serum lipid levels. Contribution The authors studied virgin olive oil (high in polyphenols), refined olive oil (low in polyphenols), and a mixture of the 2 oils in equal parts. Two hundred healthy young men consumed 25 mL of an olive oil daily for 3 weeks followed by the other olive oils in a randomly assigned sequence. Olive oils with greater polyphenol content increased high-density lipoprotein (HDL) cholesterol levels and decreased serum markers of oxidation. Cautions The increase in HDL cholesterol level was small. Implications Virgin olive oil might have greater health benefits than refined olive oil. The Editors Polyphenol intake has been associated with low cancer and coronary heart disease (CHD) mortality rates (1). Antioxidant and anti-inflammatory properties and improvements in endothelial dysfunction and the lipid profile have been reported for dietary polyphenols (2). Studies have recently suggested that Mediterranean health benefits may be due to a synergistic combination of phytochemicals and fatty acids (3). Olive oil, rich in oleic acid (a monounsaturated fatty acid), is the main fat of the Mediterranean diet (4). To date, most of the protective effect of olive oil within the Mediterranean diet has been attributed to its high monounsaturated fatty acid content (5). However, if the effect of olive oil can be attributed solely to its monounsaturated fatty acid content, any type of olive oil, rapeseed or canola oil, or monounsaturated fatty acidenriched fat would provide similar health benefits. Whether the beneficial effects of olive oil on the cardiovascular system are exclusively due to oleic acid remains to be elucidated. The minor components, particularly the phenolic compounds, in olive oil may contribute to the health benefits derived from the Mediterranean diet. Among olive oils usually present on the market, virgin olive oils produced by direct-press or centrifugation methods have higher phenolic content (150 to 350 mg/kg of olive oil) (6). In experimental studies, phenolic compounds in olive oil showed strong antioxidant properties (7, 8). Oxidized low-density lipoprotein (LDL) is currently thought to be more damaging to the arterial wall than native LDL cholesterol (9). Results of randomized, crossover, controlled clinical trials on the antioxidant effect of polyphenols from real-life daily doses of olive oil in humans are, however, conflicting (10). Growing evidence suggests that dietary phenols (1115) and plant-based diets (16) can modulate lipid and lipoprotein metabolism. The Effect of Olive Oil on Oxidative Damage in European Populations (EUROLIVE) Study is a multicenter, randomized, crossover, clinical intervention trial that aims to assess the effect of sustained daily doses of olive oil, as a function of its phenolic content, on the oxidative damage to lipid and LDL cholesterol levels and the lipid profile as cardiovascular risk factors. Methods Participants We recruited healthy men, 20 to 60 years of age, from 6 European cities through newspaper and university advertisements. Of the 344 persons who agreed to be screened, 200 persons were eligible (32 men from Barcelona, Spain; 33 men from Copenhagen, Denmark; 30 men from Kuopio, Finland; 31 men from Bologna, Italy; 40 men from Postdam, Germany; and 34 men from Berlin, Germany) and were enrolled from September 2002 through June 2003 (Figure 1). 
Participants were eligible for study inclusion if they provided written informed consent, were willing to adhere to the protocol, and were in good health. We preselected volunteers when clinical record, physical examination, and blood pressure were strictly normal and the candidate was a nonsmoker. Next, we performed a complete blood count, biochemical laboratory analyses, and urinary dipstick tests to measure levels of serum glucose, total cholesterol, creatinine, alanine aminotransferase, and triglycerides. We included candidates with values within the reference range. Exclusion criteria were smoking; use of antioxidant supplements, aspirin, or drugs with established antioxidant properties; hyperlipidemia; obesity; diabetes; hypertension; intestinal disease; or any other disease or condition that would impair adherence. We excluded women to avoid the possible interference of estrogens, which are considered to be potential antioxidants (17). All participants provided written informed consent, and the local institutional ethics committees approved the protocol. Figure 1. Study flow diagram. Sequence of olive oil administration: 1) high-, medium-, and low-polyphenol olive oil; 2) medium-, low-, and high-polyphenol olive oil; and 3) low-, high-, and medium-polyphenol olive oil. Design and Study Procedure The trial was a randomized, crossover, controlled study. We randomly assigned participants consecutively to 1 of 3 sequences of olive oil administration. Participants received a daily dose of 25 mL (22 g) of 3 olive oils with high (366 mg/kg), medium (164 mg/kg), and low (2.7 mg/kg) polyphenol content (Figure 1) in replacement of other raw fats. Sequences were high-, medium-, and low-polyphenol olive oil (sequence 1); medium-, low-, and high-polyphenol olive oil (sequence 2); and low-, high-, and medium-polyphenol olive oil (sequence 3). In the coordinating center, we prepared random allocation to each sequence, taken from a Latin square, for each center by blocks of 42 participants (14 persons in each sequence), using specific software that was developed at the Municipal Institute for Medical Research, Barcelona, Spain (Aleator, Municipal Institute for Medical Research). The random allocation was faxed to the participating centers upon request for each individual included in the study. Treatment containers were assigned a code number that was concealed from participants and investigators, and the coordinating center disclosed the code number only after completion of statistical analyses. Olive oils were specially prepared for the trial. We selected a virgin olive oil with high natural phenolic content (366 mg/kg) and measured its fatty acid and vitamin E composition. We tested refined olive oil harvested from the same cultivar and soil to find an olive oil with similar quantities of fatty acid and a similar micronutrient profile. Vitamin E was adjusted to values similar to those of the selected virgin olive oil. Because phenolic compounds are lost in the refinement process, the refined olive oil had a low phenolic content (2.7 mg/kg). By mixing virgin and refined olive oil, we obtained an olive oil with an intermediate phenolic content (164 mg/kg). Olive oils did not differ in fat and micronutrient composition (that is, vitamin E, triterpenes, and sitosterols), with the exception of phenolic content. Three-week interventions were preceded by 2-week washout periods, in which we requested that participants avoid olive and olive oil consumption. 
We chose the 2-week washout period to reach the equilibrium in the plasma lipid profile because longer intervention periods with fat-rich diets did not modify the lipid concentrations (18). Daily doses of 25 mL of olive oil were blindly prepared in containers delivered to the participants at the beginning of each intervention period. We instructed participants to return the 21 containers at the end of each intervention period so that the daily amount of unconsumed olive oil could be registered. Dietary Adherence We measured tyrosol and hydroxytyrosol, the 2 major phenolic compounds in olive oil as simple forms or conjugates (7), by gas chromatography and mass spectrometry in 24-hour urine before and after each intervention period as biomarkers of adherence to the type of olive oil ingested. We asked participants to keep a 3-day dietary record at baseline and after each intervention period. We requested that participants in all centers avoid a high intake of foods that contain antioxidants (that is, vegetables, legumes, fruits, tea, coffee, chocolate, wine, and beer). A nutritionist also personally advised participants to replace all types of habitually consumed raw fats with the olive oils (for example, spread the assigned olive oil on bread instead of butter, put the assigned olive oil on boiled vegetables instead of margarine, and use the assigned olive oil on salads instead of other vegetable oils or standard salad dressings). Data Collection Main outcome measures were changes in biomarkers of oxidative damage to lipids. Secondary outcomes were changes in lipid levels and in biomarkers of the antioxidant status of the participants. We assessed outcome measures at the beginning of the study (baseline) and before (preintervention) and after (postintervention) each olive oil intervention period. We collected blood samples at fasting state together with 24-hour urine and recorded anthropometric variables. We measured blood pressure with a mercury sphygmomanometer after at least a 10-minute rest in the seated position. We recorded physical activity at baseline and at the end of the study and assessed it by using the Minnesota Leisure Time Physical Activity Questionnaire (19). We measured 1) glucose and lipid profile, including serum glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, and triglyceride levels determined by enzymatic methods (2023) and LDL cholesterol levels calculated by the Friedewald formula; 2) oxidative damage to lipids, including plasma-circulating oxidized LDL measured by enzyme immunoassay, plasma total F2-isoprostanes determined by using high-performance liquid chromatography and stable isotope-dilution and mass spectrometry, plasma C18 hydroxy fatty acids measured by gas chromatography and mass spectrometry, and serum LDL cholesterol uninduced conjugated dienes measured by spectrophotometry and adjusted for the cholesterol concentration in LDL cholesterol levels; 3) antioxidant sta",
"title": ""
},
{
"docid": "19ee4367e4047f45b60968e3374cae7a",
"text": "BACKGROUND\nFusion zones between superficial fascia and deep fascia have been recognized by surgical anatomists since 1938. Anatomical dissection performed by the author suggested that additional superficial fascia fusion zones exist.\n\n\nOBJECTIVES\nA study was performed to evaluate and define fusion zones between the superficial and the deep fascia.\n\n\nMETHODS\nDissection of fresh and minimally preserved cadavers was performed using the accepted technique for defining anatomic spaces: dye injection combined with cross-sectional anatomical dissection.\n\n\nRESULTS\nThis study identified bilaminar membranes traveling from deep to superficial fascia at consistent locations in all specimens. These membranes exist as fusion zones between superficial and deep fascia, and are referred to as SMAS fusion zones.\n\n\nCONCLUSIONS\nNerves, blood vessels and lymphatics transition between the deep and superficial fascia of the face by traveling along and within these membranes, a construct that provides stability and minimizes shear. Bilaminar subfascial membranes continue into the subcutaneous tissues as unilaminar septa on their way to skin. This three-dimensional lattice of interlocking horizontal, vertical, and oblique membranes defines the anatomic boundaries of the fascial spaces as well as the deep and superficial fat compartments of the face. This information facilitates accurate volume augmentation; helps to avoid facial nerve injury; and provides the conceptual basis for understanding jowls as a manifestation of enlargement of the buccal space that occurs with age.",
"title": ""
},
{
"docid": "28b796954834230a0e8218e24bab0d35",
"text": "Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).",
"title": ""
},
{
"docid": "77f8f90edd85f1af6de8089808153dd7",
"text": "Distributed coding is a new paradigm for video compression, based on Slepian and Wolf's and Wyner and Ziv's information-theoretic results from the 1970s. This paper reviews the recent development of practical distributed video coding schemes. Wyner-Ziv coding, i.e., lossy compression with receiver side information, enables low-complexity video encoding where the bulk of the computation is shifted to the decoder. Since the interframe dependence of the video sequence is exploited only at the decoder, an intraframe encoder can be combined with an interframe decoder. The rate-distortion performance is superior to conventional intraframe coding, but there is still a gap relative to conventional motion-compensated interframe coding. Wyner-Ziv coding is naturally robust against transmission errors and can be used for joint source-channel coding. A Wyner-Ziv MPEG encoder that protects the video waveform rather than the compressed bit stream achieves graceful degradation under deteriorating channel conditions without a layered signal representation.",
"title": ""
},
{
"docid": "aec48ddea7f21cabb9648eec07c31dcd",
"text": "High voltage Marx generator implementation using IGBT (Insulated Gate Bipolar Transistor) stacks is proposed in this paper. To protect the Marx generator at the moment of breakdown, AOCP (Active Over-Current Protection) part is included. The Marx generator is composed of 12 stages and each stage is made of IGBT stacks, two diode stacks, and capacitors. IGBT stack is used as a single switch. Diode stacks and inductors are used to charge the high voltage capacitor at each stage without power loss. These are also used to isolate input and high voltage negative output in high voltage generation mode. The proposed Marx generator implementation uses IGBT stack with a simple driver and has modular design. This system structure gives compactness and easiness to implement the total system. Some experimental and simulated results are included to verify the system performances in this paper.",
"title": ""
},
{
"docid": "d0b16a75fb7b81c030ab5ce1b08d8236",
"text": "It is unquestionable that successive hardware generations have significantly improved GPU computing workload performance over the last several years. Moore's law and DRAM scaling have respectively increased single-chip peak instruction throughput by 3X and off-chip bandwidth by 2.2X from NVIDIA's GeForce 8800 GTX in November 2006 to its GeForce GTX 580 in November 2010. However, raw capability numbers typically underestimate the improvements in real application performance over the same time period, due to significant architectural feature improvements. To demonstrate the effects of architecture features and optimizations over time, we conducted experiments on a set of benchmarks from diverse application domains for multiple GPU architecture generations to understand how much performance has truly been improving for those workloads. First, we demonstrate that certain architectural features make a huge difference in the performance of unoptimized code, such as the inclusion of a general cache which can improve performance by 2-4× in some situations. Second, we describe what optimization patterns have been most essential and widely applicable for improving performance for GPU computing workloads across all architecture generations. Some important optimization patterns included data layout transformation, converting scatter accesses to gather accesses, GPU workload regularization, and granularity coarsening, each of which improved performance on some benchmark by over 20%, sometimes by a factor of more than 5×. While hardware improvements to baseline unoptimized code can reduce the speedup magnitude, these patterns remain important for even the most recent GPUs. Finally, we identify which added architectural features created significant new optimization opportunities, such as increased register file capacity or reduced bandwidth penalties for misaligned accesses, which increase performance by 2× or more in the optimized versions of relevant benchmarks.",
"title": ""
}
] |
scidocsrr
|
391e6550267acdb9f833f7898eb65d00
|
Multi-Modal Bayesian Embeddings for Learning Social Knowledge Graphs
|
[
{
"docid": "cc0a875eca7237f786b81889f028f1f2",
"text": "Online photo services such as Flickr and Zooomr allow users to share their photos with family, friends, and the online community at large. An important facet of these services is that users manually annotate their photos using so called tags, which describe the contents of the photo or provide additional contextual and semantical information. In this paper we investigate how we can assist users in the tagging phase. The contribution of our research is twofold. We analyse a representative snapshot of Flickr and present the results by means of a tag characterisation focussing on how users tags photos and what information is contained in the tagging. Based on this analysis, we present and evaluate tag recommendation strategies to support the user in the photo annotation task by recommending a set of tags that can be added to the photo. The results of the empirical evaluation show that we can effectively recommend relevant tags for a variety of photos with different levels of exhaustiveness of original tagging.",
"title": ""
},
{
"docid": "edf560968135e9083bdc3d4c1ebc230f",
"text": "We present a new keyword extraction algorithm that applies to a single document without using a corpus. Frequent terms are extracted first, then a set of co-occurrences between each term and the frequent terms, i.e., occurrences in the same sentences, is generated. Co-occurrence distribution shows importance of a term in the document as follows. If the probability distribution of co-occurrence between term a and the frequent terms is biased to a particular subset of frequent terms, then term a is likely to be a keyword. The degree of bias of a distribution is measured by the χ2-measure. Our algorithm shows comparable performance to tfidf without using a corpus.",
"title": ""
}
] |
[
{
"docid": "cf30e30d7683fd2b0dec2bd6cc354620",
"text": "As online courses such as MOOCs become increasingly popular, there has been a dramatic increase for the demand for methods to facilitate this type of organisation. While resources for new courses are often freely available, they are generally not suitably organised into easily manageable units. In this paper, we investigate how state-of-the-art topic segmentation models can be utilised to automatically transform unstructured text into coherent sections, which are suitable for MOOCs content browsing. The suitability of this method with regards to course organisation is confirmed through experiments with a lecture corpus, configured explicitly according to MOOCs settings. Experimental results demonstrate the reliability and scalability of this approach over various academic disciplines. The findings also show that the topic segmentation model which used discourse cues displayed the best results overall.",
"title": ""
},
{
"docid": "5474bdc85c226ab613ce281b239a5e6a",
"text": "This paper summarizes theoretical framework and preliminary data for a planned action research project in a secondary education institution with the intention to improve teachers` digital skills and capacity for educational ICT use through establishment of a professional learning community and development of cooperation between the school and a university. This study aims to fill the gap of knowledge about how engaging in professional learning communities (PLC) fosters teachers` skills and confidence with ICT. Based on the theoretical assumptions and review of previous research, initial ideas are drafted for an action research project.",
"title": ""
},
{
"docid": "5cdb99bf928039bd5377b3eca521d534",
"text": "Thanks to advances in information and communication technologies, there is a prominent increase in the amount of information produced specifically in the form of text documents. In order to, effectively deal with this “information explosion” problem and utilize the huge amount of text databases, efficient and scalable tools and techniques are indispensable. In this study, text clustering which is one of the most important techniques of text mining that aims at extracting useful information by processing data in textual form is addressed. An improved variant of spherical K-Means (SKM) algorithm named multi-cluster SKM is developed for clustering high dimensional document collections with high performance and efficiency. Experiments were performed on several document data sets and it is shown that the new algorithm provides significant increase in clustering quality without causing considerable difference in CPU time usage when compared to SKM algorithm.",
"title": ""
},
{
"docid": "f4b48bdf794bc0e5672cc9efb2c5b48b",
"text": "In this paper, we formulate the deep residual network (ResNet) as a control problem of transport equation. In ResNet, the transport equation is solved along the characteristics. Based on this observation, deep neural network is closely related to the control problem of PDEs on manifold. We propose several models based on transport equation, Hamilton-Jacobi equation and Fokker-Planck equation. The discretization of these PDEs on point cloud is also discussed. keywords: Deep residual network; control problem; manifold learning; point cloud; transport equation; Hamilton-Jacobi equation 1 Deep Residual Network Deep convolution neural networks have achieved great successes in image classification. Recently, an approach of deep residual learning is proposed to tackle the degradation in the classical deep neural network [7, 8]. The deep residual network can be realized by adding shortcut connections in the classical CNN. A building block is shown in Fig. 1. Formally, a building block is defined as: y = F (x, {Wi}) + x. Here x and y are the input and output vectors of the layers. The function F (x, {Wi}) represents the residual mapping to be learned. In Fig. 1, F = W2 · σ(W1 · σ(x)) in which σ = ReLU ◦ BN denotes composition of ReLU and Batch-Normalization. ∗Department of Mathematics, Hong Kong University of Science & Technology, Hong Kong. Email: mazli@ust.hk. †Yau Mathematical Sciences Center, Tsinghua University, Beijing, China, 100084. Email: zqshi@tsinghua.edu.cn. 1 ar X iv :1 70 8. 05 11 5v 3 [ cs .I T ] 2 5 Ja n 20 18",
"title": ""
},
{
"docid": "c3f81c5e4b162564b15be399b2d24750",
"text": "Although memory performance benefits from the spacing of information at encoding, judgments of learning (JOLs) are often not sensitive to the benefits of spacing. The present research examines how practice, feedback, and instruction influence JOLs for spaced and massed items. In Experiment 1, in which JOLs were made after the presentation of each item and participants were given multiple study-test cycles, JOLs were strongly influenced by the repetition of the items, but there was little difference in JOLs for massed versus spaced items. A similar effect was shown in Experiments 2 and 3, in which participants scored their own recall performance and were given feedback, although participants did learn to assign higher JOLs to spaced items with task experience. In Experiment 4, after participants were given direct instruction about the benefits of spacing, they showed a greater difference for JOLs of spaced vs massed items, but their JOLs still underestimated their recall for spaced items. Although spacing effects are very robust and have important implications for memory and education, people often underestimate the benefits of spaced repetition when learning, possibly due to the reliance on processing fluency during study and attending to repetition, and not taking into account the beneficial aspects of study schedule.",
"title": ""
},
{
"docid": "560c2e21bc72cb75b1a802939cc1fd40",
"text": "Social comparison theory maintains that people think about themselves compared with similar others. Those in one culture, then, compare themselves with different others and standards than do those in another culture, thus potentially confounding cross-cultural comparisons. A pilot study and Study 1 demonstrated the problematic nature of this reference-group effect: Whereas cultural experts agreed that East Asians are more collectivistic than North Americans, cross-cultural comparisons of trait and attitude measures failed to reveal such a pattern. Study 2 found that manipulating reference groups enhanced the expected cultural differences, and Study 3 revealed that people from different cultural backgrounds within the same country exhibited larger differences than did people from different countries. Cross-cultural comparisons using subjective Likert scales are compromised because of different reference groups. Possible solutions are discussed.",
"title": ""
},
{
"docid": "32b4b275dc355dff2e3e168fe6355772",
"text": "The management of coupon promotions is an important issue for marketing managers since it still is the major promotion medium. However, the distribution of coupons does not go without problems. Although manufacturers and retailers are investing heavily in the attempt to convince as many customers as possible, overall coupon redemption rate is low. This study improves the strategy of retailers and manufacturers concerning their target selection since both parties often end up in a battle for customers. Two separate models are built: one model makes predictions concerning redemption behavior of coupons that are distributed by the retailer while another model does the same for coupons handed out by manufacturers. By means of the feature-selection technique ‘Relief-F’ the dimensionality of the models is reduced, since it searches for the variables that are relevant for predicting the outcome. In this way, redundant variables are not used in the model-building process. The model is evaluated on real-life data provided by a retailer in FMCG. The contributions of this study for retailers as well as manufacturers are threefold. First, the possibility to classify customers concerning their coupon usage is shown. In addition, it is demonstrated that retailers and manufacturers can stay clear of each other in their marketing campaigns. Finally, the feature-selection technique ‘Relief-F’ proves to facilitate and optimize the performance of the models.",
"title": ""
},
{
"docid": "628840e66a3ea91e75856b7ae43cb9bb",
"text": "Optimal shape design of structural elements based on boundary variations results in final designs that are topologically equivalent to the initial choice of design, and general, stable computational schemes for this approach often require some kind of remeshing of the finite element approximation of the analysis problem. This paper presents a methodology for optimal shape design where both these drawbacks can be avoided. The method is related to modern production techniques and consists of computing the optimal distribution in space of an anisotropic material that is constructed by introducing an infimum of periodically distributed small holes in a given homogeneous, i~otropic material, with the requirement that the resulting structure can carry the given loads as well as satisfy other design requirements. The computation of effective material properties for the anisotropic material is carried out using the method of homogenization. Computational results are presented and compared with results obtained by boundary variations.",
"title": ""
},
{
"docid": "65a990303d1d6efd3aea5307e7db9248",
"text": "The presentation of news articles to meet research needs has traditionally been a document-centric process. Yet users often want to monitor developing news stories based on an event, rather than by examining an exhaustive list of retrieved documents. In this work, we illustrate a news retrieval system, eventNews, and an underlying algorithm which is event-centric. Through this system, news articles are clustered around a single news event or an event and its sub-events. The algorithm presented can leverage the creation of new Reuters stories and their compact labels as seed documents for the clustering process. The system is configured to generate top-level clusters for news events based on an editorially supplied topical label, known as a ‘slugline,’ and to generate sub-topic-focused clusters based on the algorithm. The system uses an agglomerative clustering algorithm to gather and structure documents into distinct result sets. Decisions on whether to merge related documents or clusters are made according to the similarity of evidence derived from two distinct sources, one, relying on a digital signature based on the unstructured text in the document, the other based on the presence of named entity tags that have been assigned to the document by a named entity tagger, in this case Thomson Reuters’ Calais engine. Copyright c © 2016 for the individual papers by the paper’s authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: M. Martinez, U. Kruschwitz, G. Kazai, D. Corney, F. Hopfgartner, R. Campos and D. Albakour (eds.): Proceedings of the NewsIR’16 Workshop at ECIR, Padua, Italy, 20-March-2016, published at http://ceur-ws.org",
"title": ""
},
{
"docid": "3ea7700a4fff166c1a5bc8c6c5aa3ade",
"text": "ion-Based Intrusion Detection The implementation of many misuse detection approaches shares a common problem: Each system is written for a single environment and has proved difficult to use in other environments that may have similar policies and concerns. The primary goal of abstraction-based intrusion detection is to address this problem.",
"title": ""
},
{
"docid": "32a964bd36770b8c50a0e74289f4503b",
"text": "Several competing human behavior models have been proposed to model and protect against boundedly rational adversaries in repeated Stackelberg security games (SSGs). However, these existing models fail to address three main issues which are extremely detrimental to defender performance. First, while they attempt to learn adversary behavior models from adversaries’ past actions (“attacks on targets”), they fail to take into account adversaries’ future adaptation based on successes or failures of these past actions. Second, they assume that sufficient data in the initial rounds will lead to a reliable model of the adversary. However, our analysis reveals that the issue is not the amount of data, but that there just is not enough of the attack surface exposed to the adversary to learn a reliable model. Third, current leading approaches have failed to include probability weighting functions, even though it is well known that human beings’ weighting of probability is typically nonlinear. The first contribution of this paper is a new human behavior model, SHARP, which mitigates these three limitations as follows: (i) SHARP reasons based on success or failure of the adversary’s past actions on exposed portions of the attack surface to model adversary adaptiveness; (ii) SHARP reasons about similarity between exposed and unexposed areas of the attack surface, and also incorporates a discounting parameter to mitigate adversary’s lack of exposure to enough of the attack surface; and (iii) SHARP integrates a non-linear probability weighting function to capture the adversary’s true weighting of probability. Our second contribution is a first “longitudinal study” – at least in the context of SSGs – of competing models in settings involving repeated interaction between the attacker and the defender. This study, where each experiment lasted a period of multiple weeks with individual sets of human subjects, illustrates the strengths and weaknesses of different models and shows the advantages of SHARP.",
"title": ""
},
{
"docid": "680e9f3b5aeb02822c8889044517f2ec",
"text": "Currently, there are many large, automatically constructed knowledge bases (KBs). One interesting task is learning from a knowledge base to generate new knowledge either in the form of inferred facts or rules that define regularities. One challenge for learning is that KBs are necessarily open world: we cannot assume anything about the truth values of tuples not included in the KB. When a KB only contains facts (i.e., true statements), which is typically the case, we lack negative examples, which are often needed by learning algorithms. To address this problem, we propose a novel score function for evaluating the quality of a first-order rule learned from a KB. Our metric attempts to include information about the tuples not in the KB when evaluating the quality of a potential rule. Empirically, we find that our metric results in more precise predictions than previous approaches.",
"title": ""
},
{
"docid": "148f306c8c9a4170afcdc8a0b6ff902c",
"text": "Word vectors require significant amounts of memory and storage, posing issues to resource limited devices like mobile phones and GPUs. We show that high quality quantized word vectors using 1-2 bits per parameter can be learned by introducing a quantization function into Word2Vec. We furthermore show that training with the quantization function acts as a regularizer. We train word vectors on English Wikipedia (2017) and evaluate them on standard word similarity and analogy tasks and on question answering (SQuAD). Our quantized word vectors not only take 8-16x less space than full precision (32 bit) word vectors but also outperform them on word similarity tasks and question answering.",
"title": ""
},
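The quantized word-vector abstract above can be illustrated with a minimal post-hoc quantization sketch. Note this is not the paper's method, which trains with the quantization function inside Word2Vec and a straight-through-style gradient; the scale constant, the per-row level choice, and the toy embedding matrix below are assumptions.

```python
import numpy as np


def quantize_1bit(vectors, scale=0.3):
    """Map every parameter to {-scale, +scale}: 1 bit of information per
    parameter (the saving is realised by bit-packing plus one shared scale)."""
    return np.where(vectors >= 0.0, scale, -scale).astype(np.float32)


def quantize_2bit(vectors):
    """Map every parameter to one of four per-row quantile levels (2 bits)."""
    out = np.empty_like(vectors)
    for r, row in enumerate(vectors):
        levels = np.quantile(row, [0.125, 0.375, 0.625, 0.875])
        idx = np.abs(row[:, None] - levels[None, :]).argmin(axis=1)
        out[r] = levels[idx]
    return out


# During training, the quantized vectors would be used in the forward pass
# while gradients update full-precision copies; here we only quantize
# already-trained embeddings as a stand-in.
emb = np.random.randn(10000, 300).astype(np.float32)  # toy embedding matrix
emb_q = quantize_1bit(emb)
```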
{
"docid": "32200036224dab6e3a165376a1c7a254",
"text": "Modern graphics accelerators have embedded programmable components in the form of vertex and fragment shading units. Current APIs permit specification of the programs for these components using an assembly-language level interface. Compilers for high-level shading languages are available but these read in an external string specification, which can be inconvenient.It is possible, using standard C++, to define a high-level shading language directly in the API. Such a language can be nearly indistinguishable from a special-purpose shading language, yet permits more direct interaction with the specification of textures and parameters, simplifies implementation, and enables on-the-fly generation, manipulation, and specialization of shader programs. A shading language built into the API also permits the lifting of C++ host language type, modularity, and scoping constructs into the shading language without any additional implementation effort.",
"title": ""
},
{
"docid": "0950052c92b4526c253acc0d4f0f45a0",
"text": "Pictogram communication is successful when participants at both end of the communication channel share a common pictogram interpretation. Not all pictograms carry universal interpretation, however; the issue of ambiguous pictogram interpretation must be addressed to assist pictogram communication. To unveil the ambiguity possible in pictogram interpretation, we conduct a human subject experiment to identify culture-specific criteria employed by humans by detecting cultural differences in pictogram interpretations. Based on the findings, we propose a categorical semantic relevance measure which calculates how relevant a pictogram is to a given interpretation in terms of a given pictogram category. The proposed measure is applied to categorized pictogram interpretations to enhance pictogram retrieval performance. The WordNet, the ChaSen, and the EDR Electronic Dictionary registered to the Language Grid are utilized to merge synonymous pictogram interpretations and to categorize pictogram interpretations into super-concept categories. We show how the Language Grid can assist the crosscultural research process.",
"title": ""
},
{
"docid": "818862fc058767caa026ca58d0b9b1d2",
"text": "This paper proposes a novel outer-rotor flux-switching permanent-magnet (OR-FSPM) machine with specific wedge-shaped magnets for in-wheel light-weight traction applications. First, the geometric topology is introduced. Then, the combination principle of stator slots and rotor poles for OR-FSPM machines is investigated. Furthermore, to demonstrate the relationship between performance specifications (e.g., torque and speed) and key design parameters and dimensions (e.g., rotor outer diameter and stack length) of OR-FSPM machines at preliminary design stage, an analytical torque-sizing equation is proposed and verified by two-dimensional (2-D) finite-element analysis (FEA). Moreover, optimizations of key dimensions are conducted on an initially designed proof-of-principle three-phase 12-stator-slot/22-rotor-pole prototyped machine. Then, based on 2-D-FEA, a comprehensive comparison between a pair of OR-FSPM machines with rectangular- and wedge-shaped magnets and a surface-mounted permanent-magnet (SPM) machine is performed. The results indicate that the proposed OR-FSPM machine with wedge-shaped magnets exhibits better flux-weakening capability, higher efficiency, and wider speed range than the counterparts, especially for torque capability, where the proposed wedge-shaped magnets-based one could produce 40% and 61.5% more torque than the rectangular-shaped magnets-based machine and SPM machine, respectively, with the same rated current density (5 A/mm2). Finally, the predicted performance of the proposed OR-FSPM machine is verified by experiments on a prototyped machine.",
"title": ""
},
{
"docid": "ef264055e4bb6e6205e92ba6ed38d7bd",
"text": "3D printing or additive manufacturing is a novel method of manufacturing parts directly from digital model using layer-by-layer material build-up approach. This tool-less manufacturing method can produce fully dense metallic parts in short time, with high precision. Features of additive manufacturing like freedom of part design, part complexity, light weighting, part consolidation, and design for function are garnering particular interests in metal additive manufacturing for aerospace, oil and gas, marine, and automobile applications. Powder bed fusion, in which each powder bed layer is selectively fused using energy source like laser, is the most promising additive manufacturing technology that can be used for manufacturing small, low-volume, complex metallic parts. This review presents overview of 3D Printing technologies, materials, applications, advantages, disadvantages, challenges, economics, and applications of 3D metal printing technology, the DMLS process in detail, and also 3D metal printing perspectives in developing countries.",
"title": ""
},
{
"docid": "14fe7deaece11b3d4cd4701199a18599",
"text": "\"Natively unfolded\" proteins occupy a unique niche within the protein kingdom in that they lack ordered structure under conditions of neutral pH in vitro. Analysis of amino acid sequences, based on the normalized net charge and mean hydrophobicity, has been applied to two sets of proteins: small globular folded proteins and \"natively unfolded\" ones. The results show that \"natively unfolded\" proteins are specifically localized within a unique region of charge-hydrophobicity phase space and indicate that a combination of low overall hydrophobicity and large net charge represent a unique structural feature of \"natively unfolded\" proteins.",
"title": ""
},
{
"docid": "eee9f9e1e8177b68a278eab025dae84b",
"text": "Herzberg et al. (1959) developed “Two Factors theory” to focus on working conditions necessary for employees to be motivated. Since Herzberg examined only white collars in his research, this article reviews later studies on motivation factors of blue collar workers verses white collars and suggests some hypothesis for further researches.",
"title": ""
}
] |
scidocsrr
|
55bd54d13a2ba4dd6ad7fd7d079f1b86
|
Logics for resource-bounded agents
|
[
{
"docid": "4285d9b4b9f63f22033ce9a82eec2c76",
"text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright 2001 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "12866e003093bc7d89d751697f2be93c",
"text": "We argue that the right way to understand distributed protocols is by considering how messages change the state of knowledge of a system. We present a hierarchy of knowledge states that a system may be in, and discuss how communication can move the system's state of knowledge of a fact up the hierarchy. Of special interest is the notion of common knowledge. Common knowledge is an essential state of knowledge for reaching agreements and coordinating action. We show that in practical distributed systems, common knowledge is not attainable. We introduce various relaxations of common knowledge that are attainable in many cases of interest. We describe in what sense these notions are appropriate, and discuss their relationship to each other. We conclude with a discussion of the role of knowledge in distributed systems.",
"title": ""
}
] |
[
{
"docid": "fdff78b32803eb13904c128d8e011ea8",
"text": "The task of identifying when to take a conversational turn is an important function of spoken dialogue systems. The turn-taking system should also ideally be able to handle many types of dialogue, from structured conversation to spontaneous and unstructured discourse. Our goal is to determine how much a generalized model trained on many types of dialogue scenarios would improve on a model trained only for a specific scenario. To achieve this goal we created a large corpus of Wizard-of-Oz conversation data which consisted of several different types of dialogue sessions, and then compared a generalized model with scenario-specific models. For our evaluation we go further than simply reporting conventional metrics, which we show are not informative enough to evaluate turn-taking in a real-time system. Instead, we process results using a performance curve of latency and false cut-in rate, and further improve our model's real-time performance using a finite-state turn-taking machine. Our results show that the generalized model greatly outperformed the individual model for attentive listening scenarios but was worse in job interview scenarios. This implies that a model based on a large corpus is better suited to conversation which is more user-initiated and unstructured. We also propose that our method of evaluation leads to more informative performance metrics in a real-time system.",
"title": ""
},
{
"docid": "f6647e82741dfe023ee5159bd6ac5be9",
"text": "3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of a scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on real world and synthetic RGB-D videos demonstrate the superior performance of our method.",
"title": ""
},
{
"docid": "766dd6c18f645d550d98f6e3e86c7b2f",
"text": "Licorice root has been used for years to regulate gastrointestinal function in traditional Chinese medicine. This study reveals the gastrointestinal effects of isoliquiritigenin, a flavonoid isolated from the roots of Glycyrrhiza glabra (a kind of Licorice). In vivo, isoliquiritigenin produced a dual dose-related effect on the charcoal meal travel, inhibitory at the low doses, while prokinetic at the high doses. In vitro, isoliquiritigenin showed an atropine-sensitive concentration-dependent spasmogenic effect in isolated rat stomach fundus. However, a spasmolytic effect was observed in isolated rabbit jejunums, guinea pig ileums and atropinized rat stomach fundus, either as noncompetitive inhibition of agonist concentration-response curves, inhibition of high K(+) (80 mM)-induced contractions, or displacement of Ca(2+) concentration-response curves to the right, indicating a calcium antagonist effect. Pretreatment with N(omega)-nitro-L-arginine methyl ester (L-NAME; 30 microM), indomethacin (10 microM), methylene blue (10 microM), tetraethylammonium chloride (0.5 mM), glibenclamide (1 microM), 4-aminopyridine (0.1 mM), or clotrimazole (1 microM) did not inhibit the spasmolytic effect. These results indicate that isoliquiritigenin plays a dual role in regulating gastrointestinal motility, both spasmogenic and spasmolytic. The spasmogenic effect may involve the activating of muscarinic receptors, while the spasmolytic effect is predominantly due to blockade of the calcium channels.",
"title": ""
},
{
"docid": "0022121142a2b3a2b627fcb1cfe48ccb",
"text": "Graph colouring and its generalizations are useful tools in modelling a wide variety of scheduling and assignment problems. In this paper we review several variants of graph colouring, such as precolouring extension, list colouring, multicolouring, minimum sum colouring, and discuss their applications in scheduling.",
"title": ""
},
{
"docid": "3c118c4f2b418f801faee08050e3a165",
"text": "Unsupervised learning from visual data is one of the most difficult challenges in computer vision. It is essential for understanding how visual recognition works. Learning from unsupervised input has an immense practical value, as huge quantities of unlabeled videos can be collected at low cost. Here we address the task of unsupervised learning to detect and segment foreground objects in single images. We achieve our goal by training a student pathway, consisting of a deep neural network that learns to predict, from a single input image, the output of a teacher pathway that performs unsupervised object discovery in video. Our approach is different from the published methods that perform unsupervised discovery in videos or in collections of images at test time. We move the unsupervised discovery phase during the training stage, while at test time we apply the standard feed-forward processing along the student pathway. This has a dual benefit: firstly, it allows, in principle, unlimited generalization possibilities during training, while remaining fast at testing. Secondly, the student not only becomes able to detect in single images significantly better than its unsupervised video discovery teacher, but it also achieves state of the art results on two current benchmarks, YouTube Objects and Object Discovery datasets. At test time, our system is two orders of magnitude faster than other previous methods.",
"title": ""
},
{
"docid": "44cf91a19b11fa62a5859ce236e7dc3f",
"text": "We previously reported an ultrasound-guided transversus thoracic abdominis plane (TTP) block, able to target many anterior branches of the intercostal nerve (Th2-6), releasing the pain in the internal mammary area [1–3]. The injection point for this TTP block was located between the transversus thoracic muscle and the internal intercostal muscle, amid the third and fourth left ribs next to the sternum. However, analgesia efficacy in the region of an anterior branch of the sixth intercostal nerve was unstable. We subsequently investigated a more appropriate injection point for an ultrasound-guided TTP block. We selected 10 healthy volunteers for this study. All volunteers received bilateral TTP blocks. Right lateral TTP blocks of all cases involved the injection of 20 mL of 0.375% levobupivacaine into the fascial plane between the transversus thoracic muscle and the internal intercostal muscle at between the third and fourth ribs connecting at the sternum. On the other hand, all left lateral TTP blocks were administered by injection of 20 mL of 0.375% levobupivacaine into the fascial plane between the transversus thoracic muscle and the internal intercostal muscle between the fourth and fifth connecting at the sternum. In 20 minutes after the injections, we investigated the spread of local anesthetic on the TTP by an ultrasound machine (Fig. 1) and the analgesic effect by a sense testing. The sense testing is blindly the cold testing. The spread of local anesthetic is detailed in Table 1. As for the analgesic effect of sense testing, both sides gain sensory extinction in the region of multiple anterior branches of inter-",
"title": ""
},
{
"docid": "4645d0d7b1dfae80657f75d3751ef72a",
"text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.",
"title": ""
},
{
"docid": "198d352bf0c044ceccddaeb630b3f9c7",
"text": "In this letter, we present an original demonstration of an associative learning neural network inspired by the famous Pavlov's dogs experiment. A single nanoparticle organic memory field effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for the learning and implement short-term association using temporal coding and spike-timing-dependent plasticity–based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamic of the NOMFET in pulse regime.",
"title": ""
},
{
"docid": "bade68b8f95fc0ae5a377a52c8b04b5c",
"text": "The majority of deterministic mathematical programming problems have a compact formulation in terms of algebraic equations. Therefore they can easily take advantage of the facilities offered by algebraic modeling languages. These tools allow expressing models by using convenient mathematical notation (algebraic equations) and translate the models into a form understandable by the solvers for mathematical programs. Algebraic modeling languages provide facility for the management of a mathematical model and its data, and access different general-purpose solvers. The use of algebraic modeling languages (AMLs) simplifies the process of building the prototype model and in some cases makes it possible to create and maintain even the production version of the model. As presented in other chapters of this book, stochastic programming (SP) is needed when exogenous parameters of the mathematical programming problem are random. Dealing with stochasticities in planning is not an easy task. In a standard scenario-by-scenario analysis, the system is optimized for each scenario separately. Varying the scenario hypotheses we can observe the different optimal responses of the system and delineate the “strong trends” of the future. Indeed, this scenarioby-scenario approach implicitly assumes perfect foresight. The method provides a first-stage decision, which is valid only for the scenario under consideration. Having as many decisions as there are scenarios leaves the decision-maker without a clear recommendation. In stochastic programming the whole set of scenarios is combined into an event tree, which describes the unfolding of uncertainties over the period of planning. The model takes into account the uncertainties characterizing the scenarios through stochastic programming techniques. This adaptive plan is much closer, in spirit, to the way that decision-makers have to deal with uncertain future",
"title": ""
},
{
"docid": "5db19f15ec148746613bdb48a4ca746a",
"text": "Wireless power transfer (WPT) system is a practical and promising way for charging electric vehicles due to its security, convenience, and reliability. The requirement for high-power wireless charging is on the rise, but implementing such a WPT system has been a challenge because of the constraints of the power semiconductors and the installation space limitation at the bottom of the vehicle. In this paper, bipolar coils and unipolar coils are integrated into the transmitting side and the receiving side to make the magnetic coupler more compact while delivering high power. The same-side coils are naturally decoupled; therefore, there is no magnetic coupling between the same-side coils. The circuit model of the proposed WPT system using double-sided LCC compensations is presented. Finite-element analysis tool ANSYS MAXWELL is adopted to simulate and design the magnetic coupler. Finally, an experimental setup is constructed to evaluate the proposed WPT system. The proposed WPT system achieved the dc–dc efficiency at 94.07% while delivering 4.73 kW to the load with a vertical air gap of 150 mm.",
"title": ""
},
{
"docid": "1acbb63a43218d216a2e850d9b3d3fa1",
"text": "In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes-a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detecting algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. On the other hand, for data cell COD, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UE in the data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and by receiving a periodic update of the received signal reference power statistic between the UEs and data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane, by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outage and compensate for the detected outage in a reliable manner.",
"title": ""
},
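The control-plane outage detector in the abstract above compares k-nearest-neighbor- and local-outlier-factor-based anomaly detectors over minimization-of-drive-test reports. The scikit-learn sketch below is a hypothetical illustration: the feature columns (serving-cell RSRP, neighbour RSRP, SINR), the thresholds, and the synthetic data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors

rng = np.random.default_rng(0)
# Each row is a hypothetical measurement report:
# [serving-cell RSRP (dBm), strongest-neighbour RSRP (dBm), SINR (dB)]
normal_reports = rng.normal([-85, -95, 12], [6, 6, 3], size=(500, 3))
new_reports = rng.normal([-110, -95, -2], [6, 6, 3], size=(20, 3))

# Local-outlier-factor detector fitted on reports from normal operation.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal_reports)
lof_flags = lof.predict(new_reports)            # -1 = anomalous, +1 = normal

# Simple k-NN mean-distance detector for comparison.
knn = NearestNeighbors(n_neighbors=20).fit(normal_reports)
baseline = knn.kneighbors(normal_reports)[0].mean(axis=1)  # self-distance included
threshold = np.percentile(baseline, 99)
knn_flags = knn.kneighbors(new_reports)[0].mean(axis=1) > threshold

outage_suspected = (lof_flags == -1).mean() > 0.5 or knn_flags.mean() > 0.5
print(outage_suspected)
```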
{
"docid": "6073d07e5e6a05cbaa84ab8cd734bd12",
"text": "Microblogging websites, e.g. Twitter and Sina Weibo, have become a popular platform for socializing and sharing information in recent years. Spammers have also discovered this new opportunity to unfairly overpower normal users with unsolicited content, namely social spams. While it is intuitive for everyone to follow legitimate users, recent studies show that both legitimate users and spammers follow spammers for different reasons. Evidence of users seeking for spammers on purpose is also observed. We regard this behavior as a useful information for spammer detection. In this paper, we approach the problem of spammer detection by leveraging the \"carefulness\" of users, which indicates how careful a user is when she is about to follow a potential spammer. We propose a framework to measure the carefulness, and develop a supervised learning algorithm to estimate it based on known spammers and legitimate users. We then illustrate how spammer detection can be improved in the aid of the proposed measure. Evaluation on a real dataset with millions of users and an online testing are performed on Sina Weibo. The results show that our approach indeed capture the carefulness, and it is effective to detect spammers. In addition, we find that the proposed measure is also beneficial for other applications, e.g. link prediction.",
"title": ""
},
{
"docid": "7ea56b976524d77b7234340318f7e8dc",
"text": "Market Integration and Market Structure in the European Soft Drinks Industry: Always Coca-Cola? by Catherine Matraves* This paper focuses on the question of European integration, considering whether the geographic level at which competition takes place differs across the two major segments of the soft drinks industry: carbonated soft drinks and mineral water. Our evidence shows firms are competing at the European level in both segments. Interestingly, the European market is being integrated through corporate strategy, defined as increased multinationality, rather than increased trade flows. To interpret these results, this paper uses the new theory of market structure where the essential notion is that in endogenous sunk cost industries such as soft drinks, the traditional inverse structure-size relation may break down, due to the escalation of overhead expenditures.",
"title": ""
},
{
"docid": "129a85f7e611459cf98dc7635b44fc56",
"text": "Pain in the oral and craniofacial system represents a major medical and social problem. Indeed, a U.S. Surgeon General’s report on orofacial health concludes that, ‘‘. . .oral health means much more than healthy teeth. It means being free of chronic oral-facial pain conditions. . .’’ [172]. Community-based surveys indicate that many subjects commonly report pain in the orofacial region, with estimates of >39 million, or 22% of Americans older than 18 years of age, in the United States alone [108]. Other population-based surveys conducted in the United Kingdom [111,112], Germany [91], or regional pain care centers in the United States [54] report similar occurrence rates [135]. Importantly, chronic widespread body pain, patient sex and age, and psychosocial factors appear to serve as risk factors for chronic orofacial pain [1,2,92,99,138]. In addition to its high degree of prevalence, the reported intensities of various orofacial pain conditions are similar to that observed with many spinal pain disorders (Fig. 1). Moreover, orofacial pain is derived from many unique target tissues, such as the meninges, cornea, tooth pulp, oral/ nasal mucosa, and temporomandibular joint (Fig. 2), and thus has several unique physiologic characteristics compared with the spinal nociceptive system [23]. Given these considerations, it is not surprising that accurate diagnosis and effective management of orofacial pain conditions represents a significant health care problem. Publications in the field of orofacial pain demonstrate a steady increase over the last several decades (Fig. 3). This is a complex literature; a recent bibliometric analysis of orofacial pain articles published in 2004–2005 indicated that 975 articles on orofacial pain were published in 275 journals from authors representing 54 countries [142]. Thus, orofacial pain disorders represent a complex constellation of conditions with an equally diverse literature base. Accordingly, this review will focus on a summary of major research foci on orofacial pain without attempting to provide a comprehensive review of the entire literature.",
"title": ""
},
{
"docid": "d662e37e868f686a31fda14d4676501a",
"text": "Gesture recognition has multiple applications in medical and engineering fields. The problem of hand gesture recognition consists of identifying, at any moment, a given gesture performed by the hand. In this work, we propose a new model for hand gesture recognition in real time. The input of this model is the surface electromyography measured by the commercial sensor the Myo armband placed on the forearm. The output is the label of the gesture executed by the user at any time. The proposed model is based on the Λ-nearest neighbor and dynamic time warping algorithms. This model can learn to recognize any gesture of the hand. To evaluate the performance of our model, we measured and compared its accuracy at recognizing 5 classes of gestures to the accuracy of the proprietary system of the Myo armband. As a result of this evaluation, we determined that our model performs better (86% accurate) than the Myo system (83%).",
"title": ""
},
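The k-nearest-neighbor plus dynamic-time-warping model described in the gesture-recognition abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the window length, the 8-channel layout (as on the Myo armband), and the Euclidean frame cost are assumptions.

```python
import numpy as np


def dtw_distance(a, b):
    """Dynamic time warping distance between two multichannel sequences
    a: (Ta, C) and b: (Tb, C), via the standard O(Ta*Tb) dynamic program."""
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]


def knn_dtw_predict(train_seqs, train_labels, query, k=1):
    """Label a query sEMG window by majority vote of its k DTW-nearest windows."""
    dists = np.array([dtw_distance(t, query) for t in train_seqs])
    nearest = dists.argsort()[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)


# Toy usage: 8-channel sEMG windows of 40 frames, 5 gesture classes.
rng = np.random.default_rng(0)
train = [rng.standard_normal((40, 8)) for _ in range(25)]
labels = [i % 5 for i in range(25)]
print(knn_dtw_predict(train, labels, rng.standard_normal((40, 8)), k=3))
```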
{
"docid": "9f13ba2860e70e0368584bb4c36d01df",
"text": "Network log messages (e.g., syslog) are expected to be valuable and useful information to detect unexpected or anomalous behavior in large scale networks. However, because of the huge amount of system log data collected in daily operation, it is not easy to extract pinpoint system failures or to identify their causes. In this paper, we propose a method for extracting the pinpoint failures and identifying their causes from network syslog data. The methodology proposed in this paper relies on causal inference that reconstructs causality of network events from a set of time series of events. Causal inference can filter out accidentally correlated events, thus it outputs more plausible causal events than traditional cross-correlation-based approaches can. We apply our method to 15 months’ worth of network syslog data obtained from a nationwide academic network in Japan. The proposed method significantly reduces the number of pseudo correlated events compared with the traditional methods. Also, through three case studies and comparison with trouble ticket data, we demonstrate the effectiveness of the proposed method for practical network operation.",
"title": ""
},
{
"docid": "73a5fee293c2ae98e205fd5093cf8b9c",
"text": "Millimeter-wave (MMW) imaging techniques have been used for the detection of concealed weapons and contraband carried on personnel at airports and other secure locations. The combination of frequency-modulated continuous-wave (FMCW) technology and MMW imaging techniques should lead to compact, light-weight, and low-cost systems which are especially suitable for security and detection application. However, the long signal duration time leads to the failure of the conventional stop-and-go approximation of the pulsed system. Therefore, the motion within the signal duration time needs to be taken into account. Analytical threedimensional (3-D) backscattered signal model, without using the stop-and-go approximation, is developed in this paper. Then, a wavenumber domain algorithm, with motion compensation, is presented. In addition, conventional wavenumber domain methods use Stolt interpolation to obtain uniform wavenumber samples and compute the fast Fourier transform (FFT). This paper uses the 3D nonuniform fast Fourier transform (NUFFT) instead of the Stolt interpolation and FFT. The NUFFT-based method is much faster than the Stolt interpolation-based method. Finally, point target simulations are performed to verify the algorithm.",
"title": ""
},
{
"docid": "ebaeacf1c0eeb4a4818b4ac050e60b0c",
"text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.",
"title": ""
},
{
"docid": "d4d48e7275191ab29f805ca86e626c04",
"text": "This paper addresses the problem of keyword extraction from conversations, with the goal of using these keywords to retrieve, for each short conversation fragment, a small number of potentially relevant documents, which can be recommended to participants. However, even a short fragment contains a variety of words, which are potentially related to several topics; moreover, using an automatic speech recognition (ASR) system introduces errors among them. Therefore, it is difficult to infer precisely the information needs of the conversation participants. We first propose an algorithm to extract keywords from the output of an ASR system (or a manual transcript for testing), which makes use of topic modeling techniques and of a submodular reward function which favors diversity in the keyword set, to match the potential diversity of topics and reduce ASR noise. Then, we propose a method to derive multiple topically separated queries from this keyword set, in order to maximize the chances of making at least one relevant recommendation when using these queries to search over the English Wikipedia. The proposed methods are evaluated in terms of relevance with respect to conversation fragments from the Fisher, AMI, and ELEA conversational corpora, rated by several human judges. The scores show that our proposal improves over previous methods that consider only word frequency or topic similarity, and represents a promising solution for a document recommender system to be used in conversations.",
"title": ""
},
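The diversity-favouring keyword selection described in the abstract above, a submodular reward maximized over candidate keywords, is typically implemented with a greedy algorithm, which enjoys a (1 - 1/e) approximation guarantee for monotone submodular objectives. The sketch below is only illustrative: the concave topic-coverage reward, the topic-weight matrix, and the parameter names are assumptions rather than the paper's exact formulation.

```python
import numpy as np


def greedy_diverse_keywords(candidates, topic_weights, k=6, lam=0.75):
    """Greedily pick k keywords maximizing sum_t (coverage_t)^lam.

    The per-topic concavity gives diminishing returns, so the selection is
    pushed to cover several topics instead of piling onto one (and is more
    robust to ASR noise concentrated in a single topic).

    candidates:    list of candidate words
    topic_weights: (n_words, n_topics) non-negative relevance of word to topic
    """
    n_words, n_topics = topic_weights.shape
    selected, covered = [], np.zeros(n_topics)

    def reward(cov):
        return np.power(cov, lam).sum()

    for _ in range(min(k, n_words)):
        base = reward(covered)
        gains = [reward(covered + topic_weights[i]) - base
                 if i not in selected else -np.inf
                 for i in range(n_words)]
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break
        selected.append(best)
        covered += topic_weights[best]
    return [candidates[i] for i in selected]
```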
{
"docid": "a9ff593d6eea9f28aa1d2b41efddea9b",
"text": "A central task in the study of evolution is the reconstruction of a phylogenetic tree from sequences of current-day taxa. A well supported approach to tree reconstruction performs maximum likelihood (ML) analysis. Unfortunately, searching for the maximum likelihood phylogenetic tree is computationally expensive. In this paper, we describe a new algorithm that uses Structural-EM for learning maximum likelihood trees. This algorithm is similar to the standard EM method for estimating branch lengths, except that during iterations of this algorithms the topology is improved as well as the branch length. The algorithm performs iterations of two steps. In the E-Step, we use the current tree topology and branch lengths to compute expected sufficient statistics, which summarize the data. In the M-Step, we search for a topology that maximizes the likelihood with respect to these expected sufficient statistics. As we show, searching for better topologies inside the M-step can be done efficiently, as opposed to standard search over topologies. We prove that each iteration of this procedure increases the likelihood of the topology, and thus the procedure must converge. We evaluate our new algorithm on both synthetic and real sequence data, and show that it is both dramatically faster and finds more plausible trees than standard search for maximum likelihood phylogenies.",
"title": ""
}
] |
scidocsrr
|
eeb25d53134c4cc77a78e8cb6d6fabbe
|
An Intelligent Secure and Privacy-Preserving Parking Scheme Through Vehicular Communications
|
[
{
"docid": "fd61461d5033bca2fd5a2be9bfc917b7",
"text": "Vehicular networks are very likely to be deployed in the coming years and thus become the most relevant form of mobile ad hoc networks. In this paper, we address the security of these networks. We provide a detailed threat analysis and devise an appropriate security architecture. We also describe some major design decisions still to be made, which in some cases have more than mere technical implications. We provide a set of security protocols, we show that they protect privacy and we analyze their robustness and efficiency.",
"title": ""
}
] |
[
{
"docid": "90b248a3b141fc55eb2e55d274794953",
"text": "The aerodynamic admittance function (AAF) has been widely invoked to relate wind pressures on building surfaces to the oncoming wind velocity. In current practice, strip and quasi-steady theories are generally employed in formulating wind effects in the along-wind direction. These theories permit the representation of the wind pressures on building surfaces in terms of the oncoming wind velocity field. Synthesis of the wind velocity field leads to a generalized wind load that employs the AAF. This paper reviews the development of the current AAF in use. It is followed by a new definition of the AAF, which is based on the base bending moment. It is shown that the new AAF is numerically equivalent to the currently used AAF for buildings with linear mode shape and it can be derived experimentally via high frequency base balance. New AAFs for square and rectangular building models were obtained and compared with theoretically derived expressions. Some discrepancies between experimentally and theoretically derived AAFs in the high frequency range were noted.",
"title": ""
},
{
"docid": "97c0dc54f51ebcfe041f18028a15c621",
"text": "Mobile learning or “m-learning” is the process of learning when learners are not at a fixed location or time and can exploit the advantage of learning opportunities using mobile technologies. Nowadays, speech recognition is being used in many mobile applications.!Speech recognition helps people to interact with the device as if were they talking to another person. This technology helps people to learn anything using computers by promoting self-study over extended periods of time. The objective of this study focuses on designing and developing a mobile application for the Arabic recognition of spoken Quranic verses. The application is suitable for Android-based devices. The application is called Say Quran and is available on Google Play Store. Moreover, this paper presents the results of a preliminary study to gather feedback from students regarding the developed application.",
"title": ""
},
{
"docid": "1b9f54b275252818f730858654dc4348",
"text": "We will demonstrate a conversational products recommendation agent. This system shows how we combine research in personalized recommendation systems with research in dialogue systems to build a virtual sales agent. Based on new deep learning technologies we developed, the virtual agent is capable of learning how to interact with users, how to answer user questions, what is the next question to ask, and what to recommend when chatting with a human user. Normally a descent conversational agent for a particular domain requires tens of thousands of hand labeled conversational data or hand written rules. This is a major barrier when launching a conversation agent for a new domain. We will explore and demonstrate the effectiveness of the learning solution even when there is no hand written rules or hand labeled training data.",
"title": ""
},
{
"docid": "fdc1beef8614e0c85e784597532a1ce4",
"text": "This article presents the hardware design and software algorithms of RoboSimian, a statically stable quadrupedal robot capable of both dexterous manipulation and versatile mobility in difficult terrain. The robot has generalized limbs and hands capable of mobility and manipulation, along with almost fully hemispherical 3D sensing with passive stereo cameras. The system is semi-autonomous, enabling low-bandwidth, high latency control operated from a standard laptop. Because limbs are used for mobility and manipulation, a single unified mobile manipulation planner is used to generate autonomous behaviors, including walking, sitting, climbing, grasping, and manipulating. The remote operator interface is optimized to designate, parameterize, sequence, and preview behaviors, which are then executed by the robot. RoboSimian placed fifth in the DARPA Robotics Challenge (DRC) Trials, demonstrating its ability to perform disaster recovery tasks in degraded human environments.",
"title": ""
},
{
"docid": "6300f94dbfa58583e15741e5c86aa372",
"text": "In this paper, we study the problem of retrieving a ranked list of top-N items to a target user in recommender systems. We first develop a novel preference model by distinguishing different rating patterns of users, and then apply it to existing collaborative filtering (CF) algorithms. Our preference model, which is inspired by a voting method, is well-suited for representing qualitative user preferences. In particular, it can be easily implemented with less than 100 lines of codes on top of existing CF algorithms such as user-based, item-based, and matrix-factorizationbased algorithms. When our preference model is combined to three kinds of CF algorithms, experimental results demonstrate that the preference model can improve the accuracy of all existing CF algorithms such as ATOP and NDCG@25 by 3%–24% and 6%–98%, respectively.",
"title": ""
},
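As a rough illustration of how a voting-inspired preference transform can sit on top of an existing CF algorithm in well under 100 lines, here is a hypothetical sketch. The Borda-style rank transform and the item-based cosine scoring below stand in for the paper's actual preference model and are assumptions, not its published formulation.

```python
import numpy as np


def borda_preferences(R):
    """Turn each user's raw ratings into rank-based preference scores
    (a voting-style transform), so users with different rating habits
    become comparable. R: (n_users, n_items), 0 means unrated."""
    P = np.zeros_like(R, dtype=float)
    for u in range(R.shape[0]):
        rated = np.nonzero(R[u])[0]
        if len(rated) == 0:
            continue
        ranks = R[u, rated].argsort().argsort() + 1   # 1 = least preferred
        P[u, rated] = ranks / len(rated)
    return P


def topn_item_based(R, user, n=5):
    """Item-based CF scoring applied to the preference-transformed matrix."""
    P = borda_preferences(R)
    norms = np.linalg.norm(P, axis=0) + 1e-9
    S = (P.T @ P) / np.outer(norms, norms)     # item-item cosine similarity
    scores = P[user] @ S
    scores[R[user] > 0] = -np.inf              # exclude items already rated
    return np.argsort(-scores)[:n]
```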
{
"docid": "eb29f281b0237bea84ae26829f5545bd",
"text": "Using formal concept analysis, we propose a method for engineering ontology from MongoDB to effectively represent unstructured data. Our method consists of three main phases: (1) generating formal context from a MongoDB, (2) applying formal concept analysis to derive a concept lattice from that formal context, and (3) converting the obtained concept lattice to the first prototype of an ontology. We apply our method on NorthWind database and demonstrate how the proposed mapping rules can be used for learning an ontology from such database. At the end, we discuss about suggestions by which we can improve and generalize the method for more complex database examples.",
"title": ""
},
{
"docid": "51f2ba8b460be1c9902fb265b2632232",
"text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.",
"title": ""
},
{
"docid": "a8ddaed8209d09998159014307233874",
"text": "Traditional image-based 3D reconstruction methods use multiple images to extract 3D geometry. However, it is not always possible to obtain such images, for example when reconstructing destroyed structures using existing photographs or paintings with proper perspective (figure 1), and reconstructing objects without actually visiting the site using images from the web or postcards (figure 2). Even when multiple images are possible, parts of the scene appear in only one image due to occlusions and/or lack of features to match between images. Methods for 3D reconstruction from a single image do exist (e.g. [1] and [2]). We present a new method that is more accurate and more flexible so that it can model a wider variety of sites and structures than existing methods. Using this approach, we reconstructed in 3D many destroyed structures using old photographs and paintings. Sites all over the world have been reconstructed from tourist pictures, web pages, and postcards.",
"title": ""
},
{
"docid": "c6bfdc5c039de4e25bb5a72ec2350223",
"text": "Free-energy-based reinforcement learning (FERL) can handle Markov decision processes (MDPs) with high-dimensional state spaces by approximating the state-action value function with the negative equilibrium free energy of a restricted Boltzmann machine (RBM). In this study, we extend the FERL framework to handle partially observable MDPs (POMDPs) by incorporating a recurrent neural network that learns a memory representation sufficient for predicting future observations and rewards. We demonstrate that the proposed method successfully solves POMDPs with high-dimensional observations without any prior knowledge of the environmental hidden states and dynamics. After learning, task structures are implicitly represented in the distributed activation patterns of hidden nodes of the RBM.",
"title": ""
},
{
"docid": "5e6175d56150485d559d0c1a963e12b8",
"text": "High-resolution depth map can be inferred from a lowresolution one with the guidance of an additional highresolution texture map of the same scene. Recently, deep neural networks with large receptive fields are shown to benefit applications such as image completion. Our insight is that super resolution is similar to image completion, where only parts of the depth values are precisely known. In this paper, we present a joint convolutional neural pyramid model with large receptive fields for joint depth map super-resolution. Our model consists of three sub-networks, two convolutional neural pyramids concatenated by a normal convolutional neural network. The convolutional neural pyramids extract information from large receptive fields of the depth map and guidance map, while the convolutional neural network effectively transfers useful structures of the guidance image to the depth image. Experimental results show that our model outperforms existing state-of-the-art algorithms not only on data pairs of RGB/depth images, but also on other data pairs like color/saliency and color-scribbles/colorized images.",
"title": ""
},
{
"docid": "e70425a0b9d14ff4223f3553de52c046",
"text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. Our algorithm makes full use of parallel of GPU and runs entirely on GPU and is ten times faster than previous CPU algorithm.",
"title": ""
},
{
"docid": "d05c6ec4bfb24f283e7f8baa08985e70",
"text": "This paper describes a recently developed architecture for a Hardware-in-the-Loop simulator for Unmanned Aerial Vehicles. The principal idea is to use the advanced modeling capabilities of Simulink rather than hard-coded software as the flight dynamics simulating engine. By harnessing Simulink’s ability to precisely model virtually any dynamical system or phenomena this newly developed simulator facilitates the development, validation and verification steps of flight control algorithms. Although the presented architecture is used in conjunction with a particular commercial autopilot, the same approach can be easily implemented on a flight platform with a different autopilot. The paper shows the implementation of the flight modeling simulation component in Simulink supported with an interfacing software to a commercial autopilot. This offers the academic community numerous advantages for hardware-in-the-loop simulation of flight dynamics and control tasks. The developed setup has been rigorously tested under a wide variety of conditions. Results from hardware-in-the-loop and real flight tests are presented and compared to validate its adequacy and assess its usefulness as a rapid prototyping tool.",
"title": ""
},
{
"docid": "ce99ce3fb3860e140164e7971291f0fa",
"text": "We describe the development and psychometric characteristics of the Generalized Workplace Harassment Questionnaire (GWHQ), a 29-item instrument developed to assess harassing experiences at work in five conceptual domains: verbal aggression, disrespect, isolation/exclusion, threats/bribes, and physical aggression. Over 1700 current and former university employees completed the GWHQ at three time points. Factor analytic results at each wave of data suggested a five-factor solution that did not correspond to the original five conceptual factors. We suggest a revised scoring scheme for the GWHQ utilizing four of the empirically extracted factors: covert hostility, verbal hostility, manipulation, and physical hostility. Covert hostility was the most frequently experienced type of harassment, followed by verbal hostility, manipulation, and physical hostility. Verbal hostility, covert hostility, and manipulation were found to be significant predictors of psychological distress.",
"title": ""
},
{
"docid": "5f806baa9987146a642fbce106f43291",
"text": "Biofouling is generally undesirable for many applications. An overview of the medical, marine and industrial fields susceptible to fouling is presented. Two types of fouling include biofouling from organism colonization and inorganic fouling from non-living particles. Nature offers many solutions to control fouling through various physical and chemical control mechanisms. Examples include low drag, low adhesion, wettability (water repellency and attraction), microtexture, grooming, sloughing, various miscellaneous behaviours and chemical secretions. A survey of nature's flora and fauna was taken in order to discover new antifouling methods that could be mimicked for engineering applications. Antifouling methods currently employed, ranging from coatings to cleaning techniques, are described. New antifouling methods will presumably incorporate a combination of physical and chemical controls.",
"title": ""
},
{
"docid": "337a738d386fa66725fe9be620365d5f",
"text": "Change in a software is crucial to incorporate defect correction and continuous evolution of requirements and technology. Thus, development of quality models to predict the change proneness attribute of a software is important to effectively utilize and plan the finite resources during maintenance and testing phase of a software. In the current scenario, a variety of techniques like the statistical techniques, the Machine Learning (ML) techniques and the Search-based techniques (SBT) are available to develop models to predict software quality attributes. In this work, we assess the performance of ten machine learning and search-based techniques using data collected from three open source software. We first develop a change prediction model using one data set and then we perform inter-project validation using two other data sets in order to obtain unbiased and generalized results. The results of the study indicate comparable performance of SBT with other employed statistical and ML techniques. This study also supports inter project validation as we successfully applied the model created using the training data of one project on other similar projects and yield good results.",
"title": ""
},
{
"docid": "c41c38377b1a824e1d021794802c7aed",
"text": "This paper presents an optimization methodology that includes three important components necessary for a systematic approach to naval ship concept design. These are: • An efficient and effective search of design space for non-dominated designs • Well-defined and quantitative measures of objective attributes • An effective format to describe the design space and to present non-dominated concepts for rational selection by the customer A Multiple-Objective Genetic Optimization (MOGO) is used to search design parameter space and identify non-dominated design concepts based on life cycle cost and mission effectiveness. A nondominated frontier and selected generations of feasible designs are used to present results to the customer for selection of preferred alternatives. A naval ship design application is presented.",
"title": ""
},
{
"docid": "261f146b67fd8e13d1ad8c9f6f5a8845",
"text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.",
"title": ""
},
{
"docid": "3c95e090ab4e57f2fd21543226ad55ae",
"text": "Increase in the area and neuron number of the cerebral cortex over evolutionary time systematically changes its computational properties. One of the fundamental developmental mechanisms generating the cortex is a conserved rostrocaudal gradient in duration of neuron production, coupled with distinct asymmetries in the patterns of axon extension and synaptogenesis on the same axis. A small set of conserved sensorimotor areas with well-defined thalamic input anchors the rostrocaudal axis. These core mechanisms organize the cortex into two contrasting topographic zones, while systematically amplifying hierarchical organization on the rostrocaudal axis in larger brains. Recent work has shown that variation in 'cognitive control' in multiple species correlates best with absolute brain size, and this may be the behavioral outcome of this progressive organizational change.",
"title": ""
},
{
"docid": "172aaf47ee3f89818abba35a463ecc76",
"text": "I examined the relationship of recalled and diary recorded frequency of penile-vaginal intercourse (FSI), noncoital partnered sexual activity, and masturbation to measured waist and hip circumference in 120 healthy adults aged 19-38. Slimmer waist (in men and in the sexes combined) and slimmer hips (in men and women) were associated with greater FSI. Slimmer waist and hips were associated with rated importance of intercourse for men. Noncoital partnered sexual activity had a less consistent association with slimness. Slimmer waist and hips were associated with less masturbation (in men and in the sexes combined). I discuss the results in terms of differences between different sexual behaviors, attractiveness, emotional relatedness, physical sensitivity, sexual dysfunction, sociobiology, psychopharmacological aspects of excess fat and carbohydrate consumption, and implications for sex therapy.",
"title": ""
}
] |
scidocsrr
|
7c023d886d56a6eec9c34bd3f0f3e4f5
|
NVMalloc: Exposing an Aggregate SSD Store as a Memory Partition in Extreme-Scale Machines
|
[
{
"docid": "5aa8167b3aaf4d0b0a4753ad64354366",
"text": "New storage-class memory (SCM) technologies, such as phase-change memory, STT-RAM, and memristors, promise user-level access to non-volatile storage through regular memory instructions. These memory devices enable fast user-mode access to persistence, allowing regular in-memory data structures to survive system crashes.\n In this paper, we present Mnemosyne, a simple interface for programming with persistent memory. Mnemosyne addresses two challenges: how to create and manage such memory, and how to ensure consistency in the presence of failures. Without additional mechanisms, a system failure may leave data structures in SCM in an invalid state, crashing the program the next time it starts.\n In Mnemosyne, programmers declare global persistent data with the keyword \"pstatic\" or allocate it dynamically. Mnemosyne provides primitives for directly modifying persistent variables and supports consistent updates through a lightweight transaction mechanism. Compared to past work on disk-based persistent memory, Mnemosyne reduces latency to storage by writing data directly to memory at the granularity of an update rather than writing memory pages back to disk through the file system. In tests emulating the performance characteristics of forthcoming SCMs, we show that Mnemosyne can persist data as fast as 3 microseconds. Furthermore, it provides a 35 percent performance increase when applied in the OpenLDAP directory server. In microbenchmark studies we find that Mnemosyne can be up to 1400% faster than alternative persistence strategies, such as Berkeley DB or Boost serialization, that are designed for disks.",
"title": ""
}
] |
[
{
"docid": "de94c8531839326cc549b97855f8348a",
"text": "In this paper, we investigate the prediction of daily stock prices of the top five companies in the Thai SET50 index. A Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) is applied to forecast the next daily stock price (High, Low, Open, Close). Deep Belief Network (DBN) is applied to compare the result with LSTM. The test data are CPALL, SCB, SCC, KBANK, and PTT from the SET50 index. The purpose of selecting these five stocks is to compare how the model performs in different stocks with various volatility. There are two experiments of five stocks from the SET50 index. The first experiment compared the MAPE with different length of training data. The experiment is conducted by using training data for one, three, and five-year. PTT and SCC stock give the lowest median value of MAPE error for five-year training data. KBANK, SCB, and CPALL stock give the lowest median value of MAPE error for one-year training data. In the second experiment, the number of looks back and input are varied. The result with one look back and four inputs gives the best performance for stock price prediction. By comparing different technique, the result show that LSTM give the best performance with CPALL, SCB, and KTB with less than 2% error. DBN give the best performance with PTT and SCC with less than 2% error.",
"title": ""
},
{
"docid": "ea31a93d54e45eede5ba3e6263e8a13e",
"text": "Clustering methods for data-mining problems must be extremely scalable. In addition, several data mining applications demand that the clusters obtained be balanced, i.e., of approximately the same size or importance. In this paper, we propose a general framework for scalable, balanced clustering. The data clustering process is broken down into three steps: sampling of a small representative subset of the points, clustering of the sampled data, and populating the initial clusters with the remaining data followed by refinements. First, we show that a simple uniform sampling from the original data is sufficient to get a representative subset with high probability. While the proposed framework allows a large class of algorithms to be used for clustering the sampled set, we focus on some popular parametric algorithms for ease of exposition. We then present algorithms to populate and refine the clusters. The algorithm for populating the clusters is based on a generalization of the stable marriage problem, whereas the refinement algorithm is a constrained iterative relocation scheme. The complexity of the overall method is O(kN log N) for obtaining k balanced clusters from N data points, which compares favorably with other existing techniques for balanced clustering. In addition to providing balancing guarantees, the clustering performance obtained using the proposed framework is comparable to and often better than the corresponding unconstrained solution. Experimental results on several datasets, including high-dimensional (>20,000) ones, are provided to demonstrate the efficacy of the proposed framework.",
"title": ""
},
{
"docid": "6057638a2a1cfd07ab2e691baf93a468",
"text": "Cybersecurity in smart grids is of critical importance given the heavy reliance of modern societies on electricity and the recent cyberattacks that resulted in blackouts. The evolution of the legacy electric grid to a smarter grid holds great promises but also comes up with an increasesd attack surface. In this article, we review state of the art developments in cybersecurity for smart grids, both from a standardization as well technical perspective. This work shows the important areas of future research for academia, and collaboration with government and industry stakeholders to enhance smart grid cybersecurity and make this new paradigm not only beneficial and valuable but also safe and secure.",
"title": ""
},
{
"docid": "7499f88de9d2f76008dc38e96b08ca0a",
"text": "Refractory and super-refractory status epilepticus (SE) are serious illnesses with a high risk of morbidity and even fatality. In the setting of refractory generalized convulsive SE (GCSE), there is ample justification to use continuous infusions of highly sedating medications—usually midazolam, pentobarbital, or propofol. Each of these medications has advantages and disadvantages, and the particulars of their use remain controversial. Continuous EEG monitoring is crucial in guiding the management of these critically ill patients: in diagnosis, in detecting relapse, and in adjusting medications. Forms of SE other than GCSE (and its continuation in a “subtle” or nonconvulsive form) should usually be treated far less aggressively, often with nonsedating anti-seizure drugs (ASDs). Management of “non-classic” NCSE in ICUs is very complicated and controversial, and some cases may require aggressive treatment. One of the largest problems in refractory SE (RSE) treatment is withdrawing coma-inducing drugs, as the prolonged ICU courses they prompt often lead to additional complications. In drug withdrawal after control of convulsive SE, nonsedating ASDs can assist; medical management is crucial; and some brief seizures may have to be tolerated. For the most refractory of cases, immunotherapy, ketamine, ketogenic diet, and focal surgery are among several newer or less standard treatments that can be considered. The morbidity and mortality of RSE is substantial, but many patients survive and even return to normal function, so RSE should be treated promptly and as aggressively as the individual patient and type of SE indicate.",
"title": ""
},
{
"docid": "30740e33cdb2c274dbd4423e8f56405e",
"text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.",
"title": ""
},
{
"docid": "3105a48f0b8e45857e8d48e26b258e04",
"text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.",
"title": ""
},
{
"docid": "804920bbd9ee11cc35e93a53b58e7e79",
"text": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.",
"title": ""
},
{
"docid": "25bb1034b836e68ac1939265e33b0e22",
"text": "As it requires a huge number of parameters when exposed to high dimensional inputs in video detection and classification, there is a grand challenge to develop a compact yet accurate video comprehension at terminal devices. Current works focus on optimizations of video detection and classification in a separated fashion. In this paper, we introduce a video comprehension (object detection and action recognition) system for terminal devices, namely DEEPEYE. Based on You Only Look Once (YOLO), we have developed an 8-bit quantization method when training YOLO; and also developed a tensorized-compression method of Recurrent Neural Network (RNN) composed of features extracted from YOLO. The developed quantization and tensorization can significantly compress the original network model yet with maintained accuracy. Using the challenging video datasets: MOMENTS and UCF11 as benchmarks, the results show that the proposed DEEPEYE achieves 3.994× model compression rate with only 0.47% mAP decreased; and 15, 047× parameter reduction and 2.87× speed-up with 16.58% accuracy improvement.",
"title": ""
},
{
"docid": "a1757ee58eb48598d3cd6e257b53cd10",
"text": "This paper examines the issues of puzzle design in the context of collaborative gaming. The qualitative research approach involves both the conceptual analysis of key terminology and a case study of a collaborative game called eScape. The case study is a design experiment, involving both the process of designing a game environment and an empirical study, where data is collected using multiple methods. The findings and conclusions emerging from the analysis provide insight into the area of multiplayer puzzle design. The analysis and reflections answer questions on how to create meaningful puzzles requiring collaboration and how far game developers can go with collaboration design. The multiplayer puzzle design introduces a new challenge for game designers. Group dynamics, social roles and an increased level of interaction require changes in the traditional conceptual understanding of a single-player puzzle.",
"title": ""
},
{
"docid": "8b519431416a4bac96a8a975d8973ef9",
"text": "A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on ways to further improvement.",
"title": ""
},
{
"docid": "337ba912e6c23324ba2e996808a4b060",
"text": "Comprehensive investigations were conducted on identifying integration efforts needed to adapt plasma dicing technology in BEOL pre-production environments. First, the authors identified the suitable process flows. Within the process flow, laser grooving before plasma dicing was shown to be a key unit process to control resulting die sidewall quality. Significant improvement on laser grooving quality has been demonstrated. Through these efforts, extremely narrow kerfs and near ideal dies strengths were achieved on bare Si dies. Plasma dicing process generates fluorinated polymer residues on both Si die sidewalls and under the topography overhangs on wafer surfaces, such as under the solder balls or microbumps. Certain areas cannot be cleaned by in-chamber post-treatments. Multiple cleaning methods demonstrated process capability and compatibility to singulated dies-on-tape handling. Lastly, although many methods exist commercially for backmetal and DAF separations, the authors' investigation is still inconclusive on one preferred process for post-plasma dicing die separations.",
"title": ""
},
{
"docid": "ecb927989b504aa26fddca0c0ce76c76",
"text": "This dissertation presents an integrated system for producing Natural Logic inferences, which are used in a wide variety of natural language understanding tasks. Natural Logic is the process of creating valid inferences by making incremental edits to natural language expressions with respect to a universal monotonicity calculus, without resorting to logical representation of the expressions (using First Order Logic for instance). The system generates inferences from surface forms using a three-stage process. First, each sentence is subjected to syntactic analysis, using a purpose-built syntactic parser. Then the rules of the monotonicity calculus are applied, specifying the directionality of entailment for each sentence constituents. A constituent can be either upward or downward entailing, meaning that we may replace it with a semantically broader or narrower term. Finally, we can find all suitable replacement terms for each target word by using the WordNet lexical database, which contains hypernymic and hyponymic relations. Using Combinatory Categorial Grammar, we were able to incorporate the monotonicity determination step in the syntactic derivation process. In order to achieve wide coverage over English sentences we had to introduce statistical models into our syntactic parser. At the current stage we have implemented a simple statistical model similar to those of Probabilistic Context-Free Grammars. The system is intended to provide input to “deep” reasoning engines, used for higher-level Natural Language Processing applications such as Recognising Textual Entailment. In addition to independently evaluating each component of the system, we present our comprehensive findings using Cyc, a large-scale knowledge base, and we outline a solution for its relatively limited concept coverage.",
"title": ""
},
{
"docid": "96fb1910ed0127ad330fd427335b4587",
"text": "OBJECTIVES\nThe aim of this cross-sectional in vivo study was to assess the effect of green tea and honey solutions on the level of salivary Streptococcus mutans.\n\n\nSTUDY DESIGN\nA convenient sample of 30 Saudi boys aged 7-10 years were randomly assigned into 2 groups of 15 each. Saliva sample was collected for analysis of level of S. mutans before rinsing. Commercial honey and green tea were prepared for use and each child was asked to rinse for two minutes using 10 mL of the prepared honey or green tea solutions according to their group. Saliva samples were collected again after rinsing. The collected saliva samples were prepared and colony forming unit (CFU) of S. mutans per mL of saliva was calculated.\n\n\nRESULTS\nThe mean number of S. mutans before and after rinsing with honey and green tea solutions were 2.28* 10(8)(2.622*10(8)), 5.64 *10(7)(1.03*10(8)), 1.17*10(9)(2.012*10(9)) and 2.59*10(8) (3.668*10(8)) respectively. A statistically significant reduction in the average number of S. mutans at baseline and post intervention in the children who were assigned to the honey (P=0.001) and green tea (P=0.001) groups was found.\n\n\nCONCLUSIONS\nA single time mouth rinsing with honey and green tea solutions for two minutes effectively reduced the number of salivary S. mutans of 7-10 years old boys.",
"title": ""
},
{
"docid": "8de5b77f3cb4f1c20ff6cc11b323ba9c",
"text": "The Internet of Things (IoT) paradigm refers to the network of physical objects or \"things\" embedded with electronics, software, sensors, and connectivity to enable objects to exchange data with servers, centralized systems, and/or other connected devices based on a variety of communication infrastructures. IoT makes it possible to sense and control objects creating opportunities for more direct integration between the physical world and computer-based systems. IoT will usher automation in a large number of application domains, ranging from manufacturing and energy management (e.g. SmartGrid), to healthcare management and urban life (e.g. SmartCity). However, because of its finegrained, continuous and pervasive data acquisition and control capabilities, IoT raises concerns about the security and privacy of data. Deploying existing data security solutions to IoT is not straightforward because of device heterogeneity, highly dynamic and possibly unprotected environments, and large scale. In this talk, after outlining key challenges in data security and privacy, we present initial approaches to securing IoT data, including efficient and scalable encryption protocols, software protection techniques for small devices, and fine-grained data packet loss analysis for sensor networks.",
"title": ""
},
{
"docid": "c1b5b1dcbb3e7ff17ea6ad125bbc4b4b",
"text": "This article focuses on a new type of wireless devices in the domain between RFIDs and sensor networks—Energy-Harvesting Active Networked Tags (EnHANTs). Future EnHANTs will be small, flexible, and self-powered devices that can be attached to objects that are traditionally not networked (e.g., books, furniture, toys, produce, and clothing). Therefore, they will provide the infrastructure for various tracking applications and can serve as one of the enablers for the Internet of Things. We present the design considerations for the EnHANT prototypes, developed over the past 4 years. The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and dynamically adapt their communications and networking patterns to the energy harvesting and battery states. We describe a small-scale testbed that uniquely allows evaluating different algorithms with trace-based light energy inputs. Then, we experimentally evaluate the performance of different energy-harvesting adaptive policies with organic solar cells and UWB-IR transceivers. Finally, we discuss the lessons learned during the prototype and testbed design process.",
"title": ""
},
{
"docid": "4147b26531ca1ec165735688481d2684",
"text": "Problem-based approaches to learning have a long history of advocating experience-based education. Psychological research and theory suggests that by having students learn through the experience of solving problems, they can learn both content and thinking strategies. Problem-based learning (PBL) is an instructional method in which students learn through facilitated problem solving. In PBL, student learning centers on a complex problem that does not have a single correct answer. Students work in collaborative groups to identify what they need to learn in order to solve a problem. They engage in self-directed learning (SDL) and then apply their new knowledge to the problem and reflect on what they learned and the effectiveness of the strategies employed. The teacher acts to facilitate the learning process rather than to provide knowledge. The goals of PBL include helping students develop 1) flexible knowledge, 2) effective problem-solving skills, 3) SDL skills, 4) effective collaboration skills, and 5) intrinsic motivation. This article discusses the nature of learning in PBL and examines the empirical evidence supporting it. There is considerable research on the first 3 goals of PBL but little on the last 2. Moreover, minimal research has been conducted outside medical and gifted education. Understanding how these goals are achieved with less skilled learners is an important part of a research agenda for PBL. The evidence suggests that PBL is an instructional approach that offers the potential to help students develop flexible understanding and lifelong learning skills.",
"title": ""
},
{
"docid": "1f7d0ccae4e9f0078eabb9d75d1a8984",
"text": "A social network is composed by communities of individuals or organizations that are connected by a common interest. Online social networking sites like Twitter, Facebook and Orkut are among the most visited sites in the Internet. Presently, there is a great interest in trying to understand the complexities of this type of network from both theoretical and applied point of view. The understanding of these social network graphs is important to improve the current social network systems, and also to develop new applications. Here, we propose a friend recommendation system for social network based on the topology of the network graphs. The topology of network that connects a user to his friends is examined and a local social network called Oro-Aro is used in the experiments. We developed an algorithm that analyses the sub-graph composed by a user and all the others connected people separately by three degree of separation. However, only users separated by two degree of separation are candidates to be suggested as a friend. The algorithm uses the patterns defined by their connections to find those users who have similar behavior as the root user. The recommendation mechanism was developed based on the characterization and analyses of the network formed by the user's friends and friends-of-friends (FOF).",
"title": ""
},
{
"docid": "3b78988b74c2e42827c9e75e37d2223e",
"text": "This paper addresses how to construct a RBAC-compatible attribute-based encryption (ABE) for secure cloud storage, which provides a user-friendly and easy-to-manage security mechanism without user intervention. Similar to role hierarchy in RBAC, attribute lattice introduced into ABE is used to define a seniority relation among all values of an attribute, whereby a user holding the senior attribute values acquires permissions of their juniors. Based on these notations, we present a new ABE scheme called Attribute-Based Encryption with Attribute Lattice (ABE-AL) that provides an efficient approach to implement comparison operations between attribute values on a poset derived from attribute lattice. By using bilinear groups of composite order, we propose a practical construction of ABE-AL based on forward and backward derivation functions. Compared with prior solutions, our scheme offers a compact policy representation solution, which can significantly reduce the size of privatekeys and ciphertexts. Furthermore, our solution provides a richer expressive power of access policies to facilitate flexible access control for ABE scheme.",
"title": ""
},
{
"docid": "de394e291cac1a56cb19d858014bff19",
"text": "The design of antennas for metal-mountable radio-frequency identification tags is driven by a unique set of challenges: cheap, small, low-profile, and conformal structures need to provide reliable operation when tags are mounted on conductive platforms of various shapes and sizes. During the past decade, a tremendous amount of research has been dedicated to meeting these stringent requirements. Currently, the tag-reading ranges of several meters are achieved with flexible-label types of tags. Moreover, a whole spectrum of tag-size performance ratios has been demonstrated through a variety of innovative antenna-design approaches. This article reviews and summarizes the progress made in antennas for metal-mountable tags, and presents future prospects.",
"title": ""
},
{
"docid": "c1713b817c4b2ce6e134b6e0510a961f",
"text": "BACKGROUND\nEntity recognition is one of the most primary steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machine and conditional random field, have been deployed to recognize entities from clinical texts in the past few years. In recent years, recurrent neural network (RNN), one of deep learning methods that has shown great potential on many problems including named entity recognition, also has been gradually used for entity recognition from clinical texts.\n\n\nMETHODS\nIn this paper, we comprehensively investigate the performance of LSTM (long-short term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: input layer - generates representation of each word of a sentence; LSTM layer - outputs another word representation sequence that captures the context information of each word in this sentence; Inference layer - makes tagging decisions according to the output of LSTM layer, that is, outputting a label sequence.\n\n\nRESULTS\nExperiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which is considerably competitive with other state-of-the-art systems.\n\n\nCONCLUSIONS\nLSTM that requires no hand-crafted feature has great potential on entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge bases widely existing in the clinical domain into LSTM, which is a case of our future work. Moreover, how to use LSTM to recognize entities in specific formats is also another possible future direction.",
"title": ""
}
] |
scidocsrr
|
50e30807cc5bac0a89ecac10859ef6c9
|
Metamorphic Testing and Testing with Special Values
|
[
{
"docid": "421cb7fb80371c835a5d314455fb077c",
"text": "This paper explains, in an introductory fashion, the method of specifying the correct behavior of a program by the use of input/output assertions and describes one method for showing that the program is correct with respect to those assertions. An initial assertion characterizes conditions expected to be true upon entry to the program and a final assertion characterizes conditions expected to be true upon exit from the program. When a program contains no branches, a technique known as symbolic execution can be used to show that the truth of the initial assertion upon entry guarantees the truth of the final assertion upon exit. More generally, for a program with branches one can define a symbolic execution tree. If there is an upper bound on the number of times each loop in such a program may be executed, a proof of correctness can be given by a simple traversal of the (finite) symbolic execution tree. However, for most programs, no fixed bound on the number of times each loop is executed exists and the corresponding symbolic execution trees are infinite. In order to prove the correctness of such programs, a more general assertion structure must be provided. The symbolic execution tree of such programs must be traversed inductively rather than explicitly. This leads naturally to the use of additional assertions which are called \"inductive assertions.\"",
"title": ""
}
] |
[
{
"docid": "f79e5a2b19bb51e8dc0017342a153fee",
"text": "Decentralized ledger-based cryptocurrencies like Bitcoin present a way to construct payment systems without trusted banks. However, the anonymity of Bitcoin is fragile. Many altcoins and protocols are designed to improve Bitcoin on this issue, among which Zerocash is the first fullfledged anonymous ledger-based currency, using zero-knowledge proof, specifically zk-SNARK, to protect privacy. However, Zerocash suffers two problems: poor scalability and low efficiency. In this paper, we address the above issues by constructing a micropayment system in Zerocash called Z-Channel. First, we improve Zerocash to support multisignature and time lock functionalities, and prove that the reconstructed scheme is secure. Then we construct Z-Channel based on the improved Zerocash scheme. Our experiments demonstrate that Z-Channel significantly improves the scalability and reduces the confirmation time for Zerocash payments.",
"title": ""
},
{
"docid": "28ab07763d682ae367b5c9ebd9c9ef13",
"text": "Nowadays, the teaching-learning processes are constantly changing, one of the latest modifications promises to strengthen the development of digital skills and thinking in the participants, from an early age. In this sense, the present article shows the advances of a study oriented to the formation of programming abilities, computational thinking and collaborative learning in an initial education context. As part of the study it was initially proposed to conduct a training day for teachers who will participate in the experimental phase of the research, considering this human resource as a link of great importance to achieve maximum use of students in the development of curricular themes of the level, using ICT resources and programmable educational robots. The criterion and the positive acceptance expressed by the teaching group after the evaluation applied at the end of the session, constitute a good starting point for the development of the following activities that make up the research in progress.",
"title": ""
},
{
"docid": "4e847c4acec420ef833a08a17964cb28",
"text": "Machine learning models are vulnerable to adversarial examples, inputs maliciously perturbed to mislead the model. These inputs transfer between models, thus enabling black-box attacks against deployed models. Adversarial training increases robustness to attacks by injecting adversarial examples into training data. Surprisingly, we find that although adversarially trained models exhibit strong robustness to some white-box attacks (i.e., with knowledge of the model parameters), they remain highly vulnerable to transferred adversarial examples crafted on other models. We show that the reason for this vulnerability is the model’s decision surface exhibiting sharp curvature in the vicinity of the data points, thus hindering attacks based on first-order approximations of the model’s loss, but permitting black-box attacks that use adversarial examples transferred from another model. We harness this observation in two ways: First, we propose a simple yet powerful novel attack that first applies a small random perturbation to an input, before finding the optimal perturbation under a first-order approximation. Our attack outperforms prior “single-step” attacks on models trained with or without adversarial training. Second, we propose Ensemble Adversarial Training, an extension of adversarial training that additionally augments training data with perturbed inputs transferred from a number of fixed pre-trained models. On MNIST and ImageNet, ensemble adversarial training vastly improves robustness to black-box attacks.",
"title": ""
},
{
"docid": "b429b37623a690cd4b224a334985f7dd",
"text": "Data centers play a key role in the expansion of cloud computing. However, the efficiency of data center networks is limited by oversubscription. The typical unbalanced traffic distributions of a DCN further aggravate the problem. Wireless networking, as a complementary technology to Ethernet, has the flexibility and capability to provide feasible approaches to handle the problem. In this article, we analyze the challenges of DCNs and articulate the motivations of employing wireless in DCNs. We also propose a hybrid Ethernet/wireless DCN architecture and a mechanism to dynamically schedule wireless transmissions based on traffic demands. Our simulation study demonstrates the effectiveness of the proposed wireless DCN.",
"title": ""
},
{
"docid": "17db3273504bba730c9e43c8ea585250",
"text": "In this paper, License plate localization and recognition (LPLR) is presented. It uses image processing and character recognition technology in order to identify the license number plates of the vehicles automatically. This system is considerable interest because of its good application in traffic monitoring systems, surveillance devices and all kind of intelligent transport system. The objective of this work is to design algorithm for License Plate Localization and Recognition (LPLR) of Tanzanian License Plates. The plate numbers used are standard ones with black and yellow or black and white colors. Also, the letters and numbers are placed in the same row (identical vertical levels), resulting in frequent changes in the horizontal intensity. Due to that, the horizontal changes of the intensity have been easily detected, since the rows that contain the number plates are expected to exhibit many sharp variations. Hence, the edge finding method is exploited to find the location of the plate. To increase readability of the plate number, part of the image was enhanced, noise removal and smoothing median filter is used due to easy development. The algorithm described in this paper is implemented using MATLAB 7.11.0(R2010b).",
"title": ""
},
{
"docid": "080f29a336c0188eeec82d27aa80092c",
"text": "Do physically attractive individuals truly possess a multitude of better characteristics? The current study aimed to answer the age old question, “Do looks matter?” within the context of online dating and framed itself using cursory research performed by Brand and colleagues (2012). Good Genes Theory, Halo Effect, Physical Attractiveness Stereotype, and Social Information Procession theory were also used to explore what function appearance truly plays in online dating and how it influences a user’s written text. 83 men were surveyed and asked to rate 84 women’s online dating profiles (photos and texts) independently of one another to determine if those who were perceived as physically attractive also wrote more attractive texts as well. Results indicated that physical attractiveness was correlated with text attractiveness but not with text confidence. Findings also indicated the more attractive a woman’s photo, the less discrepancy there was between her photo attractiveness and text attractiveness scores. Finally, photo attractiveness did not differ significantly for men’s ratings of women in this study and women’s ratings of men in the Brand et al. (2012) study.",
"title": ""
},
{
"docid": "ce0cfd1dd69e235f942b2e7583b8323b",
"text": "Increasing use of the World Wide Web as a B2C commercial tool raises interest in understanding the key issues in building relationships with customers on the Internet. Trust is believed to be the key to these relationships. Given the differences between a virtual and a conventional marketplace, antecedents and consequences of trust merit re-examination. This research identifies a number of key factors related to trust in the B2C context and proposes a framework based on a series of underpinning relationships among these factors. The findings in this research suggest that people are more likely to purchase from the web if they perceive a higher degree of trust in e-commerce and have more experience in using the web. Customer’s trust levels are likely to be influenced by the level of perceived market orientation, site quality, technical trustworthiness, and user’s web experience. People with a higher level of perceived site quality seem to have a higher level of perceived market orientation and trustworthiness towards e-commerce. Furthermore, people with a higher level of trust in e-commerce are more likely to participate in e-commerce. Positive ‘word of mouth’, money back warranty and partnerships with well-known business partners, rank as the top three effective risk reduction tactics. These findings complement the previous findings on e-commerce and shed light on how to establish a trust relationship on the World Wide Web. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "09581c79829599090d8f838416058c05",
"text": "This paper proposes to tackle the AMR parsing bottleneck by improving two components of an AMR parser: concept identification and alignment. We first build a Bidirectional LSTM based concept identifier that is able to incorporate richer contextual information to learn sparse AMR concept labels. We then extend an HMM-based word-to-concept alignment model with graph distance distortion and a rescoring method during decoding to incorporate the structural information in the AMR graph. We show integrating the two components into an existing AMR parser results in consistently better performance over the state of the art on various datasets.",
"title": ""
},
{
"docid": "112b9294f4d606a0112fe80742698184",
"text": "Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their ka ma. A set of nodes, called a bank-set , keeps track of each node’s karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. We illustrate the application of this framework to a peer-to-peer filesharing application.",
"title": ""
},
{
"docid": "945553f360d7f569f15d249dbc5fa8cd",
"text": "One of the main issues in service collaborations among business partners is the possible lack of trust among them. A promising approach to cope with this issue is leveraging on blockchain technology by encoding with smart contracts the business process workflow. This brings the benefits of trust decentralization, transparency, and accountability of the service composition process. However, data in the blockchain are public, implying thus serious consequences on confidentiality and privacy. Moreover, smart contracts can access data outside the blockchain only through Oracles, which might pose new confidentiality risks if no assumptions are made on their trustworthiness. For these reasons, in this paper, we are interested in investigating how to ensure data confidentiality during business process execution on blockchain even in the presence of an untrusted Oracle.",
"title": ""
},
{
"docid": "518cb733bfbb746315498c1409d118c5",
"text": "BACKGROUND\nAndrogenetic alopecia (AGA) is a common form of scalp hair loss that affects up to 50% of males between 18 and 40 years old. Several molecules are commonly used for the treatment of AGA, acting on different steps of its pathogenesis (Minoxidil, Finasteride, Serenoa repens) and show some side effects. In literature, on the basis of hypertrichosis observed in patients treated with analogues of prostaglandin PGF2a, it was supposed that prostaglandins would have an important role in the hair growth: PGE and PGF2a play a positive role, while PGD2 a negative one.\n\n\nOBJECTIVE\nWe carried out a pilot study to evaluate the efficacy of topical cetirizine versus placebo in patients with AGA.\n\n\nPATIENTS AND METHODS\nA sample of 85 patients was recruited, of which 67 were used to assess the effectiveness of the treatment with topical cetirizine, while 18 were control patients.\n\n\nRESULTS\nWe found that the main effect of cetirizine was an increase in total hair density, terminal hair density and diameter variation from T0 to T1, while the vellus hair density shows an evident decrease. The use of a molecule as cetirizine, with no notable side effects, makes possible a good compliance by patients.\n\n\nCONCLUSION\nOur results have shown that topical cetirizine 1% is responsible for a significant improvement of the initial framework of AGA.",
"title": ""
},
{
"docid": "b3fce50260d7f77e8ca294db9c6666f6",
"text": "Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, boosting the range of applications of nanotechnology in the biomédical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. Recently, the advancements in graphene-based electronics have opened the door to electromagnetic communications in the nano-scale. In this paper, a new quantum mechanical framework is used to analyze the properties of Carbon Nanotubes (CNTs) as nano-dipole antennas. For this, first the transmission line properties of CNTs are obtained using the tight-binding model as functions of the CNT length, diameter, and edge geometry. Then, relevant antenna parameters such as the fundamental resonant frequency and the input impedance are calculated and compared to those of a nano-patch antenna based on a Graphene Nanoribbon (GNR) with similar dimensions. The results show that for a maximum antenna size in the order of several hundred nanometers (the expected maximum size for a nano-device), both a nano-dipole and a nano-patch antenna will be able to radiate electromagnetic waves in the terahertz band (0.1–10.0 THz).",
"title": ""
},
{
"docid": "85d31f3940ee258589615661e596211d",
"text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.",
"title": ""
},
{
"docid": "4db8a0d39ef31b49f2b6d542a14b03a2",
"text": "Climate-smart agriculture is one of the techniques that maximizes agricultural outputs through proper management of inputs based on climatological conditions. Real-time weather monitoring system is an important tool to monitor the climatic conditions of a farm because many of the farms related problems can be solved by better understanding of the surrounding weather conditions. There are various designs of weather monitoring stations based on different technological modules. However, different monitoring technologies provide different data sets, thus creating vagueness in accuracy of the weather parameters measured. In this paper, a weather station was designed and deployed in an Edamame farm, and its meteorological data are compared with the commercial Davis Vantage Pro2 installed at the same farm. The results show that the lab-made weather monitoring system is equivalently efficient to measure various weather parameters. Therefore, the designed system welcomes low-income farmers to integrate it into their climate-smart farming practice.",
"title": ""
},
{
"docid": "074de6f0c250f5c811b69598551612e4",
"text": "In this paper we present a novel GPU-friendly real-time voxelization technique for rendering homogeneous media that is defined by particles, e.g. fluids obtained from particle-based simulations such as Smoothed Particle Hydrodynamics (SPH). Our method computes view-adaptive binary voxelizations with on-the-fly compression of a tiled perspective voxel grid, achieving higher resolutions than previous approaches. It allows for interactive generation of realistic images, enabling advanced rendering techniques such as ray casting-based refraction and reflection, light scattering and absorption, and ambient occlusion. In contrast to previous methods, it does not rely on preprocessing such as expensive, and often coarse, scalar field conversion or mesh generation steps. Our method directly takes unsorted particle data as input. It can be further accelerated by identifying fully populated simulation cells during simulation. The extracted surface can be filtered to achieve smooth surface appearance.",
"title": ""
},
{
"docid": "099dbf8d4c0b401cd3389583eb4495f3",
"text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.",
"title": ""
},
{
"docid": "848dd074e4615ea5ecb164c96fac6c63",
"text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.",
"title": ""
},
{
"docid": "5a805b6f9e821b7505bccc7b70fdd557",
"text": "There are many factors that influence the translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator’s socio-cultural and ideology constraints influence the production of his/her translations. As a mixed research method study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage where detailed description and comparison -contrastive and comparativeanalysis will be provided. The micro-level analysis should include the lexical items along with the grammatical items (passive verses. active, nominalisation vs. de-nominalisation, moralisation and omission vs. addition). In order to have more reliable and objective data, computed frequencies of the ideological significance occurrences along with percentage and Chi-square formula were conducted through out the data analysis stage which then form the quantitative part of the current study. The main objective of the mentioned data analysis methodologies is to find out the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalent at the source text (ST). The findings indicts that there are significant differences amongst the two TTs in relation to International Journal of Linguistics ISSN 1948-5425 2014, Vol. 6, No. 3 www.macrothink.org/ijl 119 the word choices including the lexical items and the other syntactic structure compared by the ST. These significant differences indicate some ideological transmission through translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators’ socio-cultural and ideological constraints.",
"title": ""
},
{
"docid": "dc3de555216f10d84890ecb1165774ff",
"text": "Research into the visual perception of human emotion has traditionally focused on the facial expression of emotions. Recently researchers have turned to the more challenging field of emotional body language, i.e. emotion expression through body pose and motion. In this work, we approach recognition of basic emotional categories from a computational perspective. In keeping with recent computational models of the visual cortex, we construct a biologically plausible hierarchy of neural detectors, which can discriminate seven basic emotional states from static views of associated body poses. The model is evaluated against human test subjects on a recent set of stimuli manufactured for research on emotional body language.",
"title": ""
},
{
"docid": "93c84b6abfe30ff7355e4efc310b440b",
"text": "Parallel file systems (PFS) are widely-used in modern computing systems to mask the ever-increasing performance gap between computing and data access. PFSs favor large requests, and do not work well for small requests, especially small random requests. Newer Solid State Drives (SSD) have excellent performance on small random data accesses, but also incur a high monetary cost. In this study, we propose a hybrid architecture named the Smart Selective SSD Cache (S4D-Cache), which employs a small set of SSD-based file servers as a selective cache of conventional HDD-based file servers. A novel scheme is introduced to identify performance-critical data, and conduct selective cache admission to fully utilize the hybrid architecture in terms of data-access parallelism and randomness. We have implemented an S4D-Cache under the MPI-IO and PVFS2 parallel file system. Our experiments show that S4D-Cache can significantly improve I/O throughput, and is a promising approach for parallel applications.",
"title": ""
}
] |
scidocsrr
|
d7bf8a79235036e6858e9e8354089a9c
|
From Abstraction to Implementation: Can Computational Thinking Improve Complex Real-World Problem Solving? A Computational Thinking-Based Approach to the SDGs
|
[
{
"docid": "b64a91ca7cdeb3dfbe5678eee8962aa7",
"text": "Computational thinking is gaining recognition as an important skill set for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course within the curriculum, and there is little consensus on what exactly computational thinking entails and how to teach and evaluate it. To address these concerns, we have developed a computational thinking framework to be used as a planning and evaluative tool. Within this framework, we aim to unify the differing opinions about what computational thinking should involve. As a case study, we have applied the framework to Light-Bot, an educational game with a strong focus on programming, and found that the framework provides us with insight into the usefulness of the game to reinforce computer science concepts.",
"title": ""
}
] |
[
{
"docid": "c4b1615bbd32f99fa59ca2d7b8c40b10",
"text": "Practical face recognition systems are sometimes confronted with low-resolution face images. Traditional two-step methods solve this problem through employing super-resolution (SR). However, these methods usually have limited performance because the target of SR is not absolutely consistent with that of face recognition. Moreover, time-consuming sophisticated SR algorithms are not suitable for real-time applications. To avoid these limitations, we propose a novel approach for LR face recognition without any SR preprocessing. Our method based on coupled mappings (CMs), projects the face images with different resolutions into a unified feature space which favors the task of classification. These CMs are learned through optimizing the objective function to minimize the difference between the correspondences (i.e., low-resolution image and its high-resolution counterpart). Inspired by locality preserving methods for dimensionality reduction, we introduce a penalty weighting matrix into our objective function. Our method significantly improves the recognition performance. Finally, we conduct experiments on publicly available databases to verify the efficacy of our algorithm.",
"title": ""
},
{
"docid": "4798cb0bcd147e6a49135b845d7f2624",
"text": "There is an upsurging interest in designing succinct data structures for basic searching problems (see [23] and references therein). The motivation has to be found in the exponential increase of electronic data nowadays available which is even surpassing the significant increase in memory and disk storage capacities of current computers. Space reduction is an attractive issue because it is also intimately related to performance improvements as noted by several authors (e.g. Knuth [15], Bentley [5]). In designing these implicit data structures the goal is to reduce as much as possible the auxiliary information kept together with the input data without introducing a significant slowdown in the final query performance. Yet input data are represented in their entirety thus taking no advantage of possible repetitiveness into them. The importance of those issues is well known to programmers who typically use various tricks to squeeze data as much as possible and still achieve good query performance. Their approaches, though, boil down to heuristics whose effectiveness is witnessed only by experimentation. In this paper, we address the issue of compressing and indexing data by studying it in a theoretical framework. We devise a novel data structure for indexing and searching whose space occupancy is a function of the entropy of the underlying data set. The novelty resides in the careful combination of a compression algorithm, proposed by Burrows and Wheeler [7], with the structural properties of a well known indexing tool, the Suffix Array [17]. We call the data structure opportunistic since its space occupancy is decreased when the input is compressible at no significant slowdown in the query performance. More precisely, its space occupancy is optimal in an information-content sense because a text T [1, u] is stored using O(Hk(T )) + o(1) bits per input symbol, where Hk(T ) is the kth order entropy of T (the bound holds for any fixed k). Given an arbitrary string P [1, p], the opportunistic data structure allows to search for the occ occurrences of P in T requiring O(p+occ log u) time complexity (for any fixed > 0). If data are uncompressible we achieve the best space bound currently known [11]; on compressible data our solution improves the succinct suffix array of [11] and the classical suffix tree and suffix array data structures either in space or in query time complexity or both. It is a belief [27] that some space overhead should be paid to use full-text indices (like suffix trees or suffix arrays) with respect to word-based indices (like inverted lists). The results in this paper show that a full-text index may achieve sublinear space overhead on compressible texts. As an application we devise a variant of the well-known Glimpse tool [18] which achieves sublinear space and sublinear query time complexity. Conversely, inverted lists achieve only the second goal [27], and classical Glimpse achieves both goals but under some restrictive conditions [4]. Finally, we investigate the modifiability of our opportunistic data structure by studying how to choreograph its basic ideas with a dynamic setting thus achieving effective searching and updating time bounds. ∗Dipartimento di Informatica, Università di Pisa, Italy. E-mail: ferragin@di.unipi.it. †Dipartimento di Scienze e Tecnologie Avanzate, Università del Piemonte Orientale, Alessandria, Italy and IMC-CNR, Pisa, Italy. E-mail: manzini@mfn.unipmn.it.",
"title": ""
},
{
"docid": "67af0ebeebec40efa792a010ce205890",
"text": "We present a near-optimal polynomial-time approximation algorithm for the asymmetric traveling salesman problem for graphs of bounded orientable or non-orientable genus. Given any algorithm that achieves an approximation ratio of f(n) on arbitrary n-vertex graphs as a black box, our algorithm achieves an approximation factor of O(f(g)) on graphs with genus g. In particular, the O(log n/loglog n)-approximation algorithm for general graphs by Asadpour et al. [SODA 2010] immediately implies an O(log g/loglog g)-approximation algorithm for genus-g graphs. Moreover, recent results on approximating the genus of graphs imply that our O(log g/loglog g)-approximation algorithm can be applied to bounded-degree graphs even if no genus-g embedding of the graph is given. Our result improves and generalizes the o(√ g log g)-approximation algorithm of Oveis Gharan and Saberi [SODA 2011], which applies only to graphs with orientable genus g and requires a genus-g embedding as part of the input, even for bounded-degree graphs. Finally, our techniques yield a O(1)-approximation algorithm for ATSP on graphs of genus g with running time 2O(g) · nO(1).",
"title": ""
},
{
"docid": "113b8cfda23cf7e8b3d7b4821d549bf7",
"text": "A load dependent zero-current detector is proposed in this paper for speeding up the transient response when load current changes from heavy to light loads. The fast transient control signal determines how long the reversed inductor current according to sudden load variations. At the beginning of load variation from heavy to light loads, the sensed voltage compared with higher voltage to discharge the overshoot output voltage for achieving fast transient response. Besides, for an adaptive reversed current period, the fast transient mechanism is turned off since the output voltage is rapidly regulated back to the acceptable level. Simulation results demonstrate that the ZCD circuit permits the reverse current flowing back into n-type power MOSFET at the beginning of load variations. The settling time is decreased to about 35 mus when load current suddenly changes from 500mA to 10 mA.",
"title": ""
},
{
"docid": "62af709fd559596f6d3d7a52902d5da5",
"text": "This paper presents the results of several large-scale studies of face recognition employing visible light and infra-red (IR) imagery in the context of principal component analysis. We find that in a scenario involving time lapse between gallery and probe, and relatively controlled lighting, (1) PCA-based recognition using visible light images outperforms PCA-based recognition using infra-red images, (2) the combination of PCA-based recognition using visible light and infra-red imagery substantially outperforms either one individually. In a same session scenario (i.e. nearsimultaneous acquisition of gallery and probe images) neither modality is significantly better than the other. These experimental results reinforce prior research that employed a smaller data set, presenting a convincing argument that, even across a broad experimental spectrum, the behaviors enumerated above are valid and consistent.",
"title": ""
},
{
"docid": "82ca6a400bf287dc287df9fa751ddac2",
"text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term “information systems”, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.",
"title": ""
},
{
"docid": "715de052c6a603e3c8a572531920ecfa",
"text": "Muscle samples were obtained from the gastrocnemius of 17 female and 23 male track athletes, 10 untrained women, and 11 untrained men. Portions of the specimen were analyzed for total phosphorylase, lactic dehydrogenase (LDH), and succinate dehydrogenase (SDH) activities. Sections of the muscle were stained for myosin adenosine triphosphatase, NADH2 tetrazolium reductase, and alpha-glycerophosphate dehydrogenase. Maximal oxygen uptake (VO2max) was measured on a treadmill for 23 of the volunteers (6 female athletes, 11 male athletes, 10 untrained women, and 6 untrained men). These measurements confirm earlier reports which suggest that the athlete's preference for strength, speed, and/or endurance events is in part a matter of genetic endowment. Aside from differences in fiber composition and enzymes among middle-distance runners, the only distinction between the sexes was the larger fiber areas of the male athletes. SDH activity was found to correlate 0.79 with VO2max, while muscle LDH appeared to be a function of muscle fiber composition. While sprint- and endurance-trained athletes are characterized by distinct fiber compositions and enzyme activities, participants in strength events (e.g., shot-put) have relatively low muscle enzyme activities and a variety of fiber compositions.",
"title": ""
},
{
"docid": "903b68096d2559f0e50c38387260b9c8",
"text": "Vitamin C in humans must be ingested for survival. Vitamin C is an electron donor, and this property accounts for all its known functions. As an electron donor, vitamin C is a potent water-soluble antioxidant in humans. Antioxidant effects of vitamin C have been demonstrated in many experiments in vitro. Human diseases such as atherosclerosis and cancer might occur in part from oxidant damage to tissues. Oxidation of lipids, proteins and DNA results in specific oxidation products that can be measured in the laboratory. While these biomarkers of oxidation have been measured in humans, such assays have not yet been validated or standardized, and the relationship of oxidant markers to human disease conditions is not clear. Epidemiological studies show that diets high in fruits and vegetables are associated with lower risk of cardiovascular disease, stroke and cancer, and with increased longevity. Whether these protective effects are directly attributable to vitamin C is not known. Intervention studies with vitamin C have shown no change in markers of oxidation or clinical benefit. Dose concentration studies of vitamin C in healthy people showed a sigmoidal relationship between oral dose and plasma and tissue vitamin C concentrations. Hence, optimal dosing is critical to intervention studies using vitamin C. Ideally, future studies of antioxidant actions of vitamin C should target selected patient groups. These groups should be known to have increased oxidative damage as assessed by a reliable biomarker or should have high morbidity and mortality due to diseases thought to be caused or exacerbated by oxidant damage.",
"title": ""
},
{
"docid": "154c40c2fab63ad15ded9b341ff60469",
"text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.",
"title": ""
},
{
"docid": "b456ef31418fbe2a82bac60045a57fc2",
"text": "Continuous blood pressure (BP) monitoring in a noninvasive and unobtrusive way can significantly improve the awareness, control and treatment rate of prevalent hypertension. Pulse transit time (PTT) has become increasingly popular in recent years for continuous BP measurement without a cuff. However, the accuracy issue of PTT-based method remains to be solved for clinical application. Some previous studies have attempted to estimate BP with only PTT by using linear regression, which is susceptible to arterial regulation and may not reflect the actual relationship between PTT and BP. Furthermore, PTT does not contain all the information of BP variation, thereby resulting in unsatisfactory accuracy. In this paper we establish a cuffless BP estimation model from a physiological perspective by utilizing PTT and photoplethysmogram (PPG) intensity ratio (PIR), an indicator we have recently proposed for evaluation of the change in arterial diameter and the low frequency variation of BP, with the consideration that PIR can track changes in mean BP (MBP) and arterial diameter change. The performance of the proposed BP model was evaluated by comparing the estimated BP with Finapres BP as reference on 10 healthy subjects. The results showed that the mean ± standard deviation (SD) of the estimation error for systolic and diastolic BP were -0.41 ± 5.15 and -0.84 ± 4.05 mmHg, and mean absolute difference (MAD) were 4.18 and 3.43 mmHg, respectively. Furthermore, the proposed modeling method was superior to one contrast PTT-based method, demonstrating the proposed model would be promising for reliable continuous cuffless BP measurement.",
"title": ""
},
{
"docid": "9876e4298f674a617f065f348417982a",
"text": "On the basis of medical officers diagnosis, thirty three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined with four variables viz, systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital and the control group did not participate in any of the treatment stimuli. Yoga imparted in the morning and in the evening with 1 hr/session. day-1 for a total period of 11-weeks. Medical treatment comprised drug intake every day for the whole experimental period. The result of pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.",
"title": ""
},
{
"docid": "bbb06abacfd8f4eb01fac6b11a4447bf",
"text": "In this paper, we present a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm following an inertial assisted Kalman Filter and reusing the estimated 3D map. By leveraging an inertial assisted Kalman Filter, we achieve an efficient motion tracking bearing fast dynamic movement in the front-end. To enable place recognition and reduce the trajectory estimation drift, we construct a factor graph based non-linear optimization in the back-end. We carefully design a feedback mechanism to balance the front/back ends ensuring the estimation accuracy. We also propose a novel initialization method that accurately estimate the scale factor, the gravity, the velocity, and gyroscope and accelerometer biases in a very robust way. We evaluated the algorithm on a public dataset, when compared to other state-of-the-art monocular Visual-Inertial SLAM approaches, our algorithm achieves better accuracy and robustness in an efficient way. By the way, we also evaluate our algorithm in a MonocularInertial setup with a low cost IMU to achieve a robust and lowdrift realtime SLAM system.",
"title": ""
},
{
"docid": "85ccad436c7e7eed128825e3946ae0ef",
"text": "Recent research has made great strides in the field of detecting botnets. However, botnets of all kinds continue to plague the Internet, as many ISPs and organizations do not deploy these techniques. We aim to mitigate this state by creating a very low-cost method of detecting infected bot host. Our approach is to leverage the botnet detection work carried out by some organizations to easily locate collaborating bots elsewhere. We created BotMosaic as a countermeasure to IRC-based botnets. BotMosaic relies on captured bot instances controlled by a watermarker, who inserts a particular pattern into their network traffic. This pattern can then be detected at a very low cost by client organizations and the watermark can be tuned to provide acceptable false-positive rates. A novel feature of the watermark is that it is inserted collaboratively into the flows of multiple captured bots at once, in order to ensure the signal is strong enough to be detected. BotMosaic can also be used to detect stepping stones and to help trace back to the botmaster. It is content agnostic and can operate on encrypted traffic. We evaluate BotMosaic using simulations and a testbed deployment.",
"title": ""
},
{
"docid": "6573629e918822c0928e8cf49f20752c",
"text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https:// github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.",
"title": ""
},
{
"docid": "ef9437b03a95fc2de438fe32bd2e32b9",
"text": "and Creative Modeling Modeling is not simply a process of response mimicry as commonly believed. Modeled judgments and actions may differ in specific content but embody the same rule. For example, a model may deal with moral dilemmas that differ widely in the nature of the activity but apply the same moral standard to them. Modeled activities thus convey rules for generative and innovative behavior. This higher level learning is achieved through abstract modeling. Once observers extract the rules underlying the modeled activities they can generate new behaviors that go beyond what they have seen or heard. Creativeness rarely springs entirely from individual inventiveness. A lot of modeling goes on in creativity. By refining preexisting innovations, synthesizing them into new ways and adding novel elements to them something new is created. When exposed to models of differing styles of thinking and behaving, observers vary in what they adopt from the different sources and thereby create new blends of personal characteristics that differ from the individual models (Bandura, Ross & Ross, 1963). Modeling influences that exemplify new perspectives and innovative styles of thinking also foster creativity by weakening conventional mind sets (Belcher, 1975; Harris & Evans, 1973).",
"title": ""
},
{
"docid": "b2aec3f88af47e47b4ca60493895cb8e",
"text": "In this paper, a simple but efficient approach for blind image splicing detection is proposed. Image splicing is a common and fundamental operation used for image forgery. The detection of image splicing is a preliminary but desirable study for image forensics. Passive detection approaches of image splicing are usually regarded as pattern recognition problems based on features which are sensitive to splicing. In the proposed approach, we analyze the discontinuity of image pixel correlation and coherency caused by splicing in terms of image run-length representation and sharp image characteristics. The statistical features extracted from image run-length representation and image edge statistics are used for splicing detection. The support vector machine (SVM) is used as the classifier. Our experimental results demonstrate that the two proposed features outperform existing ones both in detection accuracy and computational complexity.",
"title": ""
},
{
"docid": "525ddfaae4403392e8817986f2680a68",
"text": "Documentation errors increase healthcare costs and cause unnecessary patient deaths. As the standard language for diagnoses and billing, ICD codes serve as the foundation for medical documentation worldwide. Despite the prevalence of electronic medical records, hospitals still witness high levels of ICD miscoding. In this paper, we propose to automatically document ICD codes with far-field speech recognition. Far-field speech occurs when the microphone is located several meters from the source, as is common with smart homes and security systems. Our method combines acoustic signal processing with recurrent neural networks to recognize and document ICD codes in real time. To evaluate our model, we collected a far-field speech dataset of ICD-10 codes and found our model to achieve 87% accuracy with a BLEU score of 85%. By sampling from an unsupervised medical language model, our method is able to outperform existing methods. Overall, this work shows the potential of automatic speech recognition to provide efficient, accurate, and cost-effective healthcare documentation.",
"title": ""
},
{
"docid": "9c008dc2f3da4453317ce92666184da0",
"text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator QEMU. Our implementation can simulate various cases of cache configuration and obtain every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer which is close to native execution of ARM Cortex-M3(125 MIPS at 100 MHz). Compared to the widely used cache simulation tool, Valgrind, our simulator is three time faster.",
"title": ""
},
{
"docid": "e3051e92e84c69f999c09fe751c936f0",
"text": "Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be “compressed” to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state of the art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that compressibility of models that tend to overfit is limited: We establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. The bounds are complemented by empirical results that show an increase in overfitting implies an increase in the number of bits required to describe a trained network.",
"title": ""
},
{
"docid": "19a538b6a49be54b153b0a41b6226d1f",
"text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.",
"title": ""
}
] |
scidocsrr
|
81d6eda2f2c652ad866ae891ba9cf8b9
|
Periodization paradigms in the 21st century: evidence-led or tradition-driven?
|
[
{
"docid": "978749608ae97db4fd3d0e05f740c016",
"text": "The theory of training was established about five decades ago when knowledge of athletes' preparation was far from complete and the biological background was based on a relatively small amount of objective research findings. At that time, traditional 'training periodization', a division of the entire seasonal programme into smaller periods and training units, was proposed and elucidated. Since then, international sport and sport science have experienced tremendous changes, while the traditional training periodization has remained at more or less the same level as the published studies of the initial publications. As one of the most practically oriented components of theory, training periodization is intended to offer coaches basic guidelines for structuring and planning training. However, during recent decades contradictions between the traditional model of periodization and the demands of high-performance sport practice have inevitably developed. The main limitations of traditional periodization stemmed from: (i) conflicting physiological responses produced by 'mixed' training directed at many athletic abilities; (ii) excessive fatigue elicited by prolonged periods of multi-targeted training; (iii) insufficient training stimulation induced by workloads of medium and low concentration typical of 'mixed' training; and (iv) the inability to provide multi-peak performances over the season. The attempts to overcome these limitations led to development of alternative periodization concepts. The recently developed block periodization model offers an alternative revamped approach for planning the training of high-performance athletes. Its general idea proposes the sequencing of specialized training cycles, i.e. blocks, which contain highly concentrated workloads directed to a minimal number of targeted abilities. Unlike the traditional model, in which the simultaneous development of many athletic abilities predominates, block-periodized training presupposes the consecutive development of reasonably selected target abilities. The content of block-periodized training is set down in its general principles, a taxonomy of mesocycle blocks, and guidelines for compiling an annual plan.",
"title": ""
}
] |
[
{
"docid": "47bfe9238083f0948c16d7beeac75155",
"text": "In this paper, we propose a solution procedure for the Elementary Shortest Path Problem with Resource Constraints (ESPPRC). A relaxed version of this problem in which the path does not have to be elementary has been the backbone of a number of solution procedures based on column generation for several important problems, such as vehicle routing and crew-pairing. In many cases relaxing the restriction of an elementary path resulted in optimal solutions in a reasonable computation time. However, for a number of other problems, the elementary path restriction has too much impact on the solution to be relaxed or might even be necessary. We propose an exact solution procedure for the ESPPRC which extends the classical label correcting algorithm originally developed for the relaxed (non-elementary) path version of this problem. We present computational experiments of this algorithm for our specific problem and embedded in a column generation scheme for the classical Vehicle Routing Problem with Time Windows.",
"title": ""
},
{
"docid": "614285482e8748e99fb061dd9e0f3887",
"text": "A top-wall substrate integrated waveguide (SIW) slot radiator for generating circular polarized (CP) field is proposed and characterized in this letter. The reflection of the slot radiator is extremely weak, which simplifies the linear traveling wave array design. Based on such a structure, a 16-element CP SIW traveling wave antenna array is designed, fabricated, and measured at 16 GHz. A -23 dB side lobe level (SLL) with an axial ratio (AR) of 1.95 dB is experimentally achieved. The size of the proposed SIW CP linear array antenna is 285 mm times 22 mm. The measured gain is 18.9 dB, and the usable bandwidth is 2.5%.",
"title": ""
},
{
"docid": "f88dfa78bc6e36691c4f74152946cb45",
"text": "A new antenna, designed on a polyethylene terephthalate (PET) substrate and implemented by inkjet printing using a conductive ink, is proposed as a passive tag antenna for UHF radio frequency identification (RFID). The operating bandwidth of the proposed antenna is very large since it encompasses all worldwide UHF RFID bands and extends well beyond at both edges. Moreover, it has a very simple geometry, can be easily tuned to feed many of the commercial RFID chips, and is very robust with respect to realization tolerances. The antenna has been designed using a general-purpose 3-D computer-aided design (CAD), CST Microwave Studio, and measured results are in very good agreement with simulations. The proposed passive RFID tag meets both the objectives of low-cost and size reduction.",
"title": ""
},
{
"docid": "512d418f33d864d0e48ce4b7ab52a8b9",
"text": "(1) Background: Since early yield prediction is relevant for resource requirements of harvesting and marketing in the whole fruit industry, this paper presents a new approach of using image analysis and tree canopy features to predict early yield with artificial neural networks (ANN); (2) Methods: Two back propagation neural network (BPNN) models were developed for the early period after natural fruit drop in June and the ripening period, respectively. Within the same periods, images of apple cv. “Gala” trees were captured from an orchard near Bonn, Germany. Two sample sets were developed to train and test models; each set included 150 samples from the 2009 and 2010 growing season. For each sample (each canopy image), pixels were segmented into fruit, foliage, and background using image segmentation. The four features extracted from the data set for the canopy were: total cross-sectional area of fruits, fruit number, total cross-section area of small fruits, and cross-sectional area of foliage, and were used as inputs. With the actual weighted yield per tree as a target, BPNN was employed to learn their mutual relationship as a prerequisite to develop the prediction; (3) Results: For the developed BPNN model of the early period after June drop, correlation coefficients (R2) between the estimated and the actual weighted yield, mean forecast error (MFE), mean absolute percentage error (MAPE), and root mean square error (RMSE) were 0.81, −0.05, 10.7%, 2.34 kg/tree, respectively. For the model of the ripening period, these measures were 0.83, −0.03, 8.9%, 2.3 kg/tree, respectively. In 2011, the two previously developed models were used to predict apple yield. The RMSE and R2 values between the estimated and harvested apple yield were 2.6 kg/tree and 0.62 for the early period (small, green fruit) and improved near harvest (red, large fruit) to 2.5 kg/tree and 0.75 for a tree with ca. 18 kg yield per tree. For further method verification, the cv. “Pinova” apple trees were used as another variety in 2012 to develop the BPNN prediction model for the early period after June drop. The model was used in 2013, which gave similar results as those found with cv. “Gala”; (4) Conclusion: Overall, the results showed in this research that the proposed estimation models performed accurately using canopy and fruit features using image analysis algorithms.",
"title": ""
},
{
"docid": "6e9ee317822ba925b9d3e823c717d08d",
"text": "Agriculture is the major occupation in India and forms the backbone of Indian economy in which irrigation plays a crucial role for increasing the quality and quantity of crop yield. In spite of many revolutionary advancements in agriculture, there has not been a dramatic increase in agricultural performance. Lack of irrigation infrastructure and agricultural knowledge are the critical factors influencing agricultural performance. However, by using advanced agricultural equipment, the effect of these factors can be curtailed. The presented system aims at increasing the yield of crops by using an intelligent irrigation controller that makes use of wireless sensors. Sensors are used to monitor primary parameters such as soil moisture, soil pH, temperature and humidity. Irrigation decisions are taken based on the sensed data and the type of crop being grown. The system provides a mobile application in which farmers can remotely monitor and control the irrigation system. Also, the water pump is protected against damages due to voltage variations and dry running. Keywords—Android application, Bluetooth, humidity, irrigation, soil moisture, soil pH, temperature, wireless sensors.",
"title": ""
},
{
"docid": "bbf9c2cfd22dc0caeac796c1f16261b8",
"text": "Recent years have witnessed the emergence of Smart Environments technology for assisting people with their daily routines and for remote health monitoring. A lot of work has been done in the past few years on Activity Recognition and the technology is not just at the stage of experimentation in the labs, but is ready to be deployed on a larger scale. In this paper, we design a data-mining framework to extract the useful features from sensor data collected in the smart home environment and select the most important features based on two different feature selection criterions, then utilize several machine learning techniques to recognize the activities. To validate these algorithms, we use real sensor data collected from volunteers living in our smart apartment test bed. We compare the performance between alternative learning algorithms and analyze the prediction results of two different group experiments performed in the smart home.",
"title": ""
},
{
"docid": "d339ef4e124fdc9d64330544b7391055",
"text": "Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Part I of this series presented a neurophysiologic theory of the effects of Sudarshan Kriya Yoga (SKY). Part II will review clinical studies, our own clinical observations, and guidelines for the safe and effective use of yoga breath techniques in a wide range of clinical conditions. Although more clinical studies are needed to document the benefits of programs that combine pranayama (yogic breathing) asanas (yoga postures), and meditation, there is sufficient evidence to consider Sudarshan Kriya Yoga to be a beneficial, low-risk, low-cost adjunct to the treatment of stress, anxiety, post-traumatic stress disorder (PTSD), depression, stress-related medical illnesses, substance abuse, and rehabilitation of criminal offenders. SKY has been used as a public health intervention to alleviate PTSD in survivors of mass disasters. Yoga techniques enhance well-being, mood, attention, mental focus, and stress tolerance. Proper training by a skilled teacher and a 30-minute practice every day will maximize the benefits. Health care providers play a crucial role in encouraging patients to maintain their yoga practices.",
"title": ""
},
{
"docid": "39bf990d140eb98fa7597de1b6165d49",
"text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.",
"title": ""
},
{
"docid": "c8ca57db545f2d1f70f3640651bb3e79",
"text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem'? The answer is, any way that works.\" (Richard P. Feyman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.",
"title": ""
},
{
"docid": "3767702e22ac34493bb1c6c2513da9f7",
"text": "The majority of the online reviews are written in free-text format. It is often useful to have a measure which summarizes the content of the review. One such measure can be sentiment which expresses the polarity (positive/negative) of the review. However, a more granular classification of sentiment, such as rating stars, would be more advantageous and would help the user form a better opinion. In this project, we propose an approach which involves a combination of topic modeling and sentiment analysis to achieve this objective and thereby help predict the rating stars.",
"title": ""
},
{
"docid": "718cf9a405a81b9a43279a1d02f5e516",
"text": "In cross-cultural psychology, one of the major sources of the development and display of human behavior is the contact between cultural populations. Such intercultural contact results in both cultural and psychological changes. At the cultural level, collective activities and social institutions become altered, and at the psychological level, there are changes in an individual's daily behavioral repertoire and sometimes in experienced stress. The two most common research findings at the individual level are that there are large variations in how people acculturate and in how well they adapt to this process. Variations in ways of acculturating have become known by the terms integration, assimilation, separation, and marginalization. Two variations in adaptation have been identified, involving psychological well-being and sociocultural competence. One important finding is that there are relationships between how individuals acculturate and how well they adapt: Often those who integrate (defined as being engaged in both their heritage culture and in the larger society) are better adapted than those who acculturate by orienting themselves to one or the other culture (by way of assimilation or separation) or to neither culture (marginalization). Implications of these findings for policy and program development and for future research are presented.",
"title": ""
},
{
"docid": "8cfcadd2216072dbeb5c7f5d99326c49",
"text": "In this paper, a human eye localization algorithm in images and video is presented for faces with frontal pose and upright orientation. A given face region is filtered by a high-pass filter of a wavelet transform. In this way, edges of the region are highlighted, and a caricature-like representation is obtained. After analyzing horizontal projections and profiles of edge regions in the high-pass filtered image, the candidate points for each eye are detected. All the candidate points are then classified using a support vector machine based classifier. Locations of each eye are estimated according to the most probable ones among the candidate points. It is experimentally observed that our eye localization method provides promising results for both image and video processing applications.",
"title": ""
},
{
"docid": "151b3f80fe443b8f9b5f17c0531e0679",
"text": "Pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer’s disease have been the subject of extensive research in recent years. In this paper, we use deep learning methods, and in particular sparse autoencoders and 3D convolutional neural networks, to build an algorithm that can predict the disease status of a patient, based on an MRI scan of the brain. We report on experiments using the ADNI data set involving 2,265 historical scans. We demonstrate that 3D convolutional neural networks outperform several other classifiers reported in the literature and produce state-of-art results.",
"title": ""
},
{
"docid": "5232ea4de509766a4fcf0e195f05d81b",
"text": "This paper provides new results for control of complex flight maneuvers for a quadrotor unmanned aerial vehicle (UAV). The flight maneuvers are defined by a concatenation of flight modes or primitives, each of which is achieved by a nonlinear controller that solves an output tracking problem. A mathematical model of the quadrotor UAV rigid body dynamics, defined on the configuration space SE(3), is introduced as a basis for the analysis. The quadrotor UAV has four input degrees of freedom, namely the magnitudes of the four rotor thrusts; each flight mode is defined by solving an asymptotic optimal tracking problem. Although many flight modes can be studied, we focus on three output tracking problems, namely (1) outputs given by the vehicle attitude, (2) outputs given by the three position variables for the vehicle center of mass, and (3) output given by the three velocity variables for the vehicle center of mass. A nonlinear tracking controller is developed on the special Euclidean group SE(3) for each flight mode, and the closed loop is shown to have desirable properties that are almost global in each case. Several numerical examples, including one example in which the quadrotor recovers from being initially upside down and another example that includes switching and transitions between different flight modes, illustrate the versatility and generality of the proposed approach.",
"title": ""
},
{
"docid": "89271c3d5497ea7d7f84b86d67baeb15",
"text": "Three studies are presented which provide a mixed methods exploration of fingerprint analysis. Using a qualitative approach (Expt 1), expert analysts used a 'think aloud' task to describe their process of analysis. Thematic analysis indicated consistency of practice, and experts' comments underpinned the development of a training tool for subsequent use. Following this, a quantitative approach (Expt 2) assessed expert reliability on a fingerprint matching task. The results suggested that performance was high and often at ceiling, regardless of the length of experience held by the expert. As a final test, the experts' fingerprint analysis method was taught to a set of naïve students, and their performance on the fingerprint matching task was compared both to the expert group and to an untrained novice group (Expt 3). Results confirmed that the trained students performed significantly better than the untrained students. However, performance remained substantially below that of the experts. Several explanations are explored to account for the performance gap between experts and trained novices, and their implications are discussed in terms of the future of fingerprint evidence in court.",
"title": ""
},
{
"docid": "584645a035454682222a26870377703c",
"text": "Conventionally, the sum and difference signals of a tracking system are fixed up by sum and difference network and the network is often composed of four or more magic tees whose arms direct at four different directions, which give inconveniences to assemble. In this paper, a waveguide side-wall slot directional coupler and a double dielectric slab filled waveguide phase shifter is used to form a planar magic tee with four arms in the same H-plane. Four planar magic tees can be used to construct the W-band planar monopulse comparator. The planar magic tee is analyzed exactly with Ansoft HFSS software, and is optimized by genetic algorithm. Simulation results are presented, which show good performance.",
"title": ""
},
{
"docid": "fcdde2f5b55b6d8133e6dea63d61b2c8",
"text": "It has been observed by many people that a striking number of quite diverse mathematical problems can be formulated as problems in integer programming, that is, linear programming problems in which some or all of the variables are required to assume integral values. This fact is rendered quite interesting by recent research on such problems, notably by R. E. Gomory [2, 3], which gives promise of yielding efficient computational techniques for their solution. The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms. The authors have developed several such models, of which the one presented here is the most efficient in terms of generality, number of variables, and number of constraints. This model is due to the second author [4] and was presented briefly at the Symposium on Combinatorial Problems held at Princeton University, April 1960, sponsored by SIAM and IBM. The problem treated is: (1) A salesman is required to visit each of <italic>n</italic> cities, indexed by 1, ··· , <italic>n</italic>. He leaves from a “base city” indexed by 0, visits each of the <italic>n</italic> other cities exactly once, and returns to city 0. During his travels he must return to 0 exactly <italic>t</italic> times, including his final return (here <italic>t</italic> may be allowed to vary), and he must visit no more than <italic>p</italic> cities in one tour. (By a tour we mean a succession of visits to cities without stopping at city 0.) It is required to find such an itinerary which minimizes the total distance traveled by the salesman.\n Note that if <italic>t</italic> is fixed, then for the problem to have a solution we must have <italic>tp</italic> ≧ <italic>n</italic>. For <italic>t</italic> = 1, <italic>p</italic> ≧ <italic>n</italic>, we have the standard traveling salesman problem.\nLet <italic>d<subscrpt>ij</subscrpt></italic> (<italic>i</italic> ≠ <italic>j</italic> = 0, 1, ··· , <italic>n</italic>) be the distance covered in traveling from city <italic>i</italic> to city <italic>j</italic>. The following integer programming problem will be shown to be equivalent to (1): (2) Minimize the linear form ∑<subscrpt>0≦<italic>i</italic>≠<italic>j</italic>≦<italic>n</italic></subscrpt>∑ <italic>d<subscrpt>ij</subscrpt>x<subscrpt>ij</subscrpt></italic> over the set determined by the relations ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=0<italic>i</italic>≠<italic>j</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>j</italic> = 1, ··· , <italic>n</italic>) ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>j</italic>=0<italic>j</italic>≠<italic>i</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>i</italic> = 1, ··· , <italic>n</italic>) <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> + <italic>px<subscrpt>ij</subscrpt></italic> ≦ <italic>p</italic> - 1 (1 ≦ <italic>i</italic> ≠ <italic>j</italic> ≦ <italic>n</italic>) where the <italic>x<subscrpt>ij</subscrpt></italic> are non-negative integers and the <italic>u<subscrpt>i</subscrpt></italic> (<italic>i</italic> = 1, …, <italic>n</italic>) are arbitrary real numbers. 
(We shall see that it is permissible to restrict the <italic>u<subscrpt>i</subscrpt></italic> to be non-negative integers as well.)\n If <italic>t</italic> is fixed it is necessary to add the additional relation: ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>u</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> = <italic>t</italic> Note that the constraints require that <italic>x<subscrpt>ij</subscrpt></italic> = 0 or 1, so that a natural correspondence between these two problems exists if the <italic>x<subscrpt>ij</subscrpt></italic> are interpreted as follows: The salesman proceeds from city <italic>i</italic> to city <italic>j</italic> if and only if <italic>x<subscrpt>ij</subscrpt></italic> = 1. Under this correspondence the form to be minimized in (2) is the total distance to be traveled by the salesman in (1), so the burden of proof is to show that the two feasible sets correspond; i.e., a feasible solution to (2) has <italic>x<subscrpt>ij</subscrpt></italic> which do define a legitimate itinerary in (1), and, conversely a legitimate itinerary in (1) defines <italic>x<subscrpt>ij</subscrpt></italic>, which, together with appropriate <italic>u<subscrpt>i</subscrpt></italic>, satisfy the constraints of (2).\nConsider a feasible solution to (2).\n The number of returns to city 0 is given by ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt>. The constraints of the form ∑ <italic>x<subscrpt>ij</subscrpt></italic> = 1, all <italic>x<subscrpt>ij</subscrpt></italic> non-negative integers, represent the conditions that each city (other than zero) is visited exactly once. The <italic>u<subscrpt>i</subscrpt></italic> play a role similar to node potentials in a network and the inequalities involving them serve to eliminate tours that do not begin and end at city 0 and tours that visit more than <italic>p</italic> cities. Consider any <italic>x</italic><subscrpt><italic>r</italic><subscrpt>0</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> = 1 (<italic>r</italic><subscrpt>1</subscrpt> ≠ 0). There exists a unique <italic>r</italic><subscrpt>2</subscrpt> such that <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> = 1. Unless <italic>r</italic><subscrpt>2</subscrpt> = 0, there is a unique <italic>r</italic><subscrpt>3</subscrpt> with <italic>x</italic><subscrpt><italic>r</italic><subscrpt>2</subscrpt><italic>r</italic><subscrpt>3</subscrpt></subscrpt> = 1. We proceed in this fashion until some <italic>r<subscrpt>j</subscrpt></italic> = 0. This must happen since the alternative is that at some point we reach an <italic>r<subscrpt>k</subscrpt></italic> = <italic>r<subscrpt>j</subscrpt></italic>, <italic>j</italic> + 1 < <italic>k</italic>. \n Since none of the <italic>r</italic>'s are zero we have <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r<subscrpt>i</subscrpt></italic><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ - 1. 
Summing from <italic>i</italic> = <italic>j</italic> to <italic>k</italic> - 1, we have <italic>u<subscrpt>r<subscrpt>j</subscrpt></subscrpt></italic> - <italic>u<subscrpt>r<subscrpt>k</subscrpt></subscrpt></italic> = 0 ≦ <italic>j</italic> + 1 - <italic>k</italic>, which is a contradiction. Thus all tours include city 0. It remains to observe that no tours is of length greater than <italic>p</italic>. Suppose such a tour exists, <italic>x</italic><subscrpt>0<italic>r</italic><subscrpt>1</subscrpt></subscrpt> , <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> , ···· , <italic>x</italic><subscrpt><italic>r<subscrpt>p</subscrpt>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> = 1 with all <italic>r<subscrpt>i</subscrpt></italic> ≠ 0. Then, as before, <italic>u</italic><subscrpt><italic>r</italic>1</subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> ≦ - <italic>p</italic> or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≧ <italic>p</italic>.\n But we have <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> (1 - <italic>x</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt>) - 1 ≦ <italic>p</italic> - 1, which is a contradiction.\nConversely, if the <italic>x<subscrpt>ij</subscrpt></italic> correspond to a legitimate itinerary, it is clear that the <italic>u<subscrpt>i</subscrpt></italic> can be adjusted so that <italic>u<subscrpt>i</subscrpt></italic> = <italic>j</italic> if city <italic>i</italic> is the <italic>j</italic>th city visited in the tour which includes city <italic>i</italic>, for we then have <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> = - 1 if <italic>x<subscrpt>ij</subscrpt></italic> = 1, and always <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> ≦ <italic>p</italic> - 1.\n The above integer program involves <italic>n</italic><supscrpt>2</supscrpt> + <italic>n</italic> constraints (if <italic>t</italic> is not fixed) in <italic>n</italic><supscrpt>2</supscrpt> + 2<italic>n</italic> variables. Since the inequality form of constraint is fundamental for integer programming calculations, one may eliminate 2<italic>n</italic> variables, say the <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> and <italic>x</italic><subscrpt>0<italic>j</italic></subscrpt>, by means of the equation constraints and produce",
"title": ""
},
{
"docid": "502cae1daa2459ed0f826ed3e20c44e4",
"text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.",
"title": ""
},
{
"docid": "585ed9a4a1c903c836ee7d6b5677e042",
"text": "Several factors contribute to on-going challenges of spatial planning and urban policy in megacities, including rapid population shifts, less organized urban areas, and a lack of data with which to monitor urban growth and land use change. To support Mumbai’s sustainable development, this research was conducted to examine past urban land use changes on the basis of remote sensing data collected between 1973 and 2010. An integrated Markov ChainseCellular Automata (MCeCA) urban growth model was implemented to predict the city’s expansion for the years 2020e2030. To consider the factors affecting urban growth, the MCeCA model was also connected to multi-criteria evaluation to generate transition probability maps. The results of the multi-temporal change detection show that the highest urban growth rates, 142% occurred between 1973 and 1990. In contrast, the growth rates decreased to 40% between 1990 and 2001 and decreased to 38% between 2001 and 2010. The areas most affected by this degradation were open land and croplands. The MCeCA model predicts that this trend will continue in the future. Compared to the reference year, 2010, increases in built-up areas of 26% by 2020 and 12% by 2030 are forecast. Strong evidence is provided for complex future urban growth, characterized by a mixture of growth patterns. The most pronounced of these is urban expansion toward the north along the main traffic infrastructure, linking the two currently non-affiliated main settlement ribbons. Additionally, urban infill developments are expected to emerge in the eastern areas, and these developments are expected to increase urban pressure. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1c269ac67fb954da107229fe4e18dcc8",
"text": "The number of output-voltage levels available in pulsewidth-modulated (PWM) voltage-source inverters can be increased by inserting a split-wound coupled inductor between the upper and lower switches in each inverter leg. Interleaved PWM control of both inverter-leg switches produces three-level PWM voltage waveforms at the center tap of the coupled inductor winding, representing the inverter-leg output terminal, with a PWM frequency twice the switching frequency. The winding leakage inductance is in series with the output terminal, with the main magnetizing inductance filtering the instantaneous PWM-cycle voltage differences between the upper and lower switches. Since PWM dead-time signal delays can be removed, higher device switching frequencies and higher fundamental output voltages are made possible. The proposed inverter topologies produce five-level PWM voltage waveforms between two inverter-leg terminals with a PWM frequency up to four times higher than the inverter switching frequency. This is achieved with half the number of switches used in alternative schemes. This paper uses simulated and experimental results to illustrate the operation of the proposed inverter structures.",
"title": ""
}
] |
scidocsrr
|
90de74b88910549d837e827ce6061567
|
ALL OUR SONS: THE DEVELOPMENTAL NEUROBIOLOGY AND NEUROENDOCRINOLOGY OF BOYS AT RISK.
|
[
{
"docid": "7340866fa3965558e1571bcc5294b896",
"text": "The human stress response has been characterized, both physiologically and behaviorally, as \"fight-or-flight.\" Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of \"tend-and-befriend.\" Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.",
"title": ""
}
] |
[
{
"docid": "e189f36ba0fcb91d0608d0651c60516e",
"text": "In this paper, we describe the progressive design of the gesture recognition module of an automated food journaling system -- Annapurna. Annapurna runs on a smartwatch and utilises data from the inertial sensors to first identify eating gestures, and then captures food images which are presented to the user in the form of a food journal. We detail the lessons we learnt from multiple in-the-wild studies, and show how eating recognizer is refined to tackle challenges such as (i) high gestural diversity, and (ii) non-eating activities with similar gestural signatures. Annapurna is finally robust (identifying eating across a wide diversity in food content, eating styles and environments) and accurate (false-positive and false-negative rates of 6.5% and 3.3% respectively)",
"title": ""
},
{
"docid": "b99c42f412408610e1bfd414f4ea6b9f",
"text": "ADPfusion combines the usual high-level, terse notation of Haskell with an underlying fusion framework. The result is a parsing library that allows the user to write algorithms in a style very close to the notation used in formal languages and reap the performance benefits of automatic program fusion. Recent developments in natural language processing and computational biology have lead to a number of works that implement algorithms that process more than one input at the same time. We provide an extension of ADPfusion that works on extended index spaces and multiple input sequences, thereby increasing the number of algorithms that are amenable to implementation in our framework. This allows us to implement even complex algorithms with a minimum of overhead, while enjoying all the guarantees that algebraic dynamic programming provides to the user.",
"title": ""
},
{
"docid": "d81282c41c609b980442f481d0a7fa3d",
"text": "Some of the recent applications in the field of the power supplies use multiphase converters to achieve fast dynamic response, smaller input/output filters, or better packaging. Typically, these converters have several paralleled power stages, with a current loop in each phase and a single voltage loop. The presence of the current loops avoids current imbalance among phases. The purpose of this paper is to demonstrate that, in CCM, with a proper design, there is an intrinsic mechanism of self-balance that reduces the current imbalance. Thus, in the buck converter, if natural zero-voltage switching (ZVS) is achieved in both transitions, the instantaneous inductor current compensates partially the different DC currents through the phases. The need for using n current loops will be finally determined by the application but not by the converter itself. Using the buck converter as a base, a multiphase converter has been developed. Several tests have been carried out in the laboratory and the results show clearly that, when the conditions are met, the phase currents are very well balanced even during transient conditions.",
"title": ""
},
{
"docid": "f752d156cc1c606e5b06cf99a90b2a49",
"text": "We study the relationship between Facebook popularity (number of contacts) and personality traits on a large number of subjects. We test to which extent two prevalent viewpoints hold. That is, popular users (those with many social contacts) are the ones whose personality traits either predict many offline (real world) friends or predict propensity to maintain superficial relationships. We find that the predictor for number of friends in the real world (Extraversion) is also a predictor for number of Facebook contacts. We then test whether people who have many social contacts on Facebook are the ones who are able to adapt themselves to new forms of communication, present themselves in likable ways, and have propensity to maintain superficial relationships. We show that there is no statistical evidence to support such a conjecture.",
"title": ""
},
{
"docid": "1158e01718dd8eed415dd5b3513f4e30",
"text": "Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately, and rely on hand-crafted visual feature from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of multi-scale input layer, U-shape convolutional network, side-output layer, and multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple level receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn the rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. For improving the segmentation performance further, we also introduce the polar transformation, which provides the representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation result on ORIGA data set. Simultaneously, the proposed method also obtains the satisfactory glaucoma screening performances with calculated CDR value on both ORIGA and SCES datasets.",
"title": ""
},
{
"docid": "993590032de592f4bb69d9c906ff76a8",
"text": "The evolution toward 5G mobile networks will be characterized by an increasing number of wireless devices, increasing device and service complexity, and the requirement to access mobile services ubiquitously. Two key enablers will allow the realization of the vision of 5G: very dense deployments and centralized processing. This article discusses the challenges and requirements in the design of 5G mobile networks based on these two key enablers. It discusses how cloud technologies and flexible functionality assignment in radio access networks enable network densification and centralized operation of the radio access network over heterogeneous backhaul networks. The article describes the fundamental concepts, shows how to evolve the 3GPP LTE a",
"title": ""
},
{
"docid": "4a69a0c5c225d9fbb40373aebaeb99be",
"text": "The hyperlink structure of Wikipedia constitutes a key resource for many Natural Language Processing tasks and applications, as it provides several million semantic annotations of entities in context. Yet only a small fraction of mentions across the entire Wikipedia corpus is linked. In this paper we present the automatic construction and evaluation of a Semantically Enriched Wikipedia (SEW) in which the overall number of linked mentions has been more than tripled solely by exploiting the structure of Wikipedia itself and the wide-coverage sense inventory of BabelNet. As a result we obtain a sense-annotated corpus with more than 200 million annotations of over 4 million different concepts and named entities. We then show that our corpus leads to competitive results on multiple tasks, such as Entity Linking and Word Similarity.",
"title": ""
},
{
"docid": "90e5eaa383c00a0551a5161f07c683e7",
"text": "The importance of the Translation Lookaside Buffer (TLB) on system performance is well known. There have been numerous prior efforts addressing TLB design issues for cutting down access times and lowering miss rates. However, it was only recently that the first exploration [26] on prefetching TLB entries ahead of their need was undertaken and a mechanism called Recency Prefetching was proposed. There is a large body of literature on prefetching for caches, and it is not clear how they can be adapted (or if the issues are different) for TLBs, how well suited they are for TLB prefetching, and how they compare with the recency prefetching mechanism.This paper presents the first detailed comparison of different prefetching mechanisms (previously proposed for caches) - arbitrary stride prefetching, and markov prefetching - for TLB entries, and evaluates their pros and cons. In addition, this paper proposes a novel prefetching mechanism, called Distance Prefetching, that attempts to capture patterns in the reference behavior in a smaller space than earlier proposals. Using detailed simulations of a wide variety of applications (56 in all) from different benchmark suites and all the SPEC CPU2000 applications, this paper demonstrates the benefits of distance prefetching.",
"title": ""
},
{
"docid": "022a2f42669fdb337cfb4646fed9eb09",
"text": "A mobile agent with the task to classify its sensor pattern has to cope with ambiguous information. Active recognition of three-dimensional objects involves the observer in a search for discriminative evidence, e.g., by change of its viewpoint. This paper defines the recognition process as a sequential decision problem with the objective to disambiguate initial object hypotheses. Reinforcement learning provides then an efficient method to autonomously develop near-optimal decision strategies in terms of sensorimotor mappings. The proposed system learns object models from visual appearance and uses a radial basis function (RBF) network for a probabilistic interpretation of the two-dimensional views. The information gain in fusing successive object hypotheses provides a utility measure to reinforce actions leading to discriminative viewpoints. The system is verified in experiments with 16 objects and two degrees of freedom in sensor motion. Crucial improvements in performance are gained using the learned in contrast to random camera placements. © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1d8f11b742dd810f228b80747ec2a0bd",
"text": "The particle swarm optimization algorithm was showed to converge rapidly during the initial stages of a global search, but around global optimum, the search process will become very slow. On the contrary, the gradient descending method can achieve faster convergent speed around global optimum, and at the same time, the convergent accuracy can be higher. So in this paper, a hybrid algorithm combining particle swarm optimization (PSO) algorithm with back-propagation (BP) algorithm, also referred to as PSO–BP algorithm, is proposed to train the weights of feedforward neural network (FNN), the hybrid algorithm can make use of not only strong global searching ability of the PSOA, but also strong local searching ability of the BP algorithm. In this paper, a novel selection strategy of the inertial weight is introduced to the PSO algorithm. In the proposed PSO–BP algorithm, we adopt a heuristic way to give a transition from particle swarm search to gradient descending search. In this paper, we also give three kind of encoding strategy of particles, and give the different problem area in which every encoding strategy is used. The experimental results show that the proposed hybrid PSO–BP algorithm is better than the Adaptive Particle swarm optimization algorithm (APSOA) and BP algorithm in convergent speed and convergent accuracy. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
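The two-stage hand-off described above (global PSO search followed by local gradient descent) can be sketched on a toy objective instead of actual FNN weights. The swarm size, inertia schedule, learning rate, and switch criterion below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

# Stage 1: PSO global search; Stage 2: gradient-descent refinement (BP-style).
rng = np.random.default_rng(2)

def f(x):                               # Rastrigin test function
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def grad_f(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

dim, n_particles = 5, 30
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for it in range(200):                   # stage 1: global PSO search
    w = 0.9 - 0.5 * it / 200            # linearly decreasing inertia weight
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([f(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

x = gbest.copy()
for _ in range(500):                    # stage 2: local gradient descent refinement
    x -= 0.001 * grad_f(x)
print("PSO best:", f(gbest), " after gradient refinement:", f(x))
```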
{
"docid": "a75e29521b04d5e09228918e4ed560a6",
"text": "This study assessed motives for social network site (SNS) use, group belonging, collective self-esteem, and gender effects among older adolescents. Communication with peer group members was the most important motivation for SNS use. Participants high in positive collective self-esteem were strongly motivated to communicate with peer group via SNS. Females were more likely to report high positive collective self-esteem, greater overall use, and SNS use to communicate with peers. Females also posted higher means for group-in-self, passing time, and entertainment. Negative collective self-esteem correlated with social compensation, suggesting that those who felt negatively about their social group used SNS as an alternative to communicating with other group members. Males were more likely than females to report negative collective self-esteem and SNS use for social compensation and social identity gratifications.",
"title": ""
},
{
"docid": "88c592bdd7bb9c9348545734a9508b7b",
"text": "environments: An introduction C.-S. Li B. L. Brech S. Crowder D. M. Dias H. Franke M. Hogstrom D. Lindquist G. Pacifici S. Pappe B. Rajaraman J. Rao R. P. Ratnaparkhi R. A. Smith M. D. Williams During the past few years, enterprises have been increasingly aggressive in moving mission-critical and performance-sensitive applications to the cloud, while at the same time many new mobile, social, and analytics applications are directly developed and operated on cloud computing platforms. These two movements are encouraging the shift of the value proposition of cloud computing from cost reduction to simultaneous agility and optimization. These requirements (agility and optimization) are driving the recent disruptive trend of software defined computing, for which the entire computing infrastructureVcompute, storage and networkVis becoming software defined and dynamically programmable. The key elements within software defined environments include capability-based resource abstraction, goal-based and policy-based workload definition, and outcome-based continuous mapping of the workload to the available resources. Furthermore, software defined environments provide the tooling and capabilities to compose workloads from existing components that are then continuously and autonomously mapped onto the underlying programmable infrastructure. These elements enable software defined environments to achieve agility, efficiency, and continuous outcome-optimized provisioning and management, plus continuous assurance for resiliency and security. This paper provides an overview and introduction to the key elements and challenges of software defined environments.",
"title": ""
},
{
"docid": "6c7284ca77809210601c213ee8a685bb",
"text": "Patients with non-small cell lung cancer (NSCLC) require careful staging at the time of diagnosis to determine prognosis and guide treatment recommendations. The seventh edition of the TNM Classification of Malignant Tumors is scheduled to be published in 2009 and the International Association for the Study of Lung Cancer (IASLC) created the Lung Cancer Staging Project (LCSP) to guide revisions to the current lung cancer staging system. These recommendations will be submitted to the American Joint Committee on Cancer (AJCC) and to the Union Internationale Contre le Cancer (UICC) for consideration in the upcoming edition of the staging manual. Data from over 100,000 patients with lung cancer were submitted for analysis and several modifications were suggested for the T descriptors and the M descriptors although the current N descriptors remain unchanged. These recommendations will further define homogeneous patient subsets with similar survival rates. More importantly, these revisions will help guide clinicians in making optimal, stage-specific, treatment recommendations.",
"title": ""
},
{
"docid": "86dfbb8dc8682f975ccb3cfce75eac3a",
"text": "BACKGROUND\nAlthough many precautions have been introduced into early burn management, post burn contractures are still significant problems in burn patients. In this study, a form of Z-plasty in combination with relaxing incision was used for the correction of contractures.\n\n\nMETHODS\nPreoperatively, a Z-advancement rotation flap combined with a relaxing incision was drawn on the contracture line. Relaxing incision created a skin defect like a rhomboid. Afterwards, both limbs of the Z flap were incised. After preparation of the flaps, advancement and rotation were made in order to cover the rhomboid defect. Besides subcutaneous tissue, skin edges were closely approximated with sutures.\n\n\nRESULTS\nThis study included sixteen patients treated successfully with this flap. It was used without encountering any major complications such as infection, hematoma, flap loss, suture dehiscence or flap necrosis. All rotated and advanced flaps healed uneventfully. In all but one patient, effective contracture release was achieved by means of using one or two Z-plasty. In one patient suffering severe left upper extremity contracture, a little residual contracture remained due to inadequate release.\n\n\nCONCLUSION\nWhen dealing with this type of Z-plasty for mild contractures, it offers a new option for the correction of post burn contractures, which is safe, simple and effective.",
"title": ""
},
{
"docid": "8760b523ca90dccf7a9a197622bda043",
"text": "The increasing need for better performance, protection, and reliability in shipboard power distribution systems, and the increasing availability of power semiconductors is generating the potential for solid state circuit breakers to replace traditional electromechanical circuit breakers. This paper reviews various solid state circuit breaker topologies that are suitable for low and medium voltage shipboard system protection. Depending on the application solid state circuit breakers can have different main circuit topologies, fault detection methods, commutation methods of power semiconductor devices, and steady state operation after tripping. This paper provides recommendations on the solid state circuit breaker topologies that provides the best performance-cost tradeoff based on the application.",
"title": ""
},
{
"docid": "54fc5bc85ef8022d099fff14ab1b7ce0",
"text": "Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.",
"title": ""
},
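The reconstruct-and-difference idea in the abstract above can be shown with a low-rank SVD background model. The rank, threshold, and synthetic "panel image" below are assumptions; a real system would also include the DCT path and the adaptive detection modules.

```python
import numpy as np

# SVD-based background reconstruction and differencing for defect candidates.
rng = np.random.default_rng(3)
panel = np.tile(np.linspace(0.4, 0.6, 256), (256, 1))     # smooth background
panel[100:130, 80:140] += 0.05                            # faint region-type Mura
panel += 0.01 * rng.standard_normal(panel.shape)

U, s, Vt = np.linalg.svd(panel, full_matrices=False)
k = 3                                                      # keep only the low-rank background
background = (U[:, :k] * s[:k]) @ Vt[:k]

diff = panel - background                                  # differential image
candidates = np.abs(diff) > 3 * diff.std()                 # simple global threshold
print("candidate defect pixels:", int(candidates.sum()))
```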
{
"docid": "ddc6a5e9f684fd13aec56dc48969abc2",
"text": "During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.",
"title": ""
},
{
"docid": "6ff034e2ff0d54f7e73d23207789898d",
"text": "This letter presents two high-gain, multidirector Yagi-Uda antennas for use within the 24.5-GHz ISM band, realized through a multilayer, purely additive inkjet printing fabrication process on a flexible substrate. Multilayer material deposition is used to realize these 3-D antenna structures, including a fully printed 120- μm-thick dielectric substrate for microstrip-to-slotline feeding conversion. The antennas are fabricated, measured, and compared to simulated results showing good agreement and highlighting the reliable predictability of the printing process. An endfire realized gain of 8 dBi is achieved within the 24.5-GHz ISM band, presenting the highest-gain inkjet-printed antenna at this end of the millimeter-wave regime. The results of this work further demonstrate the feasibility of utilizing inkjet printing for low-cost, vertically integrated antenna structures for on-chip and on-package integration throughout the emerging field of high-frequency wireless electronics.",
"title": ""
},
{
"docid": "dade322206eeab84bfdae7d45fe043ca",
"text": "Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of nodules from cropped CT images of lung nodules. We evaluate the effectiveness of very deep convolutional neural networks at the task of expert-level lung nodule malignancy classification. Using the state-of-the-art ResNet architecture as our basis, we explore the effect of curriculum learning, transfer learning, and varying network depth on the accuracy of malignancy classification. Due to a lack of public datasets with standardized problem definitions and train/test splits, studies in this area tend to not compare directly against other existing work. This makes it hard to know the relative improvement in the new solution. In contrast, we directly compare our system against two state-of-the-art deep learning systems for nodule classification on the LIDC/IDRI dataset using the same experimental setup and data set. The results show that our system achieves the highest performance in terms of all metrics measured including sensitivity, specificity, precision, AUROC, and accuracy. The proposed method of combining deep residual learning, curriculum learning, and transfer learning translates to high nodule classification accuracy. This reveals a promising new direction for effective pulmonary nodule CAD systems that mirrors the success of recent deep learning advances in other image-based application domains.",
"title": ""
},
{
"docid": "7e949c7cd50d1e381f58fe26f9736124",
"text": "Mental illness is one of the most undertreated health problems worldwide. Previous work has shown that there are remarkably strong cues to mental illness in short samples of the voice. These cues are evident in severe forms of illness, but it would be most valuable to make earlier diagnoses from a richer feature set. Furthermore there is an abstraction gap between these voice cues and the diagnostic cues used by practitioners. We believe that by closing this gap, we can build more effective early diagnostic systems for mental illness. In order to develop improved monitoring, we need to translate the high-level cues used by practitioners into features that can be analyzed using signal processing and machine learning techniques. In this paper we describe the elicitation process that we used to tap the practitioners' knowledge. We borrow from both AI (expert systems) and HCI (contextual inquiry) fields in order to perform this knowledge transfer. The paper highlights an unusual and promising role for HCI - the analysis of interaction data for health diagnosis.",
"title": ""
}
] |
scidocsrr
|
274c00be5f61e8d94bb71e89efa7561f
|
"How Many Silences Are There?" Men's Experience of Victimization in Intimate Partner Relationships.
|
[
{
"docid": "ccabfee18c9b3dfc322d55572f24f53a",
"text": "The concept of hegemonic masculinity has influenced gender studies across many academic fields but has also attracted serious criticism. The authors trace the origin of the concept in a convergence of ideas in the early 1980s and map the ways it was applied when research on men and masculinities expanded. Evaluating the principal criticisms, the authors defend the underlying concept of masculinity, which in most research use is neither reified nor essentialist. However, the criticism of trait models of gender and rigid typologies is sound. The treatment of the subject in research on hegemonic masculinity can be improved with the aid of recent psychological models, although limits to discursive flexibility must be recognized. The concept of hegemonic masculinity does not equate to a model of social reproduction; we need to recognize social struggles in which subordinated masculinities influence dominant forms. Finally, the authors review what has been confirmed from early formulations (the idea of multiple masculinities, the concept of hegemony, and the emphasis on change) and what needs to be discarded (onedimensional treatment of hierarchy and trait conceptions of gender). The authors suggest reformulation of the concept in four areas: a more complex model of gender hierarchy, emphasizing the agency of women; explicit recognition of the geography of masculinities, emphasizing the interplay among local, regional, and global levels; a more specific treatment of embodiment in contexts of privilege and power; and a stronger emphasis on the dynamics of hegemonic masculinity, recognizing internal contradictions and the possibilities of movement toward gender democracy.",
"title": ""
}
] |
[
{
"docid": "4b7e71b412770cbfe059646159ec66ca",
"text": "We present empirical evidence to demonstrate that there is little or no difference between the Java Virtual Machine and the .NET Common Language Runtime, as regards the compilation and execution of object-oriented programs. Then we give details of a case study that proves the superiority of the Common Language Runtime as a target for imperative programming language compilers (in particular GCC).",
"title": ""
},
{
"docid": "f5ba6ef8d99ccc57bf64f7e5c3c05f7e",
"text": "Applications of fuzzy logic (FL) to power electronics and drives are on the rise. The paper discusses some representative applications of FL in the area, preceded by an interpretative review of fuzzy logic controller (FLC) theory. A discussion on design and implementation aspects is presented, that also considers the interaction of neural networks and fuzzy logic techniques. Finally, strengths and limitations of FLC are considered, including possible applications in the area.",
"title": ""
},
{
"docid": "a1c126807088d954b73c2bd5d696c481",
"text": "or, why space syntax works when it looks as though it shouldn't 0 Abstract A common objection to the space syntax analysis of cities is that even in its own terms the technique of using a non-uniform line representation of space and analysing it by measures that are essentially topological, ignores too much geometric and metric detail to be credible. In this paper it is argued that far from ignoring geometric and metric properties the 'line-graph' internalises them into the structure of the graph and in doing so allows the graph analysis to pick up the nonlocal, or extrinsic, properties of spaces that are critical to the movement dynamics through which a city evolves its essential structures. Nonlocal properties are those which are defined by the relation of elements to all others in the system, rather than intrinsic to the element itself. The method also leads to a powerful analysis of urban structures because cities are essentially nonlocal systems. 1 Preliminaries 1.1 The critique of line graphs Space syntax is a family of techniques for representing and analysing spatial layouts of all kinds. A spatial representation is first chosen according to how space is defined for the purposes of the research-rooms, convex spaces, lines, convex isovists, and so on-and then one or more measures of 'configuration' are selected to analyse the patterns formed by that representation. Prior to the researcher setting up the research question, no one representation or measure is privileged over others. Part of the researcher's task is to discover which representation and which measure captures the logic of a particular system, as shown by observation of its functioning. In the study of cities, one representation and one type of measure has proved more consistently fruitful than others: the representation of urban space as a matrix of the 'longest and fewest' lines, the 'axial map', and the analysis of this by translating the line matrix into a graph, and the use of the various versions of the 'topological' (i.e. nonmetric) measure of patterns of line connectivity called 'integration'. (Hillier et al 1982, Steadman 1983, Hillier & Hanson 1984) This 'line graph' approach has proved quite unexpectedly successful. It has generated not only models for predicting urban et al 1998), but also strong theoretical results on urban structure, and even a general theory of the dynamics linking the urban grid, movement, land uses and building densities in 'organic' cities …",
"title": ""
},
{
"docid": "a2e91a00e2f3bc23b5de83ca39566c84",
"text": "This paper addresses an emerging new field of research that combines the strengths and capabilities of electronics and textiles in one: electronic textiles, or e-textiles. E-textiles, also called Smart Fabrics, have not only \"wearable\" capabilities like any other garment, but also local monitoring and computation, as well as wireless communication capabilities. Sensors and simple computational elements are embedded in e-textiles, as well as built into yarns, with the goal of gathering sensitive information, monitoring vital statistics and sending them remotely (possibly over a wireless channel) for further processing. Possible applications include medical (infant or patient) monitoring, personal information processing systems, or remote monitoring of deployed personnel in military or space applications. We illustrate the challenges imposed by the dual textile/electronics technology on their modeling and optimization methodology.",
"title": ""
},
{
"docid": "9244b687b0031e895cea1fcf5a0b11da",
"text": "Bacopa monnieri (L.) Wettst., a traditional Indian medicinal plant with high commercial potential, is used as a potent nervine tonic. A slow growth protocol was developed for medium-term conservation using mineral oil (MO) overlay. Nodal segments of B. monnieri (two genotypes; IC249250, IC468878) were conserved using MO for 24 months. Single node explants were implanted on MS medium supplemented with 0.2 mg l−1 BA and were covered with MO. Subculture duration could be significantly enhanced from 6 to 24 months, on the above medium. Normal plants regenerated from conserved cultures were successfully established in soil. On the basis of 20 random amplified polymorphic DNA and 5 inter-simple sequence repeat primers analyses and bacoside A content using HPLC, no significant reproducible variation was observed between the controls and in vitro-conserved plants. The results demonstrate the feasibility of using MO for medium-term conservation of B. monnieri germplasm without any adverse genetical and biochemical effects.",
"title": ""
},
{
"docid": "18ab36acafc5e0d39d02cecb0db2f7b3",
"text": "Trigeminal trophic syndrome is a rare complication after peripheral or central damage to the trigeminal nerve, characterized by sensorial impairment in the trigeminal nerve territory and self-induced nasal ulceration. Conditions that can affect the trigeminal nerve include brainstem cerebrovascular disease, diabetes, tabes, syringomyelia, and postencephalopathic parkinsonism; it can also occur following the surgical management of trigeminal neuralgia. Trigeminal trophic syndrome may develop months to years after trigeminal nerve insult. Its most common presentation is a crescent-shaped ulceration within the trigeminal sensory territory. The ala nasi is the most frequently affected site. Trigeminal trophic syndrome is notoriously difficult to diagnose and manage. A clear history is of paramount importance, with exclusion of malignant, fungal, granulomatous, vasculitic, or infective causes. We present a case of ulceration of the left ala nasi after brainstem cerebrovascular accident.",
"title": ""
},
{
"docid": "fa440af1d9ec65caf3cd37981919b56e",
"text": "We present a method for spotting sporadically occurring gestures in a continuous data stream from body-worn inertial sensors. Our method is based on a natural partitioning of continuous sensor signals and uses a two-stage approach for the spotting task. In a first stage, signal sections likely to contain specific motion events are preselected using a simple similarity search. Those preselected sections are then further classified in a second stage, exploiting the recognition capabilities of hidden Markov models. Based on two case studies, we discuss implementation details of our approach and show that it is a feasible strategy for the spotting of various types of motion events. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d622d45275c7d4c177aaf3e34eb8062b",
"text": "Detecting which tweets describe a specific event and clustering them is one of the main challenging tasks related to Social Media currently addressed in the NLP community. Existing approaches have mainly focused on detecting spikes in clusters around specific keywords or Named Entities (NE). However, one of the main drawbacks of such approaches is the difficulty in understanding when the same keywords describe different events. In this paper, we propose a novel approach that exploits NE mentions in tweets and their entity context to create a temporal event graph. Then, using simple graph theory techniques and a PageRank-like algorithm, we process the event graphs to detect clusters of tweets describing the same events. Experiments on two gold standard datasets show that our approach achieves state-of-the-art results both in terms of evaluation performances and the quality of the detected events.",
"title": ""
},
{
"docid": "5fa0e48da2045baa1f00a27a9baa4897",
"text": "The inferred cost of work-related stress call for prevention strategies that aim at detecting early warning signs at the workplace. This paper goes one step towards the goal of developing a personal health system for detecting stress. We analyze the discriminative power of electrodermal activity (EDA) in distinguishing stress from cognitive load in an office environment. A collective of 33 subjects underwent a laboratory intervention that included mild cognitive load and two stress factors, which are relevant at the workplace: mental stress induced by solving arithmetic problems under time pressure and psychosocial stress induced by social-evaluative threat. During the experiments, a wearable device was used to monitor the EDA as a measure of the individual stress reaction. Analysis of the data showed that the distributions of the EDA peak height and the instantaneous peak rate carry information about the stress level of a person. Six classifiers were investigated regarding their ability to discriminate cognitive load from stress. A maximum accuracy of 82.8% was achieved for discriminating stress from cognitive load. This would allow keeping track of stressful phases during a working day by using a wearable EDA device.",
"title": ""
},
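The EDA features named in the abstract above (peak height and instantaneous peak rate) are straightforward to extract. The sketch below uses a synthetic signal; the sampling rate, peak-prominence criterion, and signal shape are assumptions, and a real study would feed many labelled windows of such features into a classifier.

```python
import numpy as np
from scipy.signal import find_peaks

# Extract peak-height and peak-rate features from an EDA-like signal.
fs = 8                                             # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
eda = 2 + 0.02 * t + 0.05 * rng.standard_normal(t.size)
for onset in (5, 18, 22, 40):                      # add synthetic SCR-like peaks
    eda += 0.3 * np.exp(-((t - onset) ** 2) / 2) * (t > onset - 3)

peaks, props = find_peaks(eda, prominence=0.05)
peak_heights = props["prominences"]
inter_peak = np.diff(peaks) / fs                   # seconds between successive peaks
features = {
    "mean_peak_height": float(peak_heights.mean()) if len(peaks) else 0.0,
    "peak_rate_per_min": 60 * len(peaks) / (t[-1] - t[0]),
    "mean_instantaneous_rate": float((1 / inter_peak).mean()) if inter_peak.size else 0.0,
}
print(features)
```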
{
"docid": "472036e178742f009537acce8a54c863",
"text": "This paper presents a comparative study of highspeed, low-power and low voltage full adder circuits. Our approach is based on XOR-XNOR (4T) design full adder circuits combined in a single unit. This technique helps in reducing the power consumption and the propagation delay while maintaining low complexity of logic design. Simulation results illustrate the superiority of the designed adder circuits against the conventional CMOS, TG and Hybrid adder circuits in terms of power, delay and power delay product (PDP) at low voltage. Noise analysis shows designed full adder circuit's work at high frequency and high temperature satisfactorily. Simulation results reveal that the designed circuits exhibit lower PDP, more power efficiency and faster when compared to the available full adder circuits at low voltage. The design is implemented on UMC 0.18µm process models in Cadence Virtuoso Schematic Composer at 1.8 V single ended supply voltage and simulations are carried out on Spectre S.",
"title": ""
},
{
"docid": "3a68bf0d9d79a8b7794ea9d5d236eb41",
"text": "This paper describes a camera-based observation system for football games that is used for the automatic analysis of football games and reasoning about multi-agent activity. The observation system runs on video streams produced by cameras set up for TV broadcasting. The observation system achieves reliability and accuracy through various mechanisms for adaptation, probabilistic estimation, and exploiting domain constraints. It represents motions compactly and segments them into classified ball actions.",
"title": ""
},
{
"docid": "07fbce97ec4e5e7fd176507b64b01e33",
"text": "Drought and heat-induced forest dieback and mortality are emerging global concerns. Although Mediterranean-type forest (MTF) ecosystems are considered to be resilient to drought and other disturbances, we observed a sudden and unprecedented forest collapse in a MTF in Western Australia corresponding with record dry and heat conditions in 2010/2011. An aerial survey and subsequent field investigation were undertaken to examine: the incidence and severity of canopy dieback and stem mortality, associations between canopy health and stand-related factors as well as tree species response. Canopy mortality was found to be concentrated in distinct patches, representing 1.5 % of the aerial sample (1,350 ha). Within these patches, 74 % of all measured stems (>1 cm DBHOB) had dying or recently killed crowns, leading to 26 % stem mortality six months following the collapse. Patches of canopy collapse were more densely stocked with the dominant species, Eucalyptus marginata, and lacked the prominent midstorey species Banksia grandis, compared to the surrounding forest. A differential response to the disturbance was observed among co-occurring tree species, which suggests contrasting strategies for coping with extreme water stress. These results suggest that MTFs, once thought to be resilient to climate change, are susceptible to sudden and severe forest collapse when key thresholds have been reached.",
"title": ""
},
{
"docid": "72aef0bd0793116983c11883ebfb5525",
"text": "Building facade classification by architectural styles allows categorization of large databases of building images into semantic categories belonging to certain historic periods, regions and cultural influences. Image databases sorted by architectural styles permit effective and fast image search for the purposes of content-based image retrieval, 3D reconstruction, 3D city-modeling, virtual tourism and indexing of cultural heritage buildings. Building facade classification is viewed as a task of classifying separate architectural structural elements, like windows, domes, towers, columns, etc, as every architectural style applies certain rules and characteristic forms for the design and construction of the structural parts mentioned. In the context of building facade architectural style classification the current paper objective is to classify the architectural style of facade windows. Typical windows belonging to Romanesque, Gothic and Renaissance/Baroque European main architectural periods are classified. The approach is based on clustering and learning of local features, applying intelligence that architects use to classify windows of the mentioned architectural styles in the training stage.",
"title": ""
},
{
"docid": "922a4369bf08f23e1c0171dc35d5642b",
"text": "Most automated facial expression analysis methods treat the face as a 2D object, flat like a sheet of paper. That works well provided images are frontal or nearly so. In real-world conditions, moderate to large head rotation is common and system performance to recognize expression degrades. Multi-view Convolutional Neural Networks (CNNs) have been proposed to increase robustness to pose, but they require greater model sizes and may generalize poorly across views that are not included in the training set. We propose FACSCaps architecture to handle multi-view and multi-label facial action unit (AU) detection within a single model that can generalize to novel views. Additionally, FACSCaps's ability to synthesize faces enables insights into what is leaned by the model. FACSCaps models video frames using matrix capsules, where hierarchical pose relationships between face parts are built into internal representations. The model is trained by jointly optimizing a multi-label loss and the reconstruction accuracy. FACSCaps was evaluated using the FERA 2017 facial expression dataset that includes spontaneous facial expressions in a wide range of head orientations. FACSCaps outperformed both state-of-the-art CNNs and their temporal extensions.",
"title": ""
},
{
"docid": "4106a8cf90180e237fdbe847c13d0126",
"text": "The Internet has witnessed the proliferation of applications and services that rely on HTTP as application protocol. Users play games, read emails, watch videos, chat and access web pages using their PC, which in turn downloads tens or hundreds of URLs to fetch all the objects needed to display the requested content. As result, billions of URLs are observed in the network. When monitoring the traffic, thus, it is becoming more and more important to have methodologies and tools that allow one to dig into this data and extract useful information. In this paper, we present CLUE, Clustering for URL Exploration, a methodology that leverages clustering algorithms, i.e., unsupervised techniques developed in the data mining field to extract knowledge from passive observation of URLs carried by the network. This is a challenging problem given the unstructured format of URLs, which, being strings, call for specialized approaches. Inspired by text-mining algorithms, we introduce the concept of URL-distance and use it to compose clusters of URLs using the well-known DBSCAN algorithm. Experiments on actual datasets show encouraging results. Well-separated and consistent clusters emerge and allow us to identify, e.g., malicious traffic, advertising services, and thirdparty tracking systems. In a nutshell, our clustering algorithm offers the means to get insights on the data carried by the network, with applications in the security or privacy protection fields.",
"title": ""
},
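A minimal version of the clustering pipeline described above can be sketched with a generic string distance and DBSCAN over a precomputed distance matrix. The distance used here (1 minus a SequenceMatcher ratio) and the eps/min_samples values are assumptions, not the URL-distance actually defined in the paper.

```python
import numpy as np
from difflib import SequenceMatcher
from sklearn.cluster import DBSCAN

# Cluster URLs with a string distance and density-based clustering.
urls = [
    "http://ads.example.com/track?id=123",
    "http://ads.example.com/track?id=456",
    "http://cdn.example.com/img/logo.png",
    "http://cdn.example.com/img/banner.png",
    "http://mail.example.org/inbox",
]

def url_distance(a, b):
    return 1.0 - SequenceMatcher(None, a, b).ratio()

n = len(urls)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = url_distance(urls[i], urls[j])

labels = DBSCAN(eps=0.3, min_samples=2, metric="precomputed").fit_predict(D)
for url, label in zip(urls, labels):
    print(label, url)   # label -1 marks noise / unclustered URLs
```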
{
"docid": "d12e99d6dc078d24a171f921ac0ef4d3",
"text": "An omni-directional rolling spherical robot equipped with a high-rate flywheel (BYQ-V) is presented, the gyroscopic effects of high-rate flywheel can further enhance the dynamic stability of the spherical robot. This robot is designed for territory or lunar exploration in the future. The mechanical structure and control system of the robot are given particularly. Using the constrained Lagrangian method, the simplified dynamic model of the robot is derived under some assumptions, Moreover, a Linear Quadratic Regulator (LQR) controller and Percentage Derivative (PD) controller are designed to implement the pose and velocity control of the robot respectively, Finally, the dynamic model and the controllers are validated through simulation study and prototype experiment.",
"title": ""
},
{
"docid": "429f27ab8039a9e720e9122f5b1e3bea",
"text": "We give a new method for direct reconstruction of three-dimensional objects from a few electron micrographs taken at angles which need not exceed a range of 60 degrees. The method works for totally asymmetric objects, and requires little computer time or storage. It is also applicable to X-ray photography, and may greatly reduce the exposure compared to current methods of body-section radiography.",
"title": ""
},
{
"docid": "bde03a5d90507314ce5f034b9b764417",
"text": "Autonomous household robots are supposed to accomplish complex tasks like cleaning the dishes which involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way how they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.",
"title": ""
},
{
"docid": "92dca681aa54142d24e3b7bf1854a2d2",
"text": "Holographic Recurrent Networks (HRNs) are recurrent networks which incorporate associative memory techniques for storing sequential structure. HRNs can be easily and quickly trained using gradient descent techniques to generate sequences of discrete outputs and trajectories through continuous space. The performance of HRNs is found to be superior to that of ordinary recurrent networks on these sequence generation tasks.",
"title": ""
},
{
"docid": "9d73ff3f8528bb412c585d802873fcb4",
"text": "In this work, we introduce a novel interpretation of residual networks showing they are exponential ensembles. This observation is supported by a large-scale lesion study that demonstrates they behave just like ensembles at test time. Subsequently, we perform an analysis showing these ensembles mostly consist of networks that are each relatively shallow. For example, contrary to our expectations, most of the gradient in a residual network with 110 layers comes from an ensemble of very short networks, i.e., only 10-34 layers deep. This suggests that in addition to describing neural networks in terms of width and depth, there is a third dimension: multiplicity, the size of the implicit ensemble. Ultimately, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network – rather, they avoid the problem simply by ensembling many short networks together. This insight reveals that depth is still an open research question and invites the exploration of the related notion of multiplicity.",
"title": ""
}
] |
scidocsrr
|
7a6c13536dd2b138cdfdf822f28d8869
|
A lightweight active service migration framework for computational offloading in mobile cloud computing
|
[
{
"docid": "0e55e64ddc463d0ea151de8efe40183f",
"text": "Vehicular networking has become a significant research area due to its specific features and applications such as standardization, efficient traffic management, road safety and infotainment. Vehicles are expected to carry relatively more communication systems, on board computing facilities, storage and increased sensing power. Hence, several technologies have been deployed to maintain and promote Intelligent Transportation Systems (ITS). Recently, a number of solutions were proposed to address the challenges and issues of vehicular networks. Vehicular Cloud Computing (VCC) is one of the solutions. VCC is a new hybrid technology that has a remarkable impact on traffic management and road safety by instantly using vehicular resources, such as computing, storage and internet for decision making. This paper presents the state-of-the-art survey of vehicular cloud computing. Moreover, we present a taxonomy for vehicular cloud in which special attention has been devoted to the extensive applications, cloud formations, key management, inter cloud communication systems, and broad aspects of privacy and security issues. Through an extensive review of the literature, we design an architecture for VCC, itemize the properties required in vehicular cloud that support this model. We compare this mechanism with normal Cloud Computing (CC) and discuss open research issues and future directions. By reviewing and analyzing literature, we found that VCC is a technologically feasible and economically viable technological shifting paradigm for converging intelligent vehicular networks towards autonomous traffic, vehicle control and perception systems. & 2013 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "aa18c10c90af93f38c8fca4eff2aab09",
"text": "The unabated flurry of research activities to augment various mobile devices by leveraging heterogeneous cloud resources has created a new research domain called Mobile Cloud Computing (MCC). In the core of such a non-uniform environment, facilitating interoperability, portability, and integration among heterogeneous platforms is nontrivial. Building such facilitators in MCC requires investigations to understand heterogeneity and its challenges over the roots. Although there are many research studies in mobile computing and cloud computing, convergence of these two areas grants further academic efforts towards flourishing MCC. In this paper, we define MCC, explain its major challenges, discuss heterogeneity in convergent computing (i.e. mobile computing and cloud computing) and networking (wired and wireless networks), and divide it into two dimensions, namely vertical and horizontal. Heterogeneity roots are analyzed and taxonomized as hardware, platform, feature, API, and network. Multidimensional heterogeneity in MCC results in application and code fragmentation problems that impede development of cross-platform mobile applications which is mathematically described. The impacts of heterogeneity in MCC are investigated, related opportunities and challenges are identified, and predominant heterogeneity handling approaches like virtualization, middleware, and service oriented architecture (SOA) are discussed. We outline open issues that help in identifying new research directions in MCC.",
"title": ""
}
] |
[
{
"docid": "7f799fbe03849971cb3272e35e7b13db",
"text": "Text often expresses the writer's emotional state or evokes emotions in the reader. The nature of emotional phenomena like reading and writing can be interpreted in different ways and represented with different computational models. Affective computing (AC) researchers often use a categorical model in which text data is associated with emotional labels. We introduce a new way of using normative databases as a way of processing text with a dimensional model and compare it with different categorical approaches. The approach is evaluated using four data sets of texts reflecting different emotional phenomena. An emotional thesaurus and a bag-‐of-‐words model are used to generate vectors for each pseudo-‐ document, then for the categorical models three dimensionality reduction techniques are evaluated: Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Non-‐negative Matrix Factorization (NMF). For the dimensional model a normative database is used to produce three-‐dimensional vectors (valence, arousal, dominance) for each pseudo-‐document. This 3-‐dimensional model can be used to generate psychologically driven visualizations. Both models can be used for affect detection based on distances amongst categories and pseudo-‐documents. Experiments show that the categorical model using NMF and the dimensional model tend to perform best. 1. INTRODUCTION Emotions and affective states are pervasive in all forms of communication, including text based, and increasingly recognized as important to understanding the full meaning that a message conveys, or the impact it will have on readers. Given the increasing amounts of textual communication being produced (e.g. emails, user created content, published content) researchers are seeking automated language processing techniques that include models of emotions. Emotions and other affective states (e.g. moods) have been studied by many disciplines. Affect scientists have studied emotions since Darwin (Darwin, 1872), and different schools within psychology have produced different theories representing different ways of interpreting affective phenomena (comprehensively reviewed in Davidson, Scherer and Goldsmith, 2003). In the last decade technologists have also started contributing to this research. Affective Computing (AC) in particular is contributing new ways to improve communication between the sensitive human and the unemotionally computer. AC researchers have developed computational systems that recognize and respond to the affective states of the user (Calvo and D'Mello, 2010). Affect-‐sensitive user interfaces are being developed in a number of domains including gaming, mental health, and learning technologies. The basic tenet behind most AC systems is that automatically recognizing and responding to a user's affective states during interactions with a computer, …",
"title": ""
},
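The dimensional (valence, arousal, dominance) scoring described in the abstract above amounts to averaging normative ratings over the words of a text and comparing the resulting point to category anchors. The tiny lexicon and anchor points below are invented stand-ins for a real normative database such as ANEW.

```python
import numpy as np

# Score a text in VAD space and assign the nearest emotion category.
vad_lexicon = {
    "happy":  (8.2, 6.5, 7.0),
    "calm":   (7.0, 2.5, 6.0),
    "afraid": (2.5, 7.5, 3.0),
    "sad":    (2.0, 3.5, 3.5),
}
anchors = {"joy": (8.0, 6.0, 7.0), "fear": (2.5, 7.5, 3.0), "sadness": (2.0, 3.0, 3.5)}

def vad_vector(text):
    hits = [vad_lexicon[w] for w in text.lower().split() if w in vad_lexicon]
    return np.mean(hits, axis=0) if hits else np.array([5.0, 5.0, 5.0])  # neutral fallback

doc = "I felt sad and afraid all day"
v = vad_vector(doc)
label = min(anchors, key=lambda k: np.linalg.norm(v - np.array(anchors[k])))
print(v, "->", label)
```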
{
"docid": "74dead8ad89ae4a55105fb7ae95d3e20",
"text": "Improved health is one of the many reasons people choose to adopt a vegetarian diet, and there is now a wealth of evidence to support the health benefi ts of a vegetarian diet. Abstract: There is now a significant amount of research that demonstrates the health benefits of vegetarian and plant-based diets, which have been associated with a reduced risk of obesity, diabetes, heart disease, and some types of cancer as well as increased longevity. Vegetarian diets are typically lower in fat, particularly saturated fat, and higher in dietary fiber. They are also likely to include more whole grains, legumes, nuts, and soy protein, and together with the absence of red meat, this type of eating plan may provide many benefits for the prevention and treatment of obesity and chronic health problems, including diabetes and cardiovascular disease. Although a well-planned vegetarian or vegan diet can meet all the nutritional needs of an individual, it may be necessary to pay particular attention to some nutrients to ensure an adequate intake, particularly if the person is on a vegan diet. This article will review the evidence for the health benefits of a vegetarian diet and also discuss strategies for meeting the nutritional needs of those following a vegetarian or plant-based eating pattern.",
"title": ""
},
{
"docid": "84d8058c67870f8606b485e7ad430c58",
"text": "Stanford typed dependencies are a widely desired representation of natural language sentences, but parsing is one of the major computational bottlenecks in text analysis systems. In light of the evolving definition of the Stanford dependencies and developments in statistical dependency parsing algorithms, this paper revisits the question of Cer et al. (2010): what is the tradeoff between accuracy and speed in obtaining Stanford dependencies in particular? We also explore the effects of input representations on this tradeoff: part-of-speech tags, the novel use of an alternative dependency representation as input, and distributional representaions of words. We find that direct dependency parsing is a more viable solution than it was found to be in the past. An accompanying software release can be found at: http://www.ark.cs.cmu.edu/TBSD",
"title": ""
},
{
"docid": "a4a5c6cbec237c2cd6fb3abcf6b4a184",
"text": "Developing automatic diagnostic tools for the early detection of skin cancer lesions in dermoscopic images can help to reduce melanoma-induced mortality. Image segmentation is a key step in the automated skin lesion diagnosis pipeline. In this paper, a fast and fully-automatic algorithm for skin lesion segmentation in dermoscopic images is presented. Delaunay Triangulation is used to extract a binary mask of the lesion region, without the need of any training stage. A quantitative experimental evaluation has been conducted on a publicly available database, by taking into account six well-known state-of-the-art segmentation methods for comparison. The results of the experimental analysis demonstrate that the proposed approach is highly accurate when dealing with benign lesions, while the segmentation accuracy significantly decreases when melanoma images are processed. This behavior led us to consider geometrical and color features extracted from the binary masks generated by our algorithm for classification, achieving promising results for melanoma detection.",
"title": ""
},
{
"docid": "ced3a56c5469528e8fa5784dc0fff5d4",
"text": "This paper explores the relation between a set of behavioural information security governance factors and employees’ information security awareness. To enable statistical analysis between proposed relations, data was collected from two different samples in 24 organisations: 24 information security executives and 240 employees. The results reveal that having a formal unit with explicit responsibility for information security, utilizing coordinating committees, and sharing security knowledge through an intranet site significantly correlates with dimensions of employees’ information security awareness. However, regular identification of vulnerabilities in information systems and related processes is significantly negatively correlated with employees’ information security awareness, in particular managing passwords. The effect of behavioural information security governance on employee information security awareness is an understudied topic. Therefore, this study is explorative in nature and the results are preliminary. Nevertheless, the paper provides implications for both research and practice.",
"title": ""
},
{
"docid": "6e923a586a457521e9de9d4a9cab77ad",
"text": "We present a new approach to the matting problem which splits the task into two steps: interactive trimap extraction followed by trimap-based alpha matting. By doing so we gain considerably in terms of speed and quality and are able to deal with high resolution images. This paper has three contributions: (i) a new trimap segmentation method using parametric max-flow; (ii) an alpha matting technique for high resolution images with a new gradient preserving prior on alpha; (iii) a database of 27 ground truth alpha mattes of still objects, which is considerably larger than previous databases and also of higher quality. The database is used to train our system and to validate that both our trimap extraction and our matting method improve on state-of-the-art techniques.",
"title": ""
},
{
"docid": "0ad68f20acf338f4051a93ba5e273187",
"text": "FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor that can enable a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.",
"title": ""
},
{
"docid": "105f34c3fa2d4edbe83d184b7cf039aa",
"text": "Software development methodologies are constantly evolving due to changing technologies and new demands from users. Today's dynamic business environment has given rise to emergent organizations that continuously adapt their structures, strategies, and policies to suit the new environment [12]. Such organizations need information systems that constantly evolve to meet their changing requirements---but the traditional, plan-driven software development methodologies lack the flexibility to dynamically adjust the development process.",
"title": ""
},
{
"docid": "b7eb2c65c459c9d5776c1e2cba84706c",
"text": "Observers, searching for targets among distractor items, guide attention with a mix of top-down information--based on observers' knowledge--and bottom-up information--stimulus-based and largely independent of that knowledge. There are 2 types of top-down guidance: explicit information (e.g., verbal description) and implicit priming by preceding targets (top-down because it implies knowledge of previous searches). Experiments 1 and 2 separate bottom-up and top-down contributions to singleton search. Experiment 3 shows that priming effects are based more strongly on target than on distractor identity. Experiments 4 and 5 show that more difficult search for one type of target (color) can impair search for other types (size, orientation). Experiment 6 shows that priming guides attention and does not just modulate response.",
"title": ""
},
{
"docid": "220acd23ebb9c69cfb9ee00b063468c6",
"text": "This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of any of approximation algorithms, while also being slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.",
"title": ""
},
{
"docid": "7b25d1c4d20379a8a0fabc7398ea2c28",
"text": "In this paper we introduce an efficient and stable implicit SPH method for the physically-based simulation of incompressible fluids. In the area of computer graphics the most efficient SPH approaches focus solely on the correction of the density error to prevent volume compression. However, the continuity equation for incompressible flow also demands a divergence-free velocity field which is neglected by most methods. Although a few methods consider velocity divergence, they are either slow or have a perceivable density fluctuation.\n Our novel method uses an efficient combination of two pressure solvers which enforce low volume compression (below 0.01%) and a divergence-free velocity field. This can be seen as enforcing incompressibility both on position level and velocity level. The first part is essential for realistic physical behavior while the divergence-free state increases the stability significantly and reduces the number of solver iterations. Moreover, it allows larger time steps which yields a considerable performance gain since particle neighborhoods have to be updated less frequently. Therefore, our divergence-free SPH (DFSPH) approach is significantly faster and more stable than current state-of-the-art SPH methods for incompressible fluids. We demonstrate this in simulations with millions of fast moving particles.",
"title": ""
},
{
"docid": "b8700283c7fb65ba2e814adffdbd84f8",
"text": "Human immunoglobulin preparations for intravenous or subcutaneous administration are the cornerstone of treatment in patients with primary immunodeficiency diseases affecting the humoral immune system. Intravenous preparations have a number of important uses in the treatment of other diseases in humans as well, some for which acceptable treatment alternatives do not exist. We provide an update of the evidence-based guideline on immunoglobulin therapy, last published in 2006. Given the potential risks and inherent scarcity of human immunoglobulin, careful consideration of its indications and administration is warranted.",
"title": ""
},
{
"docid": "c7e3fc9562a02818bba80d250241511d",
"text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.",
"title": ""
},
{
"docid": "5bf9aeb37fc1a82420b2ff4136f547d0",
"text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.",
"title": ""
},
{
"docid": "fc3c4f6c413719bbcf7d13add8c3d214",
"text": "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends-except for tastes in classical/jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.",
"title": ""
},
{
"docid": "f489e2c0d6d733c9e2dbbdb1d7355091",
"text": "In many signal processing applications, the signals provided by the sensors are mixtures of many sources. The problem of separation of sources is to extract the original signals from these mixtures. A new algorithm, based on ideas of backpropagation learning, is proposed for source separation. No a priori information on the sources themselves is required, and the algorithm can deal even with non-linear mixtures. After a short overview of previous works in that eld, we will describe the proposed algorithm. Then, some experimental results will be discussed.",
"title": ""
},
{
"docid": "e5261ee5ea2df8bae7cc82cb4841dea0",
"text": "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.",
"title": ""
},
{
"docid": "22c72f94040cd65dde8e00a7221d2432",
"text": "Research on “How to create a fair, convenient attendance management system”, is being pursued by academics and government departments fervently. This study is based on the biometric recognition technology. The hand geometry machine captures the personal hand geometry data as the biometric code and applies this data in the attendance management system as the attendance record. The attendance records that use this technology is difficult to replicate by others. It can improve the reliability of the attendance records and avoid fraudulent issues that happen when you use a register. This research uses the social survey method-questionnaire to evaluate the theory and practice of introducing biometric recognition technology-hand geometry capturing into the attendance management system.",
"title": ""
},
{
"docid": "ca655b741316e8c65b6b7590833396e1",
"text": "• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
},
{
"docid": "69b3275cb4cae53b3a8888e4fe7f85f7",
"text": "In this paper we propose a way to improve the K-SVD image denoising algorithm. The suggested method aims to reduce the gap that exists between the local processing (sparse-coding of overlapping patches) and the global image recovery (obtained by averaging the overlapping patches). Inspired by game-theory ideas, we define a disagreement-patch as the difference between the intermediate locally denoised patch and its corresponding part in the final outcome. Our algorithm iterates the denoising process several times, applied on modified patches. Those are obtained by subtracting the disagreement-patches from their corresponding input noisy ones, thus pushing the overlapping patches towards an agreement. Experimental results demonstrate the improvement this algorithm leads to.",
"title": ""
}
] |
scidocsrr
|
75fcd9ee01bbccf5e009284699ff1a0d
|
Floral morphology as the main driver of flower-feeding insect occurrences in the Paris region
|
[
{
"docid": "8f4a0c6252586fa01133f9f9f257ec87",
"text": "The pls package implements principal component regression (PCR) and partial least squares regression (PLSR) in R (R Development Core Team 2006b), and is freely available from the Comprehensive R Archive Network (CRAN), licensed under the GNU General Public License (GPL). The user interface is modelled after the traditional formula interface, as exemplified by lm. This was done so that people used to R would not have to learn yet another interface, and also because we believe the formula interface is a good way of working interactively with models. It thus has methods for generic functions like predict, update and coef. It also has more specialised functions like scores, loadings and RMSEP, and a flexible crossvalidation system. Visual inspection and assessment is important in chemometrics, and the pls package has a number of plot functions for plotting scores, loadings, predictions, coefficients and RMSEP estimates. The package implements PCR and several algorithms for PLSR. The design is modular, so that it should be easy to use the underlying algorithms in other functions. It is our hope that the package will serve well both for interactive data analysis and as a building block for other functions or packages using PLSR or PCR. We will here describe the package and how it is used for data analysis, as well as how it can be used as a part of other packages. Also included is a section about formulas and data frames, for people not used to the R modelling idioms.",
"title": ""
}
] |
[
{
"docid": "2b38ac7d46a1b3555fef49a4e02cac39",
"text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.",
"title": ""
},
{
"docid": "fe1697301e7480ae255aa4d9f60b1040",
"text": "Background and aim\nType 2 diabetes mellitus (T2DM) is one of the major diseases confronting the health care systems. In diabetes mellitus (DM), combined use of oral hypoglycemic medications has been shown to be more effective than metformin (Met) alone in glycemic control. This study determined the effects of Ginkgo biloba (GKB) extract as an adjuvant to Met in patients with uncontrolled T2DM.\n\n\nSubjects and methods\nSixty T2DM patients were recruited in a randomized, placebo-controlled, double-blinded, and multicenter trial. The patients, currently using Met, were randomly grouped into those treated with either GKB extract (120 mg/day) or placebo (starch, 120 mg/day) for 90 days. Blood glycated hemoglobin (HbA1c), fasting serum glucose, serum insulin, body mass index (BMI), waist circumference (WC), insulin resistance, and visceral adiposity index (VAI) were determined before (baseline) and after 90 days of GKB extract treatment.\n\n\nResults\nGKB extract significantly decreased blood HbA1c (7.7%±1.2% vs baseline 8.6%±1.6%, P<0.001), fasting serum glucose (154.7±36.1 mg/dL vs baseline 194.4±66.1 mg/dL, P<0.001) and insulin (13.4±7.8 μU/mL vs baseline 18.5±8.9 μU/mL, P=0.006) levels, BMI (31.6±5.1 kg/m2 vs baseline 34.0±6.0 kg/m2, P<0.001), waist WC (102.6±10.5 cm vs baseline 106.0±10.9 cm, P<0.001), and VAI (158.9±67.2 vs baseline 192.0±86.2, P=0.007). GKB extract did not negatively impact the liver, kidney, or hematopoietic functions.\n\n\nConclusion\nGKB extract as an adjuvant was effective in improving Met treatment outcomes in T2DM patients. Thus, it is suggested that GKB extract is an effective dietary supplement for the control of DM in humans.",
"title": ""
},
{
"docid": "246bbb92bc968d20866b8c92a10f8ac7",
"text": "This survey paper provides an overview of content-based music information retrieval systems, both for audio and for symbolic music notation. Matching algorithms and indexing methods are briefly presented. The need for a TREC-like comparison of matching algorithms such as MIREX at ISMIR becomes clear from the high number of quite different methods which so far only have been used on different data collections. We placed the systems on a map showing the tasks and users for which they are suitable, and we find that existing content-based retrieval systems fail to cover a gap between the very general and the very specific retrieval tasks.",
"title": ""
},
{
"docid": "406e6a8966aa43e7538030f844d6c2f0",
"text": "The idea of developing software components was envisioned more than forty years ago. In the past two decades, Component-Based Software Engineering (CBSE) has emerged as a distinguishable approach in software engineering, and it has attracted the attention of many researchers, which has led to many results being published in the research literature. There is a huge amount of knowledge encapsulated in conferences and journals targeting this area, but a systematic analysis of that knowledge is missing. For this reason, we aim to investigate the state-of-the-art of the CBSE area through a detailed literature review. To do this, 1231 studies dating from 1984 to 2012 were analyzed. Using the available evidence, this paper addresses five dimensions of CBSE: main objectives, research topics, application domains, research intensity and applied research methods. The main objectives found were to increase productivity, save costs and improve quality. The most addressed application domains are homogeneously divided between commercial-off-the-shelf (COTS), distributed and embedded systems. Intensity of research showed a considerable increase in the last fourteen years. In addition to the analysis, this paper also synthesizes the available evidence, identifies open issues and points out areas that call for further research. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "5071eba5a173fdd496b41f2c8d24e028",
"text": "We survey four variants of RSA designed to speed up RSA decryption and signing. We only consider variants that are backwards compatible in the sense that a system using one of these variants can interoperate with systems using standard RSA.",
"title": ""
},
{
"docid": "d1a9ac5a11d1f9fbd9b9ee24a199cb70",
"text": "In this paper, we proposed a new robust twin support vector machine (called R-TWSVM) via second order cone programming formulations for classification, which can deal with data with measurement noise efficiently. Preliminary experiments confirm the robustness of the proposed method and its superiority to the traditional robust SVM in both computation time and classification accuracy. Remarkably, since there are only inner products about inputs in our dual problems, this makes us apply kernel trick directly for nonlinear cases. Simultaneously we does not need to solve the extra inverse of matrices, which is totally different with existing TWSVMs. In addition, we also show that the TWSVMs are the special case of our robust model and simultaneously give a new dual form of TWSVM by degenerating R-TWSVM, which successfully overcomes the existing shortcomings of TWSVM. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f282c9ff4afa773af39eb963f4987d09",
"text": "The fast development of computing and communication has reformed the financial markets' dynamics. Nowadays many people are investing and trading stocks through online channels and having access to real-time market information efficiently. There are more opportunities to lose or make money with all the stocks information available throughout the World; however, one should spend a lot of effort and time to follow those stocks and the available instant information. This paper presents a preliminary regarding a multi-agent recommender system for computational investing. This system utilizes a hybrid filtering technique to adaptively recommend the most profitable stocks at the right time according to investor's personal favour. The hybrid technique includes collaborative and content-based filtering. The content-based model uses investor preferences, influencing macro-economic factors, stocks profiles and the predicted trend to tailor to its advices. The collaborative filter assesses the investor pairs' investing behaviours and actions that are proficient in economic market to recommend the similar ones to the target investor.",
"title": ""
},
{
"docid": "92551f47dc9e17e4eeedaa94e98fd1dd",
"text": "This 1.2 /spl mu/m, 33 mW analog-to-digital converter (ADC) demonstrates a family of power reduction techniques including a commutated feedback capacitor switching (CFCS), sharing of the second stage of an op amp between adjacent stages of a pipeline, reusing the first stage of an op amp as the comparator pre-amp, and exploiting parasitic capacitance as common-mode feedback capacitors.",
"title": ""
},
{
"docid": "40e0d6e93c426107cbefbdf3d4ca85b9",
"text": "H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.",
"title": ""
},
{
"docid": "702d38b3ddfd2d0a2f506acbad561f63",
"text": "Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is finding a suitable sequence of proof steps that will lead to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes as input examples of successful proofs and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (contain few false-positive sequences) and that representing existing proofs in such a way can be very useful when guiding new ones.",
"title": ""
},
{
"docid": "ce2b354fee0d2d895d8af2c6642919fa",
"text": "This paper presents a new hybrid dimensionality reduction method to seek projection through optimization of both structural risk (supervised criterion) and data independence (unsupervised criterion). Classification accuracy is used as a metric to evaluate the performance of the method. By minimizing the structural risk, projection originated from the decision boundaries directly improves the classification performance from a supervised perspective. From an unsupervised perspective, projection can also be obtained based on maximum independence among features (or attributes) in data to indirectly achieve better classification accuracy over more intrinsic representation of the data. Orthogonality interrelates the two sets of projections such that minimum redundancy exists between the projections, leading to more effective dimensionality reduction. Experimental results show that the proposed hybrid dimensionality reduction method that satisfies both criteria simultaneously provides higher classification performance, especially for noisy data sets, in relatively lower dimensional space than various existing methods.",
"title": ""
},
{
"docid": "4c64fb50bc70532d9a0ba4b6847525ed",
"text": "An 18-GHz range frequency synthesizer is implemented in 0.13-mum SiGe BiCMOS technology as part of a 60-GHz superheterodyne transceiver chipset. It provides for RF channels of 56.5-64 GHz in 500-MHz steps, and features a phase-rotating multi-modulus divider capable of sub-integer division. Output frequency range from the synthesizer is 16.0 to 18.8 GHz, while the enabled RF frequency range is 3.5 times this, or 55.8 to 65.8 GHz. The measured RMS phase noise of the synthesizer is 0.8deg (1 MHz to 1 GHz integration), while phase noise at 100-kHz and 10-MHz offsets are -90 and -124 dBc/Hz, respectively. Reference spurs are 69 dBc; sub-integer spurs are -65 dBc; and combined power consumption from 1.2 and 2.7 V is 144 mW.",
"title": ""
},
{
"docid": "46db4cfa5ccb08da3ca884ad794dc419",
"text": "Mutation testing of Python programs raises a problem of incompetent mutants. Incompetent mutants cause execution errors due to inconsistency of types that cannot be resolved before run-time. We present a practical approach in which incompetent mutants can be generated, but the solution is transparent for a user and incompetent mutants are detected by a mutation system during test execution. Experiments with 20 traditional and object-oriented operators confirmed that the overhead can be accepted. The paper presents an experimental evaluation of the first- and higher-order mutation. Four algorithms to the 2nd and 3rd order mutant generation were applied. The impact of code coverage consideration on the process efficiency is discussed. The experiments were supported by the MutPy system for mutation testing of Python programs.",
"title": ""
},
{
"docid": "39490ce3446ac22bdc6042a3a38bc5ee",
"text": "The ultimate goal of an information provider is to satisfy the user information needs. That is, to provide the user with the right information, at the right time, through the right means. A prerequisite for developing personalised services is to rely on user profiles representing users’ information needs. In this paper we will first address the issue of presenting a general user profile model. Then, the general user profile model will be customised for digital libraries users.",
"title": ""
},
{
"docid": "fe4428aa7ae69111bb55d45c2941566e",
"text": "In this paper, we determine ordering quantity and reorder point for aircraft consumable spare parts. We use continuous review model to propose a spare part inventory policy that can be used in a aircraft maintenance company in Indonesia. We employ ABC classification system to categorize the spare parts based on their dollar contribution. We focus our research on managing the inventory level for spare parts on class A and B which commonly known as important classes. The result from the research indicates that the continuous review policy gives a significant amount of saving compared to an existing policy used by the company.",
"title": ""
},
{
"docid": "69d16861f969b2aaaa6658a754268786",
"text": "In this paper, we introduce a bilinear composition loss function to address the problem of image dehazing. Previous methods in image dehazing use a two-stage approach which first estimate the transmission map followed by clear image estimation. The drawback of a two-stage method is that it tends to boost local image artifacts such as noise, aliasing and blocking. This is especially the case for heavy haze images captured with a low quality device. Our method is based on convolutional neural networks. Unique in our method is the bilinear composition loss function which directly model the correlations between transmission map, clear image, and atmospheric light. This allows errors to be back-propagated to each sub-network concurrently, while maintaining the composition constraint to avoid overfitting of each sub-network. We evaluate the effectiveness of our proposed method using both synthetic and real world examples. Extensive experiments show that our method outperfoms state-of-the-art methods especially for haze images with severe noise level and compressions.",
"title": ""
},
{
"docid": "bd700aba43a8a8de5615aa1b9ca595a7",
"text": "Cloud computing has formed the conceptual and infrastructural basis for tomorrow’s computing. The global computing infrastructure is rapidly moving towards cloud based architecture. While it is important to take advantages of could based computing by means of deploying it in diversified sectors, the security aspects in a cloud based computing environment remains at the core of interest. Cloud based services and service providers are being evolved which has resulted in a new business trend based on cloud technology. With the introduction of numerous cloud based services and geographically dispersed cloud service providers, sensitive information of different entities are normally stored in remote servers and locations with the possibilities of being exposed to unwanted parties in situations where the cloud servers storing those information are compromised. If security is not robust and consistent, the flexibility and advantages that cloud computing has to offer will have little credibility. This paper presents a review on the cloud computing concepts as well as security issues inherent within the context of cloud computing and cloud",
"title": ""
},
{
"docid": "347ffb664378b56a5ae3a45d1251d7b7",
"text": "We present Essentia 2.0, an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPL license. It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. The library is also wrapped in Python and includes a number of predefined executable extractors for the available music descriptors, which facilitates its use for fast prototyping and allows setting up research experiments very rapidly. Furthermore, it includes a Vamp plugin to be used with Sonic Visualiser for visualization purposes. The library is cross-platform and currently supports Linux, Mac OS X, and Windows systems. Essentia is designed with a focus on the robustness of the provided music descriptors and is optimized in terms of the computational cost of the algorithms. The provided functionality, specifically the music descriptors included in-the-box and signal processing algorithms, is easily expandable and allows for both research experiments and development of large-scale industrial applications.",
"title": ""
},
{
"docid": "a29a51df4eddfa0239903986f4011532",
"text": "In recent years additive manufacturing, or threedimensional (3D) printing, it is becoming increasingly widespread and used also in the medical and biomedical field [1]. 3D printing is a technology that allows to print, in plastic or other material, solid objects of any shape from its digital model. The printing process takes place by overlapping layers of material corresponding to cross sections of the final product. The 3D models can be created de novo, with a 3D modeling software, or it is possible to replicate an existing object with the use of a 3D scanner. In the past years, the development of appropriate software packages allowed to generate 3D printable anatomical models from computerized tomography, magnetic resonance imaging and ultrasound scans [2,3]. Up to now there have been 3D printed objects of nearly any size (from nanostructures to buildings) and material. Plastics, metals, ceramics, graphene and even derivatives of human tissues. The so-called “bio-printers”, in fact, allow to print one above the other thin layers of cells immersed in a gelatinous matrix. Recent advances of 3D bioprinting enabled researchers to print biocompatible scaffolds and human tissues such as skin, bone, cartilage, vessels and are driving to the design and 3D printing of artificial organs like liver and kidney [4]. Dentistry, prosthetics, craniofacial reconstructive surgery, neurosurgery and orthopedic surgery are among the disciplines that have already shown versatility and possible applications of 3D printing in adults and children [2,5]. Only a few experiences have instead been reported in newborn and infants. 3D printed individualized bioresorbable airway splints have been used for the treatment of three infants with severe tracheobronchomalacia, ensuring resolution of pulmonary and extrapulmonary symptoms [6,7]. A 3D model of a complex congenital heart defects have been used for preoperative planning of intraoperative procedures, allowing surgeons to repair a complex defect in a single intervention [8]. As already shown for children with obstructive sleep apnea and craniofacial anomalies [9]. personalized 3D printed masks could improve CPAP effectiveness and comfort also in term and preterm neonates. Neonatal emergency transport services and rural hospitals could also benefit from this technology, making possible to print medical devices spare parts, surgical and medical instruments wherever not readily available. It is envisaged that 3D printing, in the next future, will give its contribute toward the individualization of neonatal care, although further multidisciplinary studies are still needed to evaluate safety, possible applications and realize its full potential.",
"title": ""
},
{
"docid": "378c3b785db68bd5efdf1ad026c901ea",
"text": "Intrinsically switched tunable filters are switched on and off using the tuning elements that tune their center frequencies and/or bandwidths, without requiring an increase in the tuning range of the tuning elements. Because external RF switches are not needed, substantial improvements in insertion loss, linearity, dc power consumption, control complexity, size, and weight are possible compared to conventional approaches. An intrinsically switched varactor-tuned bandstop filter and bandpass filter bank are demonstrated here for the first time. The intrinsically switched bandstop filter prototype has a second-order notch response with more than 50 dB of rejection continuously tunable from 665 to 1000 MHz (50%) with negligible passband ripple in the intrinsic off state. The intrinsically switched tunable bandpass filter bank prototype, comprised of three third-order bandpass filters, has a constant 50-MHz bandwidth response continuously tunable from 740 to 1644 MHz (122%) with less than 5 dB of passband insertion loss and more than 40 dB of isolation between bands.",
"title": ""
}
] |
scidocsrr
|
839740a1ad696b4703f9eff52b5afefb
|
Design of Power and Area Efficient Approximate Multipliers
|
[
{
"docid": "a10752bb80ad47e18ef7dbcd83d49ff7",
"text": "Approximate computing has gained significant attention due to the popularity of multimedia applications. In this paper, we propose a novel inaccurate 4:2 counter that can effectively reduce the partial product stages of the Wallace Multiplier. Compared to the normal Wallace multiplier, our proposed multiplier can reduce 10.74% of power consumption and 9.8% of delay on average, with an error rate from 0.2% to 13.76% The accuracy of amplitude is higher than 99% In addition, we further enhance the design with error-correction units to provide accurate results. The experimental results show that the extra power consumption of correct units is lower than 6% on average. Compared to the normal Wallace multiplier, the average latency of our proposed multiplier with EDC is 6% faster when the bit-width is 32, and the power consumption is still 10% lower than that of the Wallace multiplier.",
"title": ""
},
{
"docid": "962ab9e871dc06c3cd290787dc7e71aa",
"text": "The conventional digital hardware computational blocks with different structures are designed to compute the precise results of the assigned calculations. The main contribution of our proposed Bio-inspired Imprecise Computational blocks (BICs) is that they are designed to provide an applicable estimation of the result instead of its precise value at a lower cost. These novel structures are more efficient in terms of area, speed, and power consumption with respect to their precise rivals. Complete descriptions of sample BIC adder and multiplier structures as well as their error behaviors and synthesis results are introduced in this paper. It is then shown that these BIC structures can be exploited to efficiently implement a three-layer face recognition neural network and the hardware defuzzification block of a fuzzy processor.",
"title": ""
}
] |
[
{
"docid": "1349c5daedd71bdfccaa0ea48b3fd54a",
"text": "OBJECTIVE\nCraniosacral therapy (CST) is an alternative treatment approach, aiming to release restrictions around the spinal cord and brain and subsequently restore body function. A previously conducted systematic review did not obtain valid scientific evidence that CST was beneficial to patients. The aim of this review was to identify and critically evaluate the available literature regarding CST and to determine the clinical benefit of CST in the treatment of patients with a variety of clinical conditions.\n\n\nMETHODS\nComputerised literature searches were performed in Embase/Medline, Medline(®) In-Process, The Cochrane library, CINAHL, and AMED from database start to April 2011. Studies were identified according to pre-defined eligibility criteria. This included studies describing observational or randomised controlled trials (RCTs) in which CST as the only treatment method was used, and studies published in the English language. The methodological quality of the trials was assessed using the Downs and Black checklist.\n\n\nRESULTS\nOnly seven studies met the inclusion criteria, of which three studies were RCTs and four were of observational study design. Positive clinical outcomes were reported for pain reduction and improvement in general well-being of patients. Methodological Downs and Black quality scores ranged from 2 to 22 points out of a theoretical maximum of 27 points, with RCTs showing the highest overall scores.\n\n\nCONCLUSION\nThis review revealed the paucity of CST research in patients with different clinical pathologies. CST assessment is feasible in RCTs and has the potential of providing valuable outcomes to further support clinical decision making. However, due to the current moderate methodological quality of the included studies, further research is needed.",
"title": ""
},
{
"docid": "6df12ee53551f4a3bd03bca4ca545bf1",
"text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.",
"title": ""
},
{
"docid": "cc8766fc94cf9865c9035c7b3d3ce4a6",
"text": "Image features known as “gist descriptors” have recently been applied to the malware classification problem. In this research, we implement, test, and analyze a malware score based on gist descriptors, and verify that the resulting score yields very strong classification results. We also analyze the robustness of this gist-based scoring technique when applied to obfuscated malware, and we perform feature reduction to determine a minimal set of gist features. Then we compare the effectiveness of a deep learning technique to this gist-based approach. While scoring based on gist descriptors is effective, we show that our deep learning technique performs equally well. A potential advantage of the deep learning approach is that there is no need to extract the gist features when training or scoring.",
"title": ""
},
{
"docid": "609c3a75308eb951079373feb88432ae",
"text": "We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie one from Wikipedia and the other from IMDb written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize answers from the other version. This unique characteristic of DuoRC where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design, that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating external background knowledge. Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-ofthe-art neural RC models which have achieved near human performance on the SQuAD dataset (Rajpurkar et al., 2016b), even when coupled with traditional NLP techniques to address the challenges presented in DuoRC exhibit very poor performance (F1 score of 37.42% on DuoRC v/s 86% on SQuAD dataset). This opens up several interesting research avenues wherein DuoRC could complement other RC datasets to explore novel neural approaches for studying language understanding.",
"title": ""
},
{
"docid": "3ff01763def34800cf8afb9fc5fa9c83",
"text": "The emerging machine learning technique called support vector machines is proposed as a method for performing nonlinear equalization in communication systems. The support vector machine has the advantage that a smaller number of parameters for the model can be identified in a manner that does not require the extent of prior information or heuristic assumptions that some previous techniques require. Furthermore, the optimization method of a support vector machine is quadratic programming, which is a well-studied and understood mathematical programming technique. Support vector machine simulations are carried out on nonlinear problems previously studied by other researchers using neural networks. This allows initial comparison against other techniques to determine the feasibility of using the proposed method for nonlinear detection. Results show that support vector machines perform as well as neural networks on the nonlinear problems investigated. A method is then proposed to introduce decision feedback processing to support vector machines to address the fact that intersymbol interference (ISI) data generates input vectors having temporal correlation, whereas a standard support vector machine assumes independent input vectors. Presenting the problem from the viewpoint of the pattern space illustrates the utility of a bank of support vector machines. This approach yields a nonlinear processing method that is somewhat different than the nonlinear decision feedback method whereby the linear feedback filter of the decision feedback equalizer is replaced by a Volterra filter. A simulation using a linear system shows that the proposed method performs equally to a conventional decision feedback equalizer for this problem.",
"title": ""
},
{
"docid": "ba5b796721787105e48ad2794cfc11cc",
"text": "Real world applications of machine learning in natural language processing can span many different domains and usually require a huge effort for the annotation of domain specific training data. For this reason, domain adaptation techniques have gained a lot of attention in the last years. In order to derive an effective domain adaptation, a good feature representation across domains is crucial as well as the generalisation ability of the predictive model. In this paper we address the problem of domain adaptation for sentiment classification by combining deep learning, for acquiring a cross-domain high-level feature representation, and ensemble methods, for reducing the cross-domain generalization error. The proposed adaptation framework has been evaluated on a benchmark dataset composed of reviews of four different Amazon category of products, significantly outperforming the state of the art methods.",
"title": ""
},
{
"docid": "b51d531c2ff106124f96a4287e466b90",
"text": "Detecting buildings from very high resolution (VHR) aerial and satellite images is extremely useful in map making, urban planning, and land use analysis. Although it is possible to manually locate buildings from these VHR images, this operation may not be robust and fast. Therefore, automated systems to detect buildings from VHR aerial and satellite images are needed. Unfortunately, such systems must cope with major problems. First, buildings have diverse characteristics, and their appearance (illumination, viewing angle, etc.) is uncontrolled in these images. Second, buildings in urban areas are generally dense and complex. It is hard to detect separate buildings from them. To overcome these difficulties, we propose a novel building detection method using local feature vectors and a probabilistic framework. We first introduce four different local feature vector extraction methods. Extracted local feature vectors serve as observations of the probability density function (pdf) to be estimated. Using a variable-kernel density estimation method, we estimate the corresponding pdf. In other words, we represent building locations (to be detected) in the image as joint random variables and estimate their pdf. Using the modes of the estimated density, as well as other probabilistic properties, we detect building locations in the image. We also introduce data and decision fusion methods based on our probabilistic framework to detect building locations. We pick certain crops of VHR panchromatic aerial and Ikonos satellite images to test our method. We assume that these crops are detected using our previous urban region detection method. Our test images are acquired by two different sensors, and they have different spatial resolutions. Also, buildings in these images have diverse characteristics. Therefore, we can test our methods on a diverse data set. Extensive tests indicate that our method can be used to automatically detect buildings in a robust and fast manner in Ikonos satellite and our aerial images.",
"title": ""
},
{
"docid": "2399e1ffd634417f00273993ad0ba466",
"text": "Requirements prioritization aims at identifying the most important requirements for a software system, a crucial step when planning for system releases and deciding which requirements to implement in each release. Several prioritization methods and supporting tools have been proposed so far. How to evaluate their properties, with the aim of supporting the selection of the most appropriate method for a specific project, is considered a relevant question. In this paper, we present an empirical study aiming at evaluating two state-of-the art tool-supported requirements prioritization methods, AHP and CBRank. We focus on three measures: the ease of use, the time-consumption and the accuracy. The experiment has been conducted with 23 experienced subjects on a set of 20 requirements from a real project. Results indicate that for the first two characteristics CBRank overcomes AHP, while for the accuracy AHP performs better than CBRank, even if the resulting ranks from the two methods are very similar. The majority of the users found CBRank the ‘‘overall best”",
"title": ""
},
{
"docid": "3fd747a983ef1a0e5eff117b8765d4b3",
"text": "We study centrality in urban street patterns of different world cities represented as networks in geographical space. The results indicate that a spatial analysis based on a set of four centrality indices allows an extended visualization and characterization of the city structure. A hierarchical clustering analysis based on the distributions of centrality has a certain capacity to distinguish different classes of cities. In particular, self-organized cities exhibit scale-free properties similar to those found in nonspatial networks, while planned cities do not.",
"title": ""
},
{
"docid": "45ea8e1e27f6c687d957af561aca5188",
"text": "Impedance matching networks for nonlinear devices such as amplifiers and rectifiers are normally very challenging to design, particularly for broadband and multiband devices. A novel design concept for a broadband high-efficiency rectenna without using matching networks is presented in this paper for the first time. An off-center-fed dipole antenna with relatively high input impedance over a wide frequency band is proposed. The antenna impedance can be tuned to the desired value and directly provides a complex conjugate match to the impedance of a rectifier. The received RF power by the antenna can be delivered to the rectifier efficiently without using impedance matching networks; thus, the proposed rectenna is of a simple structure, low cost, and compact size. In addition, the rectenna can work well under different operating conditions and using different types of rectifying diodes. A rectenna has been designed and made based on this concept. The measured results show that the rectenna is of high power conversion efficiency (more than 60%) in two wide bands, which are 0.9–1.1 and 1.8–2.5 GHz, for mobile, Wi-Fi, and ISM bands. Moreover, by using different diodes, the rectenna can maintain its wide bandwidth and high efficiency over a wide range of input power levels (from 0 to 23 dBm) and load values (from 200 to 2000 Ω). It is, therefore, suitable for high-efficiency wireless power transfer or energy harvesting applications. The proposed rectenna is general and simple in structure without the need for a matching network hence is of great significance for many applications.",
"title": ""
},
{
"docid": "24e73ff615bb27e3f8f16746f496b689",
"text": "A physically-based computational technique was investigated which is intended to estimate an initial guess for complex values of the wavenumber of a disturbance leading to the solution of the fourth-order Orr–Sommerfeld (O–S) equation. The complex wavenumbers, or eigenvalues, were associated with the stability characteristics of a semi-infinite shear flow represented by a hyperbolic-tangent function. This study was devoted to the examination of unstable flow assuming a spatially growing disturbance and is predicated on the fact that flow instability is correlated with elevated levels of perturbation kinetic energy per unit mass. A MATLAB computer program was developed such that the computational domain was selected to be in quadrant IV, where the real part of the wavenumber is positive and the imaginary part is negative to establish the conditions for unstable flow. For a given Reynolds number and disturbance wave speed, the perturbation kinetic energy per unit mass was computed at various node points in the selected subdomain of the complex plane. The initial guess for the complex wavenumber to start the solution process was assumed to be associated with the highest calculated perturbation kinetic energy per unit mass. Once the initial guess had been approximated, it was used to obtain the solution to the O–S equation by performing a Runge–Kutta integration scheme that computationally marched from the far field region in the shear layer down to the lower solid boundary. Results compared favorably with the stability characteristics obtained from an earlier study for semi-infinite Blasius flow over a flat boundary. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c6725a67f1fa2b091e0bbf980e6260be",
"text": "This paper examines job satisfaction and employees’ turnover intentions in Total Nigeria PLC in Lagos State. The paper highlights and defines basic concepts of job satisfaction and employees’ turnover intention. It specifically considered satisfaction with pay, nature of work and supervision as the three facets of job satisfaction that affect employee turnover intention. To achieve this objective, authors adopted a survey method by administration of questionnaires, conducting interview and by reviewing archival documents as well as review of relevant journals and textbooks in this field of learning as means of data collection. Four (4) major hypotheses were derived from literature and respective null hypotheses tested at .05 level of significance It was found that specifically job satisfaction reduces employees’ turnover intention and that Total Nigeria PLC adopts standard pay structure, conducive nature of work and efficient supervision not only as strategies to reduce employees’ turnover but also as the company retention strategy.",
"title": ""
},
{
"docid": "5350ffea7a4187f0df11fd71562aba43",
"text": "The presence of buried landmines is a serious threat in many areas around the World. Despite various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy to use systems providing accurate performance are still under research. Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, albeit system training can be performed simply relying on synthetically generated data. Results show that it is possible to reach 95% of detection accuracy without training in real acquisition of landmine profiles.",
"title": ""
},
{
"docid": "bb5c4d59f598427ea1e2946ae74a7cc8",
"text": "In a nutshell: This course comprehensively covers important user experience (UX) evaluation methods as well as opportunities and challenges of UX evaluation in the area of entertainment and games. The course is an ideal forum for attendees to gain insight into state-of-the art user experience evaluation methods going way-beyond standard usability and user experience evaluation approaches in the area of human-computer interaction. It surveys and assesses the efforts of user experience evaluation of the gaming and human computer interaction communities during the last 15 years.",
"title": ""
},
{
"docid": "69e4bb63a9041b3c95fba1a903bc0e5c",
"text": "Compressed sensing is a novel research area, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that high-dimensional signals, which allow a sparse representation by a suitable basis or, more generally, a frame, can be recovered from what was previously considered highly incomplete linear measurements by using efficient algorithms. This article shall serve as an introduction to and a survey about compressed sensing.",
"title": ""
},
{
"docid": "27b3cd45e0bdb279a5aa5f1f082ea850",
"text": "Tensors (also called multiway arrays) are a generalization of vectors and matrices to higher dimensions based on multilinear algebra. The development of theory and algorithms for tensor decompositions (factorizations) has been an active area of study within the past decade, e.g., [1] and [2]. These methods have been successfully applied to many problems in unsupervised learning and exploratory data analysis. Multiway analysis enables one to effectively capture the multilinear structure of the data, which is usually available as a priori information about the data. Hence, it might provide advantages over matrix factorizations by enabling one to more effectively use the underlying structure of the data. Besides unsupervised tensor decompositions, supervised tensor subspace regression and classification formulations have been also successfully applied to a variety of fields including chemometrics, signal processing, computer vision, and neuroscience.",
"title": ""
},
{
"docid": "5301c9ab75519143c5657b9fa780cfcb",
"text": "Although discriminatively trained classifiers are usually more accurate when labeled training data is abundant, previous work has sh own that when training data is limited, generative classifiers can ou t-perform them. This paper describes a hybrid model in which a high-dim ensional subset of the parameters are trained to maximize generative likelihood, and another, small, subset of parameters are discriminativ ely trained to maximize conditional likelihood. We give a sample complexi ty bound showing that in order to fit the discriminative parameters we ll, the number of training examples required depends only on the logari thm of the number of feature occurrences and feature set size. Experim ental results show that hybrid models can provide lower test error and can p roduce better accuracy/coverage curves than either their purely g nerative or purely discriminative counterparts. We also discuss sever al advantages of hybrid models, and advocate further work in this area.",
"title": ""
},
{
"docid": "b876e62db8a45ab17d3a9d217e223eb7",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
},
{
"docid": "3aaffdda034c762ad36954386d796fb9",
"text": "KNTU CDRPM is a cable driven redundant parallel manipulator, which is under investigation for possible high speed and large workspace applications. This newly developed mechanisms have several advantages compared to the conventional parallel mechanisms. Its rotational motion range is relatively large, its redundancy improves safety for failure in cables, and its design is suitable for long-time high acceleration motions. In this paper, collision-free workspace of the manipulator is derived by applying fast geometrical intersection detection method, which can be used for any fully parallel manipulator. Implementation of the algorithm on the Neuron design of the KNTU CDRPM leads to significant results, which introduce a new style of design of a spatial cable-driven parallel manipulators. The results are elaborated in three presentations; constant-orientation workspace, total orientation workspace and orientation workspace.",
"title": ""
},
{
"docid": "503756888df43d745e4fb5051f8855fb",
"text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations — for instance, a single email leak can potentially cause expensive law suits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. Not only does this paper introduce the important problem of email leak detection, but also presents an effective solution that can be easily implemented in any email client — with no changes in the email server side.",
"title": ""
}
] |
scidocsrr
|
58290864dd532a48b4558668cb8b6eda
|
Gremlin: Systematic Resilience Testing of Microservices
|
[
{
"docid": "35260e253551bcfd21ce6d08c707f092",
"text": "Current debugging and optimization methods scale poorly to deal with the complexity of modern Internet services, in which a single request triggers parallel execution of numerous heterogeneous software components over a distributed set of computers. The Achilles’ heel of current methods is the need for a complete and accurate model of the system under observation: producing such a model is challenging because it requires either assimilating the collective knowledge of hundreds of programmers responsible for the individual components or restricting the ways in which components interact. Fortunately, the scale of modern Internet services offers a compensating benefit: the sheer volume of requests serviced means that, even at low sampling rates, one can gather a tremendous amount of empirical performance observations and apply “big data” techniques to analyze those observations. In this paper, we show how one can automatically construct a model of request execution from pre-existing component logs by generating a large number of potential hypotheses about program behavior and rejecting hypotheses contradicted by the empirical observations. We also show how one can validate potential performance improvements without costly implementation effort by leveraging the variation in component behavior that arises naturally over large numbers of requests to measure the impact of optimizing individual components or changing scheduling behavior. We validate our methodology by analyzing performance traces of over 1.3 million requests to Facebook servers. We present a detailed study of the factors that affect the end-to-end latency of such requests. We also use our methodology to suggest and validate a scheduling optimization for improving Facebook request latency.",
"title": ""
}
] |
[
{
"docid": "91c5ad5a327026a424454779f96da601",
"text": "We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.",
"title": ""
},
{
"docid": "bdf8d4a8862aad3631f5def11b13b101",
"text": "We examine the relationship between children's kindergarten attention skills and developmental patterns of classroom engagement throughout elementary school in disadvantaged urban neighbourhoods. Kindergarten measures include teacher ratings of classroom behavior, direct assessments of number knowledge and receptive vocabulary, and parent-reported family characteristics. From grades 1 through 6, teachers also rated children's classroom engagement. Semi-parametric mixture modeling generated three distinct trajectories of classroom engagement (n = 1369, 50% boys). Higher levels of kindergarten attention were proportionately associated with greater chances of belonging to better classroom engagement trajectories compared to the lowest classroom engagement trajectory. In fact, improvements in kindergarten attention reliably increased the likelihood of belonging to more productive classroom engagement trajectories throughout elementary school, above and beyond confounding child and family factors. Measuring the development of classroom productivity is pertinent because such dispositions represent precursors to mental health, task-orientation, and persistence in high school and workplace behavior in adulthood.",
"title": ""
},
{
"docid": "846ae985f61a0dcdb1ff3a2226c1b41a",
"text": "OBJECTIVE\nThis article provides an overview of tactile displays. Its goal is to assist human factors practitioners in deciding when and how to employ the sense of touch for the purpose of information representation. The article also identifies important research needs in this area.\n\n\nBACKGROUND\nFirst attempts to utilize the sense of touch as a medium for communication date back to the late 1950s. For the next 35 years progress in this area was relatively slow, but recent years have seen a surge in the interest and development of tactile displays and the integration of tactile signals in multimodal interfaces. A thorough understanding of the properties of this sensory channel and its interaction with other modalities is needed to ensure the effective and robust use of tactile displays.\n\n\nMETHODS\nFirst, an overview of vibrotactile perception is provided. Next, the design of tactile displays is discussed with respect to available technologies. The potential benefit of including tactile cues in multimodal interfaces is discussed. Finally, research needs in the area of tactile information presentation are highlighted.\n\n\nRESULTS\nThis review provides human factors researchers and interface designers with the requisite knowledge for creating effective tactile interfaces. It describes both potential benefits and limitations of this approach to information presentation.\n\n\nCONCLUSION\nThe sense of touch represents a promising means of supporting communication and coordination in human-human and human-machine systems.\n\n\nAPPLICATION\nTactile interfaces can support numerous functions, including spatial orientation and guidance, attention management, and sensory substitution, in a wide range of domains.",
"title": ""
},
{
"docid": "dff035a6e773301bd13cd0b71d874861",
"text": "Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data. A number of approaches have been proposed to extract representative features from 3D skeletal data, most commonly hard wired geometric or bio-inspired shape context features. We propose a hierarchial dynamic framework that first extracts high level skeletal joints features and then uses the learned representation for estimating emission probability to infer action sequences. Currently gaussian mixture models are the dominant technique for modeling the emission distribution of hidden Markov models. We show that better action recognition using skeletal features can be achieved by replacing gaussian mixture models by deep neural networks that contain many layers of features to predict probability distributions over states of hidden Markov models. The framework can be easily extended to include a ergodic state to segment and recognize actions simultaneously.",
"title": ""
},
{
"docid": "fe97095f2af18806e7032176c6ac5d89",
"text": "Targeted social engineering attacks in the form of spear phishing emails, are often the main gimmick used by attackers to infiltrate organizational networks and implant state-of-the-art Advanced Persistent Threats (APTs). Spear phishing is a complex targeted attack in which, an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different, and more powerful than normal phishing, is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, by using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack emails sent to 5,912 non victims; and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data, and achieved an overall maximum accuracy of 97.76% in identifying spear phishing emails. We used a combination of social features from LinkedIn profiles, and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28% without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. To the best of our knowledge, this is one of the first attempts to make use of a combination of stylometric features extracted from emails, and social features extracted from an online social network to detect targeted spear phishing emails.",
"title": ""
},
{
"docid": "30aeb5f14438b03f7cdaee9783273d97",
"text": "The status of English grammar teaching in English teaching has weakened and even once disappeared in part English class; until the late 1980s, foreign English teachers had a consistent view of the importance of grammar teaching. In recent years, more and more domestic scholars begin to think about the situation of China and explore the grammar teaching method. This article will review the explicit grammar instruction and implicit grammar teaching research, collect and analyze the integration of explicit grammar instruction and implicit grammar teaching strategy and its advantages in the grammar teaching.",
"title": ""
},
{
"docid": "9bbd6a417b373fb19f691d1edc728a6c",
"text": "The increasing advances in hardware technology for sensor processing and mobile technology has resulted in greater access and availability of sensor data from a wide variety of applications. For example, the commodity mobile devices contain a wide variety of sensors such as GPS, accelerometers, and other kinds of data. Many other kinds of technology such as RFID-enabled sensors also produce large volumes of data over time. This has lead to a need for principled methods for efficient sensor data processing. This chapter will provide an overview of the challenges of sensor data analytics and the different areas of research in this context. We will also present the organization of the chapters in this book in this context.",
"title": ""
},
{
"docid": "fb1f3f300bcd48d99f0a553a709fdc89",
"text": "This work includes a high step up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, Electronic lighting systems and Electrical vehicles. The whole system consists of a Photovoltaic panel (PV), High step up DC-DC converter with Maximum Power Point Tracking (MPPT) and DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). D-sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step up DC-DC converter which comprises of both coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increases overall system efficiency. MATLAB/simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.",
"title": ""
},
{
"docid": "9b0ddf08b06c625ea579d9cee6c8884b",
"text": "A frequency-reconfigurable bow-tie antenna for Bluetooth, WiMAX, and WLAN applications is proposed. The bow-tie radiator is printed on two sides of the substrate and is fed by a microstripline continued by a pair of parallel strips. By embedding p-i-n diodes over the bow-tie arms, the effective electrical length of the antenna can be changed, leading to an electrically tunable operating band. The simple biasing circuit used in this design eliminates the need for extra bias lines, and thus avoids distortion of the radiation patterns. Measured results are in good agreement with simulations, which shows that the proposed antenna can be tuned to operate in either 2.2-2.53, 2.97-3.71, or 4.51-6 GHz band with similar radiation patterns.",
"title": ""
},
{
"docid": "70789bc929ef7d36f9bb4a02793f38f5",
"text": "Lock managers are among the most studied components in concurrency control and transactional systems. However, one question seems to have been generally overlooked: “When there are multiple lock requests on the same object, which one(s) should be granted first?” Nearly all existing systems rely on a FIFO (first in, first out) strategy to decide which transaction(s) to grant the lock to. In this paper, however, we show that the lock scheduling choices have significant ramifications on the overall performance of a transactional system. Despite the large body of research on job scheduling outside the database context, lock scheduling presents subtle but challenging requirements that render existing results on scheduling inapt for a transactional database. By carefully studying this problem, we present the concept of contention-aware scheduling, show the hardness of the problem, and propose novel lock scheduling algorithms (LDSF and bLDSF), which guarantee a constant factor approximation of the best scheduling. We conduct extensive experiments using a popular database on both TPC-C and a microbenchmark. Compared to FIFO— the default scheduler in most database systems—our bLDSF algorithm yields up to 300x speedup in overall transaction latency. Alternatively, our LDSF algorithm, which is simpler and achieves comparable performance to bLDSF, has already been adopted by open-source community, and was chosen as the default scheduling strategy in MySQL 8.0.3+. PVLDB Reference Format: Boyu Tian, Jiamin Huang, Barzan Mozafari, Grant Schoenebeck. Contention-Aware Lock Scheduling for Transactional Databases. PVLDB, 11 (5): xxxx-yyyy, 2018. DOI: 10.1145/3177732.3177740",
"title": ""
},
{
"docid": "9a05c95de1484df50a5540b31df1a010",
"text": "Resumen. Este trabajo trata sobre un sistema de monitoreo remoto a través de una pantalla inteligente para sensores de temperatura y corriente utilizando una red híbrida CAN−ZIGBEE. El CAN bus es usado como medio de transmisión de datos a corta distancia mientras que Zigbee es empleado para que cada nodo de la red pueda interactuar de manera inalámbrica con el nodo principal. De esta manera la red híbrida combina las ventajas de cada protocolo de comunicación para intercambiar datos. El sistema cuenta con cuatro nodos, dos son CAN y reciben la información de los sensores y el resto son Zigbee. Estos nodos están a cargo de transmitir la información de un nodo CAN de manera inalámbrica y desplegarla en una pantalla inteligente.",
"title": ""
},
{
"docid": "6914ba1e0a6a60a9d8956f9b9429ab45",
"text": "Quantum cognition research applies abstract, mathematical principles of quantum theory to inquiries in cognitive science. It differs fundamentally from alternative speculations about quantum brain processes. This topic presents new developments within this research program. In the introduction to this topic, we try to answer three questions: Why apply quantum concepts to human cognition? How is quantum cognitive modeling different from traditional cognitive modeling? What cognitive processes have been modeled using a quantum account? In addition, a brief introduction to quantum probability theory and a concrete example is provided to illustrate how a quantum cognitive model can be developed to explain paradoxical empirical findings in psychological literature.",
"title": ""
},
{
"docid": "d97518a615c4f963d86e36c9dd30b643",
"text": "In this paper, the Polyjet technology was applied to build high-Q X-band resonators and low loss filters for the first time. As one of state-of-the-art 3-D printing technologies, the Polyjet technique produces RF models with finest resolution and outstanding surface finish in a clean, fast and affordable way. The measured resonator with 0.3% frequency shift yielded a quality factor of 214 at 10.26 GHz. A Vertically stacked two-cavity bandpass filter with an insertion loss of 2.1 dB and 5.1% bandwidth (BW) was realized successfully. The dimensional tolerance of this process was found to be less than 0.5%. The well matched performance of the resonator and the filter, as well as the fine feature size indicate that the Polyjet process is suitable for the implementation of low loss and low cost RF devices.",
"title": ""
},
{
"docid": "568fa874b944120be9bdb71bec2f5cec",
"text": "Using a developmental systems perspective, this review focuses on how genetic predispositions interact with aspects of the eating environment to produce phenotypic food preferences. Predispositions include the unlearned, reflexive reactions to basic tastes: the preference for sweet and salty tastes, and the rejection of sour and bitter tastes. Other predispositions are (a) the neophobic reaction to new foods and (b) the ability to learn food preferences based on associations with the contexts and consequences of eating various foods. Whether genetic predispositions are manifested in food preferences that foster healthy diets depends on the eating environment, including food availability and child-feeding practices of the adults. Unfortunately, in the United States today, the ready availability of energy-dense foods, high in sugar, fat, and salt, provides an eating environment that fosters food preferences inconsistent with dietary guidelines, which can promote excess weight gain and obesity.",
"title": ""
},
{
"docid": "7a2c19e94d07afbfe81c7875aed1ff23",
"text": "We combine linear discriminant analysis (LDA) and K-means clustering into a coherent framework to adaptively select the most discriminative subspace. We use K-means clustering to generate class labels and use LDA to do subspace selection. The clustering process is thus integrated with the subspace selection process and the data are then simultaneously clustered while the feature subspaces are selected. We show the rich structure of the general LDA-Km framework by examining its variants and their relationships to earlier approaches. Relations among PCA, LDA, K-means are clarified. Extensive experimental results on real-world datasets show the effectiveness of our approach.",
"title": ""
},
{
"docid": "67d704317471c71842a1dfe74ddd324a",
"text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.",
"title": ""
},
{
"docid": "f1cfe1cb5ddf46076dae6cd0f69d137f",
"text": "SiC-SIT power semiconductor switching devices has an advantage that its switching time is high speed compared to those of other power semiconductor switching devices. We adopt newly developed SiC-SITs which have the maximum ratings 800V/4A and prepare a breadboard of a conventional single-ended push-pull(SEPP) high frequency inverter. This paper describes the characteristics of SiC-SIT on the basis of the experimental results of the breadboard. Its operational frequencies are varied at from 100 kHz to 250kHz with PWM control technique for output power regulation. Its load is induction fluid heating systems for super-heated-steam production.",
"title": ""
},
{
"docid": "31ed2186bcd711ac4a5675275cd458eb",
"text": "Location-aware wireless sensor networks will enable a new class of applications, and accurate range estimation is critical for this task. Low-cost location determination capability is studied almost entirely using radio frequency received signal strength (RSS) measurements, resulting in poor accuracy. More accurate systems use wide bandwidths and/or complex time-synchronized infrastructure. Low-cost, accurate ranging has proven difficult because small timing errors result in large range errors. This paper addresses estimation of the distance between wireless nodes using a two-way ranging technique that approaches the Cramér-Rao Bound on ranging accuracy in white noise and achieves 1-3 m accuracy in real-world ranging and localization experiments. This work provides an alternative to inaccurate RSS and complex, wide-bandwidth methods. Measured results using a prototype wireless system confirm performance in the real world.",
"title": ""
},
{
"docid": "414bb4a869a900066806fa75edc38bd6",
"text": "For nearly a century, scholars have sought to understand, measure, and explain giftedness. Succeeding theories and empirical investigations have often built on earlier work, complementing or sometimes clashing over conceptions of talent or contesting the mechanisms of talent development. Some have even suggested that giftedness itself is a misnomer, mistaken for the results of endless practice or social advantage. In surveying the landscape of current knowledge about giftedness and gifted education, this monograph will advance a set of interrelated arguments: The abilities of individuals do matter, particularly their abilities in specific talent domains; different talent domains have different developmental trajectories that vary as to when they start, peak, and end; and opportunities provided by society are crucial at every point in the talent-development process. We argue that society must strive to promote these opportunities but that individuals with talent also have some responsibility for their own growth and development. Furthermore, the research knowledge base indicates that psychosocial variables are determining influences in the successful development of talent. Finally, outstanding achievement or eminence ought to be the chief goal of gifted education. We assert that aspiring to fulfill one's talents and abilities in the form of transcendent creative contributions will lead to high levels of personal satisfaction and self-actualization as well as produce yet unimaginable scientific, aesthetic, and practical benefits to society. To frame our discussion, we propose a definition of giftedness that we intend to be comprehensive. Giftedness is the manifestation of performance that is clearly at the upper end of the distribution in a talent domain even relative to other high-functioning individuals in that domain. Further, giftedness can be viewed as developmental in that in the beginning stages, potential is the key variable; in later stages, achievement is the measure of giftedness; and in fully developed talents, eminence is the basis on which this label is granted. Psychosocial variables play an essential role in the manifestation of giftedness at every developmental stage. Both cognitive and psychosocial variables are malleable and need to be deliberately cultivated. Our goal here is to provide a definition that is useful across all domains of endeavor and acknowledges several perspectives about giftedness on which there is a fairly broad scientific consensus. Giftedness (a) reflects the values of society; (b) is typically manifested in actual outcomes, especially in adulthood; (c) is specific to domains of endeavor; (d) is the result of the coalescing of biological, pedagogical, psychological, and psychosocial factors; and (e) is relative not just to the ordinary (e.g., a child with exceptional art ability compared to peers) but to the extraordinary (e.g., an artist who revolutionizes a field of art). In this monograph, our goal is to review and summarize what we have learned about giftedness from the literature in psychological science and suggest some directions for the field of gifted education. We begin with a discussion of how giftedness is defined (see above). In the second section, we review the reasons why giftedness is often excluded from major conversations on educational policy, and then offer rebuttals to these arguments. 
In spite of concerns for the future of innovation in the United States, the education research and policy communities have been generally resistant to addressing academic giftedness in research, policy, and practice. The resistance is derived from the assumption that academically gifted children will be successful no matter what educational environment they are placed in, and because their families are believed to be more highly educated and hold above-average access to human capital wealth. These arguments run counter to psychological science indicating the need for all students to be challenged in their schoolwork and that effort and appropriate educational programing, training and support are required to develop a student's talents and abilities. In fact, high-ability students in the United States are not faring well on international comparisons. The scores of advanced students in the United States with at least one college-educated parent were lower than the scores of students in 16 other developed countries regardless of parental education level. In the third section, we summarize areas of consensus and controversy in gifted education, using the extant psychological literature to evaluate these positions. Psychological science points to several variables associated with outstanding achievement. The most important of these include general and domain-specific ability, creativity, motivation and mindset, task commitment, passion, interest, opportunity, and chance. Consensus has not been achieved in the field however in four main areas: What are the most important factors that contribute to the acuities or propensities that can serve as signs of potential talent? What are potential barriers to acquiring the \"gifted\" label? What are the expected outcomes of gifted education? And how should gifted students be educated? In the fourth section, we provide an overview of the major models of giftedness from the giftedness literature. Four models have served as the foundation for programs used in schools in the United States and in other countries. Most of the research associated with these models focuses on the precollegiate and early university years. Other talent-development models described are designed to explain the evolution of talent over time, going beyond the school years into adult eminence (but these have been applied only by out-of-school programs as the basis for educating gifted students). In the fifth section we present methodological challenges to conducting research on gifted populations, including definitions of giftedness and talent that are not standardized, test ceilings that are too low to measure progress or growth, comparison groups that are hard to find for extraordinary individuals, and insufficient training in the use of statistical methods that can address some of these challenges. In the sixth section, we propose a comprehensive model of trajectories of gifted performance from novice to eminence using examples from several domains. This model takes into account when a domain can first be expressed meaningfully-whether in childhood, adolescence, or adulthood. It also takes into account what we currently know about the acuities or propensities that can serve as signs of potential talent. Budding talents are usually recognized, developed, and supported by parents, teachers, and mentors. Those individuals may or may not offer guidance for the talented individual in the psychological strengths and social skills needed to move from one stage of development to the next. 
We developed the model with the following principles in mind: Abilities matter, domains of talent have varying developmental trajectories, opportunities need to be provided to young people and taken by them as well, psychosocial variables are determining factors in the successful development of talent, and eminence is the aspired outcome of gifted education. In the seventh section, we outline a research agenda for the field. This agenda, presented in the form of research questions, focuses on two central variables associated with the development of talent-opportunity and motivation-and is organized according to the degree to which access to talent development is high or low and whether an individual is highly motivated or not. Finally, in the eighth section, we summarize implications for the field in undertaking our proposed perspectives. These include a shift toward identification of talent within domains, the creation of identification processes based on the developmental trajectories of talent domains, the provision of opportunities along with monitoring for response and commitment on the part of participants, provision of coaching in psychosocial skills, and organization of programs around the tools needed to reach the highest possible levels of creative performance or productivity.",
"title": ""
},
{
"docid": "9736331d674470adbe534503ef452cca",
"text": "In this paper we present our system for human-in-theloop video object segmentation. The backbone of our system is a method for one-shot video object segmentation [3]. While fast, this method requires an accurate pixel-level segmentation of one (or several) frames as input. As manually annotating such a segmentation is impractical, we propose a deep interactive image segmentation method, that can accurately segment objects with only a handful of clicks. On the GrabCut dataset, our method obtains 90% IOU with just 3.8 clicks on average, setting the new state of the art. Furthermore, as our method iteratively refines an initial segmentation, it can effectively correct frames where the video object segmentation fails, thus allowing users to quickly obtain high quality results even on challenging sequences. Finally, we investigate usage patterns and give insights in how many steps users take to annotate frames, what kind of corrections they provide, etc., thus giving important insights for further improving interactive video segmentation.",
"title": ""
}
] |
scidocsrr
|
bf4a38ce39c068b3f8160ade9b970d54
|
Security Analysis of Cloud Computing
|
[
{
"docid": "ae369da37b2ff231082df12f15b26cb5",
"text": "Although the cloud computing model is considered to be a very promising internet-based computing platform, it results in a loss of security control over the cloud-hosted assets. This is due to the outsourcing of enterprise IT assets hosted on third-party cloud computing platforms. Moreover, the lack of security constraints in the Service Level Agreements between the cloud providers and consumers results in a loss of trust as well. Obtaining a security certificate such as ISO 27000 or NIST-FISMA would help cloud providers improve consumers trust in their cloud platforms' security. However, such standards are still far from covering the full complexity of the cloud computing model. We introduce a new cloud security management framework based on aligning the FISMA standard to fit with the cloud computing model, enabling cloud providers and consumers to be security certified. Our framework is based on improving collaboration between cloud providers, service providers and service consumers in managing the security of the cloud platform and the hosted services. It is built on top of a number of security standards that assist in automating the security management process. We have developed a proof of concept of our framework using. NET and deployed it on a test bed cloud platform. We evaluated the framework by managing the security of a multi-tenant SaaS application exemplar.",
"title": ""
}
] |
[
{
"docid": "e8d48a28c208a0ff5c4e17dd205f8bd9",
"text": "Red and blue light are both vital factors for plant growth and development. We examined how different ratios of red light to blue light (R/B) provided by light-emitting diodes affected photosynthetic performance by investigating parameters related to photosynthesis, including leaf morphology, photosynthetic rate, chlorophyll fluorescence, stomatal development, light response curve, and nitrogen content. In this study, lettuce plants (Lactuca sativa L.) were exposed to 200 μmol⋅m(-2)⋅s(-1) irradiance for a 16 h⋅d(-1) photoperiod under the following six treatments: monochromatic red light (R), monochromatic blue light (B) and the mixture of R and B with different R/B ratios of 12, 8, 4, and 1. Leaf photosynthetic capacity (A max) and photosynthetic rate (P n) increased with decreasing R/B ratio until 1, associated with increased stomatal conductance, along with significant increase in stomatal density and slight decrease in stomatal size. P n and A max under B treatment had 7.6 and 11.8% reduction in comparison with those under R/B = 1 treatment, respectively. The effective quantum yield of PSII and the efficiency of excitation captured by open PSII center were also significantly lower under B treatment than those under the other treatments. However, shoot dry weight increased with increasing R/B ratio with the greatest value under R/B = 12 treatment. The increase of shoot dry weight was mainly caused by increasing leaf area and leaf number, but no significant difference was observed between R and R/B = 12 treatments. Based on the above results, we conclude that quantitative B could promote photosynthetic performance or growth by stimulating morphological and physiological responses, yet there was no positive correlation between P n and shoot dry weight accumulation.",
"title": ""
},
{
"docid": "f73881fdb6b732e7a6a79cd13618e649",
"text": "Information exchange among coalition command and control (C2) systems in network-enabled environments requires ensuring that each recipient system understands and interprets messages exactly as the source system intended. The Semantic Interoperability Logical Framework (SILF) aims at meeting NATO's needs for semantically correct interoperability between C2 systems, as well as the need to adapt quickly to new missions and new combinations of coalition partners and systems. This paper presents an overview of the SILF framework and performs a detailed analysis of a case study for implementing SILF in a real-world military scenario.",
"title": ""
},
{
"docid": "565a8ea886a586dc8894f314fa21484a",
"text": "BACKGROUND\nThe Entity Linking (EL) task links entity mentions from an unstructured document to entities in a knowledge base. Although this problem is well-studied in news and social media, this problem has not received much attention in the life science domain. One outcome of tackling the EL problem in the life sciences domain is to enable scientists to build computational models of biological processes with more efficiency. However, simply applying a news-trained entity linker produces inadequate results.\n\n\nMETHODS\nSince existing supervised approaches require a large amount of manually-labeled training data, which is currently unavailable for the life science domain, we propose a novel unsupervised collective inference approach to link entities from unstructured full texts of biomedical literature to 300 ontologies. The approach leverages the rich semantic information and structures in ontologies for similarity computation and entity ranking.\n\n\nRESULTS\nWithout using any manual annotation, our approach significantly outperforms state-of-the-art supervised EL method (9% absolute gain in linking accuracy). Furthermore, the state-of-the-art supervised EL method requires 15,000 manually annotated entity mentions for training. These promising results establish a benchmark for the EL task in the life science domain. We also provide in depth analysis and discussion on both challenges and opportunities on automatic knowledge enrichment for scientific literature.\n\n\nCONCLUSIONS\nIn this paper, we propose a novel unsupervised collective inference approach to address the EL problem in a new domain. We show that our unsupervised approach is able to outperform a current state-of-the-art supervised approach that has been trained with a large amount of manually labeled data. Life science presents an underrepresented domain for applying EL techniques. By providing a small benchmark data set and identifying opportunities, we hope to stimulate discussions across natural language processing and bioinformatics and motivate others to develop techniques for this largely untapped domain.",
"title": ""
},
{
"docid": "e3be398845434f3cd927a38bc4d4455f",
"text": "Purpose Although extensive research exists regarding job satisfaction, many previous studies used a more restrictive, quantitative methodology. The purpose of this qualitative study is to capture the perceptions of hospital nurses within generational cohorts regarding their work satisfaction. Design/methodology/approach A preliminary qualitative, phenomenological study design explored hospital nurses' work satisfaction within generational cohorts - Baby Boomers (1946-1964), Generation X (1965-1980) and Millennials (1981-2000). A South Florida hospital provided the venue for the research. In all, 15 full-time staff nurses, segmented into generational cohorts, participated in personal interviews to determine themes related to seven established factors of work satisfaction: pay, autonomy, task requirements, administration, doctor-nurse relationship, interaction and professional status. Findings An analysis of the transcribed interviews confirmed the importance of the seven factors of job satisfaction. Similarities and differences between the generational cohorts related to a combination of stages of life and generational attributes. Practical implications The results of any qualitative research relate only to the specific venue studied and are not generalizable. However, the information gleaned from this study is transferable and other organizations are encouraged to conduct their own research and compare the results. Originality/value This study is unique, as the seven factors from an extensively used and highly respected quantitative research instrument were applied as the basis for this qualitative inquiry into generational cohort job satisfaction in a hospital setting.",
"title": ""
},
{
"docid": "305a5a777cdffa7efc6e1715dfaac305",
"text": "Open-loop transfer functions can be used to create closed-loop models of pulsewidth-modulated (PWM) converters. The closed-loop small-signal model can be used to design a controller for the switching converter with well-known linear control theory. The dynamics of the power stage for boost PWM dc-dc converter operating in continuous-conduction mode (CCM) are studied. The transfer functions from output current to output voltage, from duty cycle to output voltage including MOSFET delay, and from input voltage to output voltage are derived. The derivations are performed using an averaged linear circuit small-signal model of the boost converter for CCM. Experimental Bode plots and step responses were used to test the accuracy of the derived transfer functions. The theoretical and experimental responses were in excellent agreement, confirming the validity of the derived transfer functions",
"title": ""
},
{
"docid": "a49ea9c9f03aa2d926faa49f4df63b7a",
"text": "Deep stacked RNNs are usually hard to train. Recent studies have shown that shortcut connections across different RNN layers bring substantially faster convergence. However, shortcuts increase the computational complexity of the recurrent computations. To reduce the complexity, we propose the shortcut block, which is a refinement of the shortcut LSTM blocks. Our approach is to replace the self-connected parts (ct) with shortcuts (hl−2 t ) in the internal states. We present extensive empirical experiments showing that this design performs better than the original shortcuts. We evaluate our method on CCG supertagging task, obtaining a 8% relatively improvement over current state-of-the-art results.",
"title": ""
},
{
"docid": "b4ac5df370c0df5fdb3150afffd9158b",
"text": "The aggregation of many independent estimates can outperform the most accurate individual judgement 1–3 . This centenarian finding 1,2 , popularly known as the 'wisdom of crowds' 3 , has been applied to problems ranging from the diagnosis of cancer 4 to financial forecasting 5 . It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.",
"title": ""
},
{
"docid": "226d6904cc052f300b32b29f4f800574",
"text": "Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains realtime performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.",
"title": ""
},
{
"docid": "2a384fe57f79687cba8482cabfb4243b",
"text": "The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: Whilst the data is available the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce – a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model requiring programmers to shoehorn their problem to the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).",
"title": ""
},
{
"docid": "07f4d14ddc034d9b5f803a7150b84764",
"text": "Reinforcement learning (RL) has had mixed success when applied to games. Large state spaces and the curse of dimensionality have limited the ability for RL techniques to learn to play complex games in a reasonable length of time. We discuss a modification of Q-learning to use nearest neighbor states to exploit previous experience in the early stages of learning. A weighting on the state features is learned using metric learning techniques, such that neighboring states represent similar game situations. Our method is tested on the arcade game Frogger, and it is shown that some of the effects of the curse of dimensionality can be mitigated.",
"title": ""
},
{
"docid": "d387558c10c164a49030e049f4eb03c7",
"text": "This paper proposes a high-frequency dynamic circuit network model of a DC motor for predicting conductive and radiated emissions in low-voltage automotive applications, and discusses a study in which this model was examined. The proposed model is based on a behavioral approach. The methodology for testing various motors together with their filters and optimization of overall system performance by achieving minima of emissions is introduced.",
"title": ""
},
{
"docid": "5325672f176fd572f7be68a466538d95",
"text": "The successful execution of location-based and feature-based queries on spatial databases requires the construction of spatial indexes on the spatial attributes. This is not simple when the data is unstructured as is the case when the data is a collection of documents such as news articles, which is the domain of discourse, where the spatial attribute consists of text that can be (but is not required to be) interpreted as the names of locations. In other words, spatial data is specified using text (known as a toponym) instead of geometry, which means that there is some ambiguity involved. The process of identifying and disambiguating references to geographic locations is known as geotagging and involves using a combination of internal document structure and external knowledge, including a document-independent model of the audience's vocabulary of geographic locations, termed its spatial lexicon. In contrast to previous work, a new spatial lexicon model is presented that distinguishes between a global lexicon of locations known to all audiences, and an audience-specific local lexicon. Generic methods for inferring audiences' local lexicons are described. Evaluations of this inference method and the overall geotagging procedure indicate that establishing local lexicons cannot be overlooked, especially given the increasing prevalence of highly local data sources on the Internet, and will enable the construction of more accurate spatial indexes.",
"title": ""
},
{
"docid": "5089dff6e717807450d7f185158cc542",
"text": "Previous work has demonstrated that in the context of Massively Open Online Courses (MOOCs), doing activities is more predictive of learning than reading text or watching videos (Koedinger et al., 2015). This paper breaks down the general behaviors of reading and watching into finer behaviors, and considers how these finer behaviors may provide evidence for active learning as well. By characterizing learner strategies through patterns in their data, we can evaluate which strategies (or measures of them) are predictive of learning outcomes. We investigated strategies such as page re-reading (active reading) and video watching in response to an incorrect attempt (active watching) and found that they add predictive power beyond mere counts of the amount of doing, reading, and watching.",
"title": ""
},
{
"docid": "e9251977f62ce9dddf16730dff8e47cb",
"text": "INTRODUCTION AND OBJECTIVE\nCircumcision is one of the oldest surgical procedures and one of the most frequently performed worldwide. It can be done by many different techniques. This prospective series presents the results of Plastibell® circumcision in children older than 2 years of age, evaluating surgical duration, immediate and late complications, time for plastic device separation and factors associated with it.\n\n\nMATERIALS AND METHODS\nWe prospectively analyzed 119 children submitted to Plastic Device Circumcision with Plastibell® by only one surgeon from December 2009 to June 2011. In all cases the surgery was done under general anesthesia associated with dorsal penile nerve block. Before surgery length of the penis and latero-lateral diameter of the glans were measured. Surgical duration, time of Plastibell® separation and use of analgesic medication in the post-operative period were evaluated. Patients were followed on days 15, 45, 90 and 120 after surgery.\n\n\nRESULTS\nAge at surgery varied from 2 to 12.5 (5.9 ± 2.9) years old. Mean surgical time was 3.7 ± 2.0 minutes (1.9 to 9 minutes). Time for plastic device separation ranged from 6 to 26 days (mean: 16 ± 4.2 days), being 14.8 days for children younger than 5 years of age and 17.4 days for those older than 5 years of age (p < 0.0001). The diameter of the Plastibell® does not interfered in separations time (p = 0,484). Late complications occurred in 32 (26.8%) subjects, being the great majority of low clinical significance, especially prepucial adherences, edema of the mucosa and discrete hypertrophy of the scar, all resolving with clinical treatment. One patient still using diaper had meatus stenosis and in one case the Plastibell® device stayed between the glans and the prepuce and needed to be removed manually.\n\n\nCONCLUSIONS\nCircumcision using a plastic device is a safe, quick and an easy technique with low complications, that when occur are of low clinical importance and of easy resolution. The mean time for the device to fall is shorter in children under 6 years of age and it is not influenced by the diameter of the device.",
"title": ""
},
{
"docid": "bcdb8fea60d1d13a8c5dcf7c49632653",
"text": "There is a small but growing body of research investigating how teams form and how that affects how they perform. Much of that research focuses on teams that seek to accomplish certain tasks such as writing an article or performing a Broadway musical. There has been much less investigation of the relative performance of teams that form to directly compete against another team. In this study, we report on team-vs-team competitions in the multiplayer online battle arena game Dota 2. Here, the teams’ overall goal is to beat the opponent. We use this setting to observe multilevel factors influence the relative performance of the teams. Those factors include compositional factors or attributes of the individuals comprising a team, relational factors or prior relations among individuals within a team and ecosystem factors or overlapping prior membership of team members with others within the ecosystem of teams. We also study how these multilevel factors affect the duration of a match. Our results show that advantages at the compositional, relational and ecosystem levels predict which team will succeed in short or medium duration matches. Relational and ecosystem factors are particularly helpful in predicting the winner in short duration matches, whereas compositional factors are more important predicting winners in medium duration matches. However, the two types of relations have opposite effects on the duration of winning. None of the three multilevel factors help explain which team will win in long matches.",
"title": ""
},
{
"docid": "83637dc7109acc342d50366f498c141a",
"text": "With the further development of computer technology, the software development process has some new goals and requirements. In order to adapt to these changes, people has optimized and improved the previous method. At the same time, some of the traditional software development methods have been unable to adapt to the requirements of people. Therefore, in recent years there have been some new lightweight software process development methods, That is agile software development, which is widely used and promoted. In this paper the author will firstly introduces the background and development about agile software development, as well as comparison to the traditional software development. Then the second chapter gives the definition of agile software development and characteristics, principles and values. In the third chapter the author will highlight several different agile software development methods, and characteristics of each method. In the fourth chapter the author will cite a specific example, how agile software development is applied in specific areas.Finally the author will conclude his opinion. This article aims to give readers a overview of agile software development and how people use it in practice.",
"title": ""
},
{
"docid": "2bdc4df73912f4f2be4436e1fdd16d69",
"text": "Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological data set to a feature-based multiclass classification. In order to collect a physiological data set from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate laboratory setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, and positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. An improved recognition accuracy of 95 percent and 70 percent for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.",
"title": ""
},
{
"docid": "9c1267f42c32f853db912a08eddb8972",
"text": "IBM's Physical Analytics Integrated Data Repository and Services (PAIRS) is a geospatial Big Data service. PAIRS contains a massive amount of curated geospatial (or more precisely spatio-temporal) data from a large number of public and private data resources, and also supports user contributed data layers. PAIRS offers an easy-to-use platform for both rapid assembly and retrieval of geospatial datasets or performing complex analytics, lowering time-to-discovery significantly by reducing the data curation and management burden. In this paper, we review recent progress with PAIRS and showcase a few exemplary analytical applications which the authors are able to build with relative ease leveraging this technology.",
"title": ""
},
{
"docid": "13cbca0e2780a95c1e9d4928dc9d236c",
"text": "Matching user accounts can help us build better users’ profiles and benefit many applications. It has attracted much attention from both industry and academia. Most of existing works are mainly based on rich user profile attributes. However, in many cases, user profile attributes are unavailable, incomplete or unreliable, either due to the privacy settings or just because users decline to share their information. This makes the existing schemes quite fragile. Users often share their activities on different social networks. This provides an opportunity to overcome the above problem. We aim to address the problem of user identification based on User Generated Content (UGC). We first formulate the problem of user identification based on UGCs and then propose a UGC-based user identification model. A supervised machine learning based solution is presented. It has three steps: firstly, we propose several algorithms to measure the spatial similarity, temporal similarity and content similarity of two UGCs; secondly, we extract the spatial, temporal and content features to exploit these similarities; afterwards, we employ the machine learning method to match user accounts, and conduct the experiments on three ground truth datasets. The results show that the proposed method has given excellent performance with F1 values reaching 89.79%, 86.78% and 86.24% on three ground truth datasets, respectively. This work presents the possibility of matching user accounts with high accessible online data. © 2018 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "754108343e8a57852d4a54abf45f5c43",
"text": "Precision measurement of dc high current is usually realized by second harmonic fluxgate current transducers, but the complicated modulation and demodulation circuits with high cost have been limiting their applications. This paper presents a low-cost transducer that can substitute the traditional ones for precision measurement of high current. The new transducer, based on the principle of zero-flux, is the combination of an improved self-oscillating fluxgate sensor with a magnetic integrator in a common feedback loop. The transfer function of the zero-flux control strategy of the transducer is established to verify the validity of the qualitative analysis on operating principle. Origins and major influence factors of the modulation ripple, respectively, caused by the useful signal extraction circuit and the transformer effect are studied, and related suppression methods are proposed, which can be considered as one of the major technical modifications for performance improvement. As verification, a prototype is realized, and several key specifications, including the linearity, small-signal bandwidth, modulation ripple, ratio stability under full load, power-on repeatability, magnetic error, and temperature coefficient, are characterized. Measurement results show that the new transducer with the maximum output ripple 0.3 μA can measure dc current up to ±600 A with a relative accuracy 1.3 ppm in the full scale, and it also can measure ac current and has a -3 dB bandwidth greater than 100 kHz.",
"title": ""
}
] |
scidocsrr
|
7164762ab8395c098344983691ca03af
|
Remote Agent: To Boldly Go Where No AI System Has Gone Before
|
[
{
"docid": "44a70fd9726f9ed9f92a9e5bf198788f",
"text": "This paper proposes a new logic programming language called GOLOG whose interpreter automatically maintains an explicit representation of the dynamic world being modeled, on the basis of user supplied axioms about the preconditions and eeects of actions and the initial state of the world. This allows programs to reason about the state of the world and consider the eeects of various possible courses of action before committing to a particular behavior. The net eeect is that programs may be written at a much higher level of abstraction than is usually possible. The language appears well suited for applications in high level control of robots and industrial processes, intelligent software agents, discrete event simulation, etc. It is based on a formal theory of action speciied in an extended version of the situation calculus. A prototype implementation in Prolog has been developed.",
"title": ""
},
{
"docid": "44f41d363390f6f079f2e67067ffa36d",
"text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢",
"title": ""
}
] |
[
{
"docid": "cabdfcf94607adef9b07799aab463d64",
"text": "Monitoring the health of the elderly living independently in their own homes is a key issue in building sustainable healthcare models which support a country's ageing population. Existing approaches have typically proposed remotely monitoring the behaviour of a household's occupants through the use of additional sensors. However the costs and privacy concerns of such sensors have significantly limited their potential for widespread adoption. In contrast, in this paper we propose an approach which detects Activities of Daily Living, which we use as a proxy for the health of the household residents. Our approach detects appliance usage from existing smart meter data, from which the unique daily routines of the household occupants are learned automatically via a log Gaussian Cox process. We evaluate our approach using two real-world data sets, and show it is able to detect over 80% of kettle uses while generating less than 10% false positives. Furthermore, our approach allows earlier interventions in households with a consistent routine and fewer false alarms in the remaining households, relative to a fixed-time intervention benchmark.",
"title": ""
},
{
"docid": "078f875d35d61689475a1507c5525eaa",
"text": "This paper discusses the actuator-level control of Valkyrie, a new humanoid robot designed by NASA’s Johnson Space Center in collaboration with several external partners. We focus on several topics pertaining to Valkyrie’s series elastic actuators including control architecture, controller design, and implementation in hardware. A decentralized approach is taken in controlling Valkyrie’s many series elastic degrees of freedom. By conceptually decoupling actuator dynamics from robot limb dynamics, we simplify the problem of controlling a highly complex system and streamline the controller development process compared to other approaches. This hierarchical control abstraction is realized by leveraging disturbance observers in the robot’s joint-level torque controllers. We apply a novel analysis technique to understand the ability of a disturbance observer to attenuate the effects of unmodeled dynamics. The performance of our control approach is demonstrated in two ways. First, we characterize torque tracking performance of a single Valkyrie actuator in terms of controllable torque resolution, tracking error, bandwidth, and power consumption. Second, we perform tests on Valkyrie’s arm, a serial chain of actuators, and demonstrate its ability to accurately track torques with our decentralized control approach.",
"title": ""
},
{
"docid": "24880289ca2b6c31810d28c8363473b3",
"text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.",
"title": ""
},
{
"docid": "a27d955a673d4a0f7fc45d83c1ed9377",
"text": "Manifold Ranking (MR), a graph-based ranking algorithm, has been widely applied in information retrieval and shown to have excellent performance and feasibility on a variety of data types. Particularly, it has been successfully applied to content-based image retrieval, because of its outstanding ability to discover underlying geometrical structure of the given image database. However, manifold ranking is computationally very expensive, both in graph construction and ranking computation stages, which significantly limits its applicability to very large data sets. In this paper, we extend the original manifold ranking algorithm and propose a new framework named Efficient Manifold Ranking (EMR). We aim to address the shortcomings of MR from two perspectives: scalable graph construction and efficient computation. Specifically, we build an anchor graph on the data set instead of the traditional k-nearest neighbor graph, and design a new form of adjacency matrix utilized to speed up the ranking computation. The experimental results on a real world image database demonstrate the effectiveness and efficiency of our proposed method. With a comparable performance to the original manifold ranking, our method significantly reduces the computational time, makes it a promising method to large scale real world retrieval problems.",
"title": ""
},
{
"docid": "1abe9e992970ef186f919e3bf54f775b",
"text": "Carcinogenicity refers to a highly toxic end point of certain chemicals, and has become an important issue in the drug development process. In this study, three novel ensemble classification models, namely Ensemble SVM, Ensemble RF, and Ensemble XGBoost, were developed to predict carcinogenicity of chemicals using seven types of molecular fingerprints and three machine learning methods based on a dataset containing 1003 diverse compounds with rat carcinogenicity. Among these three models, Ensemble XGBoost is found to be the best, giving an average accuracy of 70.1 ± 2.9%, sensitivity of 67.0 ± 5.0%, and specificity of 73.1 ± 4.4% in five-fold cross-validation and an accuracy of 70.0%, sensitivity of 65.2%, and specificity of 76.5% in external validation. In comparison with some recent methods, the ensemble models outperform some machine learning-based approaches and yield equal accuracy and higher specificity but lower sensitivity than rule-based expert systems. It is also found that the ensemble models could be further improved if more data were available. As an application, the ensemble models are employed to discover potential carcinogens in the DrugBank database. The results indicate that the proposed models are helpful in predicting the carcinogenicity of chemicals. A web server called CarcinoPred-EL has been built for these models ( http://ccsipb.lnu.edu.cn/toxicity/CarcinoPred-EL/ ).",
"title": ""
},
{
"docid": "0be3178ff2f412952934a49084ee8edc",
"text": "This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon’s theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design. In a curious twist of history, the dawn of the age of genomics has both seen the rise of the science of bioinformatics as a tool to cope with the enormous amounts of data being generated daily, and the decline of the theory of information as applied to molecular biology. Hailed as a harbinger of a “new movement” (Quastler 1953) along with Cybernetics, the principles of information theory were thought to be applicable to the higher functions of living organisms, and able to analyze such functions as metabolism, growth, and differentiation (Quastler 1953). Today, the metaphors and the jargon of information theory are still widely used (Maynard Smith 1999a, 1999b), as opposed to the mathematical formalism, which is too often considered to be inapplicable to biological information. Clearly, looking back it appears that too much hope was laid upon this theory’s relevance for biology. However, there was well-founded optimism that information theory ought to be able to address the complex issues associated with the storage of information in the genetic code, only to be repeatedly questioned and rebuked (see, e.g., Vincent 1994, Sarkar 1996). In this article, I outline the concepts of entropy and information (as defined by Shannon) in the context of molecular biology. We shall see that not only are these terms well-defined and useful, they also coincide precisely with what we intuitively mean when we speak about information stored in genes, for example. I then present examples of applications of the theory to measuring the information content of biomolecules, the identification of polymorphisms, RNA and protein secondary structure prediction, the prediction and analysis of molecular interactions, and drug design. 1 Entropy and Information Entropy and information are often used in conflicting manners in the literature. A precise understanding, both mathematical and intuitive, of the notion of information (and its relationship to entropy) is crucial for applications in molecular biology. Therefore, let us begin by outlining Shannon’s original entropy concept (Shannon, 1948). 1.1 Shannon’s Uncertainty Measure Entropy in Shannon’s theory (defined mathematically below) is a measure of uncertainty about the identity of objects in an ensemble. Thus, while “en-",
"title": ""
},
{
"docid": "ddfd02c12c42edb2607a6f193f4c242b",
"text": "We design the first Leakage-Resilient Identity-Based Encryption (LR-IBE) systems from static assumptions in the standard model. We derive these schemes by applying a hash proof technique from Alwen et.al. (Eurocrypt '10) to variants of the existing IBE schemes of Boneh-Boyen, Waters, and Lewko-Waters. As a result, we achieve leakage-resilience under the respective static assumptions of the original systems in the standard model, while also preserving the efficiency of the original schemes. Moreover, our results extend to the Bounded Retrieval Model (BRM), yielding the first regular and identity-based BRM encryption schemes from static assumptions in the standard model.\n The first LR-IBE system, based on Boneh-Boyen IBE, is only selectively secure under the simple Decisional Bilinear Diffie-Hellman assumption (DBDH), and serves as a stepping stone to our second fully secure construction. This construction is based on Waters IBE, and also relies on the simple DBDH. Finally, the third system is based on Lewko-Waters IBE, and achieves full security with shorter public parameters, but is based on three static assumptions related to composite order bilinear groups.",
"title": ""
},
{
"docid": "d662536cbd7dca2ce06b3e1e44362776",
"text": "Internet of Things (IoT) devices such as the Amazon Echo e a smart speaker developed by Amazon e are undoubtedly great sources of potential digital evidence due to their ubiquitous use and their always-on mode of operation, constituting a human-life's black box. The Amazon Echo in particular plays a centric role for the cloud-based intelligent virtual assistant (IVA) Alexa developed by Amazon Lab126. The Alexaenabled wireless smart speaker is the gateway for all voice commands submitted to Alexa. Moreover, the IVA interacts with a plethora of compatible IoT devices and third-party applications that leverage cloud resources. Understanding the complex cloud ecosystem that allows ubiquitous use of Alexa is paramount on supporting digital investigations when need raises. This paper discusses methods for digital forensics pertaining to the IVA Alexa's ecosystem. The primary contribution of this paper consists of a new efficient approach of combining cloud-native forensics with client-side forensics (forensics for companion devices), to support practical digital investigations. Based on a deep understanding of the targeted ecosystem, we propose a proof-of-concept tool, CIFT, that supports identification, acquisition and analysis of both native artifacts from the cloud and client-centric artifacts from local devices (mobile applications",
"title": ""
},
{
"docid": "c02cc2c217da6614bccb90ac8b7c7506",
"text": "This paper presents a method by which a reinforcement learning agent can automatically discover certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to other, related tasks through the reuse of its ability to attain subgoals. The agent discovers subgoals based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. We illustrate this approach using several gridworld tasks.",
"title": ""
},
{
"docid": "998f2515ea7ceb02f867b709d4a987f9",
"text": "Crop pest and disease diagnosis are amongst important issues arising in the agriculture sector since it has significant impacts on the production of agriculture for a nation. The applying of expert system technology for crop pest and disease diagnosis has the potential to quicken and improve advisory matters. However, the development of an expert system in relation to diagnosing pest and disease problems of a certain crop as well as other identical research works remains limited. Therefore, this study investigated the use of expert systems in managing crop pest and disease of selected published works. This article aims to identify and explain the trends of methodologies used by those works. As a result, a conceptual framework for managing crop pest and disease was proposed on basis of the selected previous works. This article is hoped to relatively benefit the growth of research works pertaining to the development of an expert system especially for managing crop pest and disease in the agriculture domain.",
"title": ""
},
{
"docid": "b04a1c4a52cfe9310ff1e895ccdec35c",
"text": "The problem of recovering the sparse and low-rank components of a matrix captures a broad spectrum of applications. Authors in [4] proposed the concept of ”rank-sparsity incoherence” to characterize the fundamental identifiability of the recovery, and derived practical sufficient conditions to ensure the high possibility of recovery. This exact recovery is achieved via solving a convex relaxation problem where the l1 norm and the nuclear norm are utilized for being surrogates of the sparsity and low-rank. Numerically, this convex relaxation problem was reformulated into a semi-definite programming (SDP) problem whose dimension is considerably enlarged, and this SDP reformulation was proposed to be solved by generic interior-point solvers in [4]. This paper focuses on the algorithmic improvement for the sparse and low-rank recovery. In particular, we observe that the convex relaxation problem generated by the approach of [4] is actually well-structured in both the objective function and constraint, and it fits perfectly the applicable range of the classical alternating direction method (ADM). Hence, we propose the ADM approach for accomplishing the sparse and low-rank recovery, by taking full exploitation to the high-level separable structure of the convex relaxation problem. Preliminary numerical results are reported to verify the attractive efficiency of the ADM approach for recovering sparse and low-rank components of matrices.",
"title": ""
},
{
"docid": "aac360802c767fb9594e033341883578",
"text": "The protection mechanisms of computer systems control the access to objects, especially information objects. The range of responsibilities of these mechanisms includes at one extreme completely isolating executing programs from each other, and at the other extreme permitting complete cooperation and shared access among executing programs. Within this range one can identify at least seven levels at which protection mechanisms can be conceived as being required, each level being more difficult than its predecessor to implement:\n 1. No sharing at all (complete isolation).\n 2. Sharing copies of programs or data files.\n 3. Sharing originals of programs or data files.\n 4. Sharing programming systems or subsystems.\n 5. Permitting the cooperation of mutually suspicious subsystems---e.g., as with debugging or proprietary subsystems.\n 6. Providing \"memoryless\" subsystems---i.e., systems which, having performed their tasks, are guaranteed to have kept no secret record of the task performed (an income-tax computing service, for example, must be allowed to keep billing information on its use by customers but not to store information secretly on customers' incomes).\n 7. Providing \"certified\" subsystems---i.e., those whose correctness has been completely validated and is guaranteed a priori.",
"title": ""
},
{
"docid": "851de4b014dfeb6f470876896b0416b3",
"text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of lowfrequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique ∗Corresponding author. Email address: andrey.ziyatdinov@upc.edu (Andrey Ziyatdinov) Preprint submitted to Sensors and Actuators B: Chemical August 15, 2014 suitable in early detection scenarios. The full data set is made publicly available to the community.",
"title": ""
},
{
"docid": "26e24e4a59943f9b80d6bf307680b70c",
"text": "We present a machine-learned model that can automatically detect when a student using an intelligent tutoring system is off-task, i.e., engaged in behavior which does not involve the system or a learning task. This model was developed using only log files of system usage (i.e. no screen capture or audio/video data). We show that this model can both accurately identify each student's prevalence of off-task behavior and can distinguish off-task behavior from when the student is talking to the teacher or another student about the subject matter. We use this model in combination with motivational and attitudinal instruments, developing a profile of the attitudes and motivations associated with off-task behavior, and compare this profile to the attitudes and motivations associated with other behaviors in intelligent tutoring systems. We discuss how the model of off-task behavior can be used within interactive learning environments which respond to when students are off-task.",
"title": ""
},
{
"docid": "dc84e401709509638a1a9e24d7db53e1",
"text": "AIM AND OBJECTIVES\nExocrine pancreatic insufficiency caused by inflammation or pancreatic tumors results in nutrient malfunction by a lack of digestive enzymes and neutralization compounds. Despite satisfactory clinical results with current enzyme therapies, a normalization of fat absorption in patients is rare. An individualized therapy is required that includes high dosage of enzymatic units, usage of enteric coating, and addition of gastric proton pump inhibitors. The key goal to improve this therapy is to identify digestive enzymes with high activity and stability in the gastrointestinal tract.\n\n\nMETHODS\nWe cloned and analyzed three novel ciliate lipases derived from Tetrahymena thermophila. Using highly precise pH-STAT-titration and colorimetric methods, we determined stability and lipolytic activity under physiological conditions in comparison with commercially available porcine and fungal digestive enzyme preparations. We measured from pH 2.0 to 9.0, with different bile salts concentrations, and substrates such as olive oil and fat derived from pig diet.\n\n\nRESULTS\nCiliate lipases CL-120, CL-130, and CL-230 showed activities up to 220-fold higher than Creon, pancreatin standard, and rizolipase Nortase within a pH range from pH 2.0 to 9.0. They are highly active in the presence of bile salts and complex pig diet substrate, and more stable after incubation in human gastric juice compared with porcine pancreatic lipase and rizolipase.\n\n\nCONCLUSIONS\nThe newly cloned and characterized lipases fulfilled all requirements for high activity under physiological conditions. These novel enzymes are therefore promising candidates for an improved enzyme replacement therapy for exocrine pancreatic insufficiency.",
"title": ""
},
{
"docid": "ca23813c7caf031c97ae5c0db447d39d",
"text": "Sequence-to-sequence models, such as attention-based models in automatic speech recognition (ASR), are typically trained to optimize the cross-entropy criterion which corresponds to improving the log-likelihood of the data. However, system performance is usually measured in terms of word error rate (WER), not log-likelihood. Traditional ASR systems benefit from discriminative sequence training which optimizes criteria such as the state-level minimum Bayes risk (sMBR) which are more closely related to WER. In the present work, we explore techniques to train attention-based models to directly minimize expected word error rate. We consider two loss functions which approximate the expected number of word errors: either by sampling from the model, or by using N-best lists of decoded hypotheses, which we find to be more effective than the sampling-based method. In experimental evaluations, we find that the proposed training procedure improves performance by up to 8.2% relative to the baseline system. This allows us to train grapheme-based, uni-directional attention-based models which match the performance of a traditional, state-of-the-art, discriminative sequence-trained system on a mobile voice-search task.",
"title": ""
},
{
"docid": "649b1f289395aa6251fe9f3288209b67",
"text": "Besides game-based learning, gamification is an upcoming trend in education, studied in various empirical studies and found in many major learning management systems. Employing a newly developed qualitative instrument for assessing gamification in a system, we studied five popular LMS for their specific implementations. The instrument enabled experts to extract affordances for gamification in the five categories experiential, mechanics, rewards, goals, and social. Results show large similarities in all of the systems studied and few varieties in approaches to gamification.",
"title": ""
},
{
"docid": "4fe5c25f57d5fa5b71b0c2b9dae7db29",
"text": "Position control of a quad tilt-wing UAV via a nonlinear hierarchical adaptive control approach is presented. The hierarchy consists of two levels. In the upper level, a model reference adaptive controller creates virtual control commands so as to make the UAV follow a given desired trajectory. The virtual control inputs are then converted to desired attitude angle references which are fed to the lower level attitude controller. Lower level controller is a nonlinear adaptive controller. The overall controller is developed for the full nonlinear dynamics of the tilt-wing UAV and thus no linearization is required. In addition, since the approach is adaptive, uncertainties in the UAV dynamics can be handled. Performance of the controller is presented via simulation results.",
"title": ""
},
{
"docid": "e4dc1f30a914dc6f710f23b5bc047978",
"text": "Intelligence, expertise, ability and talent, as these terms have traditionally been used in education and psychology, are socially agreed upon labels that minimize the dynamic, evolving, and contextual nature of individual–environment relations. These hypothesized constructs can instead be described as functional relations distributed across whole persons and particular contexts through which individuals appear knowledgeably skillful. The purpose of this article is to support a concept of ability and talent development that is theoretically grounded in 5 distinct, yet interrelated, notions: ecological psychology, situated cognition, distributed cognition, activity theory, and legitimate peripheral participation. Although talent may be reserved by some to describe individuals possessing exceptional ability and ability may be described as an internal trait, in our description neither ability nor talent are possessed. Instead, they are treated as equivalent terms that can be used to describe functional transactions that are situated across person-in-situation. Further, and more important, by arguing that ability is part of the individual–environment transaction, we take the potential to appear talented out of the hands (or heads) of the few and instead treat it as an opportunity that is available to all although it may be actualized more frequently by some.",
"title": ""
}
] |
scidocsrr
|
5cb830db37198d577a47ceb88886514f
|
Social media analytics: a survey of techniques, tools and platforms
|
[
{
"docid": "477e4a6930d147a598e1e0c453062ed2",
"text": "Stock markets are driven by a multitude of dynamics in which facts and beliefs play a major role in affecting the price of a company’s stock. In today’s information age, news can spread around the globe in some cases faster than they happen. While it can be beneficial for many applications including disaster prevention, our aim in this thesis is to use the timely release of information to model the stock market. We extract facts and beliefs from the population using one of the fastest growing social networking tools on the Internet, namely Twitter. We examine the use of Natural Language Processing techniques with a predictive machine learning approach to analyze millions of Twitter posts from which we draw distinctive features to create a model that enables the prediction of stock prices. We selected several stocks from the NASDAQ stock exchange and collected Intra-Day stock quotes during a period of two weeks. We build different feature representations from the raw Twitter posts and combined them with the stock price in order to build a regression model using the Support Vector Regression algorithm. We were able to build models of the stocks which predicted discrete prices that were close to a strong baseline. We further investigated the prediction of future prices, on average predicting 15 minutes ahead of the actual price, and evaluated the results using a Virtual Stock Trading Engine. These results were in general promising, but contained also some random variations across the different datasets.",
"title": ""
},
{
"docid": "07ffe189312da8519c4a6260402a0b22",
"text": "Computational social science is an emerging research area at the intersection of computer science, statistics, and the social sciences, in which novel computational methods are used to answer questions about society. The field is inherently collaborative: social scientists provide vital context and insight into pertinent research questions, data sources, and acquisition methods, while statisticians and computer scientists contribute expertise in developing mathematical models and computational tools. New, large-scale sources of demographic, behavioral, and network data from the Internet, sensor networks, and crowdsourcing systems augment more traditional data sources to form the heart of this nascent discipline, along with recent advances in machine learning, statistics, social network analysis, and natural language processing. The related research area of social computing deals with the mechanisms through which people interact with computational systems, examining questions such as how and why people contribute user-generated content and how to design systems that better enable them to do so. Examples of social computing systems include prediction markets, crowdsourcing markets, product review sites, and collaboratively edited wikis, all of which encapsulate some notion of aggregating crowd wisdom, beliefs, or ideas—albeit in different ways. Like computational social science, social computing blends techniques from machine learning and statistics with ideas from the social sciences. For example, the economics literature on incentive design has been especially influential.",
"title": ""
},
{
"docid": "1bb246ec4e68bd7072983e2824e8f9ff",
"text": "With the increasing availability of electronic documents and the rapid growth of the World Wide Web, the task of automatic categorization of documents became the key method for organizing the information and knowledge discovery. Proper classification of e-documents, online news, blogs, e-mails and digital libraries need text mining, machine learning and natural language processing techniques to get meaningful knowledge. The aim of this paper is to highlight the important techniques and methodologies that are employed in text documents classification, while at the same time making awareness of some of the interesting challenges that remain to be solved, focused mainly on text representation and machine learning techniques. This paper provides a review of the theory and methods of document classification and text mining, focusing on the existing literature.",
"title": ""
}
] |
[
{
"docid": "06e3d228e9fac29dab7180e56f087b45",
"text": "Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade between clicking regions with high search target probability or high expected image content information. Image content IG was established from “information-maps” based on participants exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG is in this thesis not identical with the information theoretic concept of information gain, the quantities are however probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image based IG. It was also hypothesised that image based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. Results support that IG is rewarding as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.",
"title": ""
},
{
"docid": "877e7654a4e42ab270a96e87d32164fd",
"text": "The presence of gender stereotypes in many aspects of society is a well-known phenomenon. In this paper, we focus on studying such stereotypes and bias in Hindi movie industry (Bollywood). We analyze movie plots and posters for all movies released since 1970. The gender bias is detected by semantic modeling of plots at inter-sentence and intrasentence level. Different features like occupation, introduction of cast in text, associated actions and descriptions are captured to show the pervasiveness of gender bias and stereotype in movies. We derive a semantic graph and compute centrality of each character and observe similar bias there. We also show that such bias is not applicable for movie posters where females get equal importance even though their character has little or no impact on the movie plot. Furthermore, we explore the movie trailers to estimate on-screen time for males and females and also study the portrayal of emotions by gender in them. The silver lining is that our system was able to identify 30 movies over last 3 years where such stereotypes were broken.",
"title": ""
},
{
"docid": "16a1f15e8e414b59a230fb4a28c53cc7",
"text": "In this study we examined whether the effects of mental fatigue on behaviour are due to reduced action monitoring as indexed by the error related negativity (Ne/ERN), N2 and contingent negative variation (CNV) event-related potential (ERP) components. Therefore, we had subjects perform a task, which required a high degree of action monitoring, continuously for 2h. In addition we tried to relate the observed behavioural and electrophysiological changes to motivational processes and individual differences. Changes in task performance due to fatigue were accompanied by a decrease in Ne/ERN and N2 amplitude, reflecting impaired action monitoring, as well as a decrease in CNV amplitude which reflects reduced response preparation with increasing fatigue. Increasing the motivational level of our subjects resulted in changes in behaviour and brain activity that were different for individual subjects. Subjects that increased their performance accuracy displayed an increase in Ne/ERN amplitude, while subjects that increased their response speed displayed an increase in CNV amplitude. We will discuss the effects prolonged task performance on the behavioural and physiological indices of action monitoring, as well as the relationship between fatigue, motivation and individual differences.",
"title": ""
},
{
"docid": "13572c74a989b8677eec026788b381fe",
"text": "We examined the effect of stereotype threat on blood pressure reactivity. Compared with European Americans, and African Americans under little or no stereotype threat, African Americans under stereotype threat exhibited larger increases in mean arterial blood pressure during an academic test, and performed more poorly on difficult test items. We discuss the significance of these findings for understanding the incidence of hypertension among African Americans.",
"title": ""
},
{
"docid": "32bdd9f720989754744eddb9feedbf32",
"text": "Readability depends on many factors ranging from shallow features like word length to semantic ones like coherence. We introduce novel graph-based coherence features based on frequent subgraphs and compare their ability to assess the readability of Wall Street Journal articles. In contrast to Pitler and Nenkova (2008) some of our graph-based features are significantly correlated with human judgments. We outperform Pitler and Nenkova (2008) in the readability ranking task by more than 5% accuracy thus establishing a new state-of-the-art on this dataset.",
"title": ""
},
{
"docid": "c8c82af8fc9ca5e0adac5b8b6a14031d",
"text": "PURPOSE\nTo systematically review the results of arthroscopic transtibial pullout repair (ATPR) for posterior medial meniscus root tears.\n\n\nMETHODS\nA systematic electronic search of the PubMed database and the Cochrane Library was performed in September 2014 to identify studies that reported clinical, radiographic, or second-look arthroscopic outcomes of ATPR for posterior medial meniscus root tears. Included studies were abstracted regarding study characteristics, patient demographic characteristics, surgical technique, rehabilitation, and outcome measures. The methodologic quality of the included studies was assessed with the modified Coleman Methodology Score.\n\n\nRESULTS\nSeven studies with a total of 172 patients met the inclusion criteria. The mean patient age was 55.3 years, and 83% of patients were female patients. Preoperative and postoperative Lysholm scores were reported for all patients. After a mean follow-up period of 30.2 months, the Lysholm score increased from 52.4 preoperatively to 85.9 postoperatively. On conventional radiographs, 64 of 76 patients (84%) showed no progression of Kellgren-Lawrence grading. Magnetic resonance imaging showed no progression of cartilage degeneration in 84 of 103 patients (82%) and showed reduced medial meniscal extrusion in 34 of 61 patients (56%). On the basis of second-look arthroscopy and magnetic resonance imaging in 137 patients, the healing status was rated as complete in 62%, partial in 34%, and failed in 3%. Overall, the methodologic quality of the included studies was fair, with a mean modified Coleman Methodology Score of 63.\n\n\nCONCLUSIONS\nATPR significantly improves functional outcome scores and seems to prevent the progression of osteoarthritis in most patients, at least during a short-term follow-up. Complete healing of the repaired root and reduction of meniscal extrusion seem to be less predictable, being observed in only about 60% of patients. Conclusions about the progression of osteoarthritis and reduction of meniscal extrusion are limited by the small portion of patients undergoing specific evaluation (44% and 35% of the study group, respectively).\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of Level III and IV studies.",
"title": ""
},
{
"docid": "a0c37bb6608f51f7095d6e5392f3c2f9",
"text": "The main study objective was to develop robust processing and analysis techniques to facilitate the use of small-footprint lidar data for estimating plot-level tree height by measuring individual trees identifiable on the three-dimensional lidar surface. Lidar processing techniques included data fusion with multispectral optical data and local filtering with both square and circular windows of variable size. The lidar system used for this study produced an average footprint of 0.65 m and an average distance between laser shots of 0.7 m. The lidar data set was acquired over deciduous and coniferous stands with settings typical of the southeastern United States. The lidar-derived tree measurements were used with regression models and cross-validation to estimate tree height on 0.017-ha plots. For the pine plots, lidar measurements explained 97 percent of the variance associated with the mean height of dominant trees. For deciduous plots, regression models explained 79 percent of the mean height variance for dominant trees. Filtering for local maximum with circular windows gave better fitting models for pines, while for deciduous trees, filtering with square windows provided a slightly better model fit. Using lidar and optical data fusion to differentiate between forest types provided better results for estimating average plot height for pines. Estimating tree height for deciduous plots gave superior results without calibrating the search window size based on forest type. Introduction Laser scanner systems currently available have experienced a remarkable evolution, driven by advances in the remote sensing and surveying industry. Lidar sensors offer impressive performance that challange physical barriers in the optical and electronic domain by offering a high density of points at scanning frequencies of 50,000 pulses/second, multiple echoes per laser pulse, intensity measurements for the returning signal, and centimeter accuracy for horizontal and vertical positioning. Given a high density of points, processing algorithms can identify single trees or groups of trees in order to extract various measurements on their three-dimensional representation (e.g., Hyyppä and Inkinen, 2002). Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height Sorin C. Popescu and Randolph H. Wynne The foundations of lidar forest measurements lie with the photogrammetric techniques developed to assess tree height, volume, and biomass. Lidar characteristics, such as high sampling intensity, extensive areal coverage, ability to penetrate beneath the top layer of the canopy, precise geolocation, and accurate ranging measurements, make airborne laser systems useful for directly assessing vegetation characteristics. Early lidar studies had been used to estimate forest vegetation characteristics, such as percent canopy cover, biomass (Nelson et al., 1984; Nelson et al., 1988a; Nelson et al., 1988b; Nelson et al., 1997), and gross-merchantable timber volume (Maclean and Krabill, 1986). 
Research efforts investigated the estimation of forest stand characteristics with scanning lasers that provided lidar data with either relatively large laser footprints, i.e., 5 to 25 m (Harding et al., 1994; Lefsky et al., 1997; Weishampel et al., 1997; Blair et al., 1999; Lefsky et al., 1999; Means et al., 1999) or small footprints, but with only one laser return (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Hyyppä et al., 2001). A small-footprint lidar with the potential to record the entire time-varying distribution of returned pulse energy or waveform was used by Nilsson (1996) for measuring tree heights and stand volume. As more systems operate with high performance, research efforts for forestry applications of lidar have become very intense and resulted in a series of studies that proved that lidar technology is well suited for providing estimates of forest biophysical parameters. Needs for timely and accurate estimates of forest biophysical parameters have arisen in response to increased demands on forest inventory and analysis. The height of a forest stand is a crucial forest inventory attribute for calculating timber volume, site potential, and silvicultural treatment scheduling. Measuring of stand height by current manual photogrammetric or field survey techniques is time consuming and rather expensive. Tree heights have been derived from scanning lidar data sets and have been compared with ground-based canopy height measurements (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Næsset and Bjerknes, 2001; Næsset and Økland, 2002; Persson et al., 2002; Popescu, 2002; Popescu et al., 2002; Holmgren et al., 2003; McCombs et al., 2003). Despite the intense research efforts, practical applications of P H OTO G R A M M E T R I C E N G I N E E R I N G & R E M OT E S E N S I N G May 2004 5 8 9 Department of Forestry, Virginia Tech, 319 Cheatham Hall (0324), Blacksburg, VA 24061 (wynne@vt.edu). S.C. Popescu is presently with the Spatial Sciences Laboratory, Department of Forest Science, Texas A&M University, 1500 Research Parkway, Suite B223, College Station, TX 778452120 (s-popescu@tamu.edu). Photogrammetric Engineering & Remote Sensing Vol. 70, No. 5, May 2004, pp. 589–604. 0099-1112/04/7005–0589/$3.00/0 © 2004 American Society for Photogrammetry and Remote Sensing 02-099.qxd 4/5/04 10:44 PM Page 589",
"title": ""
},
{
"docid": "71243804831966d5a312f5dc3c3a61a5",
"text": "Datasets: KBP 2015 for training and news articles in KBP 2016, 2017 for testing. Model BCUB CEAFE MUC BLANC AV G KBP 2016 Local Classifier 51.47 47.96 26.29 30.82 39.13 Basic ILP 51.44 47.77 26.65 30.95 39.19 +Discourse 51.67 49.1 34.08 34.08 42.23 Joint Learning 50.16 48.59 32.41 32.72 40.97 KBP 2017 Local Classifier 50.24 48.47 30.81 29.94 39.87 Basic ILP 50.4 48.49 31.33 30.58 40.2 +Discourse 50.35 48.61 37.24 31.94 42.04 Table 2: Results for event coreference resolution systems on the KBP 2016 and 2017 corpus. Joint Learning results correspond to the result files evaluated in Lu and Ng, 2017.",
"title": ""
},
{
"docid": "ef011f601c37f0d08c2567fe7e231324",
"text": "We live in a world were data are generated from a myriad of sources, and it is really cheap to collect and storage such data. However, the real benefit is not related to the data itself, but with the algorithms that are capable of processing such data in a tolerable elapse time, and to extract valuable knowledge from it. Therefore, the use of Big Data Analytics tools provide very significant advantages to both industry and academia. The MapReduce programming framework can be stressed as the main paradigm related with such tools. It is mainly identified by carrying out a distributed execution for the sake of providing a high degree of scalability, together with a fault-",
"title": ""
},
{
"docid": "84f47a0e228bc672c4e0c29dd217f6df",
"text": "Semantic annotation plays an important role for semantic-aware web service discovery, recommendation and composition. In recent years, many approaches and tools have emerged to assist in semantic annotation creation and analysis. However, the Quality of Semantic Annotation (QoSA) is largely overlooked despite of its significant impact on the effectiveness of semantic-aware solutions. Moreover, improving the QoSA is time-consuming and requires significant domain knowledge. Therefore, how to verify and improve the QoSA has become a critical issue for semantic web services. In order to facilitate this process, this paper presents a novel lifecycle framework aiming at QoSA assessment and optimization. The QoSA is formally defined as the success rate of web service invocations, associated with a verification framework. Based on a local instance repository constructed from the execution information of the invocations, a two-layer optimization method including a local-feedback strategy and a global-feedback one is proposed to improve the QoSA. Experiments on real-world web services show that our framework can gain 65.95%~148.16% improvement in QoSA, compared with the original annotation without optimization.",
"title": ""
},
{
"docid": "afa70058c6df7b85040ce40be752bb89",
"text": "The authors attempt to identify the various causes of stator and rotor failures in three-phase squirrel cage induction motors. A specific methodology is proposed to facilitate an accurate analysis of these failures. It is noted that, due to the destructive nature of most failures, it is not easy, and is sometimes impossible, to determine the primary cause of failure. By a process of elimination, one can usually be assured of properly identifying the most likely cause of the failure. It is pointed out that the key point in going through this process of elimination is to use the basic steps of analyzing the failure class and pattern, noting the general motor appearance, identifying the operating condition at the time of failure, and gaining knowledge of the past history of the motor and application.<<ETX>>",
"title": ""
},
{
"docid": "7f8777738b0e135f2d5d3666677d58dd",
"text": "Ph. D. Sandra Margeti} Department of Laboratory Haematology and Coagulation Clinical Institute of Chemistry Medical School University Hospital Sestre milosrdnice Vinogradska 29 10 000 Zagreb, Croatia Tel: +385 1 3787 115 Fax: +385 1 3768 280 e-mail: margeticsandraagmail.com Summary: Laboratory investigation of thrombophilia is aimed at detecting the well-established hereditary and acquired causes of venous thromboembolism, including activated protein C resistance/factor V Leiden mutation, prothrombin G20210A mutation, deficiencies of the physio logical anticoagulants antithrombin, protein C and protein S, the presence of antiphospholipid antibodies and increased plasma levels of homocysteine and coagulation factor VIII. In contrast, investigation of dysfibrinogenemia, a very rare thrombophilic risk factor, should only be considered in a patient with evidence of familial or recurrent thrombosis in the absence of all evaluated risk factors mentioned above. At this time, thrombophilia investigation is not recommended for other potential hereditary or acquired risk factors whose association with increased risk for thrombosis has not been proven sufficiently to date. In order to ensure clinical relevance of testing and to avoid any misinterpretation of results, laboratory investigation of thrombophilia should always be performed in accordance with the recommended guidelines on testing regarding the careful selection of patients, time of testing and assays and assay methods used. The aim of this review is to summarize the most important aspects on thrombophilia testing, including whom and when to test, what assays and assay methods to use and all other variables that should be considered when performing laboratory investigation of thrombophilia.",
"title": ""
},
{
"docid": "6d4aa3d000a565b562186d3b3dba1a22",
"text": "Recommender systems are software applications that provide or suggest items to intended users. These systems use filtering techniques to provide recommendations. The major ones of these techniques are collaborative-based filtering technique, content-based technique, and hybrid algorithm. The motivation came as a result of the need to integrate recommendation feature in digital libraries in order to reduce information overload. Content-based technique is adopted because of its suitability in domains or situations where items are more than the users. TF-IDF (Term Frequency Inverse Document Frequency) and cosine similarity were used to determine how relevant or similar a research paper is to a user's query or profile of interest. Research papers and user's query were represented as vectors of weights using Keyword-based Vector Space model. The weights indicate the degree of association between a research paper and a user's query. This paper also presents an algorithm to provide or suggest recommendations based on users' query. The algorithm employs both TF-IDF weighing scheme and cosine similarity measure. Based on the result or output of the system, integrating recommendation feature in digital libraries will help library users to find most relevant research papers to their needs. Keywords—Recommender Systems; Content-Based Filtering; Digital Library; TF-IDF; Cosine Similarity; Vector Space Model",
"title": ""
},
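The passage above ranks research papers by cosine similarity between TF-IDF vectors of the papers and of the user's query. A minimal sketch of that scoring step, assuming scikit-learn is available; the toy corpus, the query, and the function name are illustrative assumptions rather than the authors' implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend(papers, query, top_k=3):
    """Rank papers by cosine similarity between their TF-IDF vectors
    and the TF-IDF vector of the user's query (content-based filtering)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(papers)   # one row per paper
    query_vec = vectorizer.transform([query])       # same vocabulary and weights
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    order = scores.argsort()[::-1][:top_k]          # highest similarity first
    return [(int(i), float(scores[i])) for i in order]

# Hypothetical usage with a toy corpus
papers = [
    "content based filtering for research paper recommendation in digital libraries",
    "deep convolutional networks for image classification",
    "tf-idf term weighting and the vector space model in information retrieval",
]
print(recommend(papers, "content based recommendation with tf-idf and cosine similarity"))
```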
{
"docid": "29734bed659764e167beac93c81ce0a7",
"text": "Fashion classification encompasses the identification of clothing items in an image. The field has applications in social media, e-commerce, and criminal law. In our work, we focus on four tasks within the fashion classification umbrella: (1) multiclass classification of clothing type; (2) clothing attribute classification; (3) clothing retrieval of nearest neighbors; and (4) clothing object detection. We report accuracy measurements for clothing style classification (50.2%) and clothing attribute classification (74.5%) that outperform baselines in the literature for the associated datasets. We additionally report promising qualitative results for our clothing retrieval and clothing object detection tasks.",
"title": ""
},
{
"docid": "809d795cb5e5147979f8dffed44e6a44",
"text": "The goal of this paper is to study the characteristics of various control architectures (e.g. centralized, hierarchical, distributed, and hybrid) for a team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) in performing collaborative surveillance and crowd control. To this end, an overview of different control architectures is first provided covering their functionalities and interactions. Then, three major functional modules needed for crowd control are discussed under those architectures, including 1) crowd detection using computer vision algorithms, 2) crowd tracking using an enhanced information aggregation strategy, and 3) vehicles motion planning using a graph search algorithm. Depending on the architectures, these modules can be placed in the ground control center or embedded in each vehicle. To test and demonstrate characteristics of various control architectures, a testbed has been developed involving these modules and various hardware and software components, such as 1) assembled UAVs and UGV, 2) a real-time simulator (in Repast Simphony), 3) off-the-shelf ARM architecture computers (ODROID-U2/3), 4) autopilot units with GPS sensors, and 5) multipoint wireless networks using XBee. Experiments successfully demonstrate the pros and cons of the considered control architectures in terms of computational performance in responding to different system conditions (e.g. information sharing).",
"title": ""
},
{
"docid": "bb0ac3d88646bf94710a4452ddf50e51",
"text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. 
Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension",
"title": ""
},
{
"docid": "f120d34996b155a413247add6adc6628",
"text": "The storage and computation requirements of Convolutional Neural Networks (CNNs) can be prohibitive for exploiting these models over low-power or embedded devices. This paper reduces the computational complexity of the CNNs by minimizing an objective function, including the recognition loss that is augmented with a sparsity-promoting penalty term. The sparsity structure of the network is identified using the Alternating Direction Method of Multipliers (ADMM), which is widely used in large optimization problems. This method alternates between promoting the sparsity of the network and optimizing the recognition performance, which allows us to exploit the two-part structure of the corresponding objective functions. In particular, we take advantage of the separability of the sparsity-inducing penalty functions to decompose the minimization problem into sub-problems that can be solved sequentially. Applying our method to a variety of state-of-the-art CNN models, our proposed method is able to simplify the original model, generating models with less computation and fewer parameters, while maintaining and often improving generalization performance. Accomplishments on a variety of models strongly verify that our proposed ADMM-based method can be a very useful tool for simplifying and improving deep CNNs.",
"title": ""
},
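The passage above alternates, via ADMM, between optimizing the recognition loss and promoting sparsity of the network parameters. A toy sketch of that alternation for a least-squares loss with an L1 penalty; the quadratic loss, iteration count, and penalty values are stand-in assumptions for the CNN setting described in the passage:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (the sparsity-promoting step)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_sparse(X, y, lam=0.1, rho=1.0, iters=100):
    """Minimize 0.5*||Xw - y||^2 + lam*||z||_1 subject to w = z."""
    n_features = X.shape[1]
    w = np.zeros(n_features)
    z = np.zeros(n_features)
    u = np.zeros(n_features)                    # scaled dual variable
    A = X.T @ X + rho * np.eye(n_features)      # factor reused by the w-update
    Xty = X.T @ y
    for _ in range(iters):
        w = np.linalg.solve(A, Xty + rho * (z - u))   # loss-side update
        z = soft_threshold(w + u, lam / rho)          # sparsity-side update
        u = u + w - z                                 # dual ascent
    return z                                          # sparse weights

# Hypothetical usage on random data with 3 truly nonzero weights
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.normal(size=50)
print(np.round(admm_sparse(X, y), 2))
```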
{
"docid": "4fabfd530004921901d09134ebfd0eae",
"text": "“Additive Manufacturing Technologies: 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing” is authored by Ian Gibson, David Rosen and Brent Stucker, who collectively possess 60 years’ experience in the fi eld of additive manufacturing (AM). This is the second edition of the book which aims to include current developments and innovations in a rapidly changing fi eld. Its primary aim is to serve as a teaching aid for developing and established curricula, therefore becoming an all-encompassing introductory text for this purpose. It is also noted that researchers may fi nd the text useful as a guide to the ‘state-of-the-art’ and to identify research opportunities. The book is structured to provide justifi cation and information for the use and development of AM by using standardised terminology to conform to standards (American Society for Testing and Materials (ASTM) F42) introduced since the fi rst edition. The basic principles and historical developments for AM are introduced in summary in the fi rst three chapters of the book and this serves as an excellent introduction for the uninitiated. Chapters 4–11 focus on the core technologies of AM individually and, in most cases, in comprehensive detail which gives those interested in the technical application and development of the technologies a solid footing. The remaining chapters provide guidelines and examples for various stages of the process including machine and/or materials selection, design considerations and software limitations, applications and post-processing considerations.",
"title": ""
},
{
"docid": "3b9b49f8c2773497f8e05bff4a594207",
"text": "SSD (Single Shot Detector) is one of the state-of-the-art object detection algorithms, and it combines high detection accuracy with real-time speed. However, it is widely recognized that SSD is less accurate in detecting small objects compared to large objects, because it ignores the context from outside the proposal boxes. In this paper, we present CSSD–a shorthand for context-aware single-shot multibox object detector. CSSD is built on top of SSD, with additional layers modeling multi-scale contexts. We describe two variants of CSSD, which differ in their context layers, using dilated convolution layers (DiCSSD) and deconvolution layers (DeCSSD) respectively. The experimental results show that the multi-scale context modeling significantly improves the detection accuracy. In addition, we study the relationship between effective receptive fields (ERFs) and the theoretical receptive fields (TRFs), particularly on a VGGNet. The empirical results further strengthen our conclusion that SSD coupled with context layers achieves better detection results especially for small objects (+3.2%AP@0.5 on MSCOCO compared to the newest SSD), while maintaining comparable runtime performance.",
"title": ""
},
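The passage above augments an SSD-style detector with multi-scale context layers built from dilated convolutions. A minimal PyTorch sketch of such a context block; the channel widths, dilation rates, and class name are assumptions for illustration, not the CSSD architecture itself:

```python
import torch
import torch.nn as nn

class DilatedContext(nn.Module):
    """Parallel dilated 3x3 convolutions whose outputs are concatenated
    with the input feature map to widen the receptive field."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])

    def forward(self, x):
        ctx = [torch.relu(branch(x)) for branch in self.branches]
        return torch.cat([x] + ctx, dim=1)   # input plus multi-scale context

# Hypothetical usage on an SSD-like feature map
x = torch.randn(1, 64, 38, 38)
print(DilatedContext(64)(x).shape)   # torch.Size([1, 256, 38, 38])
```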
{
"docid": "3de480136e0fd3e122e63870bc49ebdb",
"text": "22FDX™ is the industry's first FDSOI technology architected to meet the requirements of emerging mobile, Internet-of-Things (IoT), and RF applications. This platform achieves the power and performance efficiency of a 16/14nm FinFET technology in a cost effective, planar device architecture that can be implemented with ∼30% fewer masks. Performance comes from a second generation FDSOI transistor, which produces nFET (pFET) drive currents of 910μΑ/μm (856μΑ/μm) at 0.8 V and 100nA/μm Ioff. For ultra-low power applications, it offers low-voltage operation down to 0.4V V<inf>min</inf> for 8T logic libraries, as well as 0.62V and 0.52V V<inf>min</inf> for high-density and high-current bitcells, ultra-low leakage devices approaching 1pA/μm I<inf>off</inf>, and body-biasing to actively trade-off power and performance. Superior RF/Analog characteristics to FinFET are achieved including high f<inf>T</inf>/f<inf>MAx</inf> of 375GHz/290GHz and 260GHz/250GHz for nFET and pFET, respectively. The high f<inf>MAx</inf> extends the capabilities to 5G and milli-meter wave (>24GHz) RF applications.",
"title": ""
}
] |
scidocsrr
|
d0c05a044c6125d249b7c4de875fe40c
|
Energy efficient IoT-based smart home
|
[
{
"docid": "a8dbb16b9a0de0dcae7780ffe4c0b7cf",
"text": "Increased demands on implementation of wireless sensor networks in automation praxis result in relatively new wireless standard – ZigBee. The new workplace was established on the Department of Electronics and Multimedia Communications (DEMC) in order to keep up with ZigBee modern trend. This paper presents the first results and experiences associated with ZigBee based wireless sensor networking. The accent was put on suitable chipset platform selection for Home Automation wireless network purposes. Four popular microcontrollers was selected to investigate memory requirements and power consumption such as ARM, x51, HCS08, and Coldfire. Next objective was to test interoperability between various manufacturers’ platforms, what is important feature of ZigBee standard. A simple network based on ZigBee physical layer as well as ZigBee compliant network were made to confirm the basic ZigBee interoperability.",
"title": ""
},
{
"docid": "72ac5e1ec4cfdcd2e7b0591adce56091",
"text": "Th is paper presents a low cost and flexib le home control and monitoring system using an embedded micro -web server, with IP connectivity for accessing and controlling devices and appliances remotely using Android based Smart phone app. The proposed system does not require a dedicated server PC with respect to similar systems and offers a novel communicat ion protocol to monitor and control the home environment with more than just the switching functionality. To demonstrate the feasibility and effectiveness of this system, devices such as light switches, power p lug, temperature sensor and current sensor have been integrated with the proposed home control system.",
"title": ""
}
] |
[
{
"docid": "7575e468e2ee37c9120efb5e73e4308a",
"text": "In this demo, we present Cleanix, a prototype system for cleaning relational Big Data. Cleanix takes data integrated from multiple data sources and cleans them on a shared-nothing machine cluster. The backend system is built on-top-of an extensible and flexible data-parallel substrate - the Hyracks framework. Cleanix supports various data cleaning tasks such as abnormal value detection and correction, incomplete data filling, de-duplication, and conflict resolution. We demonstrate that Cleanix is a practical tool that supports effective and efficient data cleaning at the large scale.",
"title": ""
},
{
"docid": "833c110e040311909aa38b05e457b2af",
"text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.",
"title": ""
},
{
"docid": "ecd4dd9d8807df6c8194f7b4c7897572",
"text": "Nitric oxide (NO) mediates activation of satellite precursor cells to enter the cell cycle. This provides new precursor cells for skeletal muscle growth and muscle repair from injury or disease. Targeting a new drug that specifically delivers NO to muscle has the potential to promote normal function and treat neuromuscular disease, and would also help to avoid side effects of NO from other treatment modalities. In this research, we examined the effectiveness of the NO donor, iosorbide dinitrate (ISDN), and a muscle relaxant, methocarbamol, in promoting satellite cell activation assayed by muscle cell DNA synthesis in normal adult mice. The work led to the development of guaifenesin dinitrate (GDN) as a new NO donor for delivering nitric oxide to muscle. The results revealed that there was a strong increase in muscle satellite cell activation and proliferation, demonstrated by a significant 38% rise in DNA synthesis after a single transdermal treatment with the new compound for 24 h. Western blot and immunohistochemistry analyses showed that the markers of satellite cell myogenesis, expression of myf5, myogenin, and follistatin, were increased after 24 h oral administration of the compound in adult mice. This research extends our understanding of the outcomes of NO-based treatments aimed at promoting muscle regeneration in normal tissue. The potential use of such treatment for conditions such as muscle atrophy in disuse and aging, and for the promotion of muscle tissue repair as required after injury or in neuromuscular diseases such as muscular dystrophy, is highlighted.",
"title": ""
},
{
"docid": "339de1d21bfce2e9a8848d6fbc2792d4",
"text": "The extraction of local tempo and beat information from audio recordings constitutes a challenging task, particularly for music that reveals significant tempo variations. Furthermore, the existence of various pulse levels such as measure, tactus, and tatum often makes the determination of absolute tempo problematic. In this paper, we present a robust mid-level representation that encodes local tempo information. Similar to the well-known concept of cyclic chroma features, where pitches differing by octaves are identified, we introduce the concept of cyclic tempograms, where tempi differing by a power of two are identified. Furthermore, we describe how to derive cyclic tempograms from music signals using two different methods for periodicity analysis and finally sketch some applications to tempo-based audio segmentation.",
"title": ""
},
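The passage above identifies tempi that differ by a power of two, in analogy to chroma features for pitch. A small sketch of that folding step: map each tempo to its position within a tempo octave via log2, then accumulate salience into cyclic bins; the bin count, reference tempo, and toy tempogram values are assumptions for illustration:

```python
import numpy as np

def cyclic_tempogram(tempi_bpm, salience, n_bins=15, ref_bpm=60.0):
    """Fold a tempogram's tempo axis so that tempi differing by a power
    of two (e.g. 60, 120, 240 BPM) land in the same cyclic bin.

    tempi_bpm: (T,) tempo axis in BPM
    salience:  (T, N) tempogram values (tempo x time frames)
    """
    # Position of each tempo inside one "tempo octave", in [0, 1).
    octave_pos = np.mod(np.log2(np.asarray(tempi_bpm) / ref_bpm), 1.0)
    bins = np.floor(octave_pos * n_bins).astype(int)
    cyclic = np.zeros((n_bins, salience.shape[1]))
    for b, row in zip(bins, salience):
        cyclic[b] += row               # tempi an octave apart are identified
    return cyclic

# Hypothetical usage: 60 and 120 BPM fall into the same cyclic bin
tempi = np.array([60.0, 90.0, 120.0])
sal = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(cyclic_tempogram(tempi, sal, n_bins=12))
```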
{
"docid": "fdbad1d98044bf6494bfd211e6116db8",
"text": "This work addresses the problem of underwater archaeological surveys from the point of view of knowledge. We propose an approach based on underwater photogrammetry guided by a representation of knowledge used, as structured by ontologies. Survey data feed into to ontologies and photogrammetry in order to produce graphical results. This paper focuses on the use of ontologies during the exploitation of 3D results. JAVA software dedicated to photogram‐ metry and archaeological survey has been mapped onto an OWL formalism. The use of procedural attachment in a dual representation (JAVA OWL) of the involved concepts allows us to access computational facilities directly from OWL. As SWRL The use of rules illustrates very well such ‘double formalism’ as well as the use of computational capabilities of ‘rules logical expression’. We present an application that is able to read the ontology populated with a photo‐ grammetric survey data. Once the ontology is read, it is possible to produce a 3D representation of the individuals and observing graphically the results of logical spatial queries on the ontology. This work is done on a very important underwater archaeological site in Malta named Xlendi, probably the most ancient shipwreck of the central Mediterranean Sea.",
"title": ""
},
{
"docid": "912c213d76bed8d90f636ea5a6220cf1",
"text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information are often low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system. Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.",
"title": ""
},
{
"docid": "81ddc594cb4b7f3ed05908ce779aa4f4",
"text": "Since the length of microblog texts, such as tweets, is strictly limited to 140 characters, traditional Information Retrieval techniques suffer from the vocabulary mismatch problem severely and cannot yield good performance in the context of microblogosphere. To address this critical challenge, in this paper, we propose a new language modeling approach for microblog retrieval by inferring various types of context information. In particular, we expand the query using knowledge terms derived from Freebase so that the expanded one can better reflect users’ search intent. Besides, in order to further satisfy users’ real-time information need, we incorporate temporal evidences into the expansion method, which can boost recent tweets in the retrieval results with respect to a given topic. Experimental results on two official TREC Twitter corpora demonstrate the significant superiority of our approach over baseline methods.",
"title": ""
},
{
"docid": "8abbd5e2ab4f419a4ca05277a8b1b6a5",
"text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. Excellent overall error vector magnitude performance has been obtained.",
"title": ""
},
{
"docid": "c2816721fa6ccb0d676f7fdce3b880d4",
"text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.",
"title": ""
},
{
"docid": "bcf27c4f750ab74031b8638a9b38fd87",
"text": "δ opioid receptor (DOR) was the first opioid receptor of the G protein‑coupled receptor family to be cloned. Our previous studies demonstrated that DOR is involved in regulating the development and progression of human hepatocellular carcinoma (HCC), and is involved in the regulation of the processes of invasion and metastasis of HCC cells. However, whether DOR is involved in the development and progression of drug resistance in HCC has not been reported and requires further elucidation. The aim of the present study was to investigate the expression levels of DOR in the drug‑resistant HCC BEL‑7402/5‑fluorouracil (BEL/FU) cell line, and its effects on drug resistance, in order to preliminarily elucidate the effects of DOR in HCC drug resistance. The results of the present study demonstrated that DOR was expressed at high levels in the BEL/FU cells, and the expression levels were higher, compared with those in normal liver cells. When the expression of DOR was silenced, the proliferation of the drug‑resistant HCC cells were unaffected. However, when the cells were co‑treated with a therapeutic dose of 5‑FU, the proliferation rate of the BEL/FU cells was significantly inhibited, a large number of cells underwent apoptosis, cell cycle progression was arrested and changes in the expression levels of drug‑resistant proteins were observed. Overall, the expression of DOR was upregulated in the drug‑resistant HCC cells, and its functional status was closely associated with drug resistance in HCC. Therefore, DOR may become a recognized target molecule with important roles in the clinical treatment of drug‑resistant HCC.",
"title": ""
},
{
"docid": "f1a36f7fd6b3cf42415c483f6ade768e",
"text": "The current paradigm of genomic studies of complex diseases is association and correlation analysis. Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the identified genetic variants by GWAS can only explain a small proportion of the heritability of complex diseases. A large fraction of genetic variants is still hidden. Association analysis has limited power to unravel mechanisms of complex diseases. It is time to shift the paradigm of genomic analysis from association analysis to causal inference. Causal inference is an essential component for the discovery of mechanism of diseases. This paper will review the major platforms of the genomic analysis in the past and discuss the perspectives of causal inference as a general framework of genomic analysis. In genomic data analysis, we usually consider four types of associations: association of discrete variables (DNA variation) with continuous variables (phenotypes and gene expressions), association of continuous variables (expressions, methylations, and imaging signals) with continuous variables (gene expressions, imaging signals, phenotypes, and physiological traits), association of discrete variables (DNA variation) with binary trait (disease status) and association of continuous variables (gene expressions, methylations, phenotypes, and imaging signals) with binary trait (disease status). In this paper, we will review algorithmic information theory as a general framework for causal discovery and the recent development of statistical methods for causal inference on discrete data, and discuss the possibility of extending the association analysis of discrete variable with disease to the causal analysis for discrete variable and disease.",
"title": ""
},
{
"docid": "b374975ae9690f96ed750a888713dbc9",
"text": "We present a method for densely computing local spherical histograms of oriented gradients (SHOG) in volumetric images. The descriptors are based on the continuous representation of the orientation histograms in the harmonic domain, which we compute very efficiently via spherical tensor products and the fast Fourier transformation. Building upon these local spherical histogram representations, we utilize the Harmonic Filter to create a generic rotation invariant object detection system that benefits from both the highly discriminative representation of local image patches in terms of histograms of oriented gradients and an adaptable trainable voting scheme that forms the filter. We exemplarily demonstrate the effectiveness of such dense spherical 3D descriptors in a detection task on biological 3D images. In a direct comparison to existing approaches, our new filter reveals superior performance.",
"title": ""
},
{
"docid": "5df529aca774edb0eb5ac93c9a0ce3b7",
"text": "The GRASP (Graphical Representations of Algorithms, Structures, and Processes) project, which has successfully prototyped a new algorithmic-level graphical representation for software—the control structure diagram (CSD)—is currently focused on the generation of a new fine-grained complexity metric called the complexity profile graph (CPG). The primary impetus for creation and refinement of the CSD and the CPG is to improve the comprehension efficiency of software and, as a result, improve reliability and reduce costs. The current GRASP release provides automatic CSD generation for Ada 95, C, C++, Java, and Very High-Speed Integrated Circuit Hardware Description Language (VHDL) source code, and CPG generation for Ada 95 source code. The examples and discussion in this article are based on using GRASP with Ada 95.",
"title": ""
},
{
"docid": "ef771fa11d9f597f94cee5e64fcf9fd6",
"text": "The principle of artificial curiosity directs active exploration towards the most informative or most interesting data. We show its usefulness for global black box optimization when data point evaluations are expensive. Gaussian process regression is used to model the fitness function based on all available observations so far. For each candidate point this model estimates expected fitness reduction, and yields a novel closed-form expression of expected information gain. A new type of Pareto-front algorithm continually pushes the boundary of candidates not dominated by any other known data according to both criteria, using multi-objective evolutionary search. This makes the exploration-exploitation trade-off explicit, and permits maximally informed data selection. We illustrate the robustness of our approach in a number of experimental scenarios.",
"title": ""
},
{
"docid": "7df626465d52dfe5859e682c685c62bc",
"text": "This thesis addresses the task of error detection in the choice of content words focusing on adjective–noun and verb–object combinations. We show that error detection in content words is an under-explored area in research on learner language since (i) most previous approaches to error detection and correction have focused on other error types, and (ii) the approaches that have previously addressed errors in content words have not performed error detection proper. We show why this task is challenging for the existing algorithms and propose a novel approach to error detection in content words. We note that since content words express meaning, an error detection algorithm should take the semantic properties of the words into account. We use a compositional distribu-tional semantic framework in which we represent content words using their distributions in native English, while the meaning of the combinations is represented using models of com-positional semantics. We present a number of measures that describe different properties of the modelled representations and can reliably distinguish between the representations of the correct and incorrect content word combinations. Finally, we cast the task of error detection as a binary classification problem and implement a machine learning classifier that uses the output of the semantic measures as features. The results of our experiments confirm that an error detection algorithm that uses semantically motivated features achieves good accuracy and precision and outperforms the state-of-the-art approaches. We conclude that the features derived from the semantic representations encode important properties of the combinations that help distinguish the correct combinations from the incorrect ones. The approach presented in this work can naturally be extended to other types of content word combinations. Future research should also investigate how the error correction component for content word combinations could be implemented. 3 4 Acknowledgements First and foremost, I would like to express my profound gratitude to my supervisor, Ted Briscoe, for his constant support and encouragement throughout the course of my research. This work would not have been possible without his invaluable guidance and advice. I am immensely grateful to my examiners, Ann Copestake and Stephen Pulman, for providing their advice and constructive feedback on the final version of the dissertation. I am also thankful to my colleagues at the Natural Language and Information Processing research group for the insightful and inspiring discussions over these years. In particular, I would like to express my gratitude to would like to thank …",
"title": ""
},
{
"docid": "becd45d50ead03dd5af399d5618f1ea3",
"text": "This paper presents a new paradigm of cryptography, quantum public-key cryptosystems. In quantum public-key cryptosystems, all parties including senders, receivers and adversaries are modeled as quantum (probabilistic) poly-time Turing (QPT) machines and only classical channels (i.e., no quantum channels) are employed. A quantum trapdoor one-way function, f , plays an essential role in our system, in which a QPT machine can compute f with high probability, any QPT machine can invert f with negligible probability, and a QPT machine with trapdoor data can invert f . This paper proposes a concrete scheme for quantum public-key cryptosystems: a quantum public-key encryption scheme or quantum trapdoor one-way function. The security of our schemes is based on the computational assumption (over QPT machines) that a class of subset-sum problems is intractable against any QPT machine. Our scheme is very efficient and practical if Shor’s discrete logarithm algorithm is efficiently realized on a quantum machine.",
"title": ""
},
{
"docid": "b19ba18dbce648ca584d5c41b406d1be",
"text": "Communication experiments using normal lab setup, which includes more hardware and less software raises the cost of the total system. The method proposed here provides a new approach through which all the analog and digital experiments can be performed using a single hardware-USRP (Universal Software Radio Peripheral) and software-GNU Radio Companion (GRC). Initially, networking setup is formulated using SDR technology. Later on, one of the analog communication experiments is demonstrated in real time using the GNU Radio Companion, RTL-SDR and USRP. The entire communication system is less expensive as the system uses a single reprogrammable hardware and most of the focus of the system deals with the software part.",
"title": ""
},
{
"docid": "41de353ad7e48d5f354893c6045394e2",
"text": "This paper proposes a long short-term memory recurrent neural network (LSTM-RNN) for extracting melody and simultaneously detecting regions of melody from polyphonic audio using the proposed harmonic sum loss. The previous state-of-the-art algorithms have not been based on machine learning techniques and certainly not on deep architectures. The harmonics structure in melody is incorporated in the loss function to attain robustness against both octave mismatch and interference from background music. Experimental results show that the performance of the proposed method is better than or comparable to other state-of-the-art algorithms.",
"title": ""
},
{
"docid": "4b886b3ee8774a1e3110c12bdbdcbcdf",
"text": "To engage in cooperative activities with human partners, robots have to possess basic interactive abilities and skills. However, programming such interactive skills is a challenging task, as each interaction partner can have different timing or an alternative way of executing movements. In this paper, we propose to learn interaction skills by observing how two humans engage in a similar task. To this end, we introduce a new representation called Interaction Primitives. Interaction primitives build on the framework of dynamic motor primitives (DMPs) by maintaining a distribution over the parameters of the DMP. With this distribution, we can learn the inherent correlations of cooperative activities which allow us to infer the behavior of the partner and to participate in the cooperation. We will provide algorithms for synchronizing and adapting the behavior of humans and robots during joint physical activities.",
"title": ""
}
] |
scidocsrr
|
a7bbea069feaed269fc9caf24cc3c6a0
|
Architectural support for SWAR text processing with parallel bit streams: the inductive doubling principle
|
[
{
"docid": "8fde46517d705da12fb43ce110a27a0f",
"text": "Parabix (parallel bit streams for XML) is an open-source XML parser that employs the SIMD (single-instruction multiple-data) capabilities of modern-day commodity processors to deliver dramatic performance improvements over traditional byte-at-a-time parsing technology. Byte-oriented character data is first transformed to a set of 8 parallel bit streams, each stream comprising one bit per character code unit. Character validation, transcoding and lexical item stream formation are all then carried out in parallel using bitwise logic and shifting operations. Byte-at-a-time scanning loops in the parser are replaced by bit scan loops that can advance by as many as 64 positions with a single instruction.\n A performance study comparing Parabix with the open-source Expat and Xerces parsers is carried out using the PAPI toolkit. Total CPU cycle counts, level 2 data cache misses and branch mispredictions are measured and compared for each parser. The performance of Parabix is further studied with a breakdown of the cycle counts across the core components of the parser. Prospects for further performance improvements are also outlined, with a particular emphasis on leveraging the intraregister parallelism of SIMD processing to enable intrachip parallelism on multicore architectures.",
"title": ""
}
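The passage above transposes byte-oriented text into eight parallel bit streams and then scans lexical items with bitwise logic instead of byte-at-a-time loops. A toy illustration using Python's arbitrary-precision integers as a stand-in for SIMD registers; the '<'-marking example is an assumption chosen for brevity, not Parabix's actual lexical item streams:

```python
def to_bit_streams(data: bytes):
    """Return 8 integers; bit i of stream k is bit k (MSB first) of byte i."""
    streams = [0] * 8
    for i, byte in enumerate(data):
        for k in range(8):
            if byte & (0x80 >> k):
                streams[k] |= 1 << i
    return streams

def match_byte(streams, value, length):
    """Bitwise equation marking every position whose byte equals `value`."""
    result = (1 << length) - 1                 # start with all positions marked
    for k in range(8):
        bit = (value >> (7 - k)) & 1
        s = streams[k] if bit else ~streams[k]
        result &= s
    return result & ((1 << length) - 1)

text = b"<a><b/></a>"
streams = to_bit_streams(text)
opens = match_byte(streams, ord("<"), len(text))
# Positions of '<' recovered from the bit-parallel marker stream:
print([i for i in range(len(text)) if (opens >> i) & 1])   # [0, 3, 7]
```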
] |
[
{
"docid": "bf1ba6901d6c64a341ba1491c6c2c3c9",
"text": "The present research proposes schema congruity as a theoretical basis for examining the effectiveness and consequences of product anthropomorphism. Results of two studies suggest that the ability of consumers to anthropomorphize a product and their consequent evaluation of that product depend on the extent to which that product is endowed with characteristics congruent with the proposed human schema. Furthermore, consumers’ perception of the product as human mediates the influence of feature type on product evaluation. Results of a third study, however, show that the affective tag attached to the specific human schema moderates the evaluation but not the successful anthropomorphizing of theproduct.",
"title": ""
},
{
"docid": "7b99f2b0c903797c5ed33496f69481fc",
"text": "Dance imagery is a consciously created mental representation of an experience, either real or imaginary, that may affect the dancer and her or his movement. In this study, imagery research in dance was reviewed in order to: 1. describe the themes and ideas that the current literature has attempted to illuminate and 2. discover the extent to which this literature fits the Revised Applied Model of Deliberate Imagery Use. A systematic search was performed, and 43 articles from 24 journals were found to fit the inclusion criteria. The articles were reviewed, analyzed, and categorized. The findings from the articles were then reported using the Revised Applied Model as a framework. Detailed descriptions of Who, What, When and Where, Why, How, and Imagery Ability were provided, along with comparisons to the field of sports imagery. Limitations within the field, such as the use of non-dance-specific and study-specific measurements, make comparisons and clear conclusions difficult to formulate. Future research can address these problems through the creation of dance-specific measurements, higher participant rates, and consistent methodologies between studies.",
"title": ""
},
{
"docid": "7f3686b783273c4df7c4fb41fe7ccefd",
"text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "345e46da9fc01a100f10165e82d9ca65",
"text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.",
"title": ""
},
{
"docid": "a4f0b524f79db389c72abd27d36f8944",
"text": "In order to summarize the status of rescue robotics, this chapter will cover the basic characteristics of disasters and their impact on robotic design, describe the robots actually used in disasters to date, promising robot designs (e.g., snakes, legged locomotion) and concepts (e.g., robot teams or swarms, sensor networks), methods of evaluation in benchmarks for rescue robotics, and conclude with a discussion of the fundamental problems and open issues facing rescue robotics, and their evolution from an interesting idea to widespread adoption. The Chapter will concentrate on the rescue phase, not recovery, with the understanding that capabilities for rescue can be applied to, and extended for, the recovery phase. The use of robots in the prevention and preparedness phases of disaster management are outside the scope of this chapter.",
"title": ""
},
{
"docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04",
"text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.",
"title": ""
},
{
"docid": "518dc6882c6e13352c7b41f23dfd2fad",
"text": "The Diagnostic and Statistical Manual of Mental Disorders (DSM) is considered to be the gold standard manual for assessing the psychiatric diseases and is currently in its fourth version (DSM-IV), while a fifth (DSM-V) has just been released in May 2013. The DSM-V Anxiety Work Group has put forward recommendations to modify the criteria for diagnosing specific phobias. In this manuscript, we propose to consider the inclusion of nomophobia in the DSM-V, and we make a comprehensive overview of the existing literature, discussing the clinical relevance of this pathology, its epidemiological features, the available psychometric scales, and the proposed treatment. Even though nomophobia has not been included in the DSM-V, much more attention is paid to the psychopathological effects of the new media, and the interest in this topic will increase in the near future, together with the attention and caution not to hypercodify as pathological normal behaviors.",
"title": ""
},
{
"docid": "917ab22adee174259bef5171fe6f14fb",
"text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.",
"title": ""
},
{
"docid": "43f5d21de3421564a7d5ecd6c074ea0a",
"text": "Epithelial-mesenchymal transition (EMT) is an important process in embryonic development, fibrosis, and cancer metastasis. During cancer progression, the activation of EMT permits cancer cells to acquire migratory, invasive, and stem-like properties. A growing body of evidence supports the critical link between EMT and cancer stemness. However, contradictory results have indicated that the inhibition of EMT also promotes cancer stemness, and that mesenchymal-epithelial transition, the reverse process of EMT, is associated with the tumor-initiating ability required for metastatic colonization. The concept of 'intermediate-state EMT' provides a possible explanation for this conflicting evidence. In addition, recent studies have indicated that the appearance of 'hybrid' epithelial-mesenchymal cells is favorable for the establishment of metastasis. In summary, dynamic changes or plasticity between the epithelial and the mesenchymal states rather than a fixed phenotype is more likely to occur in tumors in the clinical setting. Further studies aimed at validating and consolidating the concept of intermediate-state EMT and hybrid tumors are needed for the establishment of a comprehensive profile of cancer metastasis.",
"title": ""
},
{
"docid": "a1f29ac1db0745a61baf6995459c02e7",
"text": "Adolescence is a developmental period characterized by suboptimal decisions and actions that give rise to an increased incidence of unintentional injuries and violence, alcohol and drug abuse, unintended pregnancy and sexually transmitted diseases. Traditional neurobiological and cognitive explanations for adolescent behavior have failed to account for the nonlinear changes in behavior observed during adolescence, relative to childhood and adulthood. This review provides a biologically plausible conceptualization of the neural mechanisms underlying these nonlinear changes in behavior, as a heightened responsiveness to incentives while impulse control is still relatively immature during this period. Recent human imaging and animal studies provide a biological basis for this view, suggesting differential development of limbic reward systems relative to top-down control systems during adolescence relative to childhood and adulthood. This developmental pattern may be exacerbated in those adolescents with a predisposition toward risk-taking, increasing the risk for poor outcomes.",
"title": ""
},
{
"docid": "f437862098dac160f3a3578baeb565a2",
"text": "Techniques for modeling and simulating channel conditions play an essential role in understanding network protocol and application behavior. In [11], we demonstrated that inaccurate modeling using a traditional analytical model yielded significant errors in error control protocol parameters choices. In this paper, we demonstrate that time-varying effects on wireless channels result in wireless traces which exhibit non-stationary behavior over small window sizes. We then present an algorithm that divides traces into stationary components in order to provide analytical channel models that, relative to traditional approaches, more accurately represent characteristics such as burstiness, statistical distribution of errors, and packet loss processes. Our algorithm also generates artificial traces with the same statistical characteristics as actual collected network traces. For validation, we develop a channel model for the circuit-switched data service in GSM and show that it: (1) more closely approximates GSM channel characteristics than a traditional Gilbert model and (2) generates artificial traces that closely match collected traces' statistics. Using these traces in a simulator environment enables future protocol and application testing under different controlled and repeatable conditions.",
"title": ""
},
{
"docid": "c3c36535a6dbe74165c0e8b798ac820f",
"text": "Multiplier, being a very vital part in the design of microprocessor, graphical systems, multimedia systems, DSP system etc. It is very important to have an efficient design in terms of performance, area, speed of the multiplier, and for the same Booth's multiplication algorithm provides a very fundamental platform for all the new advances made for high end multipliers meant for faster multiplication with higher performance. The algorithm provides an efficient encoding of the bits during the first steps of the multiplication process. In pursuit of the same, Radix 4 booths encoding has increased the performance of the multiplier by reducing the number of partial products generated. Radix 4 Booths algorithm produces both positive and negative partial products and implementing the negative partial product nullifies the advances made in different units to some extent if not fully. Most of the research work focuses on the reduction of the number of partial products generated and making efficient implementation of the algorithm. There is very little work done on disposal of the negative partial products generated. The presented work in the paper addresses the issue of disposal of the negative partial products efficiently by computing the 2's complement avoiding the additional adder for adding 1 and generation of long carry chain, hence. The proposed mechanism also continues to support the concept of reducing the partial product and in persuasion of the same it is able to reduce the number of partial product and also improved further from n/2 +1 partial products achieved via modified booths algorithm to n/2. Also, while implementing the proposed mechanism using Verilog HDL, a mode selection capability is provided, enabling the same hardware to act as multiplier and as a simple two's complement calculator using the proposed mechanism. The proposed technique has added advantage in terms of its independentness of the number of bits to be multiplied. It is tested and verified with varied test vectors of different number bit sets. Xilinx synthesis tool is used for synthesis and the multiplier mechanism has a maximum operating frequency of 14.59 MHz and a delay of 7.013 ns.",
"title": ""
},
{
"docid": "db907780a2022761d2595a8ad5d03401",
"text": "This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.",
"title": ""
},
{
"docid": "8fc560987781afbb25f47eb560176e2c",
"text": "Liposomes are microparticulate lipoidal vesicles which are under extensive investigation as drug carriers for improving the delivery of therapeutic agents. Due to new developments in liposome technology, several liposomebased drug formulations are currently in clinical trial, and recently some of them have been approved for clinical use. Reformulation of drugs in liposomes has provided an opportunity to enhance the therapeutic indices of various agents mainly through alteration in their biodistribution. This review discusses the potential applications of liposomes in drug delivery with examples of formulations approved for clinical use, and the problems associated with further exploitation of this drug delivery system. © 1997 Elsevier Science B.V.",
"title": ""
},
{
"docid": "c1aa687c4a48cfbe037fe87ed4062dab",
"text": "This paper deals with the modelling and control of a single sided linear switched reluctance actuator. This study provide a presentation of modelling and proposes a study on open and closed loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on PID regulator is employed to upgrade the dynamic behavior of the motor. The simulation results in closed loop show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for sliding door application.",
"title": ""
},
{
"docid": "be82da372c061ef3029273bfc91a9e0a",
"text": "Search and rescue missions and surveillance require finding targets in a large area. These tasks often use unmanned aerial vehicles (UAVs) with cameras to detect and move towards a target. However, common UAV approaches make two simplifying assumptions. First, they assume that observations made from different heights are deterministically correct. In practice, observations are noisy, with the noise increasing as the height used for observations increases. Second, they assume that a motion command executes correctly, which may not happen due to wind and other environmental factors. To address these, we propose a sequential algorithm that determines actions in real time based on observations, using partially observable Markov decision processes (POMDPs). Our formulation handles both observations and motion uncertainty and errors. We run offline simulations and learn a policy. This policy is run on a UAV to find the target efficiently. We employ a novel compact formulation to represent the coordinates of the drone relative to the target coordinates. Our POMDP policy finds the target up to 3.4 times faster when compared to a heuristic policy.",
"title": ""
},
{
"docid": "4239f9110973888c7eded81037c056b3",
"text": "The role of epistasis in the genetic architecture of quantitative traits is controversial, despite the biological plausibility that nonlinear molecular interactions underpin the genotype–phenotype map. This controversy arises because most genetic variation for quantitative traits is additive. However, additive variance is consistent with pervasive epistasis. In this Review, I discuss experimental designs to detect the contribution of epistasis to quantitative trait phenotypes in model organisms. These studies indicate that epistasis is common, and that additivity can be an emergent property of underlying genetic interaction networks. Epistasis causes hidden quantitative genetic variation in natural populations and could be responsible for the small additive effects, missing heritability and the lack of replication that are typically observed for human complex traits.",
"title": ""
},
{
"docid": "5565f51ad8e1aaee43f44917befad58a",
"text": "We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark. Our best residual network (ResNet) implementation significantly outperforms Google's previous convolutional neural networks in terms of accuracy. By varying model depth and width, we can achieve compact models that also outperform previous small-footprint variants. To our knowledge, we are the first to examine these approaches for keyword spotting, and our results establish an open-source state-of-the-art reference to support the development of future speech-based interfaces.",
"title": ""
},
{
"docid": "1b47dffdff3825ad44a0430311e2420b",
"text": "The present paper describes the SSM algorithm of protein structure comparison in three dimensions, which includes an original procedure of matching graphs built on the protein's secondary-structure elements, followed by an iterative three-dimensional alignment of protein backbone Calpha atoms. The SSM results are compared with those obtained from other protein comparison servers, and the advantages and disadvantages of different scores that are used for structure recognition are discussed. A new score, balancing the r.m.s.d. and alignment length Nalign, is proposed. It is found that different servers agree reasonably well on the new score, while showing considerable differences in r.m.s.d. and Nalign.",
"title": ""
},
{
"docid": "9c16f3ccaab4e668578e3eda7d452ebd",
"text": "Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them to produce incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameter and architecture. Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change 89% the human listener’s perception of the audio clip as evaluated in our human study.",
"title": ""
}
] |
scidocsrr
|
a1fec1fe18c288d3580ef83c567b7e69
|
Cross-Dataset Recognition: A Survey
|
[
{
"docid": "65901a189e87983dfd01db0161106a86",
"text": "The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset. We find that it is beneficial to explicitly account for bias when combining multiple datasets. (For more details refer to [3] and http://undoingbias.csail.mit.edu)",
"title": ""
},
{
"docid": "8fe9ab612f31d349e881550d8c99a446",
"text": "This paper investigates a new machine learning strategy cal led translated learning. Unlike many previous learning tasks, we focus on how to use l ab led data from one feature space to enhance the classification of other entirely different learning spaces. For example, we might wish to use labeled te xt ata to help learn a model for classifying image data, when the labeled images a r difficult to obtain. An important aspect of translated learning is to build a “bridge” to link one feature space (known as the “source space”) to another space (known as the “target space”) through a translator in order to migrate the know ledge from source to target. The translated learning solution uses a language mo del t link the class labels to the features in the source spaces, which in turn is t an lated to the features in the target spaces. Finally, this chain of linkages i s completed by tracing back to the instances in the target spaces. We show that this p ath of linkage can be modeled using a Markov chain and risk minimization. Throu gh experiments on the text-aided image classification and cross-language c l ssification tasks, we demonstrate that our translated learning framework can gre atly outperform many state-of-the-art baseline methods.",
"title": ""
}
] |
[
{
"docid": "9bbb8ff8e8d498709ee68c6797b00588",
"text": "Studies often report that bilingual participants possess a smaller vocabulary in the language of testing than monolinguals, especially in research with children. However, each study is based on a small sample so it is difficult to determine whether the vocabulary difference is due to sampling error. We report the results of an analysis of 1,738 children between 3 and 10 years old and demonstrate a consistent difference in receptive vocabulary between the two groups. Two preliminary analyses suggest that this difference does not change with different language pairs and is largely confined to words relevant to a home context rather than a school context.",
"title": ""
},
{
"docid": "c824b5274ce6afb54c58fae2dd68ff8f",
"text": "User modeling plays an important role in delivering customized web services to the users and improving their engagement. However, most user models in the literature do not explicitly consider the temporal behavior of users. More recently, continuous-time user modeling has gained considerable attention and many user behavior models have been proposed based on temporal point processes. However, typical point process-based models often considered the impact of peer influence and content on the user participation and neglected other factors. Gamification elements are among those factors that are neglected, while they have a strong impact on user participation in online services. In this article, we propose interdependent multi-dimensional temporal point processes that capture the impact of badges on user participation besides the peer influence and content factors. We extend the proposed processes to model user actions over the community-based question and answering websites, and propose an inference algorithm based on Variational-Expectation Maximization that can efficiently learn the model parameters. Extensive experiments on both synthetic and real data gathered from Stack Overflow show that our inference algorithm learns the parameters efficiently and the proposed method can better predict the user behavior compared to the alternatives.",
"title": ""
},
{
"docid": "379bc1336026fab6225e39b6c69d55a0",
"text": "We show that a recurrent neural network is able to learn a model to represent sequences of communications between computers on a network and can be used to identify outlier network traffic. Defending computer networks is a challenging problem and is typically addressed by manually identifying known malicious actor behavior and then specifying rules to recognize such behavior in network communications. However, these rule-based approaches often generalize poorly and identify only those patterns that are already known to researchers. An alternative approach that does not rely on known malicious behavior patterns can potentially also detect previously unseen patterns. We tokenize and compress netflow into sequences of “words” that form “sentences” representative of a conversation between computers. These sentences are then used to generate a model that learns the semantic and syntactic grammar of the newly generated language. We use Long-Short-Term Memory (LSTM) cell Recurrent Neural Networks (RNN) to capture the complex relationships and nuances of this language. The language model is then used predict the communications between two IPs and the prediction error is used as a measurement of how typical or atyptical the observed communication are. By learning a model that is specific to each network, yet generalized to typical computer-to-computer traffic within and outside the network, a language model is able to identify sequences of network activity that are outliers with respect to the model. We demonstrate positive unsupervised attack identification performance (AUC 0.84) on the ISCX IDS dataset which contains seven days of network activity with normal traffic and four distinct attack patterns.",
"title": ""
},
{
"docid": "d0d5d9e1eabc1b282c1db08d8da38214",
"text": "Climate change is altering the availability of resources and the conditions that are crucial to plant performance. One way plants will respond to these changes is through environmentally induced shifts in phenotype (phenotypic plasticity). Understanding plastic responses is crucial for predicting and managing the effects of climate change on native species as well as crop plants. Here, we provide a toolbox with definitions of key theoretical elements and a synthesis of the current understanding of the molecular and genetic mechanisms underlying plasticity relevant to climate change. By bringing ecological, evolutionary, physiological and molecular perspectives together, we hope to provide clear directives for future research and stimulate cross-disciplinary dialogue on the relevance of phenotypic plasticity under climate change.",
"title": ""
},
{
"docid": "1d2f72587e694aa8d6435e176e87d4cb",
"text": "It is well known that the performance of context-based image processing systems can be improved by allowing the processor (e.g., an encoder or a denoiser) a delay of several samples before making a processing decision. Often, however, for such systems, traditional delayed-decision algorithms can become computationally prohibitive due to the growth in the size of the space of possible solutions. In this paper, we propose a reduced-complexity, one-pass, delayed-decision algorithm that systematically reduces the size of the search space, while also preserving its structure. In particular, we apply the proposed algorithm to two examples of adaptive context-based image processing systems, an image coding system that employs a context-based entropy coder, and a spatially adaptive image-denoising system. For these two types of widely used systems, we show that the proposed delayed-decision search algorithm outperforms instantaneous-decision algorithms with only a small increase in complexity. We also show that the performance of the proposed algorithm is better than that of other, higher complexity, delayed-decision algorithms.",
"title": ""
},
{
"docid": "6b19893324e4012a622c0250436e1ab3",
"text": "Nowadays, email is one of the fastest ways to conduct communications through sending out information and attachments from one to another. Individuals and organizations are all benefit the convenience from email usage, but at the same time they may also suffer the unexpected user experience of receiving spam email all the time. Spammers flood the email servers and send out mass quantity of unsolicited email to the end users. From a business perspective, email users have to spend time on deleting received spam email which definitely leads to the productivity decrease and cause potential loss for organizations. Thus, how to detect the email spam effectively and efficiently with high accuracy becomes a significant study. In this study, data mining will be utilized to process machine learning by using different classifiers for training and testing and filters for data preprocessing and feature selection. It aims to seek out the optimal hybrid model with higher accuracy or base on other metric’s evaluation. The experiment results show accuracy improvement in email spam detection by using hybrid techniques compared to the single classifiers used in this research. The optimal hybrid model provides 93.00% of accuracy and 7.80% false positive rate for email spam detection.",
"title": ""
},
{
"docid": "5fa2dfc9cbf6568d5282601781e14b58",
"text": "Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős–Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible. Artificial neural networks are artificial intelligence computing methods which are inspired by biological neural networks. Here the authors propose a method to design neural networks as sparse scale-free networks, which leads to a reduction in computational time required for training and inference.",
"title": ""
},
{
"docid": "cd8eeaeb81423fcb1c383f2b60e928df",
"text": "Detecting and representing changes to data is important for active databases, data warehousing, view maintenance, and version and configuration management. Most previous work in change management has dealt with flat-file and relational data; we focus on hierarchically structured data. Since in many cases changes must be computed from old and new versions of the data, we define the hierarchical change detection problem as the problem of finding a \"minimum-cost edit script\" that transforms one data tree to another, and we present efficient algorithms for computing such an edit script. Our algorithms make use of some key domain characteristics to achieve substantially better performance than previous, general-purpose algorithms. We study the performance of our algorithms both analytically and empirically, and we describe the application of our techniques to hierarchically structured documents.",
"title": ""
},
{
"docid": "2f30301143dc626a3013eb24629bfb45",
"text": "A vast array of devices, ranging from industrial robots to self-driven cars or smartphones, require increasingly sophisticated processing of real-world input data (image, voice, radio, ...). Interestingly, hardware neural network accelerators are emerging again as attractive candidate architectures for such tasks. The neural network algorithms considered come from two, largely separate, domains: machine-learning and neuroscience. These neural networks have very different characteristics, so it is unclear which approach should be favored for hardware implementation. Yet, few studies compare them from a hardware perspective. We implement both types of networks down to the layout, and we compare the relative merit of each approach in terms of energy, speed, area cost, accuracy and functionality.\n Within the limit of our study (current SNN and machine-learning NN algorithms, current best effort at hardware implementation efforts, and workloads used in this study), our analysis helps dispel the notion that hardware neural network accelerators inspired from neuroscience, such as SNN+STDP, are currently a competitive alternative to hardware neural networks accelerators inspired from machine-learning, such as MLP+BP: not only in terms of accuracy, but also in terms of hardware cost for realistic implementations, which is less expected. However, we also outline that SNN+STDP carry potential for reduced hardware cost compared to machine-learning networks at very large scales, if accuracy issues can be controlled (or for applications where they are less important). We also identify the key sources of inaccuracy of SNN+STDP which are less related to the loss of information due to spike coding than to the nature of the STDP learning algorithm. Finally, we outline that for the category of applications which require permanent online learning and moderate accuracy, SNN+STDP hardware accelerators could be a very cost-efficient solution.",
"title": ""
},
{
"docid": "c27ba892408391234da524ffab0e7418",
"text": "Sunlight and skylight are rarely rendered correctly in computer graphics. A major reason for this is high computational expense. Another is that precise atmospheric data is rarely available. We present an inexpensive analytic model that approximates full spectrum daylight for various atmospheric conditions. These conditions are parameterized using terms that users can either measure or estimate. We also present an inexpensive analytic model that approximates the effects of atmosphere (aerial perspective). These models are fielded in a number of conditions and intermediate results verified against standard literature from atmospheric science. Our goal is to achieve as much accuracy as possible without sacrificing usability.",
"title": ""
},
{
"docid": "c35fa79bd405ec0fb6689d395929c055",
"text": "This study examines the potential profit of bull flag technical trading rules using a template matching technique based on pattern recognition for the Nasdaq Composite Index (NASDAQ) and Taiwan Weighted Index (TWI). To minimize measurement error due to data snooping, this study performed a series of experiments to test the effectiveness of the proposed method. The empirical results indicated that all of the technical trading rules correctly predict the direction of changes in the NASDAQ and TWI. This finding may provide investors with important information on asset allocation. Moreover, better bull flag template price fit is associated with higher average return. The empirical results demonstrated that the average return of trading rules conditioned on bull flag significantly better than buying every day for the study period, especially for TWI. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5b021c0223ee25535508eb1d6f63ff55",
"text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via external I 2C master device such as universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications",
"title": ""
},
{
"docid": "60306e39a7b281d35e8a492aed726d82",
"text": "The aim of this study was to assess the efficiency of four anesthetic agents, tricaine methanesulfonate (MS-222), clove oil, 7 ketamine, and tobacco extract on juvenile rainbow trout. Also, changes of blood indices were evaluated at optimum doses of four anesthetic agents. Basal effective concentrations determined were 40 mg L−1 (induction, 111 ± 16 s and recovery time, 246 ± 36 s) for clove oil, 150 mg L−1 (induction, 287 ± 59 and recovery time, 358 ± 75 s) for MS-222, 1 mg L−1 (induction, 178 ± 38 and recovery time, 264 ± 57 s) for ketamine, and 30 mg L−1 (induction, 134 ± 22 and recovery time, 285 ± 42 s) for tobacco. According to our results, significant changes in hematological parameters including white blood cells (WBCs), red blood cells (RBCs), hematocrit (Ht), and hemoglobin (Hb) were found between four anesthetics agents. Also, significant differences were observed in some plasma parameters including cortical, glucose, and lactate between experimental treatments. Induction and recovery times for juvenile Oncorhynchus mykiss anesthetized with anesthetic agents were dose-dependent.",
"title": ""
},
{
"docid": "2550502036aac5cf144cb8a0bc2d525b",
"text": "Significant achievements have been made on the development of next-generation filtration and separation membranes using graphene materials, as graphene-based membranes can afford numerous novel mass-transport properties that are not possible in state-of-art commercial membranes, making them promising in areas such as membrane separation, water desalination, proton conductors, energy storage and conversion, etc. The latest developments on understanding mass transport through graphene-based membranes, including perfect graphene lattice, nanoporous graphene and graphene oxide membranes are reviewed here in relation to their potential applications. A summary and outlook is further provided on the opportunities and challenges in this arising field. The aspects discussed may enable researchers to better understand the mass-transport mechanism and to optimize the synthesis of graphene-based membranes toward large-scale production for a wide range of applications.",
"title": ""
},
{
"docid": "48d778934127343947b494fe51f56a33",
"text": "In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper.",
"title": ""
},
{
"docid": "7a5167ffb79f35e75359c979295c22ee",
"text": "Precise forecast of the electrical load plays a highly significant role in the electricity industry and market. It provides economic operations and effective future plans for the utilities and power system operators. Due to the intermittent and uncertain characteristic of the electrical load, many research studies have been directed to nonlinear prediction methods. In this paper, a hybrid prediction algorithm comprised of Support Vector Regression (SVR) and Modified Firefly Algorithm (MFA) is proposed to provide the short term electrical load forecast. The SVR models utilize the nonlinear mapping feature to deal with nonlinear regressions. However, such models suffer from a methodical algorithm for obtaining the appropriate model parameters. Therefore, in the proposed method the MFA is employed to obtain the SVR parameters accurately and effectively. In order to evaluate the efficiency of the proposed methodology, it is applied to the electrical load demand in Fars, Iran. The obtained results are compared with those obtained from the ARMA model, ANN, SVR-GA, SVR-HBMO, SVR-PSO and SVR-FA. The experimental results affirm that the proposed algorithm outperforms other techniques. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "91e2dadb338fbe97b009efe9e8f60446",
"text": "An efficient smoke detection algorithm on color video sequences obtained from a stationary camera is proposed. Our algorithm considers dynamic and static features of smoke and is composed of basic steps: preprocessing; slowly moving areas and pixels segmentation in a current input frame based on adaptive background subtraction; merge slowly moving areas with pixels into blobs; classification of the blobs obtained before. We use adaptive background subtraction at a stage of moving detection. Moving blobs classification is based on optical flow calculation, Weber contrast analysis and takes into account primary direction of smoke propagation. Real video surveillance sequences were used for smoke detection with utilization our algorithm. A set of experimental results is presented in the paper.",
"title": ""
},
{
"docid": "c5f9f3beff52655f72d2d5870df6fa60",
"text": "The current move to Cloud Computing raises the need for verifiable delegation of computations, where a weak client delegates his computation to a powerful server, while maintaining the ability to verify that the result is correct. Although there are prior solutions to this problem, none of them is yet both general and practical for real-world use. We demonstrate a relatively efficient and general solution where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We show: A protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family. The protocol is set in terms of Turing Machines but can be adapted to other computation models. An adaptation of the protocol for the X86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live clouds. We show that the protocol is practical, can work with nowadays clouds, and is efficient both for the servers and for the client.",
"title": ""
},
{
"docid": "17d0da8dd05d5cfb79a5f4de4449fcdd",
"text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Q uantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \" People are really building things, \" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \" I've never seen anything like that. It's no longer just research. \" Google started working on a form of quantum computing that harnesses super-conductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in",
"title": ""
},
{
"docid": "4cb2c365abfbb29830557654f015daa2",
"text": "The excellent electrical, optical and mechanical properties of graphene have driven the search to find methods for its large-scale production, but established procedures (such as mechanical exfoliation or chemical vapour deposition) are not ideal for the manufacture of processable graphene sheets. An alternative method is the reduction of graphene oxide, a material that shares the same atomically thin structural framework as graphene, but bears oxygen-containing functional groups. Here we use molecular dynamics simulations to study the atomistic structure of progressively reduced graphene oxide. The chemical changes of oxygen-containing functional groups on the annealing of graphene oxide are elucidated and the simulations reveal the formation of highly stable carbonyl and ether groups that hinder its complete reduction to graphene. The calculations are supported by infrared and X-ray photoelectron spectroscopy measurements. Finally, more effective reduction treatments to improve the reduction of graphene oxide are proposed.",
"title": ""
}
] |
scidocsrr
|
629c7ba37ff27dfbd3c5867f2e7e0e61
|
MUTE: Majority under-sampling technique
|
[
{
"docid": "a0b862a758c659b62da2114143bf7687",
"text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation. It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.",
"title": ""
},
{
"docid": "5a3b8a2ec8df71956c10b2eb10eabb99",
"text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.",
"title": ""
}
] |
[
{
"docid": "2171c57b911161d805ffc08fbe02f92a",
"text": "The past decade has witnessed a growing interest in vehicular networking and its vast array of potential applications. Increased wireless accessibility of the Internet from vehicles has triggered the emergence of vehicular safety applications, locationspecific applications, and multimedia applications. Recently, Professor Olariu and his coworkers have promoted the vision of Vehicular Clouds (VCs), a non-trivial extension, along several dimensions, of conventional Cloud Computing. In a VC, the under-utilized vehicular resources including computing power, storage and Internet connectivity can be shared between drivers or rented out over the Internet to various customers, very much as conventional cloud resources are. The goal of this chapter is to introduce and review the challenges and opportunities offered by what promises to be the Next Paradigm Shift:From Vehicular Networks to Vehicular Clouds. Specifically, the chapter introduces VCs and discusses some of their distinguishing characteristics and a number of fundamental research challenges. To illustrate the huge array of possible applications of the powerful VC concept, a number of possible application scenarios are presented and discussed. As the adoption and success of the vehicular cloud concept is inextricably related to security and privacy issues, a number of security and privacy issues specific to vehicular clouds are discussed as well. Additionally, data aggregation and empirical results are presented. Mobile Ad Hoc Networking: Cutting Edge Directions, Second Edition. Edited by Stefano Basagni, Marco Conti, Silvia Giordano, and Ivan Stojmenovic. © 2013 by The Institute of Electrical and Electronics Engineers, Inc. Published 2013 by John Wiley & Sons, Inc.",
"title": ""
},
{
"docid": "3cd383e547b01040261dc1290d87b02e",
"text": "Abnormal condition in a power system generally leads to a fall in system frequency, and it leads to system blackout in an extreme condition. This paper presents a technique to develop an auto load shedding and islanding scheme for a power system to prevent blackout and to stabilize the system under any abnormal condition. The technique proposes the sequence and conditions of the applications of different load shedding schemes and islanding strategies. It is developed based on the international current practices. It is applied to the Bangladesh Power System (BPS), and an auto load-shedding and islanding scheme is developed. The effectiveness of the developed scheme is investigated simulating different abnormal conditions in BPS.",
"title": ""
},
{
"docid": "75ce2ccca2afcae56101e141a42ac2a2",
"text": "Hip disarticulation is an amputation through the hip joint capsule, removing the entire lower extremity, with closure of the remaining musculature over the exposed acetabulum. Tumors of the distal and proximal femur were treated by total femur resection; a hip disarticulation sometimes is performance for massive trauma with crush injuries to the lower extremity. This article discusses the design a system for rehabilitation of a patient with bilateral hip disarticulations. The prosthetics designed allowed the patient to do natural gait suspended between parallel articulate crutches with the body weight support between the crutches. The care of this patient was a challenge due to bilateral amputations at such a high level and the special needs of a patient mobility. Keywords— Amputation, prosthesis, mobility,",
"title": ""
},
{
"docid": "b5b91947716e3594e3ddbb300ea80d36",
"text": "In this paper, a novel drive method, which is different from the traditional motor drive techniques, for high-speed brushless DC (BLDC) motor is proposed and verified by a series of experiments. It is well known that the BLDC motor can be driven by either pulse-width modulation (PWM) techniques with a constant dc-link voltage or pulse-amplitude modulation (PAM) techniques with an adjustable dc-link voltage. However, to our best knowledge, there is rare study providing a proper drive method for a high-speed BLDC motor with a large power over a wide speed range. Therefore, the detailed theoretical analysis comparison of the PWM control and the PAM control for high-speed BLDC motor is first given. Then, a conclusion that the PAM control is superior to the PWM control at high speed is obtained because of decreasing the commutation delay and high-frequency harmonic wave. Meanwhile, a new high-speed BLDC motor drive method based on the hybrid approach combining PWM and PAM is proposed. Finally, the feasibility and effectiveness of the performance analysis comparison and the new drive method are verified by several experiments.",
"title": ""
},
{
"docid": "55d0ce47c7864e42412b4532869e66d6",
"text": "Deep learning has become very popular for tasks such as predictive modeling and pattern recognition in handling big data. Deep learning is a powerful machine learning method that extracts lower level features and feeds them forward for the next layer to identify higher level features that improve performance. However, deep neural networks have drawbacks, which include many hyper-parameters and infinite architectures, opaqueness into results, and relatively slower convergence on smaller datasets. While traditional machine learning algorithms can address these drawbacks, they are not typically capable of the performance levels achieved by deep neural networks. To improve performance, ensemble methods are used to combine multiple base learners. Super learning is an ensemble that finds the optimal combination of diverse learning algorithms. This paper proposes deep super learning as an approach which achieves log loss and accuracy results competitive to deep neural networks while employing traditional machine learning algorithms in a hierarchical structure. The deep super learner is flexible, adaptable, and easy to train with good performance across different tasks using identical hyper-parameter values. Using traditional machine learning requires fewer hyper-parameters, allows transparency into results, and has relatively fast convergence on smaller datasets. Experimental results show that the deep super learner has superior performance compared to the individual base learners, single-layer ensembles, and in some cases deep neural networks. Performance of the deep super learner may further be improved with task-specific tuning.",
"title": ""
},
{
"docid": "5bac6135af1c6014352d6ce5e91ec8d3",
"text": "Acute necrotizing fasciitis (NF) in children is a dangerous illness characterized by progressive necrosis of the skin and subcutaneous tissue. The present study summarizes our recent experience with the treatment of pediatric patients with severe NF. Between 2000 and 2009, eight children suffering from NF were admitted to our department. Four of the children received an active treatment strategy including continuous renal replacement therapy (CRRT), radical debridement, and broad-spectrum antibiotics. Another four children presented at a late stage of illness, and did not complete treatment. Clinical data for these two patient groups were retrospectively analyzed. The four patients that completed CRRT, radical debridement, and a course of broad-spectrum antibiotics were cured without any significant residual morbidity. The other four infants died shortly after admission. Early diagnosis, timely debridement, and aggressive use of broad-spectrum antibiotics are key factors for achieving a satisfactory outcome for cases of acute NF. Early intervention with CRRT to prevent septic shock may also improve patient outcome.",
"title": ""
},
{
"docid": "b771737351b984881e0fce7f9bb030e8",
"text": "BACKGROUND\nConsidering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone.\n\n\nMATERIAL/METHODS\nIn this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing.\n\n\nRESULTS\nThe Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group.\n\n\nCONCLUSIONS\nThis study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.",
"title": ""
},
{
"docid": "11e220528f9d4b6a51cdb63268934586",
"text": "The function of DIRCM (directed infrared countermeasures) jamming is to cause the missile to miss its intended target by disturbing the seeker tracking process. The DIRCM jamming uses the pulsing flashes of infrared (IR) energy and its frequency, phase and intensity have the influence on the missile guidance system. In this paper, we analyze the DIRCM jamming effect of the spin-scan reticle seeker. The simulation results show that the jamming effect is greatly influenced by frequency, phase and intensity of the jammer signal.",
"title": ""
},
{
"docid": "62b24fad8ab9d1c426ed3ff7c3c5fb49",
"text": "In the present paper we have reported a wavelet based time-frequency multiresolution analysis of an ECG signal. The ECG (electrocardiogram), which records hearts electrical activity, is able to provide with useful information about the type of Cardiac disorders suffered by the patient depending upon the deviations from normal ECG signal pattern. We have plotted the coefficients of continuous wavelet transform using Morlet wavelet. We used different ECG signal available at MIT-BIH database and performed a comparative study. We demonstrated that the coefficient at a particular scale represents the presence of QRS signal very efficiently irrespective of the type or intensity of noise, presence of unusually high amplitude of peaks other than QRS peaks and Base line drift errors. We believe that the current studies can enlighten the path towards development of very lucid and time efficient algorithms for identifying and representing the QRS complexes that can be done with normal computers and processors. KeywordsECG signal, Continuous Wavelet Transform, Morlet Wavelet, Scalogram, QRS Detector.",
"title": ""
},
{
"docid": "d15dc60ef2fb1e6096a3aba372698fd9",
"text": "One of the most interesting applications of Industry 4.0 paradigm is enhanced process control. Traditionally, process control solutions based on Cyber-Physical Systems (CPS) consider a top-down view where processes are represented as executable high-level descriptions. However, most times industrial processes follow a bottom-up model where processes are executed by low-level devices which are hard-programmed with the process to be executed. Thus, high-level components only may supervise the process execution as devices cannot modify dynamically their behavior. Therefore, in this paper we propose a vertical CPS-based solution (including a reference and a functional architecture) adequate to perform enhanced process control in Industry 4.0 scenarios with a bottom-up view. The proposed solution employs an event-driven service-based architecture where control is performed by means of finite state machines. Furthermore, an experimental validation is provided proving that in more than 97% of cases the proposed solution allows a stable and effective control.",
"title": ""
},
{
"docid": "abe729a351eb9dbc1688abe5133b28c2",
"text": "C. H. Tian B. K. Ray J. Lee R. Cao W. Ding This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities. The framework includes an ecosystem-modeling component, a simulation component, and a serviceanalysis component, and integrates methods from value network modeling, game theory analysis, and multiagent systems. A role-based paradigm is introduced for characterizing ecosystem entities in order to easily allow for the evolution of the ecosystem and duplicated functionality for entities. We show how the framework can be used to provide insight into value distribution among the entities and evaluation of business model performance under different scenarios. The methods are illustrated using a case study of a retail business-to-business service ecosystem.",
"title": ""
},
{
"docid": "2afbb4e8963b9e6953fd6f7f8c595c06",
"text": "Large-scale linguistically annotated corpora have played a crucial role in advancing the state of the art of key natural language technologies such as syntactic, semantic and discourse analyzers, and they serve as training data as well as evaluation benchmarks. Up till now, however, most of the evaluation has been done on monolithic corpora such as the Penn Treebank, the Proposition Bank. As a result, it is still unclear how the state-of-the-art analyzers perform in general on data from a variety of genres or domains. The completion of the OntoNotes corpus, a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information, makes it possible to perform such an evaluation. This paper presents an analysis of the performance of publicly available, state-of-the-art tools on all layers and languages in the OntoNotes v5.0 corpus. This should set the benchmark for future development of various NLP components in syntax and semantics, and possibly encourage research towards an integrated system that makes use of the various layers jointly to improve overall performance.",
"title": ""
},
{
"docid": "00614d23a028fe88c3f33db7ace25a58",
"text": "Cloud Computing and The Internet of Things are the two hot points in the Internet field. The application of the two new technologies is in hot discussion and research, but quite less on the field of agriculture and forestry. Thus, in this paper, we analyze the study and application of Cloud Computing and The Internet of Things on agriculture and forestry. Then we put forward an idea that making a combination of the two techniques and analyze the feasibility, applications and future prospect of the combination.",
"title": ""
},
{
"docid": "cfc9a6e0a99ba5ba5668037650e95d1d",
"text": "This paper tries to estimate post-legalization production costs for indoor and outdoor cannabis cultivation as well as parallel estimates for processing costs. Commercial production for general use is not legal anywhere. Hence, this is an exercise in inference based on imperfect analogs supplemented by spare and unsatisfactory data of uncertain provenance. While some parameters are well grounded, many come from the gray literature and/or conversations with others making similar estimates, marijuana growers, and farmers of conventional goods. Hence, this exercise should be taken with more than a few grains of salt. Nevertheless, to the extent that the results are even approximately correct, they suggest that wholesale prices after legalization could be dramatically lower than they are today, quite possibly a full order of magnitude lower than are current prices.",
"title": ""
},
{
"docid": "9beff0659cc5aad37097d212caaeef40",
"text": "Mobile cloud computing (MC2) is emerging as a promising computing paradigm which helps alleviate the conflict between resource-constrained mobile devices and resource-consuming mobile applications through computation offloading. In this paper, we analyze the computation offloading problem in cloudlet-based mobile cloud computing. Different from most of the previous works which are either from the perspective of a single user or under the setting of a single wireless access point (AP), we research the computation offloading strategy of multiple users via multiple wireless APs. With the widespread deployment of WLAN, offloading via multiple wireless APs will obtain extensive application. Taking energy consumption and delay (including computing and transmission delay) into account, we present a game-theoretic analysis of the computation offloading problem while mimicking the selfish nature of the individuals. In the case of homogeneous mobile users, conditions of Nash equilibrium are analyzed, and an algorithm that admits a Nash equilibrium is proposed. For heterogeneous users, we prove the existence of Nash equilibrium by introducing the definition of exact potential game and design a distributed computation offloading algorithm to help mobile users choose proper offloading strategies. Numerical extensive simulations have been conducted and results demonstrate that the proposed algorithm can achieve desired system performance.",
"title": ""
},
{
"docid": "6d3e5f798cee29e0039965d450b36cf3",
"text": "Many mammals are born in a very immature state and develop their rich repertoire of behavioral and cognitive functions postnatally. This development goes in parallel with changes in the anatomical and functional organization of cortical structures which are involved in most complex activities. The emerging spatiotemporal activity patterns in multi-neuronal cortical networks may indeed form a direct neuronal correlate of systemic functions like perception, sensorimotor integration, decision making or memory formation. During recent years, several studies--mostly in rodents--have shed light on the ontogenesis of such highly organized patterns of network activity. While each local network has its own peculiar properties, some general rules can be derived. We therefore review and compare data from the developing hippocampus, neocortex and--as an intermediate region--entorhinal cortex. All cortices seem to follow a characteristic sequence starting with uncorrelated activity in uncoupled single neurons where transient activity seems to have mostly trophic effects. In rodents, before and shortly after birth, cortical networks develop weakly coordinated multineuronal discharges which have been termed synchronous plateau assemblies (SPAs). While these patterns rely mostly on electrical coupling by gap junctions, the subsequent increase in number and maturation of chemical synapses leads to the generation of large-scale coherent discharges. These patterns have been termed giant depolarizing potentials (GDPs) for predominantly GABA-induced events or early network oscillations (ENOs) for mostly glutamatergic bursts, respectively. During the third to fourth postnatal week, cortical areas reach their final activity patterns with distinct network oscillations and highly specific neuronal discharge sequences which support adult behavior. While some of the mechanisms underlying maturation of network activity have been elucidated much work remains to be done in order to fully understand the rules governing transition from immature to mature patterns of network activity.",
"title": ""
},
{
"docid": "d5fcc6e6046ca293fc9b5afcc236325f",
"text": "Purpose – The purpose of this study is to conduct a meta-analysis of prior scientometric research of the knowledge management (KM) field. Design/methodology/approach – A total of 108 scientometric studies of the KM discipline were subjected to meta-analysis techniques. Findings – The overall volume of scientometric KM works has been growing, reaching up to ten publications per year by 2012, but their key findings are somewhat inconsistent. Most scientometric KM research is published in non-KM-centric journals. The KM discipline has deep historical roots. It suffers from a high degree of over-differentiation and is represented by dissimilar research streams. The top six most productive countries for KM research are the USA, the UK, Canada, Germany, Australia, and Spain. KM exhibits attributes of a healthy academic domain with no apparent anomalies and is progressing towards academic maturity. Practical implications – Scientometric KM researchers should use advanced empirical methods, become aware of prior scientometric research, rely on multiple databases, develop a KM keyword classification scheme, publish their research in KM-centric outlets, focus on rigorous research of the forums for KM publications, improve their cooperation, conduct a comprehensive study of individual and institutional productivity, and investigate interdisciplinary collaboration. KM-centric journals should encourage authors to employ under-represented empirical methods and conduct meta-analysis studies and should discourage conceptual publications, especially the development of new frameworks. To improve the impact of KM research on the state of practice, knowledge dissemination channels should be developed. Originality/value – This is the first documented attempt to conduct a meta-analysis of scientometric research of the KM discipline.",
"title": ""
},
{
"docid": "260e574e9108e05b98df7e4ed489e5fc",
"text": "Why are we not living yet with robots? If robots are not common everyday objects, it is maybe because we have looked for robotic applications without considering with sufficient attention what could be the experience of interacting with a robot. This article introduces the idea of a value profile, a notion intended to capture the general evolution of our experience with different kinds of objects. After discussing value profiles of commonly used objects, it offers a rapid outline of the challenging issues that must be investigated concerning immediate, short-term and long-term experience with robots. Beyond science-fiction classical archetypes, the picture emerging from this analysis is the one of versatile everyday robots, autonomously developing in interaction with humans, communicating with one another, changing shape and body in order to be adapted to their various context of use. To become everyday objects, robots will not necessary have to be useful, but they will have to be at the origins of radically new forms of experiences.",
"title": ""
},
{
"docid": "5d753a475a18f250b2e3143cf80a6e33",
"text": "In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time.",
"title": ""
},
{
"docid": "70c9fe96604c617a2e94fd721add3fb5",
"text": "Multi-task learning aims to boost the performance of multiple prediction tasks by appropriately sharing relevant information among them. However, it always suffers from the negative transfer problem. And due to the diverse learning difficulties and convergence rates of different tasks, jointly optimizing multiple tasks is very challenging. To solve these problems, we present a weighted multi-task deep convolutional neural network for person attribute analysis. A novel validation loss trend algorithm is, for the first time proposed to dynamically and adaptively update the weight for learning each task in the training process. Extensive experiments on CelebA, Market-1501 attribute and Duke attribute datasets clearly show that state-of-the-art performance is obtained; and this validates the effectiveness of our proposed framework.",
"title": ""
}
] |
scidocsrr
|