query_id (string, 32 chars) | query (string, 6–5.38k chars) | positive_passages (list of 1–22 passages) | negative_passages (list of 9–100 passages) | subset (string, 7 classes)
---|---|---|---|---|
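Each row below pairs a free-text query with one or more relevant (positive) passages and a larger pool of irrelevant (negative) passages, plus a subset label. As a minimal sketch of how such a retrieval-training table could be consumed, assuming it is published as a Hugging Face dataset with exactly the column names shown above (the repository id and split below are placeholders, not the actual ones):

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the real dataset path.
ds = load_dataset("org/retrieval-training-data", split="train")

# Inspect the first two records: each holds a query, a list of positive
# passages, a list of negative passages, and a subset label.
for row in ds.select(range(2)):
    print(row["query_id"], "| subset:", row["subset"])
    print("query:", row["query"][:80])
    print("positives:", len(row["positive_passages"]),
          "negatives:", len(row["negative_passages"]))
    first_pos = row["positive_passages"][0]
    print("docid:", first_pos["docid"])
    print("text:", first_pos["text"][:120], "...")
```

A loader along these lines is typically the first step in assembling (query, positive, negative) triplets, which is presumably the intended use of a corpus structured this way.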
28d09c85e0f76aad940af2155d4992bd
|
Coreference Resolution : Current Trends and Future Directions
|
[
{
"docid": "31ef7a4b5cfe8a4c00ebf0c6fedd8b9f",
"text": "Coreference analysis, also known as record linkage or identity uncertainty, is a difficult and important problem in natural language processing, databases, citation matching and many other tasks. This paper introduces several discriminative, conditional-probability models for coreference analysis, all examples of undirected graphical models. Unlike many historical approaches to coreference, the models presented here are relational—they do not assume that pairwise coreference decisions should be made independently from each other. Unlike other relational models of coreference that are generative, the conditional model here can incorporate a great variety of features of the input without having to be concerned about their dependencies—paralleling the advantages of conditional random fields over hidden Markov models. We present positive results on noun phrase coreference in two standard text data sets.",
"title": ""
}
] |
[
{
"docid": "8c3e3a120d63cca6808fef94d2922843",
"text": "Python offers basic facilities for interactive work and a comprehensive library on top of which more sophisticated systems can be built. The IPython project provides on enhanced interactive environment that includes, among other features, support for data visualization and facilities for distributed and parallel computation",
"title": ""
},
{
"docid": "869f492020b06dbd7795251858beb6f7",
"text": "Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications in scenarios from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data are collected. Moreover, the wearable sensor data are less informative than the conventional data such as texts or images. In this paper, to improve the adaptability of such classification methods across different application domains, we turn this classification task into a game and apply a deep reinforcement learning scheme to deal with complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps to capture extra information from the signal and thus it is able to significantly improve the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared to several state-of-the-art baselines.",
"title": ""
},
{
"docid": "df155f17d4d810779ee58bafcaab6f7b",
"text": "OBJECTIVE\nTo explore the types, prevalence and associated variables of cyberbullying among students with intellectual and developmental disability attending special education settings.\n\n\nMETHODS\nStudents (n = 114) with intellectual and developmental disability who were between 12-19 years of age completed a questionnaire containing questions related to bullying and victimization via the internet and cellphones. Other questions concerned sociodemographic characteristics (IQ, age, gender, diagnosis), self-esteem and depressive feelings.\n\n\nRESULTS\nBetween 4-9% of students reported bullying or victimization of bullying at least once a week. Significant associations were found between cyberbullying and IQ, frequency of computer usage and self-esteem and depressive feelings. No associations were found between cyberbullying and age and gender.\n\n\nCONCLUSIONS\nCyberbullying is prevalent among students with intellectual and developmental disability in special education settings. Programmes should be developed to deal with this issue in which students, teachers and parents work together.",
"title": ""
},
{
"docid": "00e8c142e7f059c10cd9eabdb78e0120",
"text": "Running average method and its modified version are two simple and fast methods for background modeling. In this paper, some weaknesses of running average method and standard background subtraction are mentioned. Then, a fuzzy approach for background modeling and background subtraction is proposed. For fuzzy background modeling, fuzzy running average is suggested. Background modeling and background subtraction algorithms are very commonly used in vehicle detection systems. To demonstrate the advantages of fuzzy running average and fuzzy background subtraction, these methods and their standard versions are compared in vehicle detection application. Experimental results show that fuzzy approach is relatively more accurate than classical approach.",
"title": ""
},
{
"docid": "d25a34b3208ee28f9cdcddb9adf46eb4",
"text": "1 Umeå University, Department of Computing Science, SE-901 87 Umeå, Sweden, {jubo,thomasj,marie}@cs.umu.se Abstract The transition to object-oriented programming is more than just a matter of programming language. Traditional syllabi fail to teach students the “big picture” and students have difficulties taking advantage of objectoriented concepts. In this paper we present a holistic approach to a CS1 course in Java favouring general objectoriented concepts over the syntactical details of the language. We present goals for designing such a course and a case study showing interesting results.",
"title": ""
},
{
"docid": "95403ce714b2102ca5f50cfc4d838e07",
"text": "Recently, vision-based Advanced Driver Assist Systems have gained broad interest. In this work, we investigate free-space detection, for which we propose to employ a Fully Convolutional Network (FCN). We show that this FCN can be trained in a self-supervised manner and achieve similar results compared to training on manually annotated data, thereby reducing the need for large manually annotated training sets. To this end, our self-supervised training relies on a stereo-vision disparity system, to automatically generate (weak) training labels for the color-based FCN. Additionally, our self-supervised training facilitates online training of the FCN instead of offline. Consequently, given that the applied FCN is relatively small, the free-space analysis becomes highly adaptive to any traffic scene that the vehicle encounters. We have validated our algorithm using publicly available data and on a new challenging benchmark dataset that is released with this paper. Experiments show that the online training boosts performance with 5% when compared to offline training, both for Fmax and AP .",
"title": ""
},
{
"docid": "956690691cffe76be26bcbb45d88071c",
"text": "We analyze different strategies aimed at optimizing routing policies in the Internet. We first show that for a simple deterministic algorithm the local properties of the network deeply influence the time needed for packet delivery between two arbitrarily chosen nodes. We next rely on a real Internet map at the autonomous system level and introduce a score function that allows us to examine different routing protocols and their efficiency in traffic handling and packet delivery. Our results suggest that actual mechanisms are not the most efficient and that they can be integrated in a more general, though not too complex, scheme.",
"title": ""
},
{
"docid": "86d8a61771cd14a825b6fc652f77d1d6",
"text": "The widespread of adult content on online social networks (e.g., Twitter) is becoming an emerging yet critical problem. An automatic method to identify accounts spreading sexually explicit content (i.e., adult account) is of significant values in protecting children and improving user experiences. Traditional adult content detection techniques are ill-suited for detecting adult accounts on Twitter due to the diversity and dynamics in Twitter content. In this paper, we formulate the adult account detection as a graph based classification problem and demonstrate our detection method on Twitter by using social links between Twitter accounts and entities in tweets. As adult Twitter accounts are mostly connected with normal accounts and post many normal entities, which makes the graph full of noisy links, existing graph based classification techniques cannot work well on such a graph. To address this problem, we propose an iterative social based classifier (ISC), a novel graph based classification technique resistant to the noisy links. Evaluations using large-scale real-world Twitter data show that, by labeling a small number of popular Twitter accounts, ISC can achieve satisfactory performance in adult account detection, significantly outperforming existing techniques.",
"title": ""
},
{
"docid": "660998f8595df10e67bdb550c7ac5a5c",
"text": "The role of information technology (IT) in education has significantly increased, but resistance to technology by public school teachers worldwide remains high. This study examined public school teachers’ technology acceptance decision-making by using a research model that is based on key findings from relevant prior research and important characteristics of the targeted user acceptance phenomenon. The model was longitudinally tested using responses from more than 130 teachers attending an intensive 4-week training program on Microsoft PowerPoint, a common but important classroom presentation technology. In addition to identifying key acceptance determinants, we examined plausible changes in acceptance drivers over the course of the training, including their influence patterns and magnitudes. Overall, our model showed a reasonably good fit with the data and exhibited satisfactory explanatory power, based on the responses collected from training commencement and completion. Our findings suggest a highly prominent and significant core influence path from job relevance to perceived usefulness and then technology acceptance. Analysis of data collected at the beginning and the end of the training supports most of our hypotheses and sheds light on plausible changes in their influences over time. Specifically, teachers appear to consider a rich set of factors in initial acceptance but concentrate on fundamental determinants (e.g. perceived usefulness and perceived ease of use) in their continued acceptance. # 2003 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "c756302f0a8e43b57d10a23a2cd926f2",
"text": "In the last decade, mobile ad hoc networks (MANETs) have emerged as a major next generation wireless networking technology. However, MANETs are vulnerable to various attacks at all layers, including in particular the network layer, because the design of most MANET routing protocols assumes that there is no malicious intruder node in the network. In this paper, we present a survey of the main types of attack at the network layer, and we then review intrusion detection and protection mechanisms that have been proposed in the literature. We classify these mechanisms as either point detection algorithms that deal with a single type of attack, or as intrusion detection systems (IDSs) that can deal with a range of attacks. A comparison of the proposed protection mechanisms is also included in this paper. Finally, we identify areas where further research could focus.",
"title": ""
},
{
"docid": "353761bae5088e8ee33025fc04695297",
"text": " Land use can exert a powerful influence on ecological systems, yet our understanding of the natural and social factors that influence land use and land-cover change is incomplete. We studied land-cover change in an area of about 8800 km2 along the lower part of the Wisconsin River, a landscape largely dominated by agriculture. Our goals were (a) to quantify changes in land cover between 1938 and 1992, (b) to evaluate the influence of abiotic and socioeconomic variables on land cover in 1938 and 1992, and (c) to characterize the major processes of land-cover change between these two points in time. The results showed a general shift from agricultural land to forest. Cropland declined from covering 44% to 32% of the study area, while forests and grassland both increased (from 32% to 38% and from 10% to 14% respectively). Multiple linear regressions using three abiotic and two socioeconomic variables captured 6% to 36% of the variation in land-cover categories in 1938 and 9% to 46% of the variation in 1992. Including socioeconomic variables always increased model performance. Agricultural abandonment and a general decline in farming intensity were the most important processes of land-cover change among the processes considered. Areas characterized by the different processes of land-cover change differed in the abiotic and socioeconomic variables that had explanatory power and can be distinguished spatially. Understanding the dynamics of landscapes dominated by human impacts requires methods to incorporate socioeconomic variables and anthropogenic processes in the analyses. Our method of hypothesizing and testing major anthropogenic processes may be a useful tool for studying the dynamics of cultural landscapes.",
"title": ""
},
{
"docid": "ff27d6a0bb65b7640ca1dbe03abc4652",
"text": "The psychometric properties of the Depression Anxiety Stress Scales (DASS) were evaluated in a normal sample of N = 717 who were also administered the Beck Depression Inventory (BDI) and the Beck Anxiety Inventory (BAI). The DASS was shown to possess satisfactory psychometric properties, and the factor structure was substantiated both by exploratory and confirmatory factor analysis. In comparison to the BDI and BAI, the DASS scales showed greater separation in factor loadings. The DASS Anxiety scale correlated 0.81 with the BAI, and the DASS Depression scale correlated 0.74 with the BDI. Factor analyses suggested that the BDI differs from the DASS Depression scale primarily in that the BDI includes items such as weight loss, insomnia, somatic preoccupation and irritability, which fail to discriminate between depression and other affective states. The factor structure of the combined BDI and BAI items was virtually identical to that reported by Beck for a sample of diagnosed depressed and anxious patients, supporting the view that these clinical states are more severe expressions of the same states that may be discerned in normals. Implications of the results for the conceptualisation of depression, anxiety and tension/stress are considered, and the utility of the DASS scales in discriminating between these constructs is discussed.",
"title": ""
},
{
"docid": "4f73815cc6bbdfbacee732d8724a3f74",
"text": "Networks can be considered as approximation schemes. Multilayer networks of the perceptron type can approximate arbitrarily well continuous functions (Cybenko 1988, 1989; Funahashi 1989; Stinchcombe and White 1989). We prove that networks derived from regularization theory and including Radial Basis Functions (Poggio and Girosi 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property ofbest approximation. The main result of this paper is that multilayer perceptron networks, of the type used in backpropagation, do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of best approximation.",
"title": ""
},
{
"docid": "5940949b1fd6f6b8ab2c45dcb1ece016",
"text": "Despite significant work on the problem of inferring a Twitter user’s gender from her online content, no systematic investigation has been made into leveraging the most obvious signal of a user’s gender: first name. In this paper, we perform a thorough investigation of the link between gender and first name in English tweets. Our work makes several important contributions. The first and most central contribution is two different strategies for incorporating the user’s self-reported name into a gender classifier. We find that this yields a 20% increase in accuracy over a standard baseline classifier. These classifiers are the most accurate gender inference methods for Twitter data developed to date. In order to evaluate our classifiers, we developed a novel way of obtaining gender-labels for Twitter users that does not require analysis of the user’s profile or textual content. This is our second contribution. Our approach eliminates the troubling issue of a label being somehow derived from the same text that a classifier will use to",
"title": ""
},
{
"docid": "e3316e7fa5a042d0a973c621cec5c3bc",
"text": "Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses the wide kernels in the first convolutional layer for extracting features and suppressing high frequency noise. Small convolutional kernels in the preceding layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that currently, the accuracy of CNN applied to fault diagnosis is not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model which is based on frequency features under different working load and noisy environment conditions.",
"title": ""
},
{
"docid": "b5abd9cf5d035e0b0a18665c030ce0d1",
"text": "Wetland vegetation plays a key role in the ecological functions of wetland environments. Remote sensing techniques offer timely, up-to-date, and relatively accurate information for sustainable and effective management of wetland vegetation. This article provides an overview on the status of remote sensing applications in discriminating and mapping wetland vegetation, and estimating some of the biochemical and biophysical parameters of wetland vegetation. Research needs for successful applications of remote sensing in wetland vegetation mapping and the major challenges are also discussed. The review focuses on providing fundamental information relating to the spectral characteristics of wetland vegetation, discriminating wetland vegetation using broad- and narrow-bands, as well as estimating water content, biomass, and leaf area index. It can be concluded that the remote sensing of wetland vegetation has some particular challenges that require careful consideration in order to obtain successful results. These include an in-depth understanding of the factors affecting the interaction between electromagnetic radiation and wetland vegetation in a particular environment, selecting appropriate spatial and spectral resolution as well as suitable processing techniques for extracting spectral information of wetland vegetation.",
"title": ""
},
{
"docid": "f747e72f0d6934c60c8527f978a0d0b2",
"text": "In this paper, the modeling and model reference adaptive control (MRAC) for longitudinal attitude of a twin-rotor tail-sitter unmanned aerial vehicle (UAV), which is highly unstable during flight, are presented. First, the attitude dynamic models are established. Linearized model for longitudinal attitude in vertical flight mode is given that is used in later derivation of controller as well as for testing the algorithms in simulation. Then, a control law based on the MRAC technique is utilized to stabilize the longitudinal attitude control system with uncertainty. Simulation results show that the MRAC of pitch angle has good trajectory tracking and the designed control law has strong adaptive ability and anti-jamming ability.",
"title": ""
},
{
"docid": "1f5dfe426dae2b352fc67ed681c46e56",
"text": "We report a haematoma in a hydrocele of the canal of Nuck in a 69-year-old female. She presented with a right-sided groin swelling, the differential for which included an irreducible inguinal hernia or haematoma given her aspirin and clopidegrel use. Successful treatment involved evacuation of the haematoma with excision of the sac. Despite a high index of suspicion for a haematoma, these swellings should ideally be explored given the potential for co-existence of a hernia.",
"title": ""
},
{
"docid": "290fad1d2f0778ecb1807a461f8e8c2c",
"text": "We present a probabilistic model with discrete latent variables that control the computation time in deep learning models such as ResNets and LSTMs. A prior on the latent variables expresses the preference for faster computation. The amount of computation for an input is determined via amortized maximum a posteriori (MAP) inference. MAP inference is performed using a novel stochastic variational optimization method. The recently proposed Adaptive Computation Time mechanism can be seen as an ad-hoc relaxation of this model. We demonstrate training using the generalpurpose Concrete relaxation of discrete variables. Evaluation on ResNet shows that our method matches the speed-accuracy trade-off of Adaptive Computation Time, while allowing for evaluation with a simple deterministic procedure that has a lower memory footprint.",
"title": ""
},
{
"docid": "2de04c57a9034cf2b4eb7055b4e150f6",
"text": "This paper presents an online detection-based two-stage multi-object tracking method in dense visual surveillances scenarios with a single camera. In the local stage, a particle filter with observer selection that could deal with partial object occlusion is used to generate a set of reliable tracklets. In the global stage, the detection responses are collected from a temporal sliding window to deal with ambiguity caused by full object occlusion to generate a set of potential tracklets. The reliable tracklets generated in the local stage and the potential tracklets generated within the temporal sliding window are associated by Hungarian algorithm on a modified pairwise tracklets association cost matrix to get the global optimal association. This method is applied to the pedestrian class and evaluated on two challenging datasets. The experimental results prove the effectiveness of our method.",
"title": ""
}
] |
scidocsrr
|
16c415e08f3bc06c80b5184359e0d817
|
Active visual SLAM for robotic area coverage: Theory and experiment
|
[
{
"docid": "975019aa11bde7dfed5f8392f26260a7",
"text": "This paper reports a real-time monocular visual simultaneous localization and mapping (SLAM) algorithm and results for its application in the area of autonomous underwater ship hull inspection. The proposed algorithm overcomes some of the specific challenges associated with underwater visual SLAM, namely, limited field of view imagery and feature-poor regions. It does so by exploiting our SLAM navigation prior within the image registration pipeline and by being selective about which imagery is considered informative in terms of our visual SLAM map. A novel online bag-of-words measure for intra and interimage saliency are introduced and are shown to be useful for image key-frame selection, information-gain-based link hypothesis, and novelty detection. Results from three real-world hull inspection experiments evaluate the overall approach, including one survey comprising a 3.4-h/2.7-km-long trajectory.",
"title": ""
}
] |
[
{
"docid": "2e0190ff3874bcdb0cc129401f24a3ae",
"text": "End-to-end training makes the neural machine translation (NMT) architecture simpler, yet elegant compared to traditional statistical machine translation (SMT). However, little is known about linguistic patterns of morphology, syntax and semantics learned during the training of NMT systems, and more importantly, which parts of the architecture are responsible for learning each of these phenomena. In this paper we i) analyze how much morphology an NMT decoder learns, and ii) investigate whether injecting target morphology into the decoder helps it produce better translations. To this end we present three methods: i) joint generation, ii) joint-data learning, and iii) multi-task learning. Our results show that explicit morphological information helps the decoder learn target language morphology and improves the translation quality by 0.2–0.6 BLEU points.",
"title": ""
},
{
"docid": "a7e0ff324e4bf4884f0a6e35adf588a3",
"text": "Named Entity Recognition (NER) is a subtask of information extraction and aims to identify atomic entities in text that fall into predefined categories such as person, location, organization, etc. Recent efforts in NER try to extract entities and link them to linked data entities. Linked data is a term used for data resources that are created using semantic web standards such as DBpedia. There are a number of online tools that try to identify named entities in text and link them to linked data resources. Although one can use these tools via their APIs and web interfaces, they use different data resources and different techniques to identify named entities and not all of them reveal this information. One of the major tasks in NER is disambiguation that is identifying the right entity among a number of entities with the same names; for example \"apple\" standing for both \"Apple, Inc.\" the company and the fruit. We developed a similar tool called NERSO, short for Named Entity Recognition Using Semantic Open Data, to automatically extract named entities, disambiguating and linking them to DBpedia entities. Our disambiguation method is based on constructing a graph of linked data entities and scoring them using a graph-based centrality algorithm. We evaluate our system by comparing its performance with two publicly available NER tools. The results show that NERSO performs better.",
"title": ""
},
{
"docid": "af63f1e1efbb15f2f41a91deb6ec1e32",
"text": "OBJECTIVES\n: A systematic review of the literature to determine the ability of dynamic changes in arterial waveform-derived variables to predict fluid responsiveness and compare these with static indices of fluid responsiveness. The assessment of a patient's intravascular volume is one of the most difficult tasks in critical care medicine. Conventional static hemodynamic variables have proven unreliable as predictors of volume responsiveness. Dynamic changes in systolic pressure, pulse pressure, and stroke volume in patients undergoing mechanical ventilation have emerged as useful techniques to assess volume responsiveness.\n\n\nDATA SOURCES\n: MEDLINE, EMBASE, Cochrane Register of Controlled Trials and citation review of relevant primary and review articles.\n\n\nSTUDY SELECTION\n: Clinical studies that evaluated the association between stroke volume variation, pulse pressure variation, and/or stroke volume variation and the change in stroke volume/cardiac index after a fluid or positive end-expiratory pressure challenge.\n\n\nDATA EXTRACTION AND SYNTHESIS\n: Data were abstracted on study design, study size, study setting, patient population, and the correlation coefficient and/or receiver operating characteristic between the baseline systolic pressure variation, stroke volume variation, and/or pulse pressure variation and the change in stroke index/cardiac index after a fluid challenge. When reported, the receiver operating characteristic of the central venous pressure, global end-diastolic volume index, and left ventricular end-diastolic area index were also recorded. Meta-analytic techniques were used to summarize the data. Twenty-nine studies (which enrolled 685 patients) met our inclusion criteria. Overall, 56% of patients responded to a fluid challenge. The pooled correlation coefficients between the baseline pulse pressure variation, stroke volume variation, systolic pressure variation, and the change in stroke/cardiac index were 0.78, 0.72, and 0.72, respectively. The area under the receiver operating characteristic curves were 0.94, 0.84, and 0.86, respectively, compared with 0.55 for the central venous pressure, 0.56 for the global end-diastolic volume index, and 0.64 for the left ventricular end-diastolic area index. The mean threshold values were 12.5 +/- 1.6% for the pulse pressure variation and 11.6 +/- 1.9% for the stroke volume variation. The sensitivity, specificity, and diagnostic odds ratio were 0.89, 0.88, and 59.86 for the pulse pressure variation and 0.82, 0.86, and 27.34 for the stroke volume variation, respectively.\n\n\nCONCLUSIONS\n: Dynamic changes of arterial waveform-derived variables during mechanical ventilation are highly accurate in predicting volume responsiveness in critically ill patients with an accuracy greater than that of traditional static indices of volume responsiveness. This technique, however, is limited to patients who receive controlled ventilation and who are not breathing spontaneously.",
"title": ""
},
{
"docid": "223b74ccdafcd3fafa372cd6a4fbb6cb",
"text": "Android OS experiences a blazing popularity since the last few years. This predominant platform has established itself not only in the mobile world but also in the Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as it has become a tempting target of malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequences classification using deep learning techniques. Starting from the raw sequence of the app's API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is not only deployed on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1 K to 33 K malware apps, and 38 K benign apps. The results show that MalDozer can correctly detect malware and attribute them to their actual families with an F1-Score of 96%e99% and a false positive rate of 0.06% e2%, under all tested datasets and settings. © 2018 The Author(s). Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "e9e0ae150dbbfd2aa4f79c1119aef1b0",
"text": "Modern datacenter (DC) workloads are characterized by increasing diversity and differentiated QoS requirements in terms of the average or worst-case performance. The shift towards DC calls for the new OS architectures that not only gracefully achieve disparate performance goals, but also protect software investments. This paper presents the \"isolate first, then share\" OS architecture. We decompose the OS into the supervisor and several subOSes running side by side: a subOS directly manages physical resources without intervention from the supervisor (isolate resources first), while the supervisor can create, destroy, resize a subOS on-the-fly (then share). We confine state sharing among the supervisor and SubOSes (isolate states first), and provide fast inter-subOS communication mechanisms on demand (then share). We present the first implementation—RainForest, which supports unmodified Linux binaries. Our comprehensive evaluations show RainForest outperforms Linux with three different kernels, LXC, and Xen in terms of improving resource utilization, throughput, scalability, and worst-case performance. The RainForest source code is soon available.",
"title": ""
},
{
"docid": "da694b74b3eaae46d15f589e1abef4b8",
"text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model), in simulating runoff and soil erosion in a 48 km watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R 1⁄4 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff were observed during the driest months (May and July). Prediction of daily runoff was less accurate (R 1⁄4 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R 1⁄4 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0e1 t ha 1 y ), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha 1 y . Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify ‘‘hot spots’’ on the landscape. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bdcc0547fe01857f524d6a295da70387",
"text": "[Context and motivation] Research on eliciting requirements from a large number of online reviews using automated means has focused on functional aspects. Assuring the quality of an app is vital for its success. This is why user feedback concerning quality issues should be considered as well [Question/problem] But to what extent do online reviews of apps address quality characteristics? And how much potential is there to extract such knowledge through automation? [Principal ideas/results] By tagging online reviews, we found that users mainly write about \"usability\" and \"reliability\", but the majority of statements are on a subcharacteristic level, most notably regarding \"operability\", \"adaptability\", \"fault tolerance\", and \"interoperability\". A set of 16 language patterns regarding \"usability\" correctly identified 1,528 statements from a large dataset far more efficiently than our manual analysis of a small subset. [Contribution] We found that statements can especially be derived from online reviews about qualities by which users are directly affected, although with some ambiguity. Language patterns can identify statements about qualities with high precision, though the recall is modest at this time. Nevertheless, our results have shown that online reviews are an unused Big Data source for quality requirements.",
"title": ""
},
{
"docid": "0cf67f363a2912b287ae0321d0a2097e",
"text": "We survey the most recent BIS proposals for the credit risk measurement of retail credits in capital regulations. We also describe the recent trend away from relationship lending toward transactional lending in the small business loan arena. These trends create the opportunity to adopt more analytical, data-based approaches to credit risk measurement. We survey proprietary credit scoring models (such as Fair Isaac), as well as options-theoretic structural models (such as KMV and Moody’s RiskCalc), and reduced-form models (such as Credit Risk Plus). These models allow lenders and regulators to develop techniques that rely on portfolio aggregation to measure retail credit risk exposure. 2003 Elsevier B.V. All rights reserved. JEL classification: G21; G28",
"title": ""
},
{
"docid": "c898f6186ff15dff41dcb7b3376b975d",
"text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.",
"title": ""
},
{
"docid": "ea9f5956e09833c107d79d5559367e0e",
"text": "This research is to search for alternatives to the resolution of complex medical diagnosis where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation; offer an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to get an optimal size of a neural network. The MFNNCA was tested on several benchmarking classification problems including the cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural network architecture with good generalization ability.",
"title": ""
},
{
"docid": "f122373d44be16dadd479c75cca34a2a",
"text": "This paper presents the design, fabrication, and evaluation of a novel type of valve that uses an electropermanent magnet [1]. This valve is then used to build actuators for a soft robot. The developed EPM valves require only a brief (5 ms) pulse of current to turn flow on or off for an indefinite period of time. EPMvalves are characterized and demonstrated to be well suited for the control of elastomer fluidic actuators. The valves drive the pressurization and depressurization of fluidic channels within soft actuators. Furthermore, the forward locomotion of a soft, multi-actuator rolling robot is driven by EPM valves. The small size and energy-efficiency of EPM valves may make them valuable in soft mobile robot applications.",
"title": ""
},
{
"docid": "7beb0fa9fa3519d291aa3d224bfc1b74",
"text": "In comparisons among Chicago neighbourhoods, homicide rates in 1988-93 varied more than 100-fold, while male life expectancy at birth ranged from 54 to 77 years, even with effects of homicide mortality removed. This \"cause deleted\" life expectancy was highly correlated with homicide rates; a measure of economic inequality added significant additional prediction, whereas median household income did not. Deaths from internal causes (diseases) show similar age patterns, despite different absolute levels, in the best and worst neighbourhoods, whereas deaths from external causes (homicide, accident, suicide) do not. As life expectancy declines across neighbourhoods, women reproduce earlier; by age 30, however, neighbourhood no longer affects age specific fertility. These results support the hypothesis that life expectancy itself may be a psychologically salient determinant of risk taking and the timing of life transitions.",
"title": ""
},
{
"docid": "d4aaea0107cbebd7896f4cb57fa39c05",
"text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs",
"title": ""
},
{
"docid": "98b536786ecfeab870467c5951924662",
"text": "An historical discussion is provided of the intellectual trends that caused nineteenth century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data are emphasized. Three sources of contemporary neural network research—the binary, linear, and continuous-nonlinear models—are noted. The remainder of the article describes results about continuous-nonlinear models: Many models of contentaddressable memory are shown to be special cases of the Cohen-Grossberg model and global Liapunov function, including the additive, brain-state-in-a-box, McCulloch-Pitts, Boltzmann machine, Hartline-Ratliff-Miller, shunting, masking field, bidirectional associative memory, Volterra-Lotka, Gilpin-Ayala, and Eigen-Schuster models. A Liapunov functional method is described for proving global limit or oscillation theorems Purchase Export",
"title": ""
},
{
"docid": "be70a14152656eb886c8a28e7e0dd613",
"text": "OBJECTIVES\nTranscutaneous electrical nerve stimulation (TENS) is an analgesic current that is used in many acute and chronic painful states. The aim of this study was to investigate central pain modulation by low-frequency TENS.\n\n\nMETHODS\nTwenty patients diagnosed with subacromial impingement syndrome of the shoulder were enrolled in the study. Patients were randomized into 2 groups: low-frequency TENS and sham TENS. Painful stimuli were delivered during which functional magnetic resonance imaging scans were performed, both before and after treatment. Ten central regions of interest that were reported to have a role in pain perception were chosen and analyzed bilaterally on functional magnetic resonance images. Perceived pain intensity during painful stimuli was evaluated using visual analog scale (VAS).\n\n\nRESULTS\nIn the low-frequency TENS group, there was a statistically significant decrease in the perceived pain intensity and pain-specific activation of the contralateral primary sensory cortex, bilateral caudal anterior cingulate cortex, and of the ipsilateral supplementary motor area. There was a statistically significant correlation between the change of VAS value and the change of activity in the contralateral thalamus, prefrontal cortex, and the ipsilateral posterior parietal cortex. In the sham TENS group, there was no significant change in VAS value and activity of regions of interest.\n\n\nDISCUSSION\nWe suggest that a 1-session low-frequency TENS may induce analgesic effect through modulation of discriminative, affective, and motor aspects of central pain perception.",
"title": ""
},
{
"docid": "2eb344b6701139be184624307a617c1b",
"text": "This work combines the central ideas from two different areas, crowd simulation and social network analysis, to tackle some existing problems in both areas from a new angle. We present a novel spatio-temporal social crowd simulation framework, Social Flocks, to revisit three essential research problems, (a) generation of social networks, (b) community detection in social networks, (c) modeling collective social behaviors in crowd simulation. Our framework produces social networks that satisfy the properties of high clustering coefficient, low average path length, and power-law degree distribution. It can also be exploited as a novel dynamic model for community detection. Finally our framework can be used to produce real-life collective social behaviors over crowds, including community-guided flocking, leader following, and spatio-social information propagation. Social Flocks can serve as visualization of simulated crowds for domain experts to explore the dynamic effects of the spatial, temporal, and social factors on social networks. In addition, it provides an experimental platform of collective social behaviors for social gaming and movie animations. Social Flocks demo is at http://mslab.csie.ntu.edu.tw/socialflocks/ .",
"title": ""
},
{
"docid": "0fd37a459c95b20e3d80021da1bb281d",
"text": "Social media data are increasingly used as the source of research in a variety of domains. A typical example is urban analytics, which aims at solving urban problems by analyzing data from different sources including social media. The potential value of social media data in tourism studies, which is one of the key topics in urban research, however has been much less investigated. This paper seeks to understand the relationship between social media dynamics and the visiting patterns of visitors to touristic locations in real-world cases. By conducting a comparative study, we demonstrate how social media characterizes touristic locations differently from other data sources. Our study further shows that social media data can provide real-time insights of tourists’ visiting patterns in big events, thus contributing to the understanding of social media data utility in tourism studies.",
"title": ""
},
{
"docid": "8d6b3e28ba335f2c3c98d18994610319",
"text": "We study a sensor node with an energy harvesting source. The generated energy can be stored in a buffer. The sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at that time. We obtain energy management policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare performance of several easily implementable sub-optimal energy management policies. A greedy policy is identified which, in low SNR regime, is throughput optimal and also minimizes mean delay.",
"title": ""
},
{
"docid": "58f39c555b96cb7bbc4d2bc76a19e937",
"text": "A corona discharge generator for surface treatment without the use of a step-up transformer with a high-voltage secondary is presented. The oil bath for high-voltage components is eliminated and still a reasonable volume, efficiency, and reliability of the generator are obtained. The voltage multiplication is achieved by an LC series resonant circuit. The resonant circuit is driven by a bridge type voltage-source resonant inverter. First, feasibility of the proposed method is proved by calculations. Closed form design expressions for key components of the electronic generator are provided. Second, a prototype of the electronic generator is built and efficiency measurements are performed. For power measurement, Lissajous figures and direct averaging of the instantaneous voltage-current product are used. The overall efficiency achieved is in the range between 80% and 90%.",
"title": ""
},
{
"docid": "6a2a1f6ff3fea681c37b19ac51c17fe6",
"text": "The present research investigates the influence of culture on telemedicine adoption and patient information privacy, security, and policy. The results, based on the SEM analysis of the data collected in the United States, demonstrate that culture plays a significant role in telemedicine adoption. The results further show that culture also indirectly influences telemedicine adoption through information security, information privacy, and information policy. Our empirical results further indicate that information security, privacy, and policy impact telemedicine adoption.",
"title": ""
}
] |
scidocsrr
|
1ea787c36a7b5ebb29ec4b6ca0b72170
|
OFDM SIMULATION in MATLAB
|
[
{
"docid": "01d82c7936d2a6c6beee5cc01158e486",
"text": "Orthogonal frequency division multiplexing (OFDM) is becoming the chosen modulation technique for wireless communications. OFDM can provide large data rates with sufficient robustness to radio channel impairments. Many research centers in the world have specialized teams working in the optimization of OFDM for countless applications. Here, at the Georgia Institute of Technology, one of such teams is in Dr. M. A. Ingram's Smart Antenna Research Laboratory (SARL), a part of the Georgia Center for Advanced Telecommunications Technology (GCATT). The purpose of this report is to provide Matlab code to simulate the basic processing involved in the generation and reception of an OFDM signal in a physical channel and to provide a description of each of the steps involved. For this purpose, we shall use, as an example, one of the proposed OFDM signals of the Digital Video Broadcasting (DVB) standard for the European terrestrial digital television (DTV) service.",
"title": ""
}
] |
[
{
"docid": "4b250bd1c7bcca08f011f5ebc2808e4c",
"text": "As a result of the rapid growth of available services provided via Internet, as well as multiple accounts a person owns, reliable user authentication schemes are mandatory for security purposes. OTP systems have prevailed as the best viable solution for security over sensitive information and pose an interesting field for research. Although, OTP schemes enhance authentication's security through various algorithmic customizations and extensions, certain compromises should be made; especially since excessively tolerable to vulnerability systems tend to have high computational and storage needs. In order to minimize the risk of a non-authenticated user having access to sensitive data, depending on the use, OTP system's architecture differs; as its tolerance towards already known attack methods. In this paper, the most widely accepted and promising OTP schemes are described and evaluated in terms of resistance against security attacks and in terms of computational intensity (performance efficiency). The results showed that there is a correlation between the security level, the computational efficiency and the storage needs of an OTP system.",
"title": ""
},
{
"docid": "f670b91f8874c2c2db442bc869889dbd",
"text": "This paper summarizes lessons learned from the first Amazon Picking Challenge in which 26 international teams designed robotic systems that competed to retrieve items from warehouse shelves. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned. Note to Practitioners: Abstract—Perception, motion planning, grasping, and robotic system engineering has reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semi-structured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "3de7dd15d2b8bb5d08eb548bf3f19230",
"text": "Image compression has become an important process in today‟s world of information exchange. Image compression helps in effective utilization of high speed network resources. Medical Image Compression is very important in the present world for efficient archiving and transmission of images. In this paper two different approaches for lossless image compression is proposed. One uses the combination of 2D-DWT & FELICS algorithm for lossy to lossless Image Compression and another uses combination of prediction algorithm and Integer wavelet Transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured and shown the comparison of both the approaches. We observed the increased compression ratio and higher PSNR values.",
"title": ""
},
{
"docid": "23c8dd52480d1193b2728b05c9458080",
"text": "This article presents an overview of highway cooperative collision avoidance (CCA), which is an emerging vehicular safety application using the IEEE- and ASTM-adopted Dedicated Short Range Communication (DSRC) standard. Along with a description of the DSRC architecture, we introduce the concept of CCA and its implementation requirements in the context of a vehicle-to-vehicle wireless network, primarily at the Medium Access Control (MAC) and the routing layer. An overview is then provided to establish that the MAC and routing protocols from traditional Mobile Ad Hoc networks arc not directly applicable for CCA and similar safety-critical applications. Specific constraints and future research directions are then identified for packet routing protocols used to support such applications in the DSRC environment. In order to further explain the interactions between CCA and its underlying networking protocols, we present an example of the safety performance of CCA using simulated vehicle crash experiments. The results from these experiments arc also used to demonstrate the need for network data prioritization for safety-critical applications such as CCA. Finally, the performance sensitivity of CCA to unreliable wireless channels is discussed based on the experimental results.",
"title": ""
},
{
"docid": "3a17d60c2eb1df3bf491be3297cffe79",
"text": "Received: 3 October 2009 Revised: 22 June 2011 Accepted: 3 July 2011 Abstract Studies claiming to use the Grounded theory methodology (GTM) have been quite prevalent in information systems (IS) literature. A cursory review of this literature reveals conflict in the understanding of GTM, with a variety of grounded theory approaches apparent. The purpose of this investigation was to establish what alternative grounded theory approaches have been employed in IS, and to what extent each has been used. In order to accomplish this goal, a comprehensive set of IS articles that claimed to have followed a grounded theory approach were reviewed. The articles chosen were those published in the widely acknowledged top eight IS-centric journals, since these journals most closely represent exemplar IS research. Articles for the period 1985-2008 were examined. The analysis revealed four main grounded theory approaches in use, namely (1) the classic grounded theory approach, (2) the evolved grounded theory approach, (3) the use of the grounded theory approach as part of a mixed methodology, and (4) the application of grounded theory techniques, typically for data analysis purposes. The latter has been the most common approach in IS research. The classic approach was the least often employed, with many studies opting for an evolved or mixed method approach. These and other findings are discussed and implications drawn. European Journal of Information Systems (2013) 22, 119–129. doi:10.1057/ejis.2011.35; published online 30 August 2011",
"title": ""
},
{
"docid": "cddc653dc48a094897aa287f95c0d21d",
"text": "We present a real-time approach for image-based localization within large scenes that have been reconstructed offline using structure from motion (Sfm). From monocular video, our method continuously computes a precise 6-DOF camera pose, by efficiently tracking natural features and matching them to 3D points in the Sfm point cloud. Our main contribution lies in efficiently interleaving a fast keypoint tracker that uses inexpensive binary feature descriptors with a new approach for direct 2D-to-3D matching. The 2D-to-3D matching avoids the need for online extraction of scale-invariant features. Instead, offline we construct an indexed database containing multiple DAISY descriptors per 3D point extracted at multiple scales. The key to the efficiency of our method lies in invoking DAISY descriptor extraction and matching sparingly during localization, and in distributing this computation over a window of successive frames. This enables the algorithm to run in real-time, without fluctuations in the latency over long durations. We evaluate the method in large indoor and outdoor scenes. Our algorithm runs at over 30 Hz on a laptop and at 12 Hz on a low-power, mobile computer suitable for onboard computation on a quadrotor micro aerial vehicle.",
"title": ""
},
{
"docid": "25a2a9be57f33415c95446511a259446",
"text": "While machine learning approaches to image restoration offer great promise, current methods risk training “onetrick ponies” that perform well only for image corruption of a particular level of difficulty—such as a certain level of noise or blur. First, we examine the weakness of a one-trick pony model and demonstrate that training general models to handle arbitrary levels of corruption is indeed non-trivial. Then, we propose an on-demand learning algorithm for training image restoration models with deep convolutional neural networks. The main idea is to exploit a feedback mechanism to self-generate training instances where they are needed most, thereby learning models that can generalize across difficulty levels. On four restoration tasks—image inpainting, pixel interpolation, image deblurring, and image denoising—and three diverse datasets, our approach consistently outperforms both the status quo training procedure and curriculum learning alternatives.",
"title": ""
},
{
"docid": "62a405be34c1ce733c0ded8dfe72e1cf",
"text": "This paper presents a new formulation of the artificial potential approach to the obstacle avoidance problem for a mobile robot or a manipulator in a known environment. Previous formulations of artificial potentials, for obstacle avoidance, have exhibited local minima in a cluttered environment. To build an artificial potential field, we use harmonic functions which completely eliminate local minima even for a cluttered environment. We use the panel method to represent arbitrarily shaped obstacles and to derive the potential over the whole space. Based on this potential function, we propose an elegant conml strategy for the real-time control of a robot. We test the harmonic potential, the panel method and the control strategy with a bar-shaped mobile robot and a 3 dof planar redundant manipulator.",
"title": ""
},
{
"docid": "dde075f427d729d028d6d382670f8346",
"text": "Using social media Web sites is among the most common activity of today's children and adolescents. Any Web site that allows social interaction is considered a social media site, including social networking sites such as Facebook, MySpace, and Twitter; gaming sites and virtual worlds such as Club Penguin, Second Life, and the Sims; video sites such as YouTube; and blogs. Such sites offer today's youth a portal for entertainment and communication and have grown exponentially in recent years. For this reason, it is important that parents become aware of the nature of social media sites, given that not all of them are healthy environments for children and adolescents. Pediatricians are in a unique position to help families understand these sites and to encourage healthy use and urge parents to monitor for potential problems with cyberbullying, \"Facebook depression,\" sexting, and exposure to inappropriate content.",
"title": ""
},
{
"docid": "e69ecf0d4d04a956b53f34673e353de3",
"text": "Over the past decade, the advent of new technology has brought about the emergence of smart cities aiming to provide their stakeholders with technology-based solutions that are effective and efficient. Insofar as the objective of smart cities is to improve outcomes that are connected to people, systems and processes of businesses, government and other publicand private-sector entities, its main goal is to improve the quality of life of all residents. Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may able to enhance users’ involvement and their overall experience.",
"title": ""
},
{
"docid": "5bff5809ff470084497011a1860148e0",
"text": "A statistical meta-analysis of the technology acceptance model (TAM) as applied in various fields was conducted using 88 published studies that provided sufficient data to be credible. The results show TAM to be a valid and robust model that has been widely used, but which potentially has wider applicability. A moderator analysis involving user types and usage types was performed to investigate conditions under which TAM may have different effects. The study confirmed the value of using students as surrogates for professionals in some TAM studies, and perhaps more generally. It also revealed the power of meta-analysis as a rigorous alternative to qualitative and narrative literature review methods. # 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "67bfcfb41ef6fcffa90f699354c5e67f",
"text": "This paper presents a new modular and integrative sensory information system inspired by the way the brain performs information processing, in particular, pattern recognition. Spiking neural networks are used to model human-like visual and auditory pathways. This bimodal system is trained to perform the specific task of person authentication. The two unimodal systems are individually tuned and trained to recognize faces and speech signals from spoken utterances, respectively. New learning procedures are designed to operate in an online evolvable and adaptive way. Several ways of modelling sensory integration using spiking neural network architectures are suggested and evaluated in computer experiments.",
"title": ""
},
{
"docid": "80d859e26c815e5c6a8c108ab0141462",
"text": "StarCraft II poses a grand challenge for reinforcement learning. The main difficulties include huge state space, varying action space, long horizon, etc. In this paper, we investigate a set of techniques of reinforcement learning for the full-length game of StarCraft II. We investigate a hierarchical approach, where the hierarchy involves two levels of abstraction. One is the macro-actions extracted from expert’s demonstration trajectories, which can reduce the action space in an order of magnitude yet remains effective. The other is a two-layer hierarchical architecture, which is modular and easy to scale. We also investigate a curriculum transfer learning approach that trains the agent from the simplest opponent to harder ones. On a 64×64 map and using restrictive units, we train the agent on a single machine with 4 GPUs and 48 CPU threads. We achieve a winning rate of more than 99% against the difficulty level-1 built-in AI. Through the curriculum transfer learning algorithm and a mixture of combat model, we can achieve over 93% winning rate against the most difficult non-cheating built-in AI (level-7) within days. We hope this study could shed some light on the future research of large-scale reinforcement learning.",
"title": ""
},
{
"docid": "556f33d199e6516a4aa8ebca998facf2",
"text": "R ecommender systems have become important tools in ecommerce. They combine one user’s ratings of products or services with ratings from other users to answer queries such as “Would I like X?” with predictions and suggestions. Users thus receive anonymous recommendations from people with similar tastes. While this process seems innocuous, it aggregates user preferences in ways analogous to statistical database queries, which can be exploited to identify information about a particular user. This is especially true for users with eclectic tastes who rate products across different types or domains in the systems. These straddlers highlight the conflict between personalization and privacy in recommender systems. While straddlers enable serendipitous recommendations, information about their existence could be used in conjunction with other data sources to uncover identities and reveal personal details. We use a graph-theoretic model to study the benefit from and risk to straddlers.",
"title": ""
},
{
"docid": "5798d93d03b9ab2b10b5bea7ccbb58ce",
"text": "A wealth of information is available only in web pages, patents, publications etc. Extracting information from such sources is challenging, both due to the typically complex language processing steps required and to the potentially large number of texts that need to be analyzed. Furthermore, integrating extracted data with other sources of knowledge often is mandatory for subsequent analysis. In this demo, we present the AliBaba system for scalable information extraction from biomedical documents. Unlike many other systems, AliBaba performs both entity extraction and relationship extraction and graphically visualizes the resulting network of inter-connected objects. It leverages the PubMed search engine for selection of relevant documents. The technical novelty of AliBaba is twofold: (a) its ability to automatically learn language patterns for relationship extraction without an annotated corpus, and (b) its high performance pattern matching algorithm. We show that a simple yet effective pattern filtering technique improves the runtime of the system drastically without harming its extraction effectiveness. Although AliBaba has been implemented for biomedical texts, its underlying principles should also be applicable in any other domain.",
"title": ""
},
{
"docid": "f97086d856ebb2f1c5e4167f725b5890",
"text": "In this paper, an ac-linked hybrid electrical energy system comprising of photo voltaic (PV) and fuel cell (FC) with electrolyzer for standalone applications is proposed. PV is the primary power source of the system, and an FC-electrolyzer combination is used as a backup and as long-term storage system. A Fuzzy Logic controller is developed for the maximum power point tracking for the PV system. A simple power management strategy is designed for the proposed system to manage power flows among the different energy sources. A simulation model for the hybrid energy has been developed using MATLAB/Simulink.",
"title": ""
},
{
"docid": "58d66911afe35370309ae0bd6ee71045",
"text": "The face inversion effect (FIE) is defined as the larger decrease in recognition performance for faces than for other mono-oriented objects when they are presented upside down. Behavioral studies suggest the FIE takes place at the perceptual encoding stage and is mainly due to the decrease in ability to extract relational information when discriminating individual faces. Recently, functional magnetic resonance imaging and scalp event-related potentials studies found that turning faces upside down slightly but significantly decreases the response of face-selective brain regions, including the so-called fusiform face area (FFA), and increases activity of other areas selective for nonface objects. Face inversion leads to a significantly delayed (sometimes larger) N170 component, an occipito-temporal scalp potential associated with the perceptual encoding of faces and objects. These modulations are in agreement with the perceptual locus of the FIE and reinforce the view that the FFA and N170 are sensitive to individual face discrimination.",
"title": ""
},
{
"docid": "17fb585ff12cff879febb32c2a16b739",
"text": "An electroencephalography (EEG) based Brain Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals while essential for effective operation of BCI systems is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature presentations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and the competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.",
"title": ""
},
{
"docid": "b191e7773eecc2562b1261e97ae0b0f4",
"text": "The American journal 0/ Occupational Therapl' ThiS case report describes the effects of deeppressure tactile stimulation in reducing self-stimulating behaviors in a child with multiple disabilities including autism. These behaviors include hitting the hands together, one hand on top of the other, so that the palm of one hand hits the dorsum of the other, or hitting a surface with one or both hands. Such behaviors not only made classroom efforts to have her use her hands for selfcare functions such as holding an adapted spoon difficult or impossible, but also called attention to her disabling condition. These behaviors also were disruptive and noisy.",
"title": ""
}
] |
scidocsrr
|
6b434ffe708b5cc272147bc6ef1d272a
|
Inpainting of Long Audio Segments With Similarity Graphs
|
[
{
"docid": "8f57297e1a36638bef4d9bc5b4f4924a",
"text": "We introduce the beat spectrum, a new method of automatically characterizing the rhythm and tempo of music and audio. The beat spectrum is a measure of acoustic self-similarity as a function of time lag. Highly structured or repetitive music will have strong beat spectrum peaks at the repetition times. This reveals both tempo and the relative strength of particular beats, and therefore can distinguish between different kinds of rhythms at the same tempo. We also introduce the beat spectrogram which graphically illustrates rhythm variation over time. Unlike previous approaches to tempo analysis, the beat spectrum does not depend on particular attributes such as energy or frequency, and thus will work for any music or audio in any genre. We present tempo estimation results which are accurate to within 1% for a variety of musical genres. This approach has a variety of applications, including music retrieval by similarity and automatically generating music videos.",
"title": ""
},
{
"docid": "d569902303b93274baf89527e666adc0",
"text": "We present a novel sparse representation based approach for the restoration of clipped audio signals. In the proposed approach, the clipped signal is decomposed into overlapping frames and the declipping problem is formulated as an inverse problem, per audio frame. This problem is further solved by a constrained matching pursuit algorithm, that exploits the sign pattern of the clipped samples and their maximal absolute value. Performance evaluation with a collection of music and speech signals demonstrate superior results compared to existing algorithms, over a wide range of clipping levels.",
"title": ""
}
] |
[
{
"docid": "a144509c91a0cc8f50f0bb7e3d8dbdd6",
"text": "The prefrontal cortex is necessary for directing thought and planning action. Working memory, the active, transient maintenance of information in mind for subsequent monitoring and manipulation, lies at the core of many simple, as well as high-level, cognitive functions. Working memory has been shown to be compromised in a number of neurological and psychiatric conditions and may contribute to the behavioral and cognitive deficits associated with these disorders. It has been theorized that working memory depends upon reverberating circuits within the prefrontal cortex and other cortical areas. However, recent work indicates that intracellular signals and protein dephosphorylation are critical for working memory. The present article will review recent research into the involvement of the modulatory neurotransmitters and their receptors in working memory. The intracellular signaling pathways activated by these receptors and evidence that indicates a role for G(q)-initiated PI-PLC and calcium-dependent protein phosphatase calcineurin activity in working memory will be discussed. Additionally, the negative influence of calcium- and cAMP-dependent protein kinase (i.e., calcium/calmodulin-dependent protein kinase II (CaMKII), calcium/diacylglycerol-activated protein kinase C (PKC), and cAMP-dependent protein kinase A (PKA)) activities on working memory will be reviewed. The implications of these experimental findings on the observed inverted-U relationship between D(1) receptor stimulation and working memory, as well as age-associated working memory dysfunction, will be presented. Finally, we will discuss considerations for the development of clinical treatments for working memory disorders.",
"title": ""
},
{
"docid": "52e28bd011df723642b6f4ee83ab448d",
"text": "Researchers in a variety of fields, including aeolian science, biology, and environmental science, have already made use of stationary and mobile remote sensing equipment to increase their variety of data collection opportunities. However, due to mobility challenges, remote sensing opportunities relevant to desert environments and in particular dune fields have been limited to stationary equipment. We describe here an investigative trip to two well-studied experimental deserts in New Mexico with DRHex, a mobile remote sensing platform oriented towards desert research. D-RHex is the latest iteration of the RHex family of robots, which are six-legged, biologically inspired, small (10kg) platforms with good mobility in a variety of rough terrains, including on inclines and over obstacles of higher than robot hip height.",
"title": ""
},
{
"docid": "2c9cfc7bf3b88f27046b9366b6053867",
"text": "The purpose of this thesis project is to study and evaluate a UWB Synthetic Aperture Radar (SAR) data image formation algorithm, that was previously less familiar and, that has recently got much attention in this field. Certain properties of it made it acquire a status in radar signal processing branch. This is a fast time-domain algorithm named Local Backprojection (LBP). The LBP algorithm has been implemented for SAR image formation. The algorithm has been simulated in MATLAB using standard values of pertinent parameters. Later, an evaluation of the LBP algorithm has been performed and all the comments, estimation and judgment have been done on the basis of the resulting images. The LBP has also been compared with the basic time-domain algorithm Global Backprojection (GBP) with respect to the SAR images. The specialty of LBP algorithm is in its reduced computational load than in GBP. LBP is a two-stage algorithm — it forms the beam first for a particular subimage and, in a later stage, forms the image of that subimage area. The signal data collected from the target is processed and backprojected locally for every subimage individually. This is the reason of naming it Local backprojection. After the formation of all subimages, these are arranged and combined coherently to form the full SAR image.",
"title": ""
},
{
"docid": "fb09d91b8e572cc9d0179f14bdd74b53",
"text": "Being grateful has been associated with many positive outcomes, including greater happiness, positive affect, optimism, and self-esteem. There is limited research, however, on the associations between gratitude and different domains of life satisfaction across cultures. The current study examined the associations between gratitude and three domains of life satisfaction, including satisfaction in relationships, work, and health, and overall life satisfaction, in the United States and Japan. A total of 945 participants were drawn from two samples of middle aged and older adults, the Midlife Development in the United States and the Midlife Development in Japan. There were significant positive bivariate associations between gratitude and all four measures of life satisfaction. In addition, after adjusting for demographics, neuroticism, extraversion, and the other measures of satisfaction, gratitude was uniquely and positively associated with satisfaction with relationships and life overall but not with satisfaction with work or health. Furthermore, results indicated that women and individuals who were more extraverted and lived in the United States were more grateful and individuals with less than a high school degree were less grateful. The findings from this study suggest that gratitude is uniquely associated with specific domains of life satisfaction. Results are discussed with respect to future research and the design and implementation of gratitude interventions, particularly when including individuals from different cultures.",
"title": ""
},
{
"docid": "f2e9083262c2680de3cf756e7960074a",
"text": "Social commerce is a new development in e-commerce generated by the use of social media to empower customers to interact on the Internet. The recent advancements in ICTs and the emergence of Web 2.0 technologies along with the popularity of social media and social networking sites have seen the development of new social platforms. These platforms facilitate the use of social commerce. Drawing on literature from marketing and information systems (IS) the author proposes a new model to develop our underocial media ocial networking site rust LS-SEM standing of social commerce using a PLS-SEM methodology to test the model. Results show that Web 2.0 applications are attracting individuals to have interactions as well as generate content on the Internet. Consumers use social commerce constructs for these activities, which in turn increase the level of trust and intention to buy. Implications, limitations, discussion, and future research directions are discussed at the end of the paper. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "83651ca357b0f978400de4184be96443",
"text": "The most common temporomandibular joint (TMJ) pathologic disease is anterior-medial displacement of the articular disk, which can lead to TMJ-related symptoms.The indication for disk repositioning surgery is irreversible TMJ damage associated with temporomandibular pain. We describe a surgical technique using a preauricular approach with a high condylectomy to reshape the condylar head. The disk is anchored with a bioabsorbable microanchor (Mitek Microfix QuickAnchor Plus 1.3) to the lateral aspect of the condylar head. The anchor is linked with a 3.0 Ethibond absorbable suture to fix the posterolateral side of the disk above the condyle.The aims of this surgery were to alleviate temporomandibular pain, headaches, and neck pain and to restore good jaw mobility. In the long term, we achieved these objectives through restoration of the physiological position and function of the disk and the lower articular compartment.In our opinion, the bioabsorbable anchor is the best choice for this type of surgery because it ensures the stability of the restored disk position and leaves no artifacts in the long term that might impede follow-up with magnetic resonance imaging.",
"title": ""
},
{
"docid": "5816f70a7f4d7d0beb6e0653db962df3",
"text": "Packaging appearance is extremely important in cigarette manufacturing. Typically, there are two types of cigarette packaging defects: (1) cigarette laying defects such as incorrect cigarette numbers and irregular layout; (2) tin paper handle defects such as folded paper handles. In this paper, an automated vision-based defect inspection system is designed for cigarettes packaged in tin containers. The first type of defects is inspected by counting the number of cigarettes in a tin container. First k-means clustering is performed to segment cigarette regions. After noise filtering, valid cigarette regions are identified by estimating individual cigarette area using linear regression. The k clustering centers and area estimation function are learned off-line on training images. The second kind of defect is detected by checking the segmented paper handle region. Experimental results on 500 test images demonstrate the effectiveness of the proposed inspection system. The proposed method also contributes to the general detection and classification system such as identifying mitosis in early diagnosis of cervical cancer.",
"title": ""
},
{
"docid": "af1257e27c0a6010a902e78dc8301df4",
"text": "A 20-MHz to 3-GHz wide-range multiphase delay-locked loop (DLL) has been realized in 90-nm CMOS technology. The proposed delay cell extends the operation frequency range. A scaling circuit is adopted to lower the large delay gain when the frequency of the input clock is low. The core area of this DLL is 0.005 mm2. The measured power consumption values are 0.4 and 3.6 mW for input clocks of 20 MHz and 3 GHz, respectively. The measured peak-to-peak and root-mean-square jitters are 2.3 and 16 ps at 3 GHz, respectively.",
"title": ""
},
{
"docid": "ad2d21232d8a9af42ea7339574739eb3",
"text": "Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.",
"title": ""
},
{
"docid": "8976cba604fdc5b00b506098941a6805",
"text": "Influenza is an acute respiratory illness that occurs virtually every year and results in substantial disease, death and expense. Detection of Influenza in its earliest stage would facilitate timely action that could reduce the spread of the illness. Existing systems such as CDC and EISS which try to collect diagnosis data, are almost entirely manual, resulting in about two-week delays for clinical data acquisition. Twitter, a popular microblogging service, provides us with a perfect source for early-stage flu detection due to its realtime nature. For example, when a flu breaks out, people that get the flu may post related tweets which enables the detection of the flu breakout promptly. In this paper, we investigate the real-time flu detection problem on Twitter data by proposing Flu Markov Network (Flu-MN): a spatio-temporal unsupervised Bayesian algorithm based on a 4 phase Markov Network, trying to identify the flu breakout at the earliest stage. We test our model on real Twitter datasets from the United States along with baselines in multiple applications, such as real-time flu breakout detection, future epidemic phase prediction, or Influenza-like illness (ILI) physician visits. Experimental results show the robustness and effectiveness of our approach. We build up a real time flu reporting system based on the proposed approach, and we are hopeful that it would help government or health organizations in identifying flu outbreaks and facilitating timely actions to decrease unnecessary mortality.",
"title": ""
},
{
"docid": "78e561cfb2578cc9d5634f008a4e6c7e",
"text": "The TCP transport layer protocol is designed for connections that traverse a single path between the sender and receiver. However, there are several environments in which multiple paths can be used by a connection simultaneously. In this paper we consider the problem of supporting striped connections that operate over multiple paths. We propose an end-to-end transport layer protocol called pTCP that allows connections to enjoy the aggregate bandwidths offered by the multiple paths, irrespective of the individual characteristics of the paths. We show that pTCP can have a varied range of applications through instantiations in three different environments: (a) bandwidth aggregation on multihomed mobile hosts, (b) service differentiation using purely end-to-end mechanisms, and (c) end-systems based network striping. In each of the applications we demonstrate the applicability of pTCP and how its efficacy compares with existing approaches through simulation results.",
"title": ""
},
{
"docid": "914f9bf7d24d0a0ee8c42e1263a04646",
"text": "With the rapid growth in the usage of social networks worldwide, uploading and sharing of user-generated content, both text and visual, has become increasingly prevalent. An analysis of the content a user shares and engages with can provide valuable insights into an individual's preferences and lifestyle. In this paper, we present a system to automatically infer a user's interests by analysing the content of the photos they share online. We propose a way to leverage web image search engines for detecting high-level semantic concepts, such as interests, in images, without relying on a large set of labeled images. We demonstrate the effectiveness of our system through quantitative and qualitative results on data collected from Instagram.",
"title": ""
},
{
"docid": "0cd077bec6516b3cdb86a8ccd185df78",
"text": "In this paper, a general purpose multi-agent classifier system based on the blackboard architecture using reinforcement Learning techniques is proposed for tackling complex data classification problems. A trust metric for evaluating agent’s performance and expertise based on Q-learning and employing different voting processes is formulated. Specifically, multiple heterogeneous machine learning agents, are devised to form the expertise group for the proposed Coordinated Heterogeneous Intelligent Multi-Agent Classifier System (CHIMACS). To evaluate the effectiveness of CHIMACS, a variety of benchmark problems are used, including small and high dimensional datasets with and without noise. The results from CHIMACS are compared with those of individual ML models and ensemble methods. The results indicate that CHIMACS is effective in identifying classifier agent expertise and can combine their knowledge to improve the overall prediction performance.",
"title": ""
},
{
"docid": "bab7a21f903157fcd0d3e70da4e7261a",
"text": "The clinical, electrophysiological and morphological findings (light and electron microscopy of the sural nerve and gastrocnemius muscle) are reported in an unusual case of Guillain-Barré polyneuropathy with an association of muscle hypertrophy and a syndrome of continuous motor unit activity. Fasciculation, muscle stiffness, cramps, myokymia, impaired muscle relaxation and percussion myotonia, with their electromyographic accompaniments, were abolished by peripheral nerve blocking, carbamazepine, valproic acid or prednisone therapy. Muscle hypertrophy, which was confirmed by morphometric data, diminished 2 months after the beginning of prednisone therapy. Electrophysiological and nerve biopsy findings revealed a mixed process of axonal degeneration and segmental demyelination. Muscle biopsy specimen showed a marked predominance and hypertrophy of type-I fibres and atrophy, especially of type-II fibres.",
"title": ""
},
{
"docid": "2391d0ea67da55155a8bffbf7b9b5776",
"text": "The way we talk about complex and abstract ideas is suffused with metaphor. In five experiments, we explore how these metaphors influence the way that we reason about complex issues and forage for further information about them. We find that even the subtlest instantiation of a metaphor (via a single word) can have a powerful influence over how people attempt to solve social problems like crime and how they gather information to make \"well-informed\" decisions. Interestingly, we find that the influence of the metaphorical framing effect is covert: people do not recognize metaphors as influential in their decisions; instead they point to more \"substantive\" (often numerical) information as the motivation for their problem-solving decision. Metaphors in language appear to instantiate frame-consistent knowledge structures and invite structurally consistent inferences. Far from being mere rhetorical flourishes, metaphors have profound influences on how we conceptualize and act with respect to important societal issues. We find that exposure to even a single metaphor can induce substantial differences in opinion about how to solve social problems: differences that are larger, for example, than pre-existing differences in opinion between Democrats and Republicans.",
"title": ""
},
{
"docid": "d67e0fa20185e248a18277e381c9d42d",
"text": "Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique.",
"title": ""
},
{
"docid": "525bb459b53b35d6b6084756220594eb",
"text": "We provide a simple closed-form solution to the Perspective three orthogonal angles (P3oA) problem: given the projection of three orthogonal lines in a calibrated camera, find their 3D directions. Upon this solution, an algorithm for the estimation of the camera relative rotation between two frames is proposed. The key idea is to detect triplets of orthogonal lines in a hypothesize-and-test framework and use all of them to compute the camera rotation in a robust way. This approach is suitable for human-made environments where numerous groups of orthogonal lines exist. We evaluate the numerical stability of the P3oA solution and the estimation of the relative rotation with synthetic and real data, comparing our results to other state-of-the-art approaches.",
"title": ""
},
{
"docid": "16e0f05272a33d0fe4ffeb5da918aed3",
"text": "BACKGROUND\nWith respect to the pathogenesis of periorbital and midfacial aging, gravity may play a greater role than volume loss.\n\n\nOBJECTIVES\nThe authors determined the effect of shifting from the upright to the supine position on specific attributes of facial appearance and ascertained whether facial appearance in the supine position bore any resemblance to its appearance in youth.\n\n\nMETHODS\nParticipants who showed signs of midface aging were positioned in the upright and supine positions, and photographs were obtained during smiling and repose. For each photograph, examiners graded the following anatomic parameters, using a standardized scale: brow position, tear trough length and depth, steatoblepharon, cheek volume, malar bags/festoons, and nasolabial folds. Some participants provided photographs of themselves taken 10 to 15 years earlier; these were compared with the study images.\n\n\nRESULTS\nInterobserver correlation was strong. When participants were transferred from upright to supine, all anatomic parameters examined became more youthful in appearance; findings were statistically significant. The grading of anatomic parameters of the earlier photographs most closely matched that of current supine photographs of the subjects smiling.\n\n\nCONCLUSIONS\nIn the supine position, as opposed to the upright position, participants with signs of midface aging appear to have much more volume in the periorbita and midface. For the subset of participants who provided photographs obtained 10 to 15 years earlier, the appearance of facial volume was similar between those images and the current supine photographs. This suggests that volume displacement due to gravitational forces plays an integral role in the morphogenesis of midface aging.",
"title": ""
},
{
"docid": "5157063545b7ec7193126951c3bdb850",
"text": "This paper presents an integrated system for navigation parameter estimation using sequential aerial images, where navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model (DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching the algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.",
"title": ""
},
{
"docid": "92025b98fec6619aac2849cbc6fb7a5a",
"text": "BACKGROUND\nTransthyretin amyloidosis (ATTR) is a heterogeneous disorder with multiorgan involvement and a genetic or nongenetic basis.\n\n\nOBJECTIVES\nThe goal of this study was to describe ATTR in the United States by using data from the THAOS (Transthyretin Amyloidosis Outcomes Survey) registry.\n\n\nMETHODS\nDemographic, clinical, and genetic features of patients enrolled in the THAOS registry in the United States (n = 390) were compared with data from patients from other regions of the world (ROW) (n = 2,140). The focus was on the phenotypic expression and survival in the majority of U.S. subjects with valine-to-isoleucine substitution at position 122 (Val122Ile) (n = 91) and wild-type ATTR (n = 189).\n\n\nRESULTS\nU.S. subjects are older (70 vs. 46 years), more often male (85.4% vs. 50.6%), and more often of African descent (25.4% vs. 0.5%) than the ROW. A significantly higher percentage of U.S. patients with ATTR amyloid seen at cardiology sites had wild-type disease than the ROW (50.5% vs. 26.2%). In the United States, 34 different mutations (n = 201) have been reported, with the most common being Val122Ile (n = 91; 45.3%) and Thr60Ala (n = 41; 20.4%). Overall, 91 (85%) of 107 patients with Val122Ile were from the United States, where Val122Ile subjects were younger and more often female and black than patients with wild-type disease, and had similar cardiac phenotype but a greater burden of neurologic symptoms (pain, numbness, tingling, and walking disability) and worse quality of life. Advancing age and lower mean arterial pressure, but not the presence of a transthyretin mutation, were independently associated with higher mortality from a multivariate analysis of survival.\n\n\nCONCLUSIONS\nIn the THAOS registry, ATTR in the United States is overwhelmingly a disorder of older adult male subjects with a cardiac-predominant phenotype. Val122Ile is the most common transthyretin mutation, and neurologic phenotypic expression differs between wild-type disease and Val122Ile, but survival from enrollment in THAOS does not. (Transthyretin-Associated Amyloidoses Outcome Survey [THAOS]; NCT00628745).",
"title": ""
}
] |
scidocsrr
|
340f11ea734e1e45d1c7a0ec80eea75a
|
CAT: Credibility Analysis of Arabic Content on Twitter
|
[
{
"docid": "f5f1b6e660b5010eb3d2ca60734511ca",
"text": "Arabic is the official language of hundreds of millions of people in twenty Middle East and northern African countries, and is the religious language of all Muslims of various ethnicities around the world. Surprisingly little has been done in the field of computerised language and lexical resources. It is therefore motivating to develop an Arabic (WordNet) lexical resource that discovers the richness of Arabic as described in Elkateb (2005). This paper describes our approach towards building a lexical resource in Standard Arabic. Arabic WordNet (AWN) will be based on the design and contents of the universally accepted Princeton WordNet (PWN) and will be mappable straightforwardly onto PWN 2.0 and EuroWordNet (EWN), enabling translation on the lexical level to English and dozens of other languages. Several tools specific to this task will be developed. AWN will be a linguistic resource with a deep formal semantic foundation. Besides the standard wordnet representation of senses, word meanings are defined with a machine understandable semantics in first order logic. The basis for this semantics is the Suggested Upper Merged Ontology (SUMO) and its associated domain ontologies. We will greatly extend the ontology and its set of mappings to provide formal terms and definitions equivalent to each synset.",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "aafda1cab832f1fe92ce406676e3760f",
"text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.",
"title": ""
}
] |
[
{
"docid": "e0021ce3472bd0cc87bbddef0dc24a07",
"text": "A complex signal demodulation technique is proposed to eliminate the null detection point problem in non-contact vital sign detection. This technique is robust against DC offset in direct conversion system. Based on the complex signal demodulation, a random body movement cancellation technique is developed to cancel out strong noise caused by random body movement in non-contact vital sign monitoring. Multiple transceivers and antennas with polarization and frequency multiplexing are used to detect signals from different body orientations. The noise due to random body movement is cancelled out based on different patterns of the desired and undesired signals. Experiments by highly compact 5–6 GHz portable radar systems have been performed to verify these two techniques.",
"title": ""
},
{
"docid": "a35a564a2f0e16a21e0ef5e26601eab9",
"text": "The social media revolution has created a dynamic shift in the digital marketing landscape. The voice of influence is moving from traditional marketers towards consumers through online social interactions. In this study, we focus on two types of online social interactions, namely, electronic word of mouth (eWOM) and observational learning (OL), and explore how they influence consumer purchase decisions. We also examine how receiver characteristics, consumer expertise and consumer involvement, moderate consumer purchase decision process. Analyzing panel data collected from a popular online beauty forum, we found that consumer purchase decisions are influenced by their online social interactions with others and that action-based OL information is more influential than opinion-based eWOM. Further, our results show that both consumer expertise and consumer involvement play an important moderating role, albeit in opposite direction: Whereas consumer expertise exerts a negative moderating effect, consumer involvement is found to have a positive moderating effect. The study makes important contributions to research and practice.",
"title": ""
},
{
"docid": "d2e434f472b60e17ab92290c78706945",
"text": "In recent years, a variety of review-based recommender systems have been developed, with the goal of incorporating the valuable information in user-generated textual reviews into the user modeling and recommending process. Advanced text analysis and opinion mining techniques enable the extraction of various types of review elements, such as the discussed topics, the multi-faceted nature of opinions, contextual information, comparative opinions, and reviewers’ emotions. In this article, we provide a comprehensive overview of how the review elements have been exploited to improve standard content-based recommending, collaborative filtering, and preference-based product ranking techniques. The review-based recommender system’s ability to alleviate the well-known rating sparsity and cold-start problems is emphasized. This survey classifies state-of-the-art studies into two principal branches: review-based user profile building and review-based product profile building. In the user profile sub-branch, the reviews are not only used to create term-based profiles, but also to infer or enhance ratings. Multi-faceted opinions can further be exploited to derive the weight/value preferences that users place on particular features. In another sub-branch, the product profile can be enriched with feature opinions or comparative opinions to better reflect its assessment quality. The merit of each branch of work is discussed in terms of both algorithm development and the way in which the proposed algorithms are evaluated. In addition, we discuss several future trends based on the survey, which may inspire investigators to pursue additional studies in this area.",
"title": ""
},
{
"docid": "c918f662a60b0ccb36159cf2f0bd051e",
"text": "Graph embedding is an eective method to represent graph data in a low dimensional space for graph analytics. Most existing embedding algorithms typically focus on preserving the topological structure or minimizing the reconstruction errors of graph data, but they have mostly ignored the data distribution of the latent codes from the graphs, which oen results in inferior embedding in real-world graph data. In this paper, we propose a novel adversarial graph embedding framework for graph data. e framework encodes the topological structure and node content in a graph to a compact representation, on which a decoder is trained to reconstruct the graph structure. Furthermore, the latent representation is enforced to match a prior distribution via an adversarial training scheme. To learn a robust embedding, two variants of adversarial approaches, adversarially regularized graph autoencoder (ARGA) and adversarially regularized variational graph autoencoder (ARVGA), are developed. Experimental studies on real-world graphs validate our design and demonstrate that our algorithms outperform baselines by a wide margin in link prediction, graph clustering, and graph visualization tasks.",
"title": ""
},
{
"docid": "83071476dae1d2a52e137683616668c2",
"text": "We present a strategy to make productive use of semantically-related social data, from a user-centered semantic network, in order to help users (tourists and citizens in general) to discover cultural heritage, points of interest and available services in a smart city. This data can be used to personalize recommendations in a smart tourism application. Our approach is based on flow centrality metrics typically used in social network analysis: flow betweenness, flow closeness and eccentricity. These metrics are useful to discover relevant nodes within the network yielding nodes that can be interpreted as suggestions (venues or services) to users. We describe the semantic network built on graph model, as well as social metrics algorithms used to produce recommendations. We also present challenges and results from a prototypical implementation applied to the case study of the City of Puebla, Mexico.",
"title": ""
},
{
"docid": "c977fe8fd0a4a2d80f3cedaf10981087",
"text": "This research tries to know how the political interaction of the citizens from the center of Ecuador is on the Internet. Every day more people use new technologies in order to learn about political issues. Virtual media allow different forms of citizen participation, making the traditional ones such as television, radio o written press to be obsolete. Through an online questionnaire, people were asked about the use of the political information in the different channels, and especially, about their participation in the last election campaign of 2017. The results show different patterns according to factors such as gender, age or occupation, confirming a panorama of transition between offline and online media.",
"title": ""
},
{
"docid": "247c5975699bb0d39fc6080eacdf2fb9",
"text": "Probabilistic modeling has been a dominant approach in Machine Learning research. As the field evolves, the problems of interest become increasingly challenging and complex. Making complex decisions in real world problems often involves assigning values to sets of interdependent variables where the expressive dependency structure can influence, or even dictate, what assignments are possible. However, incorporating nonlocal dependencies in a probabilistic model can lead to intractable training and inference. This paper presents Constraints Conditional Models (CCMs), a framework that augments probabilistic models with declarative constraints as a way to support decisions in an expressive output space while maintaining modularity and tractability of training. We further show that declarative constraints can be used to take advantage of unlabeled data when training the probabilistic model.",
"title": ""
},
{
"docid": "b9fa59783549f9ba2f3e9a1d63405eeb",
"text": "Selective classification techniques (also known as reject option) have not yet been considered in the context of deep neural networks (DNNs). These techniques can potentially significantly improve DNNs prediction performance by trading-off coverage. In this paper we propose a method to construct a selective classifier given a trained neural network. Our method allows a user to set a desired risk level. At test time, the classifier rejects instances as needed, to grant the desired risk (with high probability). Empirical results over CIFAR and ImageNet convincingly demonstrate the viability of our method, which opens up possibilities to operate DNNs in mission-critical applications. For example, using our method an unprecedented 2% error in top-5 ImageNet classification can be guaranteed with probability 99.9%, and almost 60% test coverage.",
"title": ""
},
{
"docid": "c53e4ab482ff23697d75a4b3872c57b5",
"text": "Climate Change during and after the Roman Empire: Reconstructing the Past from Scientiac and Historical Evidence When this journal pioneered the study of history and climate in 1979, the questions quickly outstripped contemporary science and history. Today climate science uses a formidable and expanding array of new methods to measure pre-modern environments, and to open the way to exploring how Journal of Interdisciplinary History, xliii:2 (Autumn, 2012), 169–220.",
"title": ""
},
{
"docid": "e9c52fb24425bff6ed514de6b92e8ba2",
"text": "This paper proposes a ultra compact Wilkinson power combiner (WPC) incorporating synthetic transmission lines at K-band in CMOS technology. The 50 % improvement on the size reduction can be achieved by increasing the slow-wave factor of synthetic transmission line. The presented Wilkinson power combiner design is analyzed and fabricated by using standard 0.18 µm 1P6M CMOS technology. The prototype has only a chip size of 480 µm × 90 µm, corresponding to 0.0002λ02 at 21.5 GHz. The measured insertion losses and return losses are less and higher than 4 dB and 17.5 dB from 16 GHz to 27 GHz, respectively. Furthermore, the proposed WPC is also integrated into the phase shifter to confirm its feasibility. The prototype of phase shifter shows 15 % size reduction and on-wafer measurements show good linearity of full 360-degree phase shifting from 21 GHz to 27 GHz.",
"title": ""
},
{
"docid": "febbef05f9100e9c0301685b13157e48",
"text": "Virtual reality (VR) shows promise in the application of healthcare and because it presents patients an immersive, often entertaining, approach to accomplish the goal of improvement in performance. Eighteen studies were reviewed to understand human performance and health outcomes after utilizing VR rehabilitation systems. We aimed to understand: (1) the influence of immersion in VR performance and health outcomes; (2) the relationship between enjoyment and potential patient adherence to VR rehabilitation routine; and (3) the influence of haptic feedback on performance in VR. Performance measures including postural stability, navigation task performance, and joint mobility showed varying relations to immersion. Limited data did not allow a solid conclusion between enjoyment and adherence, but patient enjoyment and willingness to participate were reported in care plans that incorporates VR. Finally, different haptic devices such as gloves and controllers provided both strengths and weakness in areas such movement velocity, movement accuracy, and path efficiency.",
"title": ""
},
{
"docid": "c96e8afc0c3e0428a257ba044cd2a35a",
"text": "The tumor necrosis factor ligand superfamily member receptor activator of nuclear factor-kB (NF-kB) ligand (RANKL), its cellular receptor, receptor activator of NF-kB (RANK), and the decoy receptor, osteoprotegerin (OPG) represent a novel cytokine triad with pleiotropic effects on bone metabolism, the immune system, and endocrine functions (1). RANKL is produced by osteoblastic lineage cells and activated T lymphocytes (2– 4) and stimulates its receptor, RANK, which is located on osteoclasts and dendritic cells (DC) (4, 5). The effects of RANKL within the skeleton include osteoblast –osteoclast cross-talks, resulting in enhanced differentiation, fusion, activation, and survival of osteoclasts (3, 6), while in the immune system, RANKL promotes the survival and immunostimulatory capacity of DC (1, 7). OPG acts as a soluble decoy receptor that neutralizes RANKL, thus preventing activation of RANK (8). The RANKL/RANK/OPG system has been implicated in various skeletal and immune-mediated diseases characterized by increased bone resorption and bone loss, including several forms of osteoporosis (postmenopausal, glucocorticoid-induced, and senile osteoporosis) (9), bone metastases (10), periodontal disease (11), and rheumatoid arthritis (2). While a relative deficiency of OPG has been found to be associated with osteoporosis in various animal models (9), the parenteral administration of OPG to postmenopausal women (3 mg/kg) was beneficial in rapidly reducing enhanced biochemical markers of bone turnover by 30–80% (12). These studies have clearly established the RANKL/ OPG system as a key cytokine network involved in the regulation of bone cell biology, osteoblast–osteoclast and bone-immune cross-talks, and maintenance of bone mass. In addition to providing substantial and detailed insights into the pathogenesis of various metabolic bone diseases, the administration of OPG may become a promising therapeutic option in the prevention and treatment of benign and malignant bone disease. Several studies have attempted to evaluate the clinical relevance and potential applications of serum OPG measurements in humans. Yano et al. were the first to assess systematically OPG serum levels (by an ELISA system) in women with osteoporosis (13). Intriguingly, OPG serum levels were negatively correlated with bone mineral density (BMD) at various sites (lumbar spine, femoral neck, and total body) and positively correlated with biochemical markers of bone turnover. In view of the established protective effects of OPG on bone, these findings came as a surprise, and were interpreted as an insufficient counter-regulatory mechanism to prevent bone loss. Another group which employed a similar design (but a different OPG ELISA system) could not detect a correlation between OPG serum levels and biochemical markers of bone turnover (14), but confirmed the negative correlation of OPG serum concentrations with BMD in postmenopausal women (15). In a recent study, Szulc and colleagues (16) evaluated OPG serum levels in an age-stratified male cohort, and observed positive correlations of OPG serum levels with bioavailable testosterone and estrogen levels, negative correlations with parathyroid hormone (PTH) serum levels and urinary excretion of total deoxypyridinoline, but no correlation with BMD at any site (16). 
The finding that PTH serum levels and gene expression of OPG by bone cells are inversely correlated was also reported in postmenopausal women (17), and systemic administration of human PTH(1-34) to postmenopausal women with osteoporosis inhibited circulating OPG serum levels (18). Finally, a study of patients with renal diseases showed a decline of serum OPG levels following initiation of systemic glucocorticoid therapy (19). The regulation pattern of OPG by systemic hormones has been described in vitro, and has led to the hypothesis that most hormones and cytokines regulate bone resorption by modulating either RANKL, OPG, or both (9). Interestingly, several studies showed that serum OPG levels increased with ageing and were higher in postmenopausal women (who have an increased rate of bone loss) as compared with men, thus supporting the hypothesis of a counter-regulatory function of OPG in order to prevent further bone loss (13–16). In this issue of the Journal, Ueland and associates (20) add another important piece to the picture of OPG regulation in humans in vivo. By studying well-characterized patient cohorts with endocrine and immune diseases such as Cushing’s syndrome, acromegaly, growth hormone deficiency, HIV infection, and common variable immunodeficiency (CVI), the investigators reported",
"title": ""
},
{
"docid": "9500e8bbbb21df9cde0b2e4b8ea72d89",
"text": "The practice of crowdsourcing is transforming the Web and giving rise to a new field.",
"title": ""
},
{
"docid": "470d3ce4828288a775043a6b74175e14",
"text": "Ascites is a common complication of liver cirrhosis associated with a poor prognosis. The treatment of ascites requires dietary sodium restriction and the judicious use of distal and loop diuretics, sequential at an earlier stage of ascites, and a combination at a later stage of ascites. The diagnosis of refractory ascites requires the demonstration of diuretic non-responsiveness, despite dietary sodium restriction, or the presence of diuretic-related complications. Patients with refractory ascites require second-line treatments of repeat large-volume paracentesis (LVP) or the insertion of a transjugular intrahepatic portosystemic shunt (TIPS), and assessment for liver transplantation. Careful patient selection is paramount for TIPS to be successful as a treatment for ascites. Patients not suitable for TIPS insertion should receive LVP. The use of albumin as a volume expander is recommended for LVP of >5-6 L to prevent the development of circulatory dysfunction, although the clinical significance of post-paracentesis circulatory dysfunction is still debated. Significant mortality is still being observed in cirrhotic patients with ascites and relatively preserved liver and renal function, as indicated by a lower Model for End-Stage Liver Disease (MELD) score. It is proposed that patients with lower MELD scores and ascites should receive additional points in calculating their priority for liver transplantation. Potential new treatment options for ascites include the use of various vasoconstrictors, vasopressin V(2) receptor antagonists, or the insertion of a peritoneo-vesical shunt, all of which could possibly improve the management of ascites.",
"title": ""
},
{
"docid": "eb20856f797f35ea6eb05f4646e54f34",
"text": "Malware in smartphones is growing at a signi cant rate. There are currently more than 250 million smartphone users in the world and this number is expected to grow in coming years [44]. In the past few years, smartphones have evolved from simple mobile phones into sophisticated computers. This evolution has enabled smartphone users to access and browse the Internet, to receive and send emails, SMS and MMS messages and to connect devices in order to exchange information. All of these features make the smartphone a useful tool in our daily lives, but at the same time they render it more vulnerable to attacks by malicious applications. Given that most users store sensitive information on their mobile phones, such as phone numbers, SMS messages, emails, pictures and videos, smartphones are a very appealing target for attackers and malware developers. The need to maintain security and data con dentiality on the Android platform makes the analysis of malware on this platform an urgent issue. We have based this report on previous approaches to the dynamic analysis of application behavior, and have adapted one approach in order to detect malware on the Android platform. The detector is embedded in a framework to collect traces from a number of real users and is based on crowdsourcing. Our framework has been tested by analyzing data collected at the central server using two types of data sets: data from arti cial malware created for test purposes and data from real malware found in the wild. The method used is shown to be an e ective means of isolating malware and alerting users of downloaded malware, which suggests that it has great potential for helping to stop the spread of detected malware to a larger community. Finally, the report will give a complete review of results for self written and real Android Malware applications that have been tested with the system. This thesis project shows that it is feasible to create an Android malware detection system with satisfactory results.",
"title": ""
},
{
"docid": "160d0ba08cfade25b512c8fd46363451",
"text": "We present structured data fusion (SDF) as a framework for the rapid prototyping of knowledge discovery in one or more possibly incomplete data sets. In SDF, each data set-stored as a dense, sparse, or incomplete tensor-is factorized with a matrix or tensor decomposition. Factorizations can be coupled, or fused, with each other by indicating which factors should be shared between data sets. At the same time, factors may be imposed to have any type of structure that can be constructed as an explicit function of some underlying variables. With the right choice of decomposition type and factor structure, even well-known matrix factorizations such as the eigenvalue decomposition, singular value decomposition and QR factorization can be computed with SDF. A domain specific language (DSL) for SDF is implemented as part of the software package Tensorlab, with which we offer a library of tensor decompositions and factor structures to choose from. The versatility of the SDF framework is demonstrated by means of four diverse applications, which are all solved entirely within Tensorlab's DSL.",
"title": ""
},
{
"docid": "3a7427c67b7758516af15da12b663c40",
"text": "The initial focus of recombinant protein production by filamentous fungi related to exploiting the extraordinary extracellular enzyme synthesis and secretion machinery of industrial strains, including Aspergillus, Trichoderma, Penicillium and Rhizopus species, was to produce single recombinant protein products. An early recognized disadvantage of filamentous fungi as hosts of recombinant proteins was their common ability to produce homologous proteases which could degrade the heterologous protein product and strategies to prevent proteolysis have met with some limited success. It was also recognized that the protein glycosylation patterns in filamentous fungi and in mammals were quite different, such that filamentous fungi are likely not to be the most suitable microbial hosts for production of recombinant human glycoproteins for therapeutic use. By combining the experience gained from production of single recombinant proteins with new scientific information being generated through genomics and proteomics research, biotechnologists are now poised to extend the biomanufacturing capabilities of recombinant filamentous fungi by enabling them to express genes encoding multiple proteins, including, for example, new biosynthetic pathways for production of new primary or secondary metabolites. It is recognized that filamentous fungi, most species of which have not yet been isolated, represent an enormously diverse source of novel biosynthetic pathways, and that the natural fungal host harboring a valuable biosynthesis pathway may often not be the most suitable organism for biomanufacture purposes. Hence it is expected that substantial effort will be directed to transforming other fungal hosts, non-fungal microbial hosts and indeed non microbial hosts to express some of these novel biosynthetic pathways. But future applications of recombinant expression of proteins will not be confined to biomanufacturing. Opportunities to exploit recombinant technology to unravel the causes of the deleterious impacts of fungi, for example as human, mammalian and plant pathogens, and then to bring forward solutions, is expected to represent a very important future focus of fungal recombinant protein technology.",
"title": ""
},
{
"docid": "965a9347ea33394aaa702c74c27a4642",
"text": "Underwater wireless communications can be carried out through acoustic, radio frequency (RF), and optical waves. Compared to its bandwidth limited acoustic and RF counterparts, underwater optical wireless communications (UOWCs) can support higher data rates at low latency levels. However, severe aquatic channel conditions (e.g., absorption, scattering, turbulence, etc.) pose great challenges for UOWCs and significantly reduce the attainable communication ranges, which necessitates efficient networking and localization solutions. Therefore, we provide a comprehensive survey on the challenges, advances, and prospects of underwater optical wireless networks (UOWNs) from a layer by layer perspective which includes: 1) Potential network architectures; 2) Physical layer issues including propagation characteristics, channel modeling, and modulation techniques 3) Data link layer problems covering link configurations, link budgets, performance metrics, and multiple access schemes; 4) Network layer topics containing relaying techniques and potential routing algorithms; 5) Transport layer subjects such as connectivity, reliability, flow and congestion control; 6) Application layer goals and state-of-the-art UOWN applications, and 7) Localization and its impacts on UOWN layers. Finally, we outline the open research challenges and point out the future directions for underwater optical wireless communications, networking, and localization research.",
"title": ""
},
{
"docid": "c61c111c5b5d1c4663905371b638e703",
"text": "Many standard computer vision datasets exhibit biases due to a variety of sources including illumination condition, imaging system, and preference of dataset collectors. Biases like these can have downstream effects in the use of vision datasets in the construction of generalizable techniques, especially for the goal of the creation of a classification system capable of generalizing to unseen and novel datasets. In this work we propose Unbiased Metric Learning (UML), a metric learning approach, to achieve this goal. UML operates in the following two steps: (1) By varying hyper parameters, it learns a set of less biased candidate distance metrics on training examples from multiple biased datasets. The key idea is to learn a neighborhood for each example, which consists of not only examples of the same category from the same dataset, but those from other datasets. The learning framework is based on structural SVM. (2) We do model validation on a set of weakly-labeled web images retrieved by issuing class labels as keywords to search engine. The metric with best validation performance is selected. Although the web images sometimes have noisy labels, they often tend to be less biased, which makes them suitable for the validation set in our task. Cross-dataset image classification experiments are carried out. Results show significant performance improvement on four well-known computer vision datasets.",
"title": ""
},
{
"docid": "f13ff4d4526e62bd7f2aa91356fea1a5",
"text": "This work presents a general simulation tool to evaluate the performance of a set of cable suspended rehabilitation robots. Such a simulator is based on the mechanical model of the upper limb of a patient. The tool was employed to assess the performances of two cable-driven robots, the NeReBot and the MariBot, developed at the Robotics & Mechatronics Laboratories of the Department of Innovation in Mechanics and Management (DIMEG) of University of Padua, Italy. This comparison demonstrates that the second machine, which was conceived as an evolution of the first one, yields much better results in terms of patient's arm trajectories.",
"title": ""
}
] |
scidocsrr
|
a77c36531f5f1241061edb768bef757e
|
Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps
|
[
{
"docid": "609cc8dd7323e817ddfc5314070a68bf",
"text": "We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras.",
"title": ""
}
] |
[
{
"docid": "8d3e93e59a802535e9d5ef7ca7ace362",
"text": "Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically-relevant behaviors of a dynamical neuron model.",
"title": ""
},
{
"docid": "23a77ef19b59649b50f168b1cb6cb1c5",
"text": "A novel interleaved high step-up converter with voltage multiplier cell is proposed in this paper to avoid the extremely narrow turn-off period and to reduce the current ripple, which flows through the power devices compared with the conventional interleaved boost converter in high step-up applications. Interleaved structure is employed in the input side to distribute the input current, and the voltage multiplier cell is adopted in the output side to achieve a high step-up gain. The voltage multiplier cell is composed of the secondary windings of the coupled inductors, a series capacitor, and two diodes. Furthermore, the switch voltage stress is reduced due to the transformer function of the coupled inductors, which makes low-voltage-rated MOSFETs available to reduce the conduction losses. Moreover, zero-current-switching turn- on soft-switching performance is realized to reduce the switching losses. In addition, the output diode turn-off current falling rate is controlled by the leakage inductance of the coupled inductors, which alleviates the diode reverse recovery problem. Additional active device is not required in the proposed converter, which makes the presented circuit easy to design and control. Finally, a 1-kW 40-V-input 380-V-output prototype operating at 100 kHz switching frequency is built and tested to verify the effectiveness of the presented converter.",
"title": ""
},
{
"docid": "28625bffdddecacbf217aef469df68c8",
"text": "An ultrawide-band coplanar waveguide (CPW) fed slot antenna is presented. A rectangular slot antenna is excited by a 50-/spl Omega/ CPW with a U-shaped tuning stub. The impedance bandwidth, from both measurement and simulation, is about 110% (S11<-10 dB). The antenna radiates bi-directionally. The radiation patterns obtained from simulations are found to be stable across the matching band and experimental verification is provided at the high end of the band.",
"title": ""
},
{
"docid": "3718dbbcdf7d89ba4d41a4d29770d0da",
"text": "Sequential pattern mining is a popular data mining task with wide applications. However, it may present too many sequential patterns to users, which makes it difficult for users to comprehend the results. As a solution, it was proposed to mine maximal sequential patterns, a compact representation of the set of sequential patterns, which is often several orders of magnitude smaller than the set of all sequential patterns. However, the task of mining maximal patterns remains computationally expensive. To address this problem, we introduce a vertical mining algorithm named VMSP (Vertical mining of Maximal Sequential Patterns). It is to our knowledge the first vertical mining algorithm for mining maximal sequential patterns. An experimental study on five real datasets shows that VMSP is up to two orders of magnitude faster than the current state-of-the-art algorithm.",
"title": ""
},
{
"docid": "d5d2b61493ed11ee74d566b7713b57ba",
"text": "BACKGROUND\nSymptomatic breakthrough in proton pump inhibitor (PPI)-treated gastro-oesophageal reflux disease (GERD) patients is a common problem with a range of underlying causes. The nonsystemic, raft-forming action of alginates may help resolve symptoms.\n\n\nAIM\nTo assess alginate-antacid (Gaviscon Double Action, RB, Slough, UK) as add-on therapy to once-daily PPI for suppression of breakthrough reflux symptoms.\n\n\nMETHODS\nIn two randomised, double-blind studies (exploratory, n=52; confirmatory, n=262), patients taking standard-dose PPI who had breakthrough symptoms, assessed by Heartburn Reflux Dyspepsia Questionnaire (HRDQ), were randomised to add-on Gaviscon or placebo (20 mL after meals and bedtime). The exploratory study endpoint was change in HRDQ score during treatment vs run-in. The confirmatory study endpoint was \"response\" defined as ≥3 days reduction in the number of \"bad\" days (HRDQ [heartburn/regurgitation] >0.70) during treatment vs run-in.\n\n\nRESULTS\nIn the exploratory study, significantly greater reductions in HRDQ scores (heartburn/regurgitation) were observed in the Gaviscon vs placebo (least squares mean difference [95% CI] -2.10 [-3.71 to -0.48]; P=.012). Post hoc \"responder\" analysis of the exploratory study also revealed significantly more Gaviscon patients (75%) achieved ≥3 days reduction in \"bad\" days vs placebo patients (36%), P=.005. In the confirmatory study, symptomatic improvement was observed with add-on Gaviscon (51%) but there was no significant difference in response vs placebo (48%) (OR (95% CI) 1.15 (0.69-1.91), P=.5939).\n\n\nCONCLUSIONS\nAdding Gaviscon to PPI reduced breakthrough GERD symptoms but a nearly equal response was observed for placebo. Response to intervention may vary according to whether symptoms are functional in origin.",
"title": ""
},
{
"docid": "8ccbf0f95df6d4d3c8eba33befc0f6b7",
"text": "Tactile graphics play an essential role in knowledge transfer for blind people. The tactile exploration of these graphics is often challenging because of the cognitive load caused by physiological constraints and their complexity. The coupling of physical tactile graphics with electronic devices offers to support the tactile exploration by auditory feedback. Often, these systems have strict constraints regarding their mobility or the process of coupling both components. Additionally, visually impaired people cannot appropriately benefit from their residual vision. This article presents a concept for 3D printed tactile graphics, which offers to use audio-tactile graphics with usual smartphones or tablet-computers. By using capacitive markers, the coupling of the tactile graphics with the mobile device is simplified. These tactile graphics integrating these markers can be printed in one turn by off-the-shelf 3D printers without any post-processing and allows us to use multiple elevation levels for graphical elements. Based on the developed generic concept on visually augmented audio-tactile graphics, we presented a case study for maps. A prototypical implementation was tested by a user study with visually impaired people. All the participants were able to interact with the 3D printed tactile maps using a standard tablet computer. To study the effect of visual augmentation of graphical elements, we conducted another comprehensive user study. We tested multiple types of graphics and obtained evidence that visual augmentation may offer clear advantages for the exploration of tactile graphics. Even participants with a minor residual vision could solve the tasks with visual augmentation more quickly and accurately.",
"title": ""
},
{
"docid": "3f33882e4bece06e7a553eb9133f8aa9",
"text": "Research on the relationship between affect and cognition in Artificial Intelligence in Education (AIEd) brings an important dimension to our understanding of how learning occurs and how it can be facilitated. Emotions are crucial to learning, but their nature, the conditions under which they occur, and their exact impact on learning for different learners in diverse contexts still needs to be mapped out. The study of affect during learning can be challenging, because emotions are subjective, fleeting phenomena that are often difficult for learners to report accurately and for observers to perceive reliably. Context forms an integral part of learners’ affect and the study thereof. This review provides a synthesis of the current knowledge elicitation methods that are used to aid the study of learners’ affect and to inform the design of intelligent technologies for learning. Advantages and disadvantages of the specific methods are discussed along with their respective potential for enhancing research in this area, and issues related to the interpretation of data that emerges as the result of their use. References to related research are also provided together with illustrative examples of where the individual methods have been used in the past. Therefore, this review is intended as a resource for methodological decision making for those who want to study emotions and their antecedents in AIEd contexts, i.e. where the aim is to inform the design and implementation of an intelligent learning environment or to evaluate its use and educational efficacy.",
"title": ""
},
{
"docid": "eceb513e5d67d66986597555cf16c814",
"text": "This study examines the statistical validation of a recently developed, fourth-generation (4G) risk–need assessment system (Correctional Offender Management Profiling for Alternative Sanctions; COMPAS) that incorporates a range of theoretically relevant criminogenic factors and key factors emerging from meta-analytic studies of recidivism. COMPAS’s automated scoring provides decision support for correctional agencies for placement decisions, offender management, and treatment planning. The article describes the basic features of COMPAS and then examines the predictive validity of the COMPAS risk scales by fitting Cox proportional hazards models to recidivism outcomes in a sample of presentence investigation and probation intake cases (N = 2,328). Results indicate that the predictive validities for the COMPAS recidivism risk model, as assessed by the area under the receiver operating characteristic curve (AUC), equal or exceed similar 4G instruments. The AUCs ranged from .66 to .80 for diverse offender subpopulations across three outcome criteria, with a majority of these exceeding .70.",
"title": ""
},
{
"docid": "0da5935c1630e7ed7e129410096e971d",
"text": "In this paper, an analysis of power line interference in two-electrode biopotential measurement amplifiers is presented. A model of the amplifier that includes its input stage and takes into account the effects of the common mode input impedance Z/sub C/ is proposed. This approach is valid for high Z/sub C/ values, and also for some recently proposed low-Z/sub C/ strategies. It is shown that power line interference rejection becomes minimal for extreme Z/sub C/ values (null or infinite), depending on the electrode-skin impedance's unbalance /spl Delta/Z/sub E/. For low /spl Delta/Z/sub E/ values, minimal interference is achieved by a low Z/sub C/ strategy (Z/sub C/=0), while for high /spl Delta/Z/sub E/ values a very high Z/sub C/ is required. A critical /spl Delta/Z/sub E/ is defined to select the best choice, as a function of the amplifier's Common Mode Rejection Ratio (CMRR) and stray coupling capacitances. Conclusions are verified experimentally using a biopotential amplifier specially designed for this test.",
"title": ""
},
{
"docid": "61359ded391acaaaab0d4b9a0d851b8c",
"text": "A laparoscopic Heller myotomy with partial fundoplication is considered today in most centers in the United States and abroad the treatment of choice for patients with esophageal achalasia. Even though the operation has initially a very high success rate, dysphagia eventually recurs in some patients. In these cases, it is important to perform a careful work-up to identify the cause of the failure and to design a tailored treatment plan by either endoscopic means or revisional surgery. The best results are obtained by a team approach, in Centers where radiologists, gastroenterologists, and surgeons have experience in the diagnosis and treatment of this disease.",
"title": ""
},
{
"docid": "fe57837e690669a5e3083c4c3f06b186",
"text": "The complexity of advanced driver-assistance systems (ADASs) is steadily increasing. While the first applications were based on mere warnings, current systems actively intervene in the driving process. Due to this development, such systems have to automatically choose between different action alternatives. From an algorithmic point of view, this requires automatic decision making on the basis of uncertain data. In this paper, the application of decision networks for this problem is proposed. It is demonstrated how this approach facilitates automatic maneuver decisions in a prototypical lane change assistance system. Furthermore, relevant research questions and unsolved problems related to this topic are identified.",
"title": ""
},
{
"docid": "0f17262293f98685383c71381ca10bd9",
"text": "This paper presents the application of frequency selective surfaces in antenna arrays as an alternative to improve radiation parameters of the array. A microstrip antenna array between two FSS was proposed for application in WLAN and LTE 4G systems. Several parameters have been significantly improved, in particular the bandwidth, gain and radiation efficiency, compared with a conventional array. Numerical and measured results are presented.",
"title": ""
},
{
"docid": "f127a40480887dd9b740fec5064a45ea",
"text": "Distributed word representations are very useful for capturing semantic information and have been successfully applied in a variety of NLP tasks, especially on English. In this work, we innovatively develop two component-enhanced Chinese character embedding models and their bigram extensions. Distinguished from English word embeddings, our models explore the compositions of Chinese characters, which often serve as semantic indictors inherently. The evaluations on both word similarity and text classification demonstrate the effectiveness of our models.",
"title": ""
},
{
"docid": "28b6c4302d61583758fa06fa3f1f59ff",
"text": "Non-destructive eddy current testing (ECT) is widely used to examine structural defects in ferromagnetic pipe in the oil and gas industry. Implementation of giant magnetoresistance (GMR) sensors as magnetic field sensors to detect the changes of magnetic field continuity have increased the sensitivity of eddy current techniques in detecting the material defect profile. However, not many researchers have described in detail the structure and issues of GMR sensors and their application in eddy current techniques for nondestructive testing. This paper will describe the implementation of GMR sensors in non-destructive testing eddy current testing. The first part of this paper will describe the structure and principles of GMR sensors. The second part outlines the principles and types of eddy current testing probe that have been studied and developed by previous researchers. The influence of various parameters on the GMR measurement and a factor affecting in eddy current testing will be described in detail in the third part of this paper. Finally, this paper will discuss the limitations of coil probe and compensation techniques that researchers have applied in eddy current testing probes. A comprehensive review of previous studies on the application of GMR sensors in non-destructive eddy current testing also be given at the end of this paper.",
"title": ""
},
{
"docid": "424221955406c1da9a97e8fd5c3de2f1",
"text": "BACKGROUND\nAn important contribution of the social determinants of health perspective has been to inquire about non-medical determinants of population health. Among these, labour market regulations are of vital significance. In this study, we investigate the labour market regulations among low- and middle-income countries (LMICs) and propose a labour market taxonomy to further understand population health in a global context.\n\n\nMETHODS\nUsing Gross National Product per capita, we classify 113 countries into either low-income (n = 71) or middle-income (n = 42) strata. Principal component analysis of three standardized indicators of labour market inequality and poverty is used to construct 2 factor scores. Factor score reliability is evaluated with Cronbach's alpha. Using these scores, we conduct a hierarchical cluster analysis to produce a labour market taxonomy, conduct zero-order correlations, and create box plots to test their associations with adult mortality, healthy life expectancy, infant mortality, maternal mortality, neonatal mortality, under-5 mortality, and years of life lost to communicable and non-communicable diseases. Labour market and health data are retrieved from the International Labour Organization's Key Indicators of Labour Markets and World Health Organization's Statistical Information System.\n\n\nRESULTS\nSix labour market clusters emerged: Residual (n = 16), Emerging (n = 16), Informal (n = 10), Post-Communist (n = 18), Less Successful Informal (n = 22), and Insecure (n = 31). Primary findings indicate: (i) labour market poverty and population health is correlated in both LMICs; (ii) association between labour market inequality and health indicators is significant only in low-income countries; (iii) Emerging (e.g., East Asian and Eastern European countries) and Insecure (e.g., sub-Saharan African nations) clusters are the most advantaged and disadvantaged, respectively, with the remaining clusters experiencing levels of population health consistent with their labour market characteristics.\n\n\nCONCLUSIONS\nThe labour market regulations of LMICs appear to be important social determinant of population health. This study demonstrates the heuristic value of understanding the labour markets of LMICs and their health effects using exploratory taxonomy approaches.",
"title": ""
},
{
"docid": "c2e92f8289ebf50ca363840133dc2a43",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.08.042 ⇑ Address: WOLNM & ESIME Zacatenco, Instituto Politécnico Nacional, U. Profesional Adolfo López Mateos, Edificio Z-4, 2do piso, cubiculo 6, Miguel Othón de Mendizábal S/N, La Escalera, Gustavo A. Madero, D.F., C.P. 07320, Mexico. Tel.: +52 55 5694 0916/+52 55 5454 2611 (cellular); fax: +52 55 5694 0916. E-mail address: apenaa@ipn.mx URL: http://www.wolnm.org/apa 1 AIWBES: adaptive and intelligent web-based educational systems; BKT: Bayesian knowledge tracing; CBES: computer-based educational systems; CBIS: computerbased information system,; DM: data mining; DP: dynamic programming; EDM: educational data mining; EM: expectation maximization; HMM: hidden Markov model; IBL: instances-based learning; IRT: item response theory; ITS: intelligent tutoring systems; KDD: knowledge discovery in databases; KT: knowledge tracing; LMS: learning management systems; SNA: social network analysis; SWOT: strengths, weakness, opportunities, and threats; WBC: web-based courses; WBES: web-based educational systems. Alejandro Peña-Ayala ⇑",
"title": ""
},
{
"docid": "a3cb6d84445bea04c5da888d34928c94",
"text": "In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention that learns the module weights as well as the word/phrase attention that each module should focus on; and visual attention that allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-the-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks. Demo1 and code2 are provided.",
"title": ""
},
{
"docid": "466c0d9436e1f1878aaafa2297022321",
"text": "Acetic acid was used topically at concentrations of between 0.5% and 5% to eliminate Pseudomonas aeruginosa from the burn wounds or soft tissue wounds of 16 patients. In-vitro studies indicated the susceptibility of P. aeruginosa to acetic acid; all strains exhibited a minimum inhibitory concentration of 2 per cent. P. aeruginosa was eliminated from the wounds of 14 of the 16 patients within two weeks of treatment. Acetic acid was shown to be an inexpensive and efficient agent for the elimination of P. aeruginosa from burn and soft tissue wounds.",
"title": ""
},
{
"docid": "d9214591462b0780ede6d58dab42f48c",
"text": "Software testing in general and graphical user interface (GUI) testing in particular is one of the major challenges in the lifecycle of any software system. GUI testing is inherently more difficult than the traditional and command-line interface testing. Some of the factors that make GUI testing different from the traditional software testing and significantly more difficult are: a large number of objects, different look and feel of objects, many parameters associated with each object, progressive disclosure, complex inputs from multiple sources, and graphical outputs. The existing testing techniques for the creation and management of test suites need to be adapted/enhanced for GUIs, and new testing techniques are desired to make the creation and management of test suites more efficient and effective. In this article, a methodology is proposed to create test suites for a GUI. The proposed methodology organizes the testing activity into various levels. The tests created at a particular level can be reused at higher levels. This methodology extends the notion of modularity and reusability to the testing phase. The organization and management of the created test suites resembles closely to the structure of the GUI under test.",
"title": ""
},
{
"docid": "17cbead431425018818b649b1b69b527",
"text": "In this letter, a flexible memory simulator - NVMain 2.0, is introduced to help the community for modeling not only commodity DRAMs but also emerging memory technologies, such as die-stacked DRAM caches, non-volatile memories (e.g., STT-RAM, PCRAM, and ReRAM) including multi-level cells (MLC), and hybrid non-volatile plus DRAM memory systems. Compared to existing memory simulators, NVMain 2.0 features a flexible user interface with compelling simulation speed and the capability of providing sub-array-level parallelism, fine-grained refresh, MLC and data encoder modeling, and distributed energy profiling.",
"title": ""
}
] |
scidocsrr
|
60b4cf6bce1cd3b2cf55480498edd93a
|
Highly wearable cuff-less blood pressure and heart rate monitoring with single-arm electrocardiogram and photoplethysmogram signals
|
[
{
"docid": "4ad535f3b4f1afba4497a4026236424e",
"text": "We study the problem of noninvasively estimating Blood Pressure (BP) without using a cuff, which is attractive for continuous monitoring of BP over Body Area Networks. It has been shown that the Pulse Arrival Time (PAT) measured as the delay between the ECG peak and a point in the finger PPG waveform can be used to estimate systolic and diastolic BP. Our aim is to evaluate the performance of such a method using the available MIMIC database, while at the same time improve the performance of existing techniques. We propose an algorithm to estimate BP from a combination of PAT and heart rate, showing improvement over PAT alone. We also show how the method achieves recalibration using an RLS adaptive algorithm. Finally, we address the use case of ECG and PPG sensors wirelessly communicating to an aggregator and study the effect of skew and jitter on BP estimation.",
"title": ""
},
{
"docid": "7163ac38e34ba281fdfeb5b473d378b2",
"text": "The clinical demand for a device to monitor blood pressure (BP) in ambulatory scenarios with minimal use of inflation cuffs is increasing. Based on the so-called pulse wave velocity (PWV) principle, this paper introduces and evaluates a novel concept of BP monitor that can be fully integrated within a chest sensor. After a preliminary calibration, the sensor provides nonocclusive beat-by-beat estimations of mean arterial pressure (MAP) by measuring the pulse transit time (PTT) of arterial pressure pulses travelling from the ascending aorta toward the subcutaneous vasculature of the chest. In a cohort of 15 healthy male subjects, a total of 462 simultaneous readings consisting of reference MAP and chest PTT were acquired. Each subject was recorded at three different days: D, D+3, and D+14. Overall, the implemented protocol induced MAP values to range from 80 ± 6 mmHg in baseline, to 107 ± 9 mmHg during isometric handgrip maneuvers. Agreement between reference and chest-sensor MAP values was tested by using intraclass correlation coefficient (ICC = 0.78) and Bland-Altman analysis (mean error = 0.7 mmHg, standard deviation = 5.1 mmHg). The cumulative percentage of MAP values provided by the chest sensor falling within a range of ±5 mmHg compared to reference MAP readings was of 70%, within ±10 mmHg was of 91%, and within ±15 mmHg was of 98%. These results point at the fact that the chest sensor complies with the British Hypertension Society requirements of Grade A BP monitors, when applied to MAP readings. Grade A performance was maintained even two weeks after having performed the initial subject-dependent calibration. In conclusion, this paper introduces a sensor and a calibration strategy to perform MAP measurements at the chest. The encouraging performance of the presented technique paves the way toward an ambulatory compliant, continuous, and nonocclusive BP monitoring system.",
"title": ""
}
] |
[
{
"docid": "49740b1faa60a212297926fec63de0ce",
"text": "In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problemempirically, using supervised machine learning with the SNoW learning architecture. The goal is to classify the emotional affinity of sentences in the narrative domain of children’s fairy tales, for subsequent usage in appropriate expressive rendering of text-to-speech synthesis. Initial experiments on a preliminary data set of 22 fairy tales show encouraging results over a na ı̈ve baseline and BOW approach for classification of emotional versus non-emotional contents, with some dependency on parameter tuning. We also discuss results for a tripartite model which covers emotional valence, as well as feature set alternations. In addition, we present plans for a more cognitively sound sequential model, taking into consideration a larger set of basic emotions.",
"title": ""
},
{
"docid": "61ce4f9ec7e72e88294ab0db4ad0b639",
"text": "Although sexist attitudes are generally thought to undermine support for employment equity (EE) policies supporting women, we argue that the effects of benevolent sexism are more complex. Across 4 studies, we extend the ambivalent sexism literature by examining both the positive and the negative effects benevolent sexism has for the support of gender-based EE policies. On the positive side, we show that individuals who endorse benevolent sexist attitudes on trait measures of sexism (Study 1) and individuals primed with benevolent sexist attitudes (Study 2) are more likely to support an EE policy, and that this effect is mediated by feelings of compassion. On the negative side, we find that this support extends only to EE policies that promote the hiring of women in feminine, and not in masculine, positions (Study 3 and 4). Thus, while benevolent sexism may appear to promote gender equality, it subtly undermines it by contributing to occupational gender segregation and leading to inaction in promoting women in positions in which they are underrepresented (i.e., masculine positions). (PsycINFO Database Record",
"title": ""
},
{
"docid": "0ecded7fad85b79c4c288659339bc18b",
"text": "We present an end-to-end supervised based system for detecting malware by analyzing network traffic. The proposed method extracts 972 behavioral features across different protocols and network layers, and refers to different observation resolutions (transaction, session, flow and conversation windows). A feature selection method is then used to identify the most meaningful features and to reduce the data dimensionality to a tractable size. Finally, various supervised methods are evaluated to indicate whether traffic in the network is malicious, to attribute it to known malware “families” and to discover new threats. A comparative experimental study using real network traffic from various environments indicates that the proposed system outperforms existing state-of-the-art rule-based systems, such as Snort and Suricata. In particular, our chronological evaluation shows that many unknown malware incidents could have been detected at least a month before their static rules were introduced to either the Snort or Suricata systems.",
"title": ""
},
{
"docid": "f81cd7e1cfbfc15992fba9368c1df30b",
"text": "The most challenging issue of conventional Time Amplifiers (TAs) is their limited Dynamic Range (DR). This paper presents a mathematical analysis to clarify principle of operation of conventional 2× TA's. The mathematical derivations release strength reduction of the current sources of the TA is the simplest way to increase DR. Besides, a new technique is presented to expand the Dynamic Range (DR) of conventional 2× TAs. Proposed technique employs current subtraction in place of changing strength of current sources using conventional gain compensation methods, which results in more stable gain over a wider DR. The TA is simulated using Spectre-rf in TSMC 0.18um COMS technology. DR of the 2× TA is expanded to 300ps only with 9% gain error while it consumes only 28uW from a 1.2V supply voltage.",
"title": ""
},
{
"docid": "f597c21404b091c0f4046b7c6429c98c",
"text": "We report on an architecture for the unsupervised discovery of talker-invariant subword embeddings. It is made out of two components: a dynamic-time warping based spoken term discovery (STD) system and a Siamese deep neural network (DNN). The STD system clusters word-sized repeated fragments in the acoustic streams while the DNN is trained to minimize the distance between time aligned frames of tokens of the same cluster, and maximize the distance between tokens of different clusters. We use additional side information regarding the average duration of phonemic units, as well as talker identity tags. For evaluation we use the datasets and metrics of the Zero Resource Speech Challenge. The model shows improvement over the baseline in subword unit modeling.",
"title": ""
},
{
"docid": "eed515cb3a2a990e67bf76c176c16d29",
"text": "This paper describes the question generation system developed at UPenn for QGSTEC, 2010. The system uses predicate argument structures of sentences along with semantic roles for the question generation task from paragraphs. The semantic role labels are used to identify relevant parts of text before forming questions over them. The generated questions are then ranked to pick final six best questions.",
"title": ""
},
{
"docid": "9321905fe504f3a1f5c5e63e92f9d5ec",
"text": "The principles of implementation of the control system with sinusoidal PWM inverter voltage frequency scalar and vector control induction motor are reviewed. Comparisons of simple control system with sinusoidal PWM control system and sinusoidal PWM control with an additional third-harmonic signal and gain modulated control signal are carried out. There are shown the maximum amplitude and actual values phase and line inverter output voltage at the maximum amplitude of the control signals. Recommendations on the choice of supply voltage induction motor electric drive with frequency scalar control are presented.",
"title": ""
},
{
"docid": "984b9737cd2566ff7d18e6e2f9e5bed2",
"text": "Advances in anatomic understanding are frequently the basis upon which surgical techniques are advanced and refined. Recent anatomic studies of the superficial tissues of the face have led to an increased understanding of the compartmentalized nature of the subcutaneous fat. This report provides a review of the locations and characteristics of the facial fat compartments and provides examples of how this knowledge can be used clinically, specifically with regard to soft tissue fillers.",
"title": ""
},
{
"docid": "f8e6f97f5c797d490e2490dad676f62a",
"text": "Both patients and clinicians may incorrectly diagnose vulvovaginitis symptoms. Patients often self-treat with over-the-counter antifungals or home remedies, although they are unable to distinguish among the possible causes of their symptoms. Telephone triage practices and time constraints on office visits may also hamper effective diagnosis. This review is a guide to distinguish potential causes of vulvovaginal symptoms. The first section describes both common and uncommon conditions associated with vulvovaginitis, including infectious vulvovaginitis, allergic contact dermatitis, systemic dermatoses, rare autoimmune diseases, and neuropathic vulvar pain syndromes. The focus is on the clinical presentation, specifically 1) the absence or presence and characteristics of vaginal discharge; 2) the nature of sensory symptoms (itch and/or pain, localized or generalized, provoked, intermittent, or chronic); and 3) the absence or presence of mucocutaneous changes, including the types of lesions observed and the affected tissue. Additionally, this review describes how such features of the clinical presentation can help identify various causes of vulvovaginitis.",
"title": ""
},
{
"docid": "b7b7835712cd65e1983fb3cff6d26622",
"text": "Arctigenin is a herb compound extract from Arctium lappa and is reported to exhibit pharmacological properties, including neuronal protection and antidiabetic, antitumor, and antioxidant properties. However, the effects of arctigenin on autoimmune inflammatory diseases of the CNS, multiple sclerosis (MS), and its animal model experimental autoimmune encephalomyelitis (EAE) are still unclear. In this study, we demonstrated that arctigenin-treated mice are resistant to EAE; the clinical scores of arctigenin-treated mice are significantly reduced. Histochemical assays of spinal cord sections also showed that arctigenin reduces inflammation and demyelination in mice with EAE. Furthermore, the Th1 and Th17 cells in peripheral immune organs are inhibited by arctigenin in vivo. In addition, the Th1 cytokine IFN-γ and transcription factor T-bet, as well as the Th17 cytokines IL-17A, IL-17F, and transcription factor ROR-γt are significantly suppressed upon arctigenin treatment in vitro and in vivo. Interestedly, Th17 cells are obviously inhibited in CNS of mice with EAE, while Th1 cells do not significantly change. Besides, arctigenin significantly restrains the differentiation of Th17 cells. We further demonstrate that arctigenin activates AMPK and inhibits phosphorylated p38, in addition, upregulates PPAR-γ, and finally suppresses ROR-γt. These findings suggest that arctigenin may have anti-inflammatory and immunosuppressive properties via inhibiting Th17 cells, indicating that it could be a potential therapeutic drug for multiple sclerosis or other autoimmune inflammatory diseases.",
"title": ""
},
{
"docid": "51da24a6bdd2b42c68c4465624d2c344",
"text": "Hashing based Approximate Nearest Neighbor (ANN) search has attracted much attention due to its fast query time and drastically reduced storage. However, most of the hashing methods either use random projections or extract principal directions from the data to derive hash functions. The resulting embedding suffers from poor discrimination when compact codes are used. In this paper, we propose a novel data-dependent projection learning method such that each hash function is designed to correct the errors made by the previous one sequentially. The proposed method easily adapts to both unsupervised and semi-supervised scenarios and shows significant performance gains over the state-ofthe-art methods on two large datasets containing up to 1 million points.",
"title": ""
},
{
"docid": "5dddbc2b2c53436c9d2176045118dce5",
"text": "This work introduces a method to tune a sequence-based generative model for molecular de novo design that through augmented episodic likelihood can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model nor the activity prediction model. Graphical abstract .",
"title": ""
},
{
"docid": "154c40c2fab63ad15ded9b341ff60469",
"text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.",
"title": ""
},
{
"docid": "7ba61c8c5eba7d8140c84b3e7cbc851a",
"text": "One of the aims of modern First-Person Shooter (FPS ) design is to provide an immersive experience to the player. This paper examines the role of sound in enabling s uch immersion and argues that, even in ‘realism’ FPS ga mes, it may be achieved sonically through a focus on carica ture rather than realism. The paper utilizes and develo ps previous work in which both a conceptual framework for the d sign and analysis of run and gun FPS sound is developed and the notion of the relationship between player and FPS soundscape as an acoustic ecology is put forward (G rimshaw and Schott 2007a; Grimshaw and Schott 2007b). Some problems of sound practice and sound reproduction i n the game are highlighted and a conceptual solution is p roposed.",
"title": ""
},
{
"docid": "52ab79410044bd29c11cdd8352d10a6e",
"text": "Fashion markets are synonymous with rapid change and, as a result, commercial success or failure in those markets is largely determined by the organisation’s flexibility and responsiveness. Responsiveness is characterised by short time-to-market, the ability to scale up (or down) quickly and the rapid incorporation of consumer preferences into the design process. In this paper it is argued that conventional organisational structures and forecast-driven supply chains are not adequate to meet the challenges of volatile and turbulent demand which typify fashion markets today. Instead, the requirement is for the creation of an agile organisation embedded within an agile supply chain INTRODUCTION Fashion markets have long attracted the interest of researchers. More often the focus of their work was the psychology and sociology of fashion and with the process by which fashions were adopted across populations (see for example Wills and Midgley, 1973). In parallel with this, a body of work has developed seeking to identify cycles in fashions (e.g. Carman, 1966). Much of this earlier work was intended to create insights and even tools to help improve the demand forecasting of fashion products. However, the reality that is now gradually being accepted both by those who work in the industry and those who study it, is that the demand for fashion products cannot be forecast. Instead, we need to recognise that fashion markets are complex open systems that frequently demonstrate high levels of ‘chaos’. In such conditions managerial effort may be better expended on devising strategies",
"title": ""
},
{
"docid": "a82dba8f935b746b9ca98133a0a92739",
"text": "We study a symmetric collaborative dialogue setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models.",
"title": ""
},
{
"docid": "ca81d2df30f75485567c0dec62e6779e",
"text": "Content accessibility is a key feature in highly usable Web sites, but reports in the popular press typically report that 95% or more of all Web sites are inaccessible to users with disabilities. The present study is a content accessibility compliance audit of 50 of the Web's most popular sites, undertaken to determine if content accessibility can be conceived and reported in continuous, rather than dichotomous, terms. Preliminary results suggest that a meaningful ordinal ranking of content accessibility is not only possible, but also correlates significantly with the results of independent automated usability assessment procedures.",
"title": ""
},
{
"docid": "cd274d98201f27fe6159e6db2f7db8aa",
"text": "Due to the appearance of antibiotic resistance and the toxicity associated with currently used antibiotics, peptide antibiotics are the need of the hour. Thus, demand for new antimicrobial agents has brought great interest in new technologies to enhance safety. One such antimicrobial molecule is bacteriocin, synthesised by various micro-organisms. Bacteriocins are widely used in agriculture, veterinary medicine as a therapeutic, and as a food preservative agent to control various infectious and food-borne pathogens. In this review, we highlight the potential therapeutic and food preservative applications of bacteriocin.",
"title": ""
},
{
"docid": "fcd0c523e74717c572c288a90c588259",
"text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.",
"title": ""
},
{
"docid": "5a0fe40414f7881cc262800a43dfe4d0",
"text": "In this work, a passive rectifier circuit is presented, which is operating at 868 MHz. It allows energy harvesting from low power RF waves with a high efficiency. It consists of a novel multiplier circuit design and high quality components to reduce parasitic effects, losses and reaches a low startup voltage. Using lower capacitor rises up the switching speed of the whole circuit. An inductor L serves to store energy in a magnetic field during the negative cycle wave and returns it during the positive one. A low pass filter is arranged in cascade with the rectifier circuit to reduce ripple at high frequencies and to get a stable DC signal. A 50 kΩ load is added at the output to measure the output power and to visualize the behavior of the whole circuit. Simulation results show an outstanding potential of this RF-DC converter witch has a relative high sensitivity beginning with -40 dBm.",
"title": ""
}
] |
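The SANMF ICU-mortality passage in the list above outlines a pipeline of frequent-subgraph features grouped by non-negative matrix factorization and fed to a logistic regression. The following is a minimal Python sketch of that general pattern only; the synthetic subgraph-count matrix, the number of trend groups, and the labels are placeholder assumptions, not the authors' data or code.

```python
# Minimal sketch: group subgraph-count features with NMF, then use the
# resulting trend-group activations as inputs to a mortality classifier.
# In SANMF the count matrix would come from frequent subgraph mining over
# graph-encoded physiologic time series; here it is synthetic.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_subgraphs, n_groups = 500, 200, 10

X = rng.poisson(lam=1.0, size=(n_patients, n_subgraphs)).astype(float)  # subgraph counts per patient
y = rng.integers(0, 2, size=n_patients)                                  # placeholder 30-day mortality labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Non-negative factorization groups correlated trends into a small set of components.
nmf = NMF(n_components=n_groups, init="nndsvda", max_iter=500, random_state=0)
H_tr = nmf.fit_transform(X_tr)   # patient-by-group activations
H_te = nmf.transform(X_te)

clf = LogisticRegression(max_iter=1000).fit(H_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(H_te)[:, 1]))

# Rank trend groups by their contribution to predicted risk (coefficient magnitude).
ranking = np.argsort(-np.abs(clf.coef_[0]))
print("Most influential groups:", ranking[:3])
```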
scidocsrr
|
7679f348cccbfb25cf73b301fcd6ec20
|
Evaluation of machine learning classifiers for mobile malware detection
|
[
{
"docid": "06860bf1ede8dfe83d3a1b01fe4df835",
"text": "The Internet and computer networks are exposed to an increasing number of security threats. With new types of attacks appearing continually, developing flexible and adaptive security oriented approaches is a severe challenge. In this context, anomaly-based network intrusion detection techniques are a valuable technology to protect target systems and networks against malicious activities. However, despite the variety of such methods described in the literature in recent years, security tools incorporating anomaly detection functionalities are just starting to appear, and several important problems remain to be solved. This paper begins with a review of the most well-known anomaly-based intrusion detection techniques. Then, available platforms, systems under development and research projects in the area are presented. Finally, we outline the main challenges to be dealt with for the wide scale deployment of anomaly-based intrusion detectors, with special emphasis on assessment issues. a 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2f5107659cba0db161fbdf390ef05d26",
"text": "Currently, in the smartphone market, Android is the platform with the highest share. Due to this popularity and also to its open source nature, Android-based smartphones are now an ideal target for attackers. Since the number of malware designed for Android devices is increasing fast, Android users are looking for security solutions aimed at preventing malicious actions from damaging their smartphones. In this paper, we describe MADAM, a Multi-level Anomaly Detector for Android Malware. MADAM concurrently monitors Android at the kernel-level and user-level to detect real malware infections using machine learning techniques to distinguish between standard behaviors and malicious ones. The first prototype of MADAM is able to detect several real malware found in the wild. The device usability is not affected by MADAM due to the low number of false positives generated after the learning phase.",
"title": ""
}
] |
[
{
"docid": "2523edc5c48e212204f68863748947ac",
"text": "In this paper, the warpage simulation of a high-density multilayer printed circuit board (PCB) for solid-state disk drive (SSD) and microelectronic package was performed using the anisotropic viscoelastic shell modeling technique. The thermomechanical properties of various copper patterns were homogenized with the anisotropic shell model, which considered their viscoelastic properties. Then, warpage simulations of an SSD PCB unit/array and a full microelectronic package were performed; these simulations accounted for the initial warpage that occurred during fabrication using ABAQUS combined with a user-defined subroutine. Finally, it was demonstrated that both the maximum warpage and the remaining residual warpage of the full microelectronic package can be accurately predicted.",
"title": ""
},
{
"docid": "277cf6fa4b5085287593ee2ca86e67fc",
"text": "What can we learn of the human mind by examining its products? Here it is argued that a great deal can be learned, and that the study of human minds through its creations in the real world could be a promising field of study within the cognitive sciences. The city is a case in point. Since the beginning of cities human ideas about them have been dominated by geometric ideas, and the real history of cities has always oscillated between the geometric and the ‘organic’. Set in the context of the suggestion from cognitive neuroscience that we impose more geometric order on the world that it actually possesses, an intriguing question arises: what is the role of geometric intuition in how we understand cities and how we create them? Here we argue that all cities, the organic as well as the geometric, are pervasively ordered by geometric intuition, so that neither the forms of the cities nor their functioning can be understood without insight into their distinctive and pervasive emergent geometrical forms. The city is, as it is often said to be, the creation of economic and social processes, but, it is argued, these processes operate within an envelope of geometric possibility defined by human minds in its interaction with spatial laws that govern the relations between objects and spaces in the ambient world. Note: I have included only selected images here. All the examples will be shown fully in the presentation. Introduction: the Ideal and the Organic The most basic distinction we make about the form of cities is between the ideal and the organic. The ideal are geometric, the organic are not — or seem not to be. The geometric we define in terms of straight lines and 90 or 45 degree angles, the organic in terms of the lack of either (Fig. 1). The ideal seem to be top-down impositions of the human mind, the outcome of reason, often in association with power. We easily grasp their patterns when seen ‘all at once’. The organic we take to be the outcome of unplanned bottom up processes reflecting the",
"title": ""
},
{
"docid": "c02d207ed8606165e078de53a03bf608",
"text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: anand.bodapati@anderson.ucla.edu), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: rbucklin@anderson.ucla.edu), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*",
"title": ""
},
{
"docid": "04b62ed72ddf8f97b9cb8b4e59a279c1",
"text": "This paper aims to explore some of the manifold and changing links that official Pakistani state discourses forged between women and work from the 1940s to the late 2000s. The focus of the analysis is on discursive spaces that have been created for women engaged in non-domestic work. Starting from an interpretation of the existing academic literature, this paper argues that Pakistani women’s non-domestic work has been conceptualised in three major ways: as a contribution to national development, as a danger to the nation, and as non-existent. The paper concludes that although some conceptualisations of work have been more powerful than others and, at specific historical junctures, have become part of concrete state policies, alternative conceptualisations have always existed alongside them. Disclosing the state’s implication in the discursive construction of working women’s identities might contribute to the destabilisation of hegemonic concepts of gendered divisions of labour in Pakistan. DOI: https://doi.org/10.1016/j.wsif.2013.05.007 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-78605 Accepted Version Originally published at: Grünenfelder, Julia (2013). Discourses of gender identities and gender roles in Pakistan: Women and non-domestic work in political representations. Women’s Studies International Forum, 40:68-77. DOI: https://doi.org/10.1016/j.wsif.2013.05.007",
"title": ""
},
{
"docid": "ee8b20f685d4c025e1d113a676728359",
"text": "Two experiments were conducted to evaluate the effects of increasing concentrations of glycerol in concentrate diets on total tract digestibility, methane (CH4) emissions, growth, fatty acid profiles, and carcass traits of lambs. In both experiments, the control diet contained 57% barley grain, 14.5% wheat dried distillers grain with solubles (WDDGS), 13% sunflower hulls, 6.5% beet pulp, 6.3% alfalfa, and 3% mineral-vitamin mix. Increasing concentrations (7, 14, and 21% dietary DM) of glycerol in the dietary DM were replaced for barley grain. As glycerol was added, alfalfa meal and WDDGS were increased to maintain similar concentrations of CP and NDF among diets. In Exp.1, nutrient digestibility and CH4 emissions from 12 ram lambs were measured in a replicated 4 × 4 Latin square experiment. In Exp. 2, lamb performance was evaluated in 60 weaned lambs that were blocked by BW and randomly assigned to 1 of the 4 dietary treatments and fed to slaughter weight. In Exp. 1, nutrient digestibility and CH4 emissions were not altered (P = 0.15) by inclusion of glycerol in the diets. In Exp.2, increasing glycerol in the diet linearly decreased DMI (P < 0.01) and tended (P = 0.06) to reduce ADG, resulting in a linearly decreased final BW. Feed efficiency was not affected by glycerol inclusion in the diets. Carcass traits and total SFA or total MUFA proportions of subcutaneous fat were not affected (P = 0.77) by inclusion of glycerol, but PUFA were linearly decreased (P < 0.01). Proportions of 16:0, 10t-18:1, linoleic acid (18:2 n-6) and the n-6/n-3 ratio were linearly reduced (P < 0.01) and those of 18:0 (stearic acid), 9c-18:1 (oleic acid), linearly increased (P < 0.01) by glycerol. When included up to 21% of diet DM, glycerol did not affect nutrient digestibility or CH4 emissions of lambs fed barley based finishing diets. Glycerol may improve backfat fatty acid profiles by increasing 18:0 and 9c-18:1 and reducing 10t-18:1 and the n-6/n-3 ratio.",
"title": ""
},
{
"docid": "4552cfbd0aa36deeaa2e4a8c0b363f25",
"text": "This is a critical review of the literature on many-worlds interpretations (MWI), with arguments drawn partly from earlier critiques by Bell and Stein. The essential postulates involved in various MWI are extracted, and their consistency with the evident physical world is examined. Arguments are presented against MWI proposed by Everett, Graham and DeWitt. The relevance of frequency operators to MWI is examined; it is argued that frequency operator theorems of Hartle and Farhi-Goldstone-Gutmann do not in themselves provide a probability interpretation for quantum mechanics, and thus neither support existing MWI nor would be useful in constructing new MWI. Comments are made on papers by Geroch and Deutsch that advocate MWI. It is concluded that no plausible set of axioms exists for an MWI that describes",
"title": ""
},
{
"docid": "3f30c821132e07838de325c4f2183f84",
"text": "This paper argues for the recognition of important experiential aspects of consumption. Specifically, a general framework is constructed to represent typical consumer behavior variables. Based on this paradigm, the prevailing information processing model is contrasted with an experiential view that focuses on the symbolic, hedonic, and esthetic nature of consumption. This view regards the consumption experience as a phenomenon directed toward the pursuit of fantasies, feelings, and fun.",
"title": ""
},
{
"docid": "2b42cf158d38153463514ed7bc00e25f",
"text": "The Disney Corporation made their first princess film in 1937 and has continued producing these movies. Over the years, Disney has received criticism for their gender interpretations and lack of racial diversity. This study will examine princess films from the 1990’s and 2000’s and decide whether race or time has an effect on the gender role portrayal of each character. By using a content analysis, this study identified the changes with each princess. The findings do suggest the princess characters exhibited more egalitarian behaviors over time. 1 The Disney Princess franchise began in 1937 with Snow White and the Seven Dwarfs and continues with the most recent film was Tangled (Rapunzel) in 2011. In past years, Disney film makers were criticized by the public audience for lack of ethnic diversity. In 1995, Disney introduced Pocahontas and three years later Mulan emerged creating racial diversity to the collection. Eleven years later, Disney released The Princess and the Frog (2009). The ongoing question is whether diverse princesses maintain the same qualities as their European counterparts. Walt Disney’s legacy lives on, but viewers are still curious about the all white princess collection which did not gain racial counterparts until 58 years later. It is important to recognize the role the Disney Corporation plays in today’s society. The company has several princesses’ films with matching merchandise. Parents purchase the items for their children and through film and merchandise, children are receiving messages such as how a woman ought to act, think or dress. Gender construction in Disney princess films remains important because of the messages it sends to children. We need to know whether gender roles presented in the films downplay the intellect of a woman in a modern society or whether Disney princesses are constricted to the female gender roles such as submissiveness and nurturing. In addition, we need to consider whether the messages are different for diverse princesses. The purpose of the study is to investigate the changes in gender construction in Disney princess characters related to the race of the character. This research also examines how gender construction of Disney princess characters changed from the 1900’s to 2000’s. A comparative content analysis will analyze gender role differences between women of color and white princesses. In particular, the study will ask whether race does matter in the gender roles revealed among each female character. By using social construction perspectives, Disney princesses of color were more masculine, but the most recent films became more egalitarian. 2 LITERATURE REVIEW Women in Disney film Davis (2006) examined women in Disney animated films by creating three categories: The Classic Years, The Middle Era, and The Eisner Era. The Classic Years, 19371967 were described as the beginning of Disney. During this period, women were rarely featured alone in films, but held central roles in the mid-1930s (Davis 2006:84). Three princess films were released and the characters carried out traditional feminine roles such as domestic work and passivity. Davis (2006) argued the princesses during The Classic Era were the least active and dynamic. The Middle Era, 1967-1988, led to a downward spiral for the company after the deaths of Walt and Roy Disney. The company faced increased amounts of debt and only eight Disney films were produced. The representation of women remained largely static (Davis 2006:137). 
The Eisner Era, 1989-2005, represented a revitalization of Disney with the release of 12 films with leading female roles. Based on the eras, Davis argued there was a shift after Walt Disney’s death which allowed more women in leading roles and released them from traditional gender roles. Independence was a new theme in this era allowing women to be selfsufficient unlike women in The Classic Era who relied on male heroines. Gender Role Portrayal in films England, Descartes, and Meek (2011) examined the Disney princess films and challenged the ideal of traditional gender roles among the prince and princess characters. The study consisted of all nine princess films divided into three categories based on their debut: early, middle and most current. The researchers tested three hypotheses: 1) gender roles among males and female characters would differ, 2) males would rescue or attempt to rescue the princess, and 3) characters would display more egalitarian behaviors over time (England, et al. 2011:557-58). The researchers coded traits as masculine and feminine. They concluded that princesses 3 displayed a mixture of masculine and feminine characteristics. These behaviors implied women are androgynous beings. For example, princesses portrayed bravery almost twice as much as princes (England, et al. 2011). The findings also showed males rescued women more and that women were rarely shown as rescuers. Overall, the data indicated Disney princess films had changed over time as women exhibited more masculine behaviors in more recent films. Choueiti, Granados, Pieper, and Smith (2010) conducted a content analysis regarding gender roles in top grossing Grated films. The researchers considered the following questions: 1) What is the male to female ratio? 2) Is gender related to the presentation of the character demographics such as role, type, or age? and 3) Is gender related to the presentation of character’s likeability, and the equal distribution of male and females from 1990-2005(Choueiti et al. 2010:776-77). The researchers concluded that there were more male characters suggesting the films were patriarchal. However, there was no correlation with demographics of the character and males being viewed as more likeable. Lastly, female representation has slightly decreased from 214 characters or 30.1% in 1990-94 to 281 characters or 29.4% in 2000-2004 (Choueiti et al. 2010:783). From examining gender role portrayals, females have become androgynous while maintaining minimal roles in animated film.",
"title": ""
},
{
"docid": "98a43d9fbd319039f8b22c2fdfaab496",
"text": "Ethereum’s smart contracts present an attractive incentive toward participating in the network. Deploying a smart contract allows a user to run a distributed application (Dapp) that includes storage, payment features, and cryptographic services all within the context of just a contract script and its layout. However, recently exploited vulnerabilities in the Solidity smart contract language have undermined the integrity of Ethereum’s smart contract implementations. After some discussion of previous work, we examine whether known vulnerabilities can be detected as attacks post factum from information available on the Ethereum blockchain. Then, we present findings on what information is available for a few selected contracts. Finally, we propose our design for a live monitoring and protection system based on our research findings, the prototypes we developed to gather data, and documented plans for extension.",
"title": ""
},
{
"docid": "41fb7141a8833c38921a273ddb9eae20",
"text": "Word-sense recognition and disambiguation (WERD) is the task of identifying word phrases and their senses in natural language text. Though it is well understood how to disambiguate noun phrases, this task is much less studied for verbs and verbal phrases. We present Werdy, a framework for WERD with particular focus on verbs and verbal phrases. Our framework first identifies multi-word expressions based on the syntactic structure of the sentence; this allows us to recognize both contiguous and non-contiguous phrases. We then generate a list of candidate senses for each word or phrase, using novel syntactic and semantic pruning techniques. We also construct and leverage a new resource of pairs of senses for verbs and their object arguments. Finally, we feed the so-obtained candidate senses into standard word-sense disambiguation (WSD) methods, and boost their precision and recall. Our experiments indicate that Werdy significantly increases the performance of existing WSD methods.",
"title": ""
},
{
"docid": "cd67a23b3ed7ab6d97a198b0e66a5628",
"text": "A growing number of children and adolescents are involved in resistance training in schools, fitness centers, and sports training facilities. In addition to increasing muscular strength and power, regular participation in a pediatric resistance training program may have a favorable influence on body composition, bone health, and reduction of sports-related injuries. Resistance training targeted to improve low fitness levels, poor trunk strength, and deficits in movement mechanics can offer observable health and fitness benefits to young athletes. However, pediatric resistance training programs need to be well-designed and supervised by qualified professionals who understand the physical and psychosocial uniqueness of children and adolescents. The sensible integration of different training methods along with the periodic manipulation of programs design variables over time will keep the training stimulus effective, challenging, and enjoyable for the participants.",
"title": ""
},
{
"docid": "f30a47ffc303584728e0bdddd1a1c478",
"text": "2 Introduction 1 An intense debate has raged for years over Africa's economic difficulties. Aside from the obvious problems of warfare, drought, and disease, the usual suspect is economic policy. However, the record of over a decade of structural adjustment efforts is difficult to read. A recent analysis by the World Bank provides significant evidence that improved policies lead to improved prospects for growth, and that the continuing economic problems in Africa are the result of a failure to carry liberalization far enough (World Bank 1993a). According to that analysis, no African governments were rated as having \" good \" economic policies and only one, Ghana, was deemed \" adequate; \" with an annual growth rate of 1.3 percent per capita (1987-1991). Opponents of World Bank/IMF policy have criticized the Bank's analysis on numerous grounds, but even World Bank economists mutter that rates of private investment and economic growth are higher in Viet Nam and China (whose economic policies still bear a strong socialist imprint) than almost anywhere in Africa. Something more than standard macroeconomic policy failures must be at work. This paper focuses on one of the \" usual suspects \"-rent seeking by officials at the highest government levels. Based on both theory and concrete African examples, it demonstrates how such rent seeking can harm an economy and stifle investment and growth. \" Rent seeking \" is often used interchangeably with \" corruption, \" and there is a large area of overlap. While corruption involves the misuse of public power for private gain, rent seeking derives from the economic concept of \" rent \"-earnings in excess of all relevant costs (including a market rate of return on invested assets). Rent is equivalent to what most non-economists think of as monopoly 3 profits. Rent seeking is then the effort to acquire access to or control over opportunities for earning rents. These efforts are not necessarily illegal, or even immoral. They include much lobbying and some forms of advertising. Some can be efficient, such as an auction of scare and valuable assets. However, economists and public sector management specialists are concerned with what Jagdish Bhagwati termed \" directly unproductive \" rent seeking activities, because they waste resources and can contribute to economic inefficiency (Bhagwati 1974, see also Krueger 1974). Corruption and other forms of rent seeking have been well-documented in every society on earth, from the banks of the Congo River …",
"title": ""
},
{
"docid": "21d84bd9ea7896892a3e69a707b03a6a",
"text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "f3f27a324736617f20abbf2ffd806f6d",
"text": "516",
"title": ""
},
{
"docid": "dd871ea6560655b730b2cff38a09eab3",
"text": "Advances in transfer learning have let go the limitations of traditional supervised machine learning algorithms for being dependent on annotated training data for training new models for every new domain. However, several applications encounter scenarios where models need to transfer/adapt across domains when the label sets vary both in terms of count of labels as well as their connotations. This paper presents first-of-its-kind transfer learning algorithm for cross-domain classification with multiple source domains and disparate label sets. It starts with identifying transferable knowledge from across multiple domains that can be useful for learning the target domain task. This knowledge in the form of selective labeled instances from different domains is congregated to form an auxiliary training set which is used for learning the target domain task. Experimental results validate the efficacy of the proposed algorithm against strong baselines on a real world social media and the 20 Newsgroups datasets.",
"title": ""
},
{
"docid": "e53b8b1f3aaaab107685bc1a873e62b2",
"text": "The paper considers a stylized model of a dynamic assortment optimization problem, where given a limited capacity constraint, we must decide the assortment of products to offer to customers to maximize the profit. Our model is motivated by the problem faced by retailers of stocking products on a shelf with limited capacities and by the problem of placing a limited number of ads on a web page. We assume that each customer chooses to purchase the product (or to click on the ad) that maximizes her utility. We use the multinomial logit choice model to represent demand. However, we do not know the demand for each product. We can learn the demand distribution by offering different product assortments, observing resulting selections, and inferring the demand distribution from past selections and assortment decisions. We present an adaptive policy for joint parameter estimation and assortment optimization. To evaluate our proposed policy, we define a benchmark profit as the maximum expected profit that we can earn if we know the underlying demand distribution in advance. We show that the running average expected profit generated by our policy converges to the benchmark profit and establish its convergence rate. Numerical experiments based on sales data from an online retailer indicate that our policy performs well, generating over 90% of the optimal profit after less than two days of sales. 1. Motivation and Problem Formulation Companies have realized the importance of offering products that are tailored to the demand of customers in each region. For instance, Wal-mart stocks specific lines of clothes targeted exclusively to certain groups of customers (Zimmerman (2006)). Car manufacturers are well-known for ∗School of Operations Research and Information Engineering, Cornell University, Ithaca, NY 14853, USA. E-mail: paatrus@cornell.edu †Department of Industrial Engineering and Operations Research, University of California–Berkeley, 4129 Etcheverry Hall, Berkeley, CA 94720, USA. E-mail: shen@ieor.berkeley.edu ‡School of Operations Research and Information Engineering and Department of Computer Science, Cornell University, Ithaca, NY 14853, USA. E-mail: shmoys@cs.cornell.edu",
"title": ""
},
{
"docid": "4474a6b36b2da68b9ad2da4c782049e4",
"text": "A novel stochastic adaptation of the recurrent reinforcement learning (RRL) methodology is applied to daily, weekly, and monthly stock index data, and compared to results obtained elsewhere using genetic programming (GP). The data sets used have been a considered a challenging test for algorithmic trading. It is demonstrated that RRL can reliably outperform buy-and-hold for the higher frequency data, in contrast to GP which performed best for monthly data.",
"title": ""
},
{
"docid": "0b0e1f2b8771d618ae2d317b1f55f3fd",
"text": "3D hand pose tracking/estimation will be very important in the next generation of human-computer interaction. Most of the currently available algorithms rely on low-cost active depth sensors. However, these sensors can be easily interfered by other active sources and require relatively high power consumption. As a result, they are currently not suitable for outdoor environments and mobile devices. This paper aims at tracking/estimating hand poses using passive stereo which avoids these limitations. A benchmark with 18,000 stereo image pairs and 18,000 depth images captured from different scenarios and the ground-truth 3D positions of palm and finger joints (obtained from the manual label) is thus proposed. This paper demonstrates that the performance of the state-of-theart tracking/estimation algorithms can be maintained with most stereo matching algorithms on the proposed benchmark, as long as the hand segmentation is correct. As a result, a novel stereo-based hand segmentation algorithm specially designed for hand tracking/estimation is proposed. The quantitative evaluation demonstrates that the proposed algorithm is suitable for the state-of-the-art hand pose tracking/estimation algorithms and the tracking quality is comparable to the use of active depth sensors under different challenging scenarios.",
"title": ""
},
{
"docid": "589756d7ff12b1d162d3bdf00212482b",
"text": "We study polynomial-time clearing algorithms for the barter exchange problem. We put forward a family of carefully designed approximation algorithms with desirable worst-case guarantees. We further apply a series of novel heuristics to implement these algorithms. We demonstrate via kidney exchange data sets that these algorithms achieve near-optimal performances while outperforming the state-of-the-art ILP based algorithms in running time by orders of magnitude.",
"title": ""
}
] |
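The dynamic assortment passage in the list above relies on the multinomial logit (MNL) choice model to represent demand. A minimal Python sketch of how MNL choice probabilities and the expected revenue of an offered assortment are computed is given below; the utilities and prices are made-up placeholders, not estimated parameters from that paper.

```python
# Minimal sketch of the multinomial logit (MNL) demand model used in
# assortment optimization: choice probabilities over an offered assortment
# and the assortment's expected revenue. Values below are illustrative only.
import numpy as np

def mnl_choice_probs(utilities, assortment):
    """P(choose i) = exp(u_i) / (1 + sum_j exp(u_j)) over offered products;
    the constant 1 accounts for the no-purchase (outside) option."""
    expu = np.exp(utilities[assortment])
    return expu / (1.0 + expu.sum())

def expected_revenue(utilities, prices, assortment):
    probs = mnl_choice_probs(utilities, assortment)
    return float(np.dot(probs, prices[assortment]))

utilities = np.array([1.2, 0.4, 0.9, -0.3, 0.1])   # assumed mean utilities
prices    = np.array([10.0, 6.0, 8.0, 4.0, 5.0])   # assumed product prices

assortment = np.array([0, 2, 4])                    # offer products 0, 2 and 4
print("choice probabilities:", mnl_choice_probs(utilities, assortment))
print("expected revenue:", expected_revenue(utilities, prices, assortment))
```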
scidocsrr
|
237feb17049f43dacd632c497cccdd50
|
A Practical Algorithm for Solving the Incoherence Problem of Topic Models In Industrial Applications
|
[
{
"docid": "2bdfeabf15a4ca096c2fe5ffa95f3b17",
"text": "This paper studies how to incorporate the external word correlation knowledge to improve the coherence of topic modeling. Existing topic models assume words are generated independently and lack the mechanism to utilize the rich similarity relationships among words to learn coherent topics. To solve this problem, we build a Markov Random Field (MRF) regularized Latent Dirichlet Allocation (LDA) model, which defines a MRF on the latent topic layer of LDA to encourage words labeled as similar to share the same topic label. Under our model, the topic assignment of each word is not independent, but rather affected by the topic labels of its correlated words. Similar words have better chance to be put into the same topic due to the regularization of MRF, hence the coherence of topics can be boosted. In addition, our model can accommodate the subtlety that whether two words are similar depends on which topic they appear in, which allows word with multiple senses to be put into different topics properly. We derive a variational inference method to infer the posterior probabilities and learn model parameters and present techniques to deal with the hardto-compute partition function in MRF. Experiments on two datasets demonstrate the effectiveness of our model.",
"title": ""
},
{
"docid": "6a23480588ca47b9e53de0fd4ff1ecb1",
"text": "We present the nested Chinese restaurant process (nCRP), a stochastic process that assigns probability distributions to ensembles of infinitely deep, infinitely branching trees. We show how this stochastic process can be used as a prior distribution in a Bayesian nonparametric model of document collections. Specifically, we present an application to information retrieval in which documents are modeled as paths down a random tree, and the preferential attachment dynamics of the nCRP leads to clustering of documents according to sharing of topics at multiple levels of abstraction. Given a corpus of documents, a posterior inference algorithm finds an approximation to a posterior distribution over trees, topics and allocations of words to levels of the tree. We demonstrate this algorithm on collections of scientific abstracts from several journals. This model exemplifies a recent trend in statistical machine learning—the use of Bayesian nonparametric methods to infer distributions on flexible data structures.",
"title": ""
},
{
"docid": "6f6667e4c485978b566d25837083b565",
"text": "Topic models provide a powerful tool for analyzing large text collections by representing high dimensional data in a low dimensional subspace. Fitting a topic model given a set of training documents requires approximate inference techniques that are computationally expensive. With today's large-scale, constantly expanding document collections, it is useful to be able to infer topic distributions for new documents without retraining the model. In this paper, we empirically evaluate the performance of several methods for topic inference in previously unseen documents, including methods based on Gibbs sampling, variational inference, and a new method inspired by text classification. The classification-based inference method produces results similar to iterative inference methods, but requires only a single matrix multiplication. In addition to these inference methods, we present SparseLDA, an algorithm and data structure for evaluating Gibbs sampling distributions. Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory.",
"title": ""
},
{
"docid": "e83ae69dea6d34e169fc34c64d33ee93",
"text": "Topic models have the potential to improve search and browsing by extracting useful semantic themes from web pages and other text documents. When learned topics are coherent and interpretable, they can be valuable for faceted browsing, results set diversity analysis, and document retrieval. However, when dealing with small collections or noisy text (e.g. web search result snippets or blog posts), learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models. Our regularizers work by creating a structured prior over words that reflect broad patterns in the external data. Using thirteen datasets we show that both regularizers improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more useful across a broader range of text data.",
"title": ""
}
] |
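The topic-model passages in the list above are concerned with the coherence of the per-topic word lists that LDA produces. A minimal Python sketch of fitting a plain LDA model and printing those top-word lists follows; the toy corpus and parameter choices are illustrative assumptions, and the coherence-improving regularizers described above are not implemented here.

```python
# Minimal sketch: fit a plain LDA model and inspect the top words per topic,
# the kind of output whose coherence the passages above aim to improve.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the patient was given a drug for heart disease",
    "the stock market fell after the interest rate rise",
    "clinical trials test the drug on patients with disease",
    "investors watch the market and the interest rate",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                 # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0, max_iter=50)
lda.fit(X)

vocab = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[::-1][:5]          # indices of the most probable words
    print(f"topic {k}:", [vocab[i] for i in top])
```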
[
{
"docid": "93d8b8afe93d10e54bf4a27ba3b58220",
"text": "Researchers interested in emotion have long struggled with the problem of how to elicit emotional responses in the laboratory. In this article, we summarise five years of work to develop a set of films that reliably elicit each of eight emotional states (amusement, anger, contentment, disgust, fear, neutral, sadness, and surprise). After evaluating over 250 films, we showed selected film clips to an ethnically diverse sample of 494 English-speaking subjects. We then chose the two best films for each of the eight target emotions based on the intensity and discreteness of subjects' responses to each film. We found that our set of 16 films successfully elicited amusement, anger, contentment. disgust, sadness, surprise, a relatively neutral state, and, to a lesser extent, fear. We compare this set of films with another set recently described by Philippot (1993), and indicate that detailed instructions for creating our set of film stimuli will be provided on request.",
"title": ""
},
{
"docid": "ddc73328c18db1e4ef585671fb3a838d",
"text": "Gamification has drawn the attention of academics, practitioners and business professionals in domains as diverse as education, information studies, human–computer interaction, and health. As yet, the term remains mired in diverse meanings and contradictory uses, while the concept faces division on its academic worth, underdeveloped theoretical foundations, and a dearth of standardized guidelines for application. Despite widespread commentary on its merits and shortcomings, little empirical work has sought to validate gamification as a meaningful concept and provide evidence of its effectiveness as a tool for motivating and engaging users in non-entertainment contexts. Moreover, no work to date has surveyed gamification as a field of study from a human–computer studies perspective. In this paper, we present a systematic survey on the use of gamification in published theoretical reviews and research papers involving interactive systems and human participants. We outline current theoretical understandings of gamification and draw comparisons to related approaches, including alternate reality games (ARGs), games with a purpose (GWAPs), and gameful design. We present a multidisciplinary review of gamification in action, focusing on empirical findings related to purpose and context, design of systems, approaches and techniques, and user impact. Findings from the survey show that a standard conceptualization of gamification is emerging against a growing backdrop of empirical participantsbased research. However, definitional subjectivity, diverse or unstated theoretical foundations, incongruities among empirical findings, and inadequate experimental design remain matters of concern. We discuss how gamification may to be more usefully presented as a subset of a larger effort to improve the user experience of interactive systems through gameful design. We end by suggesting points of departure for continued empirical investigations of gamified practice and its effects. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4d6e234f5fa305b9b698b2cd7a505895",
"text": "Innovations transform our research traditions and become the driving force to advance individual, group, and social creativity. Meanwhile, interdisciplinary research is increasingly being promoted as a route to advance the complex challenges we face as a society. In this paper, we use Latent Dirichlet Allocation (LDA) citation as a proxy context for the diffusion of an innovation. With an analysis of topic evolution, we divide the diffusion process into five stages: testing and evaluation, implementation, improvement, extending, and fading. Through a correlation analysis of topic and subject, we show the application of LDA in different subjects. We also reveal the cross-boundary diffusion between different subjects based on the analysis of the interdisciplinary studies. Results show that as LDA is transferred into different areas, the adoption of each subject is relatively adjacent to those with similar research interests. Our findings further support researchers’ understanding of the impact formation of innovation.",
"title": ""
},
{
"docid": "604619dd5f23569eaff40eabc8e94f52",
"text": "Understanding the causes and effects of species invasions is a priority in ecology and conservation biology. One of the crucial steps in evaluating the impact of invasive species is to map changes in their actual and potential distribution and relative abundance across a wide region over an appropriate time span. While direct and indirect remote sensing approaches have long been used to assess the invasion of plant species, the distribution of invasive animals is mainly based on indirect methods that rely on environmental proxies of conditions suitable for colonization by a particular species. The aim of this article is to review recent efforts in the predictive modelling of the spread of both plant and animal invasive species using remote sensing, and to stimulate debate on the potential use of remote sensing in biological invasion monitoring and forecasting. Specifically, the challenges and drawbacks of remote sensing techniques are discussed in relation to: i) developing species distribution models, and ii) studying life cycle changes and phenological variations. Finally, the paper addresses the open challenges and pitfalls of remote sensing for biological invasion studies including sensor characteristics, upscaling and downscaling in species distribution models, and uncertainty of results.",
"title": ""
},
{
"docid": "02d9218324b0649b95bf01db101b4e22",
"text": "Face anti-spoofing is very significant to the security of face recognition. Many existing literatures focus on the study of photo attack. For the video attack, however, the related research efforts are still insufficient. In this paper, instead of extracting features from a single image, features are learned from video frames. To realize face anti-spoofing, the spatiotemporal features of continuous video frames are extracted using 3D convolution neural network (CNN) from the short video frame level. Experimental results show that the two sets of face anti-spoofing public databases, Replay-Attack and CASIA, have achieved the HTER (Half Total Error Rate) of 0.04% and 10.65%, respectively, which is better than the state-of-the-art.",
"title": ""
},
{
"docid": "64cee7715639e354e3fb0a367e2c57fc",
"text": "Cloud computing offers applications and infrastructure at low prices and opens the possibility of criminal cases. The increasing criminal cases in the cloud environment have made investigators to use latest investigative methods for forensic process. Similarly, the attackers discover new ways to hide the sources of evidence. This may hinder the investigation process and is called anti-forensics. Anti-forensic attack compromises the trust and availability of evidence. To defend such kind of attacks against forensic tools, anti-forensic techniques in cloud environment have to be researched exhaustively. This paper explores the anti-forensic techniques in the cloud environment and proposes a framework for detecting the anti-forensic attack against cloud forensic process. The framework provides an effective model for forensic investigation of anti-forensic attacks in cloud.",
"title": ""
},
{
"docid": "300bff5036b5b4e83a4bc605020b49e3",
"text": "Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.",
"title": ""
},
{
"docid": "1d25beed16cbbf3de507d8bd9eb9fb4c",
"text": "OBJECTIVE\nTo compare the morphology of the hymen in adolescent girls who have and have not had sexual intercourse involving penile-vaginal penetration.\n\n\nSUBJECTS\nFemale patients aged 13 to 19 years, recruited from an urban adolescent medicine practice.\n\n\nMETHODS\nSubjects were interviewed in private after completing detailed questionnaires and then underwent a physical examination. External genital inspections were performed using a colposcope with an attached 35-mm camera to document the appearance of the hymen. The presence of notches or clefts was recorded during the examination, and photographs taken at x10 magnification were used to take measurements of the width of the posterior hymenal rim.\n\n\nRESULTS\nPosterior hymenal notches and clefts were more common among girls admitting past intercourse (13/27 [48%]) than in girls who denied intercourse (2/58 [3%]; P =.001), but the mean width of the posterior hymenal rim was not significantly different between the 2 groups (2.5 mm vs 3.0 mm; P =.11). Two subjects who denied intercourse but had posterior hymenal clefts described a painful first experience with tampon insertion.\n\n\nCONCLUSIONS\nDeep notches or complete clefts in the posterior rim of the hymen were rare in girls who denied intercourse. Subjects who admitted past intercourse still had nondisrupted, intact hymens in 52% of cases.",
"title": ""
},
{
"docid": "e6f75ee14f51496b041638f468f29642",
"text": "The exponential increase of mobile data traffic requires disrupting approaches for the realization of future 5G systems. In this article, we overview the technologies that will pave the way for a novel cellular architecture that integrates high-data-rate access and backhaul networks based on millimeter-wave frequencies (57-66, 71-76, and 81-86 GHz). We evaluate the feasibility of short- and medium-distance links at these frequencies and analyze the requirements from the transceiver architecture and technology, antennas, and modulation scheme points of view. Technical challenges are discussed, and design options highlighted; finally, a performance evaluation quantifies the benefits of millimeter- wave systems with respect to current cellular technologies.",
"title": ""
},
{
"docid": "0dcd777080c565283802cc8d4674c3f9",
"text": "Speech separation or enhancement algorithms seldom exploit information about phoneme identities. In this study, we propose a novel phoneme-specific speech separation method. Rather than training a single global model to enhance all the frames, we train a separate model for each phoneme to process its corresponding frames. A robust ASR system is employed to identify the phoneme identity of each frame. This way, the information from ASR systems and language models can directly influence speech separation by selecting a phoneme-specific model to use at the test stage. In addition, phoneme-specific models have fewer variations to model and do not exhibit the data imbalance problem. The improved enhancement results can in turn help recognition. Experiments on the corpus of the second CHiME speech separation and recognition challenge (task-2) demonstrate the effectiveness of this method in terms of objective measures of speech intelligibility and quality, as well as recognition performance.",
"title": ""
},
{
"docid": "49110953607e0f70ded19901a9816754",
"text": "Selection of a project among a set of possible alternatives is a difficult task that the decision maker (DM) has to face. In this paper, by using a fuzzy TOPSIS technique we propose a new method for a project selection problem. After reviewing four common methods of comparing investment alternatives (net present value, rate of return, benefit cost analysis and payback period) we use them as criteria in a TOPSIS technique. First we calculate the weight of each criterion by a pairwise comparison and then we utilize the improved TOPSIS assessment for the project selection. Keywords—Fuzzy Theory, Pairwise Comparison, Project Selection, TOPSIS Technique.",
"title": ""
},
{
"docid": "e2308b435dddebc422ff49a7534bbf83",
"text": "Memory encryption has yet to be used at the core of operating system designs to provide confidentiality of code and data. As a result, numerous vulnerabilities exist at every level of the software stack. Three general approaches have evolved to rectify this problem. The most popular approach is based on complex hardware enhancements; this allows all encryption and decryption to be conducted within a well-defined trusted boundary. Unfortunately, these designs have not been integrated within commodity processors and have primarily been explored through simulation with very few prototypes. An alternative approach has been to augment existing hardware with operating system enhancements for manipulating keys, providing improved trust. This approach has provided insights into the use of encryption but has involved unacceptable overheads and has not been adopted in commercial operating systems. Finally, specialized industrial devices have evolved, potentially adding coprocessors, to increase security of particular operations in specific operating environments. However, this approach lacks generality and has introduced unexpected vulnerabilities of its own. Recently, memory encryption primitives have been integrated within commodity processors such as the Intel i7, AMD bulldozer, and multiple ARM variants. This opens the door for new operating system designs that provide confidentiality across the entire software stack outside the CPU. To date, little practical experimentation has been conducted, and the improvements in security and associated performance degradation has yet to be quantified. This article surveys the current memory encryption literature from the viewpoint of these central issues.",
"title": ""
},
{
"docid": "9a66f3a0c7c5e625e26909f04f43f5f4",
"text": "El propósito de este estudio fue examinar el impacto relativo de los diferentes tipos de liderazgo en los resultados académicos y no académicos de los estudiantes. La metodología consistió en el análisis de los resultados de 27 estudios publicados sobre la relación entre liderazgo y resultados de los estudiantes. El primer metaanálisis, que incluyó 22 de los 27 estudios, implicó una comparación de los efectos de la transformación y liderazgo instructivo en los resultados de los estudiantes. Con el segundo meta-análisis se realizó una comparación de los efectos de cinco conjuntos derivados inductivamente de prácticas de liderazgo en los resultados de los estudiantes. Doce de los estudios contribuyeron a este segundo análisis. El primer meta-análisis indicó que el efecto promedio de liderazgo instructivo en los resultados de los estudiantes fue de tres a cuatro veces la de liderazgo transformacional. La inspección de los elementos de la encuesta que se utilizaron para medir el liderazgo escolar reveló cinco conjuntos de prácticas de liderazgo o dimensiones: el establecimiento de metas y expectativas; dotación de recursos estratégicos, la planificación, coordinación y evaluación de la enseñanza y el currículo; promoción y participan en el aprendizaje y desarrollo de los profesores, y la garantía de un ambiente ordenado y de apoyo. El segundo metaanálisis reveló fuertes efectos promedio para la dimensión de liderazgo que implica promover y participar en el aprendizaje docente, el desarrollo y efectos moderados de las dimensiones relacionadas con la fijación de objetivos y la planificación, coordinación y evaluación de la enseñanza y el currículo. Las comparaciones entre el liderazgo transformacional y el instructivo y entre las cinco dimensiones de liderazgo sugirieron que los líderes que focalizan sus relaciones, su trabajo y su aprendizaje en el asunto clave de la enseñanza y el aprendizaje, tendrán una mayor influencia en los resultados de los estudiantiles. El artículo concluye con una discusión sobre la necesidad de que liderazgo, investigación y práctica estén más estrechamente vinculados a la evidencia sobre la enseñanza eficaz y el aprendizaje efectivo del profesorado. Dicha alineación podría aumentar aún más el impacto del liderazgo escolar en los resultados de los estudiantes.",
"title": ""
},
{
"docid": "457684e85d51869692aab90231a711a1",
"text": "Cassandra is a distributed storage system for managing structured data that is designed to scale to a very large size across many commodity servers, with no single point of failure. Reliability at massive scale is a very big challenge. Outages in the service can have significant negative impact. Hence Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different datacenters). At this scale, small and large components fail continuously; the way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. Cassandra has achieved several goals--scalability, high performance, high availability and applicability. In many ways Cassandra resembles a database and shares many design and implementation strategies with databases. Cassandra does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format.",
"title": ""
},
{
"docid": "aa9d428d21a5cebee2990dede931953a",
"text": "A grand challenge of the 21 century cosmology is to accurately estimate the cosmological parameters of our Universe. A major approach in estimating the cosmological parameters is to use the large scale matter distribution of the Universe. Galaxy surveys provide the means to map out cosmic large-scale structure in three dimensions. Information about galaxy locations is typically summarized in a “single” function of scale, such as the galaxy correlation function or powerspectrum. We show that it is possible to estimate these cosmological parameters directly from the distribution of matter. This paper presents the application of deep 3D convolutional networks to volumetric representation of dark-matter simulations as well as the results obtained using a recently proposed distribution regression framework, showing that machine learning techniques are comparable to, and can sometimes outperform, maximum-likelihood point estimates using “cosmological models”. This opens the way to estimating the parameters of our Universe with higher accuracy.",
"title": ""
},
{
"docid": "3953962740dd06ad2cadbb5d6b7c2cef",
"text": "The latest election cycle generated sobering examples of the threat that fake news poses to democracy. Primarily disseminated by hyper-partisan media outlets, fake news proved capable of becoming viral sensations that can dominate social media and influence elections. To address this problem, we begin with stance detection, which is a first step towards identifying fake news. The goal of this project is to identify whether given headline-article pairs: (1) agree, (2) disagree, (3) discuss the same topic, or (4) are not related at all, as described in [1]. Our method feeds the headline-article pairs into a bidirectional LSTM which first analyzes the article and then uses the acquired article representation to analyze the headline. On top of the output of the conditioned bidirectional LSTM, we concatenate global statistical features extracted from the headline-article pairs. We report a 9.7% improvement in the Fake News Challenge evaluation metric and a 22.7% improvement in mean F1 compared to the highest scoring baseline. We also present qualitative results that show how our method outperforms state-of-the art algorithms on this challenge.",
"title": ""
},
{
"docid": "068d87d2f1e24fdbe8896e0ab92c2934",
"text": "This paper presents a primary color optical pixel sensor circuit that utilizes hydrogenated amorphous silicon thin-film transistors (TFTs). To minimize the effect of ambient light on the sensing result of optical sensor circuit, the proposed sensor circuit combines photo TFTs with color filters to sense a primary color optical input signal. A readout circuit, which also uses thin-film transistors, is integrated into the sensor circuit for sampling the stored charges in the pixel sensor circuit. Measurements demonstrate that the signal-to-noise ratio of the proposed sensor circuit is unaffected by ambient light under illumination up to 12 000 lux by white LEDs. Thus, the proposed optical pixel sensor circuit is suitable for receiving primary color optical input signals in large TFT-LCD panels.",
"title": ""
},
{
"docid": "ac3e21aec5dc2b48f58aca1fad489ccd",
"text": "Methods for automated knowledge base construction often rely on trained fixed-length vector representations of relations and entities to predict facts. Recent work showed that such representations can be regularized to inject first-order logic formulae. This enables to incorporate domain-knowledge for improved prediction of facts, especially for uncommon relations. However, current approaches rely on propositionalization of formulae and thus do not scale to large sets of formulae or knowledge bases with many facts. Here we propose a method that imposes first-order constraints directly on relation representations, avoiding costly grounding of formulae. We show that our approach works well for implications between pairs of relations on artificial datasets.",
"title": ""
},
{
"docid": "35756d57b4d322de9326aa0f71b49352",
"text": "A 32-Gb/s data-interpolator receiver for electrical chip-to-chip communications is introduced. The receiver front-end samples incoming data by using a blind clock signal, which has a plesiochronous frequency-phase relation with the data. Phase alignment between the data and decision timing is achieved by interpolating the input-signal samples in the analog domain. The receiver has a continuous-time linear equalizer and a two-tap loop unrolled DFE using adjustable-threshold comparators. The receiver occupies 0.24 mm2 and consumes 308.4 mW from a 0.9-V supply when it is implemented with a 28-nm CMOS process.",
"title": ""
},
{
"docid": "ac9f345fb7f4ec78d53bb31a9d2c248f",
"text": "Purpose: The details of a full simulation of an inline side-coupled 6 MV linear accelerator linac from the electron gun to the target are presented. Commissioning of the above simulation was performed by using the derived electron phase space at the target as an input into Monte Carlo studies of dose distributions within a water tank and matching the simulation results to measurement data. This work is motivated by linac-MR studies, where a validated full linac simulation is first required in order to perform future studies on linac performance in the presence of an external magnetic field. Methods: An electron gun was initially designed and optimized with a 2D finite difference program using Child’s law. The electron gun simulation served as an input to a 6 MV linac waveguide simulation, which consisted of a 3D finite element radio-frequency field solution within the waveguide and electron trajectories determined from particle dynamics modeling. The electron gun design was constrained to match the cathode potential and electron gun current of a Varian 600C, while the linac waveguide was optimized to match the measured target current. Commissioning of the full simulation was performed by matching the simulated Monte Carlo dose distributions in a water tank to measured distributions. Results: The full linac simulation matched all the electrical measurements taken from a Varian 600C and the commissioning process lead to excellent agreements in the dose profile measurements. Greater than 99% of all points met a 1%/1mm acceptance criterion for all field sizes analyzed, with the exception of the largest 40 40 cm2 field for which 98% of all points met the 1%/1mm acceptance criterion and the depth dose curves matched measurement to within 1% deeper than 1.5 cm depth. The optimized energy and spatial intensity distributions, as given by the commissioning process, were determined to be non-Gaussian in form for the inline side-coupled 6 MV linac simulated. Conclusions: An integrated simulation of an inline side-coupled 6 MV linac has been completed and benchmarked matching all electrical and dosimetric measurements to high accuracy. The results showed non-Gaussian spatial intensity and energy distributions for the linac modeled. © 2010 American Association of Physicists in Medicine. DOI: 10.1118/1.3397455",
"title": ""
}
] |
scidocsrr
|
75438f0fccec247dc0514c8643b923b7
|
Learning monocular reactive UAV control in cluttered natural environments
|
[
{
"docid": "3d20ba5dc32270cb75df7a2d499a70e4",
"text": "The Maximum Margin Planning (MMP) (Ratliff et al., 2006) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. These mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a lossscaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBOOST , based on the functional gradient descent view of boosting (Mason et al., 1999; Friedman, 1999a) that extends MMP by “boosting” in new features. This approach uses simple binary classification or regression to improve performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems. (Taskar et al., 2005) Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.",
"title": ""
}
] |
[
{
"docid": "58e6b3b63b2210da621aabd891dbc627",
"text": "The precise role of orbitofrontal cortex (OFC) in affective processing is still debated. One view suggests OFC represents stimulus reward value and supports learning and relearning of stimulus-reward associations. An alternate view implicates OFC in behavioral control after rewarding or punishing feedback. To discriminate between these possibilities, we used event-related functional magnetic resonance imaging in subjects performing a reversal task in which, on each trial, selection of the correct stimulus led to a 70% probability of receiving a monetary reward and a 30% probability of obtaining a monetary punishment. The incorrect stimulus had the reverse contingency. In one condition (choice), subjects had to choose which stimulus to select and switch their response to the other stimulus once contingencies had changed. In another condition (imperative), subjects had simply to track the currently rewarded stimulus. In some regions of OFC and medial prefrontal cortex, activity was related to valence of outcome, whereas in adjacent areas activity was associated with behavioral choice, signaling maintenance of the current response strategy on a subsequent trial. Caudolateral OFC-anterior insula was activated by punishing feedback preceding a switch in stimulus in both the choice and imperative conditions, indicating a possible role for this region in signaling a change in reward contingencies. These results suggest functional heterogeneity within the OFC, with a role for this region in representing stimulus-reward values, signaling changes in reinforcement contingencies and in behavioral control.",
"title": ""
},
{
"docid": "c1ac49d789e74cd44d8d49ff799e3597",
"text": "In this work, we focus on modeling user-generated review and overall rating pairs, and aim to identify semantic aspects and aspect-level sentiments from review data as well as to predict overall sentiments of reviews. We propose a novel probabilistic supervised joint aspect and sentiment model (SJASM) to deal with the problems in one go under a unified framework. SJASM represents each review document in the form of opinion pairs, and can simultaneously model aspect terms and corresponding opinion words of the review for hidden aspect and sentiment detection. It also leverages sentimental overall ratings, which often come with online reviews, as supervision data, and can infer the semantic aspects and aspect-level sentiments that are not only meaningful but also predictive of overall sentiments of reviews. Moreover, we also develop efficient inference method for parameter estimation of SJASM based on collapsed Gibbs sampling. We evaluate SJASM extensively on real-world review data, and experimental results demonstrate that the proposed model outperforms seven well-established baseline methods for sentiment analysis tasks.",
"title": ""
},
{
"docid": "3b7436cf4660fb82eeb7efbf9c413159",
"text": "The practical work described here was designed in the aim of combining several periods that were previously carried-out independently during the academic year and to more appropriately mimic a \"research\" environment. It illustrates several fundamental biochemical principles as well as experimental aspects and important techniques including spectrophotometry, chromatography, centrifugation, and electrophoresis. Lactate dehydrogenase (LDH) is an enzyme of choice for a student laboratory experiment. This enzyme has many advantages, namely its relative high abundance, high specific activity and high stability. In the first part, the purification scheme starting from pig heart includes ammonium sulphate fractionation, desalting by size exclusion chromatography, anion exchange chromatography and pseudo-affinity chromatography. In the second part of the work the obtained fractions are accessed for protein and activity content in order to evaluate the efficiency of the different purification steps, and are also characterised by electrophoresis using non-denaturing and denaturing conditions. Finally, in the third part, the purified enzyme is subjected to comprehensive analysis of its kinetic properties and compared to those of a commercial skeletal muscle LDH preparation. The results presented thereafter are representative of the data-sets obtained by the student-pairs and are comparable to those obtained by the instructors and the reference publications. This multistep purification of an enzyme from its source material, where students perform different purification techniques over successive laboratory days, the characterisation of the purified enzyme, and the extensive approach of enzyme kinetics, naturally fits into a project-based biochemistry learning process.",
"title": ""
},
{
"docid": "6886849300b597fdb179162744b40ee2",
"text": "This paper argues that the dominant study of the form and structure of games – their poetics – should be complemented by the analysis of their aesthetics (as understood by modern cultural theory): how gamers use their games, what aspects they enjoy and what kinds of pleasures they experience by playing them. The paper outlines a possible aesthetic theory of games based on different aspects of pleasure: the psychoanalytical, the social and the physical form of pleasure.",
"title": ""
},
{
"docid": "1a0d0b0b38e6d6434448cee8959c58a8",
"text": "This paper reports the first results of an investigation into solutions to problems of security in computer systems; it establishes the basis for rigorous investigation by providing a general descriptive model of a computer system. Borrowing basic concepts and constructs from general systems theory, we present a basic result concerning security in computer systems, using precise notions of \"security\" and \"compromise\". We also demonstrate how a change in requirements can be reflected in the resulting mathematical model. A lengthy introductory section is included in order to bridge the gap between general systems theory and practical problem solving. ii PREFACE General systems theory is a relatively new and rapidly growing mathematical discipline which shows great promise for application in the computer sciences. The discipline includes both \"general systems-theory\" and \"general-systems-theory\": that is, one may properly read the phrase \"general systems theory\" in both ways. In this paper, we have borrowed from the works of general systems theorists, principally from the basic work of Mesarovic´, to formulate a mathematical framework within which to deal with the problems of secure computer systems. At the present time we feel that the mathematical representation developed herein is adequate to deal with most if not all of the security problems one may wish to pose. In Section III we have given a result which deals with the most trivial of the secure computer systems one might find viable in actual use. In the concluding section we review the application of our mathematical methodology and suggest major areas of concern in the design of a secure system. The results reported in this paper lay the groundwork for further, more specific investigation into secure computer systems. The investigation will proceed by specializing the elements of the model to represent particular aspects of system design and operation. Such an investigation will be reported in the second volume of this series where we assume a system with centralized access control. A preliminary investigation of distributed access is just beginning; the results of that investigation would be reported in a third volume of the series.",
"title": ""
},
{
"docid": "8bf514424a07e667cc566614c1f25ec2",
"text": "Clustering is one of the most commonly used data mining techniques. Shared nearest neighbor clustering is an important density-based clustering technique that has been widely adopted in many application domains, such as environmental science and urban computing. As the size of data becomes extremely large nowadays, it is impossible for large-scale data to be processed on a single machine. Therefore, the scalability problem of traditional clustering algorithms running on a single machine must be addressed. In this paper, we improve the traditional density-based clustering algorithm by utilizing powerful programming platform (Spark) and distributed computing clusters. In particular, we design and implement Spark-based shared nearest neighbor clustering algorithm called SparkSNN, a scalable density-based clustering algorithm on Spark for big data analysis. We conduct our experiments using real data, i.e., Maryland crime data, to evaluate the performance of the proposed algorithm with respect to speed-up and scale-up. The experimental results well confirm the effectiveness and efficiency of the proposed SparkSNN clustering algorithm.",
"title": ""
},
{
"docid": "f68e447acd30cab6c2c68affb8c58d0c",
"text": "This paper presents a Doppler radar sensor system with camera-aided random body movement cancellation (RBMC) techniques for noncontact vital sign detection. The camera measures the subject's random body motion that is provided for the radar system to perform RBMC and extract the uniform vital sign signals of respiration and heartbeat. Three RBMC strategies are proposed: 1) phase compensation at radar RF front-end, 2) phase compensation for baseband complex signals, and 3) movement cancellation for demodulated signals. Both theoretical analysis and radar simulation have been carried out to validate the proposed RBMC techniques. An experiment was carried out to measure a subject person who was breathing normally but randomly moving his body back and forth. The experimental result reveals that the proposed radar system is effective for RBMC.",
"title": ""
},
{
"docid": "492b01d63bbe0e26522958e8d6147592",
"text": "In this paper, an original method to reduce the height of a dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The miniaturization technique consists in adding a capacitive load between vertical plates. The height of the radiating element is reduced to 0.1λ0, where λ0 is the wavelength at the lowest operation frequency for a Standing Wave Ratio (SWR) <2.5, which corresponds to a reduction factor of 37.5%. The measured input impedance bandwidth is 64% from 1.6 GHz to 3.1 GHz with a SWR <2.5.",
"title": ""
},
{
"docid": "8cdc70a728191aa25789c6284d581dc0",
"text": "The objective of the smart helmet is to provide a means and apparatus for detecting and reporting accidents. Sensors, Wi-Fi enabled processor, and cloud computing infrastructures are utilised for building the system. The accident detection system communicates the accelerometer values to the processor which continuously monitors for erratic variations. When an accident occurs, the related details are sent to the emergency contacts by utilizing a cloud based service. The vehicle location is obtained by making use of the global positioning system. The system promises a reliable and quick delivery of information relating to the accident in real time and is appropriately named Konnect. Thus, by making use of the ubiquitous connectivity which is a salient feature for the smart cities, a smart helmet for accident detection is built.",
"title": ""
},
{
"docid": "14f127a8dd4a0fab5acd9db2a3924657",
"text": "Pesticides (herbicides, fungicides or insecticides) play an important role in agriculture to control the pests and increase the productivity to meet the demand of foods by a remarkably growing population. Pesticides application thus became one of the important inputs for the high production of corn and wheat in USA and UK, respectively. It also increased the crop production in China and India [1-4]. Although extensive use of pesticides improved in securing enough crop production worldwide however; these pesticides are equally toxic or harmful to nontarget organisms like mammals, birds etc and thus their presence in excess can cause serious health and environmental problems. Pesticides have thus become environmental pollutants as they are often found in soil, water, atmosphere and agricultural products, in harmful levels, posing an environmental threat. Its residual presence in agricultural products and foods can also exhibit acute or chronic toxicity on human health. Even at low levels, it can cause adverse effects on humans, plants, animals and ecosystems. Thus, monitoring of these pesticide and its residues become extremely important to ensure that agricultural products have permitted levels of pesticides [5-6]. Majority of pesticides belong to four classes, namely organochlorines, organophosphates, carbamates and pyrethroids. Organophosphates pesticides are a class of insecticides, of which many are highly toxic [7]. Until the 21st century, they were among the most widely used insecticides which included parathion, malathion, methyl parathion, chlorpyrifos, diazinon, dichlorvos, dimethoate, monocrotophos and profenofos. Organophosphate pesticides cause toxicity by inhibiting acetylcholinesterase enzyme [8]. It acts as a poison to insects and other animals, such as birds, amphibians and mammals, primarily by phosphorylating the acetylcholinesterase enzyme (AChE) present at nerve endings. This leads to the loss of available AChE and because of the excess acetylcholine (ACh, the impulse-transmitting substance), the effected organ becomes over stimulated. The enzyme is critical to control the transmission of nerve impulse from nerve fibers to the smooth and skeletal muscle cells, secretary cells and autonomic ganglia, and within the central nervous system (CNS). Once the enzyme reaches a critical level due to inactivation by phosphorylation, symptoms and signs of cholinergic poisoning get manifested [9].",
"title": ""
},
{
"docid": "e765e634de8b42da8e7b1e43dcc0b8ba",
"text": "Recently, natural language processing applications have become very popular in the industry. Examples of such applications include “semantic” enterprise search engines, document categorizers, speech recognizers and – last but not least – conversational agents, also known as virtual assistants or “chatbots”. The latter in particular are very sought-after in the customer care domain, where the aim is to complement the live agent experience with an artificial intelligence able to help users fulfil a task. In this paper, we discuss the challenges and limitations of industrial chatbot applications, with a particular focus on the “human-in-the-loop” aspect, whereby a cooperation between human and machine takes place in mutual interest. Furthermore, we analyse how the same aspect intervenes in other industrial natural language processing applications.",
"title": ""
},
{
"docid": "a46cae06be40fa4dbdeff1fe06b69c2c",
"text": "As the amount of information offered by information systems is increasing exponentially, the need of personalized approaches for information access increases. This work discusses user profiles designed for providing personalized information access. We first present a general classification of research directions on adaptive systems, followed by a state-of-the-art study about user profiling. We propose then a new classification approach of user profile model. This classification is based on the user dimensions considered to build the user profile.",
"title": ""
},
{
"docid": "d780db3ec609d74827a88c0fa0d25f56",
"text": "Highly automated test vehicles are rare today, and (independent) researchers have often limited access to them. Also, developing fully functioning system prototypes is time and effort consuming. In this paper, we present three adaptions of the Wizard of Oz technique as a means of gathering data about interactions with highly automated vehicles in early development phases. Two of them address interactions between drivers and highly automated vehicles, while the third one is adapted to address interactions between pedestrians and highly automated vehicles. The focus is on the experimental methodology adaptations and our lessons learned.",
"title": ""
},
{
"docid": "538047fc099d0062ab100343b26f5cb7",
"text": "AIM\nTo examine the evidence on the association between cannabis and depression and evaluate competing explanations of the association.\n\n\nMETHODS\nA search of Medline, Psychinfo and EMBASE databases was conducted. All references in which the terms 'cannabis', 'marijuana' or 'cannabinoid', and in which the words 'depression/depressive disorder/depressed', 'mood', 'mood disorder' or 'dysthymia' were collected. Only research studies were reviewed. Case reports are not discussed.\n\n\nRESULTS\nThere was a modest association between heavy or problematic cannabis use and depression in cohort studies and well-designed cross-sectional studies in the general population. Little evidence was found for an association between depression and infrequent cannabis use. A number of studies found a modest association between early-onset, regular cannabis use and later depression, which persisted after controlling for potential confounding variables. There was little evidence of an increased risk of later cannabis use among people with depression and hence little support for the self-medication hypothesis. There have been a limited number of studies that have controlled for potential confounding variables in the association between heavy cannabis use and depression. These have found that the risk is much reduced by statistical control but a modest relationship remains.\n\n\nCONCLUSIONS\nHeavy cannabis use and depression are associated and evidence from longitudinal studies suggests that heavy cannabis use may increase depressive symptoms among some users. It is still too early, however, to rule out the hypothesis that the association is due to common social, family and contextual factors that increase risks of both heavy cannabis use and depression. Longitudinal studies and studies of twins discordant for heavy cannabis use and depression are needed to rule out common causes. If the relationship is causal, then on current patterns of cannabis use in the most developed societies cannabis use makes, at most, a modest contribution to the population prevalence of depression.",
"title": ""
},
{
"docid": "76aacf8fd5c24f64211015ce9c196bf0",
"text": "In industrially relevant Cu/ZnO/Al2 O3 catalysts for methanol synthesis, the strong metal support interaction between Cu and ZnO is known to play a key role. Here we report a detailed chemical transmission electron microscopy study on the nanostructural consequences of the strong metal support interaction in an activated high-performance catalyst. For the first time, clear evidence for the formation of metastable \"graphite-like\" ZnO layers during reductive activation is provided. The description of this metastable layer might contribute to the understanding of synergistic effects between the components of the Cu/ZnO/Al2 O3 catalysts.",
"title": ""
},
{
"docid": "2377cb2019609c6911fe766a0918b38c",
"text": "There are a number of emergent traffic and transportation phenomena that cannot be analyzed successfully and explained using analytical models. The only way to analyze such phenomena is through the development of models that can simulate behavior of every agent. Agent-based modeling is an approach based on the idea that a system is composed of decentralized individual ‘agents’ and that each agent interacts with other agents according to localized knowledge. The agent-based approach is a ‘bottom-up’ approach to modeling where special kinds of artificial agents are created by analogy with social insects. Social insects (including bees, wasps, ants and termites) have lived on Earth for millions of years. Their behavior in nature is primarily characterized by autonomy, distributed functioning and self-organizing capacities. Social insect colonies teach us that very simple individual organisms can form systems capable of performing highly complex tasks by dynamically interacting with each other. On the other hand, a large number of traditional engineering models and algorithms are based on control and centralization. In this article, we try to obtain the answer to the following question: Can we use some principles of natural swarm intelligence in the development of artificial systems aimed at solving complex problems in traffic and transportation?",
"title": ""
},
{
"docid": "34b2fed38744920300f2cbf8cc75c021",
"text": "In this paper we develop a framework for a sequential decision making under budget constraints for multi-class classification. In many classification systems, such as medical diagnosis and homeland security, sequential decisions are often warranted. For each instance, a sensor is first chosen for acquiring measurements and then based on the available information one decides (rejects) to seek more measurements from a new sensor/modality or to terminate by classifying the example based on the available information. Different sensors have varying costs for acquisition, and these costs account for delay, throughput or monetary value. Consequently, we seek methods for maximizing performance of the system subject to budget constraints. We formulate a multi-stage multi-class empirical risk objective and learn sequential decision functions from training data. We show that reject decision at each stage can be posed as supervised binary classification. We derive bounds for the VC dimension of the multi-stage system to quantify the generalization error. We compare our approach to alternative strategies on several multi-class real world datasets.",
"title": ""
},
{
"docid": "3512d0a45a764330c8a66afab325d03d",
"text": "Self-concept clarity (SCC) references a structural aspect oftbe self-concept: the extent to which selfbeliefs are clearly and confidently defined, internally consistent, and stable. This article reports the SCC Scale and examines (a) its correlations with self-esteem (SE), the Big Five dimensions, and self-focused attention (Study l ); (b) its criterion validity (Study 2); and (c) its cultural boundaries (Study 3 ). Low SCC was independently associated with high Neuroticism, low SE, low Conscientiousness, low Agreeableness, chronic self-analysis, low internal state awareness, and a ruminative form of self-focused attention. The SCC Scale predicted unique variance in 2 external criteria: the stability and consistency of self-descriptions. Consistent with theory on Eastern and Western selfconstruals, Japanese participants exhibited lower levels of SCC and lower correlations between SCC and SE than did Canadian participants.",
"title": ""
},
{
"docid": "d449a4d183c2a3e1905935f624d684d3",
"text": "This paper introduces the approach CBRDIA (Case-based Reasoning for Document Invoice Analysis) which uses the principles of case-based reasoning to analyze, recognize and interpret invoices. Two CBR cycles are performed sequentially in CBRDIA. The first one consists in checking whether a similar document has already been processed, which makes the interpretation of the current one easy. The second cycle works if the first one fails. It processes the document by analyzing and interpreting its structuring elements (adresses, amounts, tables, etc) one by one. The CBR cycles allow processing documents from both knonwn or unknown classes. Applied on 923 invoices, CBRDIA reaches a recognition rate of 85,22% for documents of known classes and 74,90% for documents of unknown classes.",
"title": ""
},
{
"docid": "795bede0ff85ce04e956cdc23f8ecb0a",
"text": "Neuromorphic computing using post-CMOS technologies is gaining immense popularity due to its promising abilities to address the memory and power bottlenecks in von-Neumann computing systems. In this paper, we propose RESPARC - a reconfigurable and energy efficient architecture built-on Memristive Crossbar Arrays (MCA) for deep Spiking Neural Networks (SNNs). Prior works were primarily focused on device and circuit implementations of SNNs on crossbars. RESPARC advances this by proposing a complete system for SNN acceleration and its subsequent analysis. RESPARC utilizes the energy-efficiency of MCAs for inner-product computation and realizes a hierarchical reconfigurable design to incorporate the data-flow patterns in an SNN in a scalable fashion. We evaluate the proposed architecture on different SNNs ranging in complexity from 2k-230k neurons and 1.2M-5.5M synapses. Simulation results on these networks show that compared to the baseline digital CMOS architecture, RESPARC achieves 500x (15x) efficiency in energy benefits at 300x (60x) higher throughput for multi-layer perceptrons (deep convolutional networks). Furthermore, RESPARC is a technology-aware architecture that maps a given SNN topology to the most optimized MCA size for the given crossbar technology.",
"title": ""
}
] |
scidocsrr
|
fe504bb18947a74ee76edf2563aacc90
|
The revolution re-visited: Clinical and genetics research paradigms and the productivity paradox in drug discovery
|
[
{
"docid": "73333ad599c6bbe353e46d7fd4f51768",
"text": "The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research–brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.",
"title": ""
}
] |
[
{
"docid": "333b3349cdcb6ddf44c697e827bcfe62",
"text": "Harmful cyanobacterial blooms, reflecting advanced eutrophication, are spreading globally and threaten the sustainability of freshwater ecosystems. Increasingly, non-nitrogen (N(2))-fixing cyanobacteria (e.g., Microcystis) dominate such blooms, indicating that both excessive nitrogen (N) and phosphorus (P) loads may be responsible for their proliferation. Traditionally, watershed nutrient management efforts to control these blooms have focused on reducing P inputs. However, N loading has increased dramatically in many watersheds, promoting blooms of non-N(2) fixers, and altering lake nutrient budgets and cycling characteristics. We examined this proliferating water quality problem in Lake Taihu, China's 3rd largest freshwater lake. This shallow, hyper-eutrophic lake has changed from bloom-free to bloom-plagued conditions over the past 3 decades. Toxic Microcystis spp. blooms threaten the use of the lake for drinking water, fisheries and recreational purposes. Nutrient addition bioassays indicated that the lake shifts from P limitation in winter-spring to N limitation in cyanobacteria-dominated summer and fall months. Combined N and P additions led to maximum stimulation of growth. Despite summer N limitation and P availability, non-N(2) fixing blooms prevailed. Nitrogen cycling studies, combined with N input estimates, indicate that Microcystis thrives on both newly supplied and previously-loaded N sources to maintain its dominance. Denitrification did not relieve the lake of excessive N inputs. Results point to the need to reduce both N and P inputs for long-term eutrophication and cyanobacterial bloom control in this hyper-eutrophic system.",
"title": ""
},
{
"docid": "ef44e3456962ed4a857614b0782ed4d2",
"text": "A sketching system for spline-based free-form surfaces on the Responsive Workbench is presented. We propose 3D tools for curve drawing and deformation techniques for curves and surfaces, adapted to the needs of designers. The user directly draws curves in the virtual environment, using a tracked stylus as an input device. A curve network can be formed, describing the skeleton of a virtual model. The non-dominant hand positions and orients the model while the dominant hand uses the editing tools. The curves and the resulting skinning surfaces can interactively be deformed.",
"title": ""
},
{
"docid": "a4e1a0f5e56685a294a2c9088809a4fb",
"text": "As multicore systems continue to gain ground in the High Performance Computing world, linear algebra algorithms have to be reformulated or new algorithms have to be developed in order to take advantage of the architectural features on these new processors. Fine grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the Cholesky, LU and QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data. These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out of order execution of the tasks which will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithms where parallelism can only be exploited at the level of the BLAS operations and vendor implementations.",
"title": ""
},
{
"docid": "2b5ade239beea52315e50e0d4fde197f",
"text": "The ultimate goal of research is to produce dependable knowledge or to provide the evidence that may guide practical decisions. Statistical conclusion validity (SCV) holds when the conclusions of a research study are founded on an adequate analysis of the data, generally meaning that adequate statistical methods are used whose small-sample behavior is accurate, besides being logically capable of providing an answer to the research question. Compared to the three other traditional aspects of research validity (external validity, internal validity, and construct validity), interest in SCV has recently grown on evidence that inadequate data analyses are sometimes carried out which yield conclusions that a proper analysis of the data would not have supported. This paper discusses evidence of three common threats to SCV that arise from widespread recommendations or practices in data analysis, namely, the use of repeated testing and optional stopping without control of Type-I error rates, the recommendation to check the assumptions of statistical tests, and the use of regression whenever a bivariate relation or the equivalence between two variables is studied. For each of these threats, examples are presented and alternative practices that safeguard SCV are discussed. Educational and editorial changes that may improve the SCV of published research are also discussed.",
"title": ""
},
{
"docid": "eb30c6946e802086ac6de5848897a648",
"text": "To determine how age of acquisition influences perception of second-language speech, the Speech Perception in Noise (SPIN) test was administered to native Mexican-Spanish-speaking listeners who learned fluent English before age 6 (early bilinguals) or after age 14 (late bilinguals) and monolingual American-English speakers (monolinguals). Results show that the levels of noise at which the speech was intelligible were significantly higher and the benefit from context was significantly greater for monolinguals and early bilinguals than for late bilinguals. These findings indicate that learning a second language at an early age is important for the acquisition of efficient high-level processing of it, at least in the presence of noise.",
"title": ""
},
{
"docid": "c04bad416eab93e8bb57a9d282ecc0cc",
"text": "What role does mutual knowledge play in the comprehension process? We compare two answers to this question for the comprehension of definite reference. The Restricted Search hypothesis assumes that addressees rely on the principle of optimal design and understand definite reference by restricting the search for referents to entities in common ground. The Unrestricted Search hypothesis assumes that the search for referents is not restricted to entities in common ground. Only the Unrestricted Search hypothesis predicts that entities that are not in common ground would interfere with comprehension of definite reference. Experiment 1 reveals such interference in increased errors and verification latencies during the resolution of pronouns. Experiment 2 demonstrates the interference by tracking the addressee’s eye movements during the comprehension of demonstrative reference. We discuss alternative models of comprehension that could account for the results, and we describe the role that common ground plays in each model. We propose a Perspective Adjustment model that assumes a search for referents that is independent of common ground, coupled with a monitoring process that detects violations of common ground and adjusts the interpretation. This model assumes a role for common ground only when a correction is needed. We challenge both the assumption that addressees follow the principle of optimal design and the assumption that the principle is optimal. q 1998 Academic Press",
"title": ""
},
{
"docid": "8d8723d0c1b6e23109ec59e6cc6ffeff",
"text": " Employees often have ideas, information, and opinions for constructive ways to improve work and work organizations. Sometimes these employees exercise voice and express their ideas, information, and opinions; and other times they engage in silence and withhold their ideas, information, and opinions. On the surface, expressing and withholding behaviours might appear to be polar opposites because silence implies not speaking while voice implies speaking up on important issues and problems in organizations. Challenging this simplistic notion, this paper presents a conceptual framework suggesting that employee silence and voice are best conceptualized as separate, multidimensional constructs. Based on employee motives, we differentiate three types of silence (Acquiescent Silence, Defensive Silence, and ProSocial Silence) and three parallel types of voice (Acquiescent Voice, Defensive Voice, and ProSocial Voice) where withholding important information is not simply the absence of voice. Building on this conceptual framework, we further propose that silence and voice have differential consequences to employees in work organizations. Based on fundamental differences in the overt behavioural cues provided by silence and voice, we present a series of propositions predicting that silence is more ambiguous than voice, observers are more likely to misattribute employee motives for silence than for voice, and misattributions for motives behind silence will lead to more incongruent consequences (both positive and negative) for employees (than for voice). We conclude by discussing implications for future research and for managers. Journal of Management Studies 40:6 September 2003 0022-2380",
"title": ""
},
{
"docid": "6b5a7e58a8407fa5cda402d4996a3a10",
"text": "In the last few years, Hadoop become a \"de facto\" standard to process large scale data as an open source distributed system. With combination of data mining techniques, Hadoop improve data analysis utility. That is why, there are amount of research is studied to apply data mining technique to mapreduce framework in Hadoop. However, data mining have a possibility to cause a privacy violation and this threat is a huge obstacle for data mining using Hadoop. To solve this problem, numerous studies have been conducted. However, existing studies were insufficient and had several drawbacks. In this paper, we propose the privacy preserving data mining technique in Hadoop that is solve privacy violation without utility degradation. We focus on association rule mining algorithm that is representative data mining algorithm. We validate the proposed technique to satisfy performance and preserve data privacy through the experimental results.",
"title": ""
},
{
"docid": "3818129a6fb6047d55ed2e62825ce089",
"text": "BACKGROUND\nTackling severe acute malnutrition (SAM) is a global health priority. Heightened risk of non-communicable diseases (NCD) in children exposed to SAM at around 2 years of age is plausible in view of previously described consequences of other early nutritional insults. By applying developmental origins of health and disease (DOHaD) theory to this group, we aimed to explore the long-term effects of SAM.\n\n\nMETHODS\nWe followed up 352 Malawian children (median age 9·3 years) who were still alive following SAM inpatient treatment between July 12, 2006, and March 7, 2007, (median age 24 months) and compared them with 217 sibling controls and 184 age-and-sex matched community controls. Our outcomes of interest were anthropometry, body composition, lung function, physical capacity (hand grip, step test, and physical activity), and blood markers of NCD risk. For comparisons of all outcomes, we used multivariable linear regression, adjusted for age, sex, HIV status, and socioeconomic status. We also adjusted for puberty in the body composition regression model.\n\n\nFINDINGS\nCompared with controls, children who had survived SAM had lower height-for-age Z scores (adjusted difference vs community controls 0·4, 95% CI 0·6 to 0·2, p=0·001; adjusted difference vs sibling controls 0·2, 0·0 to 0·4, p=0·04), although they showed evidence of catch-up growth. These children also had shorter leg length (adjusted difference vs community controls 2·0 cm, 1·0 to 3·0, p<0·0001; adjusted difference vs sibling controls 1·4 cm, 0·5 to 2·3, p=0·002), smaller mid-upper arm circumference (adjusted difference vs community controls 5·6 mm, 1·9 to 9·4, p=0·001; adjusted difference vs sibling controls 5·7 mm, 2·3 to 9·1, p=0·02), calf circumference (adjusted difference vs community controls 0·49 cm, 0·1 to 0·9, p=0·01; adjusted difference vs sibling controls 0·62 cm, 0·2 to 1·0, p=0·001), and hip circumference (adjusted difference vs community controls 1·56 cm, 0·5 to 2·7, p=0·01; adjusted difference vs sibling controls 1·83 cm, 0·8 to 2·8, p<0·0001), and less lean mass (adjusted difference vs community controls -24·5, -43 to -5·5, p=0·01; adjusted difference vs sibling controls -11·5, -29 to -6, p=0·19) than did either sibling or community controls. Survivors of SAM had functional deficits consisting of weaker hand grip (adjusted difference vs community controls -1·7 kg, 95% CI -2·4 to -0·9, p<0·0001; adjusted difference vs sibling controls 1·01 kg, 0·3 to 1·7, p=0·005,)) and fewer minutes completed of an exercise test (sibling odds ratio [OR] 1·59, 95% CI 1·0 to 2·5, p=0·04; community OR 1·59, 95% CI 1·0 to 2·5, p=0·05). We did not detect significant differences between cases and controls in terms of lung function, lipid profile, glucose tolerance, glycated haemoglobin A1c, salivary cortisol, sitting height, and head circumference.\n\n\nINTERPRETATION\nOur results suggest that SAM has long-term adverse effects. Survivors show patterns of so-called thrifty growth, which is associated with future cardiovascular and metabolic disease. The evidence of catch-up growth and largely preserved cardiometabolic and pulmonary functions suggest the potential for near-full rehabilitation. Future follow-up should try to establish the effects of puberty and later dietary or social transitions on these parameters, as well as explore how best to optimise recovery and quality of life for survivors.\n\n\nFUNDING\nThe Wellcome Trust.",
"title": ""
},
{
"docid": "70dc7fe40f55e2b71b79d71d1119a36c",
"text": "In undergoing this life, many people always try to do and get the best. New knowledge, experience, lesson, and everything that can improve the life will be done. However, many people sometimes feel confused to get those things. Feeling the limited of experience and sources to be better is one of the lacks to own. However, there is a very simple thing that can be done. This is what your teacher always manoeuvres you to do this one. Yeah, reading is the answer. Reading a book as this digital image processing principles and applications and other references can enrich your life quality. How can it be?",
"title": ""
},
{
"docid": "0b2cff582a4b7d81b42e5bab2bd7a237",
"text": "The increasing popularity of real-world recommender systems produces data continuously and rapidly, and it becomes more realistic to study recommender systems under streaming scenarios. Data streams present distinct properties such as temporally ordered, continuous and high-velocity, which poses tremendous challenges to traditional recommender systems. In this paper, we investigate the problem of recommendation with stream inputs. In particular, we provide a principled framework termed sRec, which provides explicit continuous-time random process models of the creation of users and topics, and of the evolution of their interests. A variational Bayesian approach called recursive meanfield approximation is proposed, which permits computationally efficient instantaneous on-line inference. Experimental results on several real-world datasets demonstrate the advantages of our sRec over other state-of-the-arts.",
"title": ""
},
{
"docid": "ce21a811ea260699c18421d99221a9f2",
"text": "Medical image processing is the most challenging and emerging field now a day’s processing of MRI images is one of the parts of this field. The quantitative analysis of MRI brain tumor allows obtaining useful key indicators of disease progression. This is a computer aided diagnosis systems for detecting malignant texture in biological study. This paper presents an approach in computer-aided diagnosis for early prediction of brain cancer using Texture features and neuro classification logic. This paper describes the proposed strategy for detection; extraction and classification of brain tumour from MRI scan images of brain; which incorporates segmentation and morphological functions which are the basic functions of image processing. Here we detect the tumour, segment the tumour and we calculate the area of the tumour. Severity of the disease can be known, through classes of brain tumour which is done through neuro fuzzy classifier and creating a user friendly environment using GUI in MATLAB. In this paper cases of 10 patients is taken and severity of disease is shown and different features of images are calculated.",
"title": ""
},
{
"docid": "051188b0b4a6bdc31a0130a16527ce86",
"text": "Considerations of microalgae as a source offood and biochemicals began in the early 1940's, and in 1952 the first Algae Mass-Culture Symposium was held (Burlew, 1953). Since then, a number of microalgae have been suggested and evaluated for their suitability for commercial exploitation. These include Chlorella, Scenedesmus and Spirulina (e.g., Soeder, 1976; Kawaguchi, 1980; Becker & Venkataraman, 1980) and small commercial operations culturing some of these algae for food are underway in various parts of the world. The extremely halophilic unicellular green alga Dunaliella salina (Chlorophyta, Volvocales) has been proposed as a source of its osmoregulatory solute, glycerol and the pigment f3-carotene (Masyuk, 1968; Aasen, et a11969; Ben-Amotz & A vron, 1980). Much research on the commercial potential of this algae and its products has been undertaken (e.g., Williams, et al. 1978; Chen & Chi, 1981) and trial operations have been established in the USSR (Masyuk, 1968) and in Israel (Ben-Amotz & A vron, 1980). Since 1978, we in Australia have been working also, to examine the feasibility of using large-scale culture of Dunaliella salina as a commercial source",
"title": ""
},
{
"docid": "bb1b8e5d3a53b82cffd4d91163d95829",
"text": "PURPOSE\nThis study was designed to evaluate the feasibility and oncologic and functional outcomes of intersphincteric resection for very low rectal cancer.\n\n\nMETHODS\nA feasibility study was performed using 213 specimens from abdominoperineal resections of rectal cancer. Oncologic and functional outcomes were investigated in 228 patients with rectal cancer located <5 cm from the anal verge who underwent intersphincteric resection at seven institutions in Japan between 1995 and 2004.\n\n\nRESULTS\nCurative operations were accomplished by intersphincteric resection in 86 percent of patients who underwent abdominoperineal resection. Complete microscopic curative surgery was achieved by intersphincteric resection in 225 of 228 patients. Morbidity was 24 percent, and mortality was 0.4 percent. During the median observation time of 41 months, rate of local recurrence was 5.8 percent at three years, and five-year overall and disease-free survival rates were 91.9 percent and 83.2 percent, respectively. In 181 patients who received stoma closure, 68 percent displayed good continence, and only 7 percent showed worsened continence at 24 months after stoma closure. Patients with total intersphincteric resection displayed significantly worse continence than patients with partial or subtotal resection.\n\n\nCONCLUSIONS\nCurability with intersphincteric resection was verified histologically, and acceptable oncologic and functional outcomes were obtained by using these procedures in patients with very low rectal cancer. However, information on potential functional adverse effects after intersphincteric resection should be provided to patients preoperatively.",
"title": ""
},
{
"docid": "e829a46ab8dd560f137b4c11c3626410",
"text": "Modeling dressed characters is known as a very tedious process. It u sually requires specifying 2D fabric patterns, positioning and assembling them in 3D, and then performing a physically-bas ed simulation. The latter accounts for gravity and collisions to compute the rest shape of the garment, with the ad equ te folds and wrinkles. This paper presents a more intuitive way to design virtual clothing. We start w ith a 2D sketching system in which the user draws the contours and seam-lines of the garment directly on a v irtu l mannequin. Our system then converts the sketch into an initial 3D surface using an existing method based on a p recomputed distance field around the mannequin. The system then splits the created surface into different pan els delimited by the seam-lines. The generated panels are typically not developable. However, the panels of a realistic garment must be developable, since each panel must unfold into a 2D sewing pattern. Therefore our sys tem automatically approximates each panel with a developable surface, while keeping them assembled along the s eams. This process allows us to output the corresponding sewing patterns. The last step of our method computes a natural rest shape for the 3D gar ment, including the folds due to the collisions with the body and gravity. The folds are generated using procedu ral modeling of the buckling phenomena observed in real fabric. The result of our algorithm consists of a realistic looking 3D mannequin dressed in the designed garment and the 2D patterns which can be used for distortion free texture mapping. The patterns we create also allow us to sew real replicas of the virtual garments.",
"title": ""
},
{
"docid": "32331ccc9966c44a57be46a474233da6",
"text": "In OBDA an ontology defines a high level global vocabulary for user queries, and such vocabulary is mapped to (typically relational) databases. Extending this paradigm with rules, e.g., expressed in SWRL or RIF, boosts the expressivity of the model and the reasoning ability to take into account features such as recursion and n-ary predicates. We consider evaluation of SPARQL queries under rules with linear recursion, which in principle is carried out by a 2-phase translation to SQL: (1) The SPARQL query, together with the RIF/SWRL rules, and the mappings is translated to a Datalog program, possibly with linear recursion; (2) The Datalog program is converted to SQL by using recursive common table expressions. Since a naive implementation of this translation generates inefficient SQL code, we propose several optimisations to make the approach scalable. We implement and evaluate the techniques presented here in the Ontop system. To the best of our knowledge, this results in the first system supporting all of the following W3C standards: the OWL 2 QL ontology language, R2RML mappings, SWRL rules with linear recursion, and SPARQL queries. The preliminary but encouraging experimental results on the NPD benchmark show that our approach is scalable, provided optimisations are applied.",
"title": ""
},
{
"docid": "b1a69a47cce9ecc51b03d8b4a306e605",
"text": "We use an innovative survey tool to collect management practice data from 732 medium sized manufacturing firms in the US and Europe (France, Germany and the UK). Our measures of managerial best practice are strongly associated with superior firm performance in terms of productivity, profitability, Tobin’s Q, sales growth and survival. We also find significant intercountry variation with US firms on average better managed than European firms, but a much greater intra-country variation with a long tail of extremely badly managed firms. This presents a dilemma – why do so many firms exist with apparently inferior management practices, and why does this vary so much across countries? We find this is due to a combination of: (i) low product market competition and (ii) family firms passing management control down to the eldest sons (primo geniture). European firms in our sample report facing lower levels of competition, and substantially higher levels of primo geniture. These two factors appear to account for around half of the long tail of badly managed firms and half of the average US-Europe gap in management performance.",
"title": ""
},
{
"docid": "53cf85922865609c4a7591bd06679660",
"text": "Speeded visual word naming and lexical decision performance are reported for 2428 words for young adults and healthy older adults. Hierarchical regression techniques were used to investigate the unique predictive variance of phonological features in the onsets, lexical variables (e.g., measures of consistency, frequency, familiarity, neighborhood size, and length), and semantic variables (e.g. imageahility and semantic connectivity). The influence of most variables was highly task dependent, with the results shedding light on recent empirical controversies in the available word recognition literature. Semantic-level variables accounted for unique variance in both speeded naming and lexical decision performance, level with the latter task producing the largest semantic-level effects. Discussion focuses on the utility of large-scale regression studies in providing a complementary approach to the standard factorial designs to investigate visual word recognition.",
"title": ""
},
{
"docid": "3fe42f71b484068b843fedbd3c24ec45",
"text": "We design an Enriched Deep Recurrent Visual Attention Model (EDRAM) — an improved attention-based architecture for multiple object recognition. The proposed model is a fully differentiable unit that can be optimized end-to-end by using Stochastic Gradient Descent (SGD). The Spatial Transformer (ST) was employed as visual attention mechanism which allows to learn the geometric transformation of objects within images. With the combination of the Spatial Transformer and the powerful recurrent architecture, the proposed EDRAM can localize and recognize objects simultaneously. EDRAM has been evaluated on two publicly available datasets including MNIST Cluttered (with 70K cluttered digits) and SVHN (with up to 250k real world images of house numbers). Experiments show that it obtains superior performance as compared with the state-of-the-art models.",
"title": ""
},
{
"docid": "999c1fa41498e8a330dfbd8fdb4c6d6e",
"text": "Wellness is a widely popular concept that is commonly applied to fitness and self-help products or services. Inference of personal wellness-related attributes, such as body mass index or diseases tendency, as well as understanding of global dependencies between wellness attributes and users’ behavior is of crucial importance to various applications in personal and public wellness domains. Meanwhile, the emergence of social media platforms and wearable sensors makes it feasible to perform wellness profiling for users from multiple perspectives. However, research efforts on wellness profiling and integration of social media and sensor data are relatively sparse, and this study represents one of the first attempts in this direction. Specifically, to infer personal wellness attributes, we proposed multi-source individual user profile learning framework named “TweetFit”. “TweetFit” can handle data incompleteness and perform wellness attributes inference from sensor and social media data simultaneously. Our experimental results show that the integration of the data from sensors and multiple social media sources can substantially boost the wellness profiling performance.",
"title": ""
}
] |
scidocsrr
|
5af7eb50ca357c5fabe2a83b5f9f1937
|
Unsupervised Metaphor Identification Using Hierarchical Graph Factorization Clustering
|
[
{
"docid": "2391d0ea67da55155a8bffbf7b9b5776",
"text": "The way we talk about complex and abstract ideas is suffused with metaphor. In five experiments, we explore how these metaphors influence the way that we reason about complex issues and forage for further information about them. We find that even the subtlest instantiation of a metaphor (via a single word) can have a powerful influence over how people attempt to solve social problems like crime and how they gather information to make \"well-informed\" decisions. Interestingly, we find that the influence of the metaphorical framing effect is covert: people do not recognize metaphors as influential in their decisions; instead they point to more \"substantive\" (often numerical) information as the motivation for their problem-solving decision. Metaphors in language appear to instantiate frame-consistent knowledge structures and invite structurally consistent inferences. Far from being mere rhetorical flourishes, metaphors have profound influences on how we conceptualize and act with respect to important societal issues. We find that exposure to even a single metaphor can induce substantial differences in opinion about how to solve social problems: differences that are larger, for example, than pre-existing differences in opinion between Democrats and Republicans.",
"title": ""
}
] |
[
{
"docid": "444a6e64bfc9a76a9ef6d122e746e457",
"text": "When performing tasks, humans are thought to adopt task sets that configure moment-to-moment data processing. Recently developed mixed blocked/event-related designs allow task set-related signals to be extracted in fMRI experiments, including activity related to cues that signal the beginning of a task block, \"set-maintenance\" activity sustained for the duration of a task block, and event-related signals for different trial types. Data were conjointly analyzed from mixed design experiments using ten different tasks and 183 subjects. Dorsal anterior cingulate cortex/medial superior frontal cortex (dACC/msFC) and bilateral anterior insula/frontal operculum (aI/fO) showed reliable start-cue and sustained activations across all or nearly all tasks. These regions also carried the most reliable error-related signals in a subset of tasks, suggesting that the regions form a \"core\" task-set system. Prefrontal regions commonly related to task control carried task-set signals in a smaller subset of tasks and lacked convergence across signal types.",
"title": ""
},
{
"docid": "1572891f4c2ab064c6d6a164f546e7c1",
"text": "BACKGROUND Unexplained gastrointestinal (GI) symptoms and joint hypermobility (JHM) are common in the general population, the latter described as benign joint hypermobility syndrome (BJHS) when associated with musculo-skeletal symptoms. Despite overlapping clinical features, the prevalence of JHM or BJHS in patients with functional gastrointestinal disorders has not been examined. METHODS The incidence of JHM was evaluated in 129 new unselected tertiary referrals (97 female, age range 16-78 years) to a neurogastroenterology clinic using a validated 5-point questionnaire. A rheumatologist further evaluated 25 patients with JHM to determine the presence of BJHS. Groups with or without JHM were compared for presentation, symptoms and outcomes of relevant functional GI tests. KEY RESULTS Sixty-three (49%) patients had evidence of generalized JHM. An unknown aetiology for GI symptoms was significantly more frequent in patients with JHM than in those without (P < 0.0001). The rheumatologist confirmed the clinical impression of JHM in 23 of 25 patients, 17 (68%) of whom were diagnosed with BJHS. Patients with co-existent BJHS and GI symptoms experienced abdominal pain (81%), bloating (57%), nausea (57%), reflux symptoms (48%), vomiting (43%), constipation (38%) and diarrhoea (14%). Twelve of 17 patients presenting with upper GI symptoms had delayed gastric emptying. One case is described in detail. CONCLUSIONS & INFERENCES In a preliminary retrospective study, we have found a high incidence of JHM in patients referred to tertiary neurogastroenterology care with unexplained GI symptoms and in a proportion of these a diagnosis of BJHS is made. Symptoms and functional tests suggest GI dysmotility in a number of these patients. The possibility that a proportion of patients with unexplained GI symptoms and JHM may share a common pathophysiological disorder of connective tissue warrants further investigation.",
"title": ""
},
{
"docid": "63830f82c3acd0e3ff3a12eeed8801e0",
"text": "We have developed a novel approach using source analysis for classifying motor imagery tasks. Two-equivalent-dipoles analysis was proposed to aid classification of motor imagery tasks for brain-computer interface (BCI) applications. By solving the electroencephalography (EEG) inverse problem of single trial data, it is found that the source analysis approach can aid classification of motor imagination of left- or right-hand movement without training. In four human subjects, an averaged accuracy of classification of 80% was achieved. The present study suggests the merits and feasibility of applying EEG inverse solutions to BCI applications from noninvasive EEG recordings.",
"title": ""
},
{
"docid": "db1f678587259ccc036182a5297e6f94",
"text": "There is a growing recognition of the role of the frontal lobes in the aetiology of severe behavioural aberrations. The authors describe a case of Oedipism in a patient who had MRI evidence of frontal lobe encephalomalacia. After discussing the function of the frontal lobes in modulating behaviour the authors suggest that the structural lesion seen on the MRI was in part responsible for the patient's self-destructive act. Treatment issues and the importance of recognizing underlying structural lesions in instances of extreme self-mutilation are discussed.",
"title": ""
},
{
"docid": "87614469fe3251a547fe5795dd255230",
"text": "Automatic detecting and counting vehicles in unsupervised video on highways is a very challenging problem in computer vision with important practical applications such as to monitor activities at traffic intersections for detecting congestions, and then predict the traffic flow which assists in regulating traffic. Manually reviewing the large amount of data they generate is often impractical. The background subtraction and image segmentation based on morphological transformation for tracking and counting vehicles on highways is proposed. This algorithm uses erosion followed by dilation on various frames. Proposed algorithm segments the image by preserving important edges which improves the adaptive background mixture model and makes the system learn faster and more accurately, as well as adapt effectively to changing environments.",
"title": ""
},
{
"docid": "19b96cd469f1b81e45cf11a0530651a8",
"text": "only Painful initially, patient preference No cost Digitation Pilot RCTs 28 Potential risk of premature closure No cost Open wound (fig 4⇓) RCT = randomised controlled trial. For personal use only: See rights and reprints http://www.bmj.com/permissions Subscribe: http://www.bmj.com/subscribe BMJ 2017;356:j475 doi: 10.1136/bmj.j475 (Published 2017 February 21) Page 4 of 6",
"title": ""
},
{
"docid": "3a7f32d3059bd2bceef27bb59b7276b0",
"text": "We present noWorkflow, an open-source tool that systematically and transparently collects provenance from Python scripts, including data about the script execution and how the script evolves over time. During the demo, we will show how noWorkflow collects and manages provenance, as well as how it supports the analysis of computational experiments. We will also encourage attendees to use noWorkflow for their own scripts.",
"title": ""
},
{
"docid": "d4d802b296b210a1957b1a214d9fd9fb",
"text": "Many task domains require robots to interpret and act upon natural language commands which are given by people and which refer to the robot’s physical surroundings. Such interpretation is known variously as the symbol grounding problem (Harnad, 1990), grounded semantics (Feldman et al., 1996) and grounded language acquisition (Nenov and Dyer, 1993, 1994). This problem is challenging because people employ diverse vocabulary and grammar, and because robots have substantial uncertainty about the nature and contents of their surroundings, making it difficult to associate the constitutive language elements (principally noun phrases and spatial relations) of the command text to elements of those surroundings. Symbolic models capture linguistic structure but have not scaled successfully to handle the diverse language produced by untrained users. Existing statistical approaches can better handle diversity, but have not to date modeled complex linguistic structure, limiting achievable accuracy. Recent hybrid approaches have addressed limitations in scaling and complexity, but have not effectively associated linguistic and perceptual features. Our framework, called Generalized Grounding Graphs (G), addresses these issues by defining a probabilistic graphical model dynamically according to the linguistic parse structure of a natural language command. This approach scales effectively, handles linguistic diversity, and enables the system to associate parts of a command with the specific objects, places, and events in the external world to which they refer. We show that robots can learn word meanings and use those learned meanings to robustly follow natural language commands produced by untrained users. We demonstrate our approach for both mobility commands (e.g. route directions like “Go down the hallway through the door”) and mobile manipulation commands (e.g. physical directives like “Pick up the pallet on the truck”) involving a variety of semi-autonomous robotic platforms, including a wheelchair, a microair vehicle, a forklift, and the Willow Garage PR2. The first two authors contributed equally to this paper. 1 ar X iv :1 71 2. 01 09 7v 1 [ cs .C L ] 2 9 N ov 2 01 7",
"title": ""
},
{
"docid": "6b44bd202f964033a2a2433d6322f160",
"text": "We apply convolutional neural networks (CNN) to the problem of image orientation detection in the context of determining the correct orientation (from 0, 90, 180, and 270 degrees) of a consumer photo. The problem is especially important for digitazing analog photographs. We substantially improve on the published state of the art in terms of the performance on one of the standard datasets, and test our system on a more difficult large dataset of consumer photos. We use Guided Backpropagation to obtain insights into how our CNN detects photo orientation, and to explain its mistakes.",
"title": ""
},
{
"docid": "e2e47bef900599b0d7b168e02acf7e88",
"text": "Reflection seismic data from the F3 block in the Dutch North Sea exhibits many largeamplitude reflections at shallow horizons, typically categorized as “brightspots ” (Schroot and Schuttenhelm, 2003), mainly because of their bright appearance. In most cases, these bright reflections show a significant “flatness” contrasting with local structural trends. While flatspots are often easily identified in thick reservoirs, we have often occasionally observed apparent flatspot tuning effects at fluid contacts near reservoir edges and in thin reservoir beds, while only poorly understanding them. We conclude that many of the shallow large-amplitude reflections in block F3 are dominated by flatspots, and we investigate the thin-bed tuning effects that such flatspots cause as they interact with the reflection from the reservoir’s upper boundary. There are two possible effects to be considered: (1) the “wedge-model” tuning effects of the flatspot and overlying brightspots, dimspots, or polarity-reversals; and (2) the stacking effects that result from possible inclusion of post-critical flatspot reflections in these shallow sands. We modeled the effects of these two phenomena for the particular stratigraphic sequence in block F3. Our results suggest that stacking of post-critical flatspot reflections can cause similar large-amplitude but flat reflections, in some cases even causing an interface expected to produce a ‘dimspot’ to appear as a ‘brightspot’. Analysis of NMO stretch and muting shows the likely exclusion of critical offset data in stacked output. If post-critical reflections are included in stacking, unusual results will be observed. In the North Sea case, we conclude the tuning effect was the primary reason causing for the brightness and flatness of these reflections. However, it is still important to note that care should be taken while applying muting on reflections with wide range of incidence angles and the inclusion of critical offset data may cause some spurious features in the stacked section.",
"title": ""
},
{
"docid": "3e5e7e38068da120639c3fcc80227bf8",
"text": "The ferric reducing antioxidant power (FRAP) assay was recently adapted to a microplate format. However, microplate-based FRAP (mFRAP) assays are affected by sample volume and composition. This work describes a calibration process for mFRAP assays which yields data free of volume effects. From the results, the molar absorptivity (ε) for the mFRAP assay was 141,698 M(-1) cm(-1) for gallic acid, 49,328 M(-1) cm(-1) for ascorbic acid, and 21,606 M(-1) cm(-1) for ammonium ferrous sulphate. The significance of ε (M(-1) cm(-1)) is discussed in relation to mFRAP assay sensitivity, minimum detectable concentration, and the dimensionless FRAP-value. Gallic acid showed 6.6 mol of Fe(2+) equivalents compared to 2.3 mol of Fe(+2) equivalents for ascorbic acid. Application of the mFRAP assay to Manuka honey samples (rated 5+, 10+, 15+, and 18+ Unique Manuka Factor; UMF) showed that FRAP values (0.54-0.76 mmol Fe(2+) per 100g honey) were strongly correlated with UMF ratings (R(2)=0.977) and total phenols content (R(2) = 0.982)whilst the UMF rating was correlated with the total phenols (R(2) = 0.999). In conclusion, mFRAP assay results were successfully standardised to yield data corresponding to 1-cm spectrophotometer which is useful for quality assurance purposes. The antioxidant capacity of Manuka honey was found to be directly related to the UMF rating.",
"title": ""
},
{
"docid": "c42234019e52fea100e7bdc0bd437f36",
"text": "Strings of photovoltaic panels have a significantly reduced power output when mismatch between the panels, such as partial shading, occurs since integrated diodes are then partly bypassing the shaded panels. With the implementation of DC-DC converters on panel level, the maximum available power can be extracted from each panel regardless of any shading. In this paper, different concepts of PV panel integrated DC-DC converters are presented, comparative evaluation is given and the converter design process is shown for the buck-boost converter which is identified as the best suited concept. Furthermore, the results of high precision efficiency measurements of an experimental prototype are presented and compared to a commercial MIC.",
"title": ""
},
{
"docid": "840fdcf256a7836e2d4f3e3d0445fe26",
"text": "Educational Data Mining (EDM) is an interdisciplinary ingenuous research area that handles the development of methods to explore data arising in a scholastic fields. Computational approaches used by EDM is to examine scholastic data in order to study educational questions. As a result, it provides intrinsic knowledge of teaching and learning process for effective education planning. This paper conducts a comprehensive study on the recent and relevant studies put through in this field to date. The study focuses on methods of analysing educational data to develop models for improving academic performances and improving institutional effectiveness. This paper accumulates and relegates literature, identifies consequential work and mediates it to computing educators and professional bodies. We identify research that gives well-fortified advise to amend edifying and invigorate the more impuissant segment students in the institution. The results of these studies give insight into techniques for ameliorating pedagogical process, presaging student performance, compare the precision of data mining algorithms, and demonstrate the maturity of open source implements.",
"title": ""
},
{
"docid": "8bd0c280a95f549bd5596fb1f7499e44",
"text": "Mobile devices are becoming ubiquitous. People take pictures via their phone cameras to explore the world on the go. In many cases, they are concerned with the picture-related information. Understanding user intent conveyed by those pictures therefore becomes important. Existing mobile applications employ visual search to connect the captured picture with the physical world. However, they only achieve limited success due to the ambiguity nature of user intent in the picture-one picture usually contains multiple objects. By taking advantage of multitouch interactions on mobile devices, this paper presents a prototype of interactive mobile visual search, named TapTell, to help users formulate their visual intent more conveniently. This kind of search leverages limited yet natural user interactions on the phone to achieve more effective visual search while maintaining a satisfying user experience. We make three contributions in this work. First, we conduct a focus study on the usage patterns and concerned factors for mobile visual search, which in turn leads to the interactive design of expressing visual intent by gesture. Second, we introduce four modes of gesture-based interactions (crop, line, lasso, and tap) and develop a mobile prototype. Third, we perform an in-depth usability evaluation on these different modes, which demonstrates the advantage of interactions and shows that lasso is the most natural and effective interaction mode. We show that TapTell provides a natural user experience to use phone camera and gesture to explore the world. Based on the observation and conclusion, we also suggest some design principles for interactive mobile visual search in the future.",
"title": ""
},
{
"docid": "242cc9922b120057fe9f9066f257fb44",
"text": "ion Yes No Partly Availability / Mobility No No No Fault tolerance Partly No Partly Flexibility / Event based Yes Partly Partly Uncertainty of information No No No",
"title": ""
},
{
"docid": "c757e54a14beec3b4930ad050a16d311",
"text": "The University Class Scheduling Problem (UCSP) is concerned with assigning a number of courses to classrooms taking into consideration constraints like classroom capacities and university regulations. The problem also attempts to optimize the performance criteria and distribute the courses fairly to classrooms depending on the ratio of classroom capacities to course enrollments. The problem is a classical scheduling problem and considered to be NP-complete. It has received some research during the past few years given its wide use in colleges and universities. Several formulations and algorithms have been proposed to solve scheduling problems, most of which are based on local search techniques. In this paper, we propose a complete approach using integer linear programming (ILP) to solve the problem. The ILP model of interest is developed and solved using the three advanced ILP solvers based on generic algorithms and Boolean Satisfiability (SAT) techniques. SAT has been heavily researched in the past few years and has lead to the development of powerful 0-1 ILP solvers that can compete with the best available generic ILP solvers. Experimental results indicate that the proposed model is tractable for reasonable-sized UCSP problems. Index Terms — University Class Scheduling, Optimization, Integer Linear Programming (ILP), Boolean Satisfiability.",
"title": ""
},
{
"docid": "db597c88e71a8397b81216282d394623",
"text": "In many real applications, graph data is subject to uncertainties due to incompleteness and imprecision of data. Mining such uncertain graph data is semantically different from and computationally more challenging than mining conventional exact graph data. This paper investigates the problem of mining uncertain graph data and especially focuses on mining frequent subgraph patterns on an uncertain graph database. A novel model of uncertain graphs is presented, and the frequent subgraph pattern mining problem is formalized by introducing a new measure, called expected support. This problem is proved to be NP-hard. An approximate mining algorithm is proposed to find a set of approximately frequent subgraph patterns by allowing an error tolerance on expected supports of discovered subgraph patterns. The algorithm uses efficient methods to determine whether a subgraph pattern can be output or not and a new pruning method to reduce the complexity of examining subgraph patterns. Analytical and experimental results show that the algorithm is very efficient, accurate, and scalable for large uncertain graph databases. To the best of our knowledge, this paper is the first one to investigate the problem of mining frequent subgraph patterns from uncertain graph data.",
"title": ""
},
{
"docid": "2ee0eb9ab9d6c5b9bdad02b9f95c8691",
"text": "Aim: To describe lower extremity injuries for badminton in New Zealand. Methods: Lower limb badminton injuries that resulted in claims accepted by the national insurance company Accident Compensation Corporation (ACC) in New Zealand between 2006 and 2011 were reviewed. Results: The estimated national injury incidence for badminton injuries in New Zealand from 2006 to 2011 was 0.66%. There were 1909 lower limb badminton injury claims which cost NZ$2,014,337 (NZ$ value over 2006 to 2011). The age-bands frequently injured were 10–19 (22%), 40–49 (22%), 30–39 (14%) and 50–59 (13%) years. Sixty five percent of lower limb injuries were knee ligament sprains/tears. Males sustained more cruciate ligament sprains than females (75 vs. 39). Movements involving turning, changing direction, shifting weight, pivoting or twisting were responsible for 34% of lower extremity injuries. Conclusion: The knee was most frequently OPEN ACCESS",
"title": ""
},
{
"docid": "3dfd31873c3d13e8e55a9e0c5bc6ed7c",
"text": "Apache Spark is an open source distributed data processing platform that uses distributed memory abstraction to process large volume of data efficiently. However, performance of a particular job on Apache Spark platform can vary significantly depending on the input data type and size, design and implementation of the algorithm, and computing capability, making it extremely difficult to predict the performance metric of a job such as execution time, memory footprint, and I/O cost. To address this challenge, in this paper, we present a simulation driven prediction model that can predict job performance with high accuracy for Apache Spark platform. Specifically, as Apache spark jobs are often consist of multiple sequential stages, the presented prediction model simulates the execution of the actual job by using only a fraction of the input data, and collect execution traces (e.g., I/O overhead, memory consumption, execution time) to predict job performance for each execution stage individually. We evaluated our prediction framework using four real-life applications on a 13 node cluster, and experimental results show that the model can achieve high prediction accuracy.",
"title": ""
}
] |
scidocsrr
|
5674cba6f2d28e07ebad6a400adf53b2
|
Bakar Kiasan: Flexible Contract Checking for Critical Systems Using Symbolic Execution
|
[
{
"docid": "421cb7fb80371c835a5d314455fb077c",
"text": "This paper explains, in an introductory fashion, the method of specifying the correct behavior of a program by the use of input/output assertions and describes one method for showing that the program is correct with respect to those assertions. An initial assertion characterizes conditions expected to be true upon entry to the program and a final assertion characterizes conditions expected to be true upon exit from the program. When a program contains no branches, a technique known as symbolic execution can be used to show that the truth of the initial assertion upon entry guarantees the truth of the final assertion upon exit. More generally, for a program with branches one can define a symbolic execution tree. If there is an upper bound on the number of times each loop in such a program may be executed, a proof of correctness can be given by a simple traversal of the (finite) symbolic execution tree. However, for most programs, no fixed bound on the number of times each loop is executed exists and the corresponding symbolic execution trees are infinite. In order to prove the correctness of such programs, a more general assertion structure must be provided. The symbolic execution tree of such programs must be traversed inductively rather than explicitly. This leads naturally to the use of additional assertions which are called \"inductive assertions.\"",
"title": ""
}
] |
[
{
"docid": "557743556f6dbbecde1a28a90f9a2d7f",
"text": "This paper presents a new frequency-adaptive synchronization method for grid-connected power converters which allows estimating not only the positive- and negative-sequence components of the power signal at the fundamental frequency, but also other sequence components at higher frequencies. The proposed system is called the MSOGI-FLL since it is based on a decoupled network consisting of multiple second order generalized integrators (MSOGI) which are frequency-adaptive by using a frequency-locked loop (FLL). In this paper, the MSOGI-FLL is analyzed and its performance is evaluated by both simulations and experiments.",
"title": ""
},
{
"docid": "38c178900bac4d5377f29d3ffaf944ca",
"text": "The early 21st century is witnessing a rapid advance in social robots. From vacuum cleaning robots (like the Roomba), to entertainment robots (like the Pleo), to robot pets (like KittyCat), to robot dolls (like Baby Alive), to therapy robots (like Paro), and many others, social robots are rapidly finding applications in households and elder care settings. In 2006, the number of service robots world-wide alone outnum-bered industrial robots by a factor of four and this gap is expected to widen to a factor of six by 2010, only fueled by ambitious goals like those of South Korea to put one robot into each household by the year 2013 or by the Japanese expectation that the robot industry will be worth ten times the present value in 2025 (Gates, 2007). From these expectations alone, it should be clear that social robots will soon become an integral part of human societies, very much like computers and the Internet in the last decade. In fact, using computer technology as an analogy, it seems likely that social robotics will follow a similar trajectory: once social robots have been fully embraced by societies, life without them will become inconceivable. As a consequence of this societal penetration, social robots will also enter our personal lives, and that fact alone requires us to reflect on what exactly happens in our interactions with these machines. For social robots are specifically designed for personal interactions that will involve human emotions and feelings: “A sociable robot is able to communicate and interact with us, understand and even relate to us, in a personal way. It is a robot that is socially intelligent in a human-like way.” (Breazeal, 2002) And while social robots can have benefits for humans (e.g., health benefits as demonstrated with Paro (Shibata..., 2005)), it is also possible that they could inflict harm, emotional harm, that is. And exactly herein lies the hitherto underestimated danger: the potential for humans’ emotional dependence on social robots. As we will see shortly, such emotional dependence on social robots is different from other human dependencies on technology (e.g., different both in kind and quality from depending on one’s cell phone, wrist watch, or PDA). To be able to understand the difference and the potential ramifications of building complex social robots that are freely deployed in human societies, we have to understand how social robots are different from other related technologies and how they, as a result, can affect humans at a very basic level.",
"title": ""
},
{
"docid": "919ee3a62e28c1915d0be556a2723688",
"text": "Bayesian data analysis includes but is not limited to Bayesian inference (Gelman et al., 2003; Kerman, 2006a). Here, we take Bayesian inference to refer to posterior inference (typically, the simulation of random draws from the posterior distribution) given a fixed model and data. Bayesian data analysis takes Bayesian inference as a starting point but also includes fitting a model to different datasets, altering a model, performing inferential and predictive summaries (including prior or posterior predictive checks), and validation of the software used to fit the model. The most general programs currently available for Bayesian inference are WinBUGS (BUGS Project, 2004) and OpenBugs, which can be accessed from R using the packages R2WinBUGS (Sturtz et al., 2005) and BRugs. In addition, various R packages exist that directly fit particular Bayesian models (e.g. MCMCPack, Martin and Quinn (2005)). In this note, we describe our own entry in the “inference engine” sweepstakes but, perhaps more importantly, describe the ongoing development of some R packages that perform other aspects of Bayesian data analysis.",
"title": ""
},
{
"docid": "d4f6282ba372801f1403541daf97d336",
"text": "This study offers an in-depth analysis of four rumors that spread through Twitter after the 2013 Boston Marathon Bombings. Through qualitative and visual analysis, we describe each rumor's origins, changes over time, and relationships between different types of rumoring behavior. We identify several quantitative measures-including temporal progression, domain diversity, lexical diversity and geolocation features-that constitute a multi-dimensional signature for each rumor, and provide evidence supporting the existence of different rumor types. Ultimately these signatures enhance our understanding of how different kinds of rumors propagate online during crisis events. In constructing these signatures, this research demonstrates and documents an emerging method for deeply and recursively integrating qualitative and quantitative methods for analysis of social media trace data.",
"title": ""
},
{
"docid": "a1c2074b45adacc12437f60cbb491db1",
"text": "Building extraction from remotely sensed imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases and several other geospatial applications. Several published contributions are dedicated to the applications of Deep Convolutional Neural Network (DCNN) for building extraction using aerial/satellite imagery exists; however, in all these contributions a good accuracy is always paid at the price of extremely complex and large network architectures. In this paper, we present an enhanced Fully Convolutional Network (FCN) framework especially molded for building extraction of remotely sensed images by applying Conditional Random Field (CRF). The main purpose here is to propose a framework which balances maximum accuracy with less network complexity. The modern activation function called Exponential Linear Unit (ELU) is applied to improve the performance of the Fully Convolutional Network (FCN), resulting in more, yet accurate building prediction. To further reduce the noise (false classified buildings) and to sharpen the boundary of the buildings, a post processing CRF is added at the end of the adopted Convolutional Neural Network (CNN) framework. The experiments were conducted on Massachusetts building aerial imagery. The results show that our proposed framework outperformed FCN baseline, which is the existing baseline framework for semantic segmentation, in term of performance measure, the F1-score and Intersection Over Union (IoU) measure. Additionally, the proposed method stood superior to the pre-existing classifier for building extraction using the same dataset in terms of performance measure and network complexity at once.",
"title": ""
},
{
"docid": "7f2857c1bd23c7114d58c290f21bf7bd",
"text": "Many contemporary organizations are placing a greater emphasis on their performance management systems as a means of generating higher levels of job performance. We suggest that producing performance increments may be best achieved by orienting the performance management system to promote employee engagement. To this end, we describe a new approach to the performance management process that includes employee engagement and the key drivers of employee engagement at each stage. We present a model of engagement management that incorporates the main ideas of the paper and suggests a new perspective for thinking about how to foster and manage employee engagement to achieve high levels of job",
"title": ""
},
{
"docid": "3f37793db0be4f874dd073972f40e1c7",
"text": "The matching properties of the threshold voltage, substrate factor and current factor of MOS transistors have been analysed and measured. Improvements of the existing theory are given, as well as extensions for long distance matching and rotation of devices. The matching results have been verified by measurements and calculations on a band-gap reference circuit.",
"title": ""
},
{
"docid": "6ac6e57937fa3d2a8e319ce17d960c34",
"text": "In various application domains there is a desire to compare process models, e.g., to relate an organization-specific process model to a reference model, to find a web service matching some desired service description, or to compare some normative process model with a process model discovered using process mining techniques. Although many researchers have worked on different notions of equivalence (e.g., trace equivalence, bisimulation, branching bisimulation, etc.), most of the existing notions are not very useful in this context. First of all, most equivalence notions result in a binary answer (i.e., two processes are equivalent or not). This is not very helpful, because, in real-life applications, one needs to differentiate between slightly different models and completely different models. Second, not all parts of a process model are equally important. There may be parts of the process model that are rarely activated while other parts are executed for most process instances. Clearly, these should be considered differently. To address these problems, this paper proposes a completely new way of comparing process models. Rather than directly comparing two models, the process models are compared with respect to some typical behavior. This way we are able to avoid the two problems. Although the results are presented in the context of Petri nets, the approach can be applied to any process modeling language with executable semantics.",
"title": ""
},
{
"docid": "8217042c3779267570276664dc960612",
"text": "We introduce a taxonomy that reflects the theoretical contribution of empirical articles along two dimensions: theory building and theory testing. We used that taxonomy to track trends in the theoretical contributions offered by articles over the past five decades. Results based on data from a sample of 74 issues of the Academy of Management Journal reveal upward trends in theory building and testing over time. In addition, the levels of theory building and testing within articles are significant predictors of citation rates. In particular, articles rated moderate to high on both dimensions enjoyed the highest levels of citations.",
"title": ""
},
{
"docid": "f6362a62b69999bdc3d9f681b68842fc",
"text": "Women with breast cancer, whether screen detected or symptomatic, have both mammography and ultrasound for initial imaging assessment. Unlike X-ray or magnetic resonance, which produce an image of the whole breast, ultrasound provides comparatively limited 2D or 3D views located around the lesions. Combining different modalities is an essential task for accurate diagnosis and simulating ultrasound images based on whole breast data could be a way toward correlating different information about the same lesion. Very few studies have dealt with such a simulation framework since the breast undergoes large scale deformation between the prone position of magnetic resonance imaging and the largely supine or lateral position of ultrasound. We present a framework for the realistic simulation of 3D ultrasound images based on prone magnetic resonance images from which a supine position is generated using a biomechanical model. The simulation parameters are derived from a real clinical infrastructure and from transducers that are used for routine scans, leading to highly realistic ultrasound images of any region of the breast.",
"title": ""
},
{
"docid": "323e7669476aab93735a655e54f6a4a9",
"text": "Monte Carlo Tree Search is a method that depends on decision theory in taking actions/ decisions, when other traditional methods failed on doing so, due to lots of factors such as uncertainty, huge problem domain, or lack in the knowledge base of the problem. Before using this method, several problems remained unsolved including some famous AI games like GO. This method represents a revolutionary technique where a Monte Carlo method has been applied to search tree technique, and proved to be successful in areas thought for a long time as impossible to be solved. This paper highlights some important aspects of this method, and presents some areas where it worked well, as well as enhancements to make it even more powerful.",
"title": ""
},
{
"docid": "f7a6a9582304ef27a390d37ae79f94dc",
"text": "Traditional design techniques for FPGAs are based on using hardware description languages, with functional and post-place-and-route simulation as a means to check design correctness and remove detected errors. With large complexity of things to be designed it is necessary to introduce new design approaches that will increase the level of abstraction while maintaining the necessary efficiency of a computation performed in hardware that we are used to today. This paper presents one such methodology that builds upon existing research in multithreading, object composability and encapsulation, partial runtime reconfiguration, and self adaptation. The methodology is based on currently available FPGA design tools. The efficiency of the methodology is evaluated on basic vector and matrix operations.",
"title": ""
},
{
"docid": "27b2148c05febeb1051c1d1229a397d6",
"text": "Modern database management systems essentially solve the problem of accessing and managing large volumes of related data on a single platform, or on a cluster of tightly-coupled platforms. But many problems remain when two or more databases need to work together. A fundamental problem is raised by semantic heterogeneity the fact that data duplicated across multiple databases is represented differently in the underlying database schemas. This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them. The tutorial considers the following topics: (1) representative architectures for supporting database interoperation; (2) notions for comparing the “information capacity” of database schemas; (3) providing support for read-only integrated views of data, including the .virtual and materialized approaches; (4) providing support for read-write integrated views of data, including the issue of workflows on heterogeneous databases; and (5) research and tools for accessing and effectively using meta-data, e.g., to identify the relationships between schemas of different databases.",
"title": ""
},
{
"docid": "07b889a2b1a18bc1f91021f3b889474a",
"text": "In this study, we show a correlation between electrical properties (relative permittivity-εr and conductivity-σ) of blood plasma and plasma glucose concentration. In order to formulate that correlation, we performed electrical property measurements on blood samples collected from 10 adults between the ages of 18 and 40 at University of Alabama Birmingham (UAB) Children's hospital. The measurements are conducted between 500 MHz and 20 GHz band. Using the data obtained from measurements, we developed a single-pole Cole-Cole model for εr and σ as a function of plasma blood glucose concentration. To provide an application, we designed a microstrip patch antenna that can be used to predict the glucose concentration within a given plasma sample. Simulation results regarding antenna design and its performance are also presented.",
"title": ""
},
{
"docid": "404a662b55baea9402d449fae6192424",
"text": "Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.",
"title": ""
},
{
"docid": "99bac31f4d0df12cf25f081c96d9a81a",
"text": "Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architecture to operate well, however, the residual architecture has been proved to be diverse and redundant, which may leads to low-efficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. Re-scaling the value for each channel in this structure will be determined by the residual and identity mappings jointly, and this design enables us to expand the meaning of channel relationship modeling in residual blocks. Modeling of the competition between residual and identity mappings cause the identity flow to control the complement of the residual feature maps for itself. Furthermore, we design a novel inner-imaging competitive SE block to shrink the consumption and re-image the global features of intermediate network structure, by using the inner-imaging mechanism, we can model the channel-wise relations with convolution in spatial. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method can challenge state-of-the-art results.",
"title": ""
},
{
"docid": "ec19face14810817bfd824d70a11c746",
"text": "The article deals with various ways of memristor modeling and simulation in the MATLAB&Simulink environment. Recently used and published mathematical memristor model serves as a base, regarding all known features of its behavior. Three different approaches in the MATLAB&Simulink system are used for the differential and other equations formulation. The first one employs the standard system core offer for the Ordinary Differential Equations solutions (ODE) in the form of an m-file. The second approach is the model construction in Simulink environment. The third approach employs so-called physical modeling using the built-in Simscape system. The output data are the basic memristor characteristics and appropriate time courses. The features of all models are discussed, especially regarding the computer simulation. Possible problems that may occur during modeling are pointed. Key-Words: memristor, modeling and simulation, MATLAB, Simulink, Simscape, physical model",
"title": ""
},
{
"docid": "83e897a37aca4c349b4a910c9c0787f4",
"text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.",
"title": ""
},
{
"docid": "4d45fa7a0ff9f4c0c15bf32dd05ac8a7",
"text": "This paper presents a sub-nanosecond pulse generator intended for a transmitter of through-the-wall surveillance radar. The basis of the generator is a step recovery diode, which is used to sharpen the slow rise time edge of an input driving waveform. A unique pulse shaping technique is then applied to form an ultra-wideband Gaussian pulse. A simple transistor switching circuit was used to drive this Gaussian pulser, which transforms a TTL trigger signal to a driving pulse with the timing and amplitude parameters required by the step recovery diode. The maximum pulse repetition frequency of the generator is 20 MHz. High amplitude pulses are advantageous for obtaining a good radar range, especially when penetrating thick lossy walls. In order to increase the output power of the transmitter, the outputs of two identical generators were connected in parallel. The measurement results are presented, which show waveforms of the generated Gaussian pulses approximately 180 ps in width and over 32 V in amplitude.",
"title": ""
},
{
"docid": "0c479abc72634e6d76b787f130a8ea1f",
"text": "While intelligent transportation systems come in many shapes and sizes, arguably the most transformational realization will be the autonomous vehicle. As such vehicles become commercially available in the coming years, first on dedicated roads and under specific conditions, and later on all public roads at all times, a phase transition will occur. Once a sufficient number of autonomous vehicles is deployed, the opportunity for explicit coordination appears. This article treats this challenging network control problem, which lies at the intersection of control theory, signal processing, and wireless communication. We provide an overview of the state of the art, while at the same time highlighting key research directions for the coming decades.",
"title": ""
}
] |
scidocsrr
|
fd7222395c3dd98fc311db10d3b82cd8
|
An Industrial Application of Mutation Testing: Lessons, Challenges, and Research Directions
|
[
{
"docid": "ffcc5b512d780dc13562f450e21e67de",
"text": "Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research. Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source pro- grams. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program’s version con- trol system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites. This framework also provides a high-level interface to common tasks in software testing research, making it easy to con- duct and reproduce empirical studies. Defects4J is publicly available at http://defects4j.org.",
"title": ""
}
] |
[
{
"docid": "963e2e56265d07b33cfa009434bce943",
"text": "In today’s modern communication industry, antennas are the most important components required to create a communication link. Microstrip antennas are the most suited for aerospace and mobile applications because of their low profile, light weight and low power handling capacity. They can be designed in a variety of shapes in order to obtain enhanced gain and bandwidth, dual band and circular polarization to even ultra wideband operation. The thesis provides a detailed study of the design of probe-fed Rectangular Microstrip Patch Antenna to facilitate dual polarized, dual band operation. The design parameters of the antenna have been calculated using the transmission line model and the cavity model. For the simulation process IE3D electromagnetic software which is based on method of moment (MOM) has been used. The effect of antenna dimensions and substrate parameters on the performance of antenna have been discussed. The antenna has been designed with embedded spur lines and integrated reactive loading for dual band operation with better impedance matching. The designed antenna can be operated at two frequency band with center frequencies 7.62 (with a bandwidth of 11.68%) and 9.37 GHz (with a bandwidth of 9.83%). A cross slot of unequal length has been inserted so as to have dual polarization. This results in a minor shift in the central frequencies of the two bands to 7.81 and 9.28 GHz. At a frequency of 9.16 GHz, circular polarization has been obtained. So the dual band and dual frequency operation has successfully incorporated into a single patch.",
"title": ""
},
{
"docid": "7c8f38386322d9095b6950c4f31515a0",
"text": "Due to the limited amount of training samples, finetuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features for online applications. We regard a CNN as an ensemble with each channel of the output feature map as an individual base learner. Each base learner is trained using different loss criterions to reduce correlation and avoid over-training. To achieve the best ensemble online, all the base learners are sequentially sampled into the ensemble via important sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serves as a regularization to enforce each base learner to focus on different input features. The proposed online training method is applied to visual tracking problem by transferring deep features trained on massive annotated visual data and is shown to significantly improve tracking performance. Extensive experiments are conducted on two challenging benchmark data set and demonstrate that our tracking algorithm can outperform state-of-the-art methods with a considerable margin.",
"title": ""
},
{
"docid": "9986073424bf18814ef0e5affd15d8e3",
"text": "This paper presents an energy-efficient feature extraction accelerator design aimed at visual navigation. The hardware-oriented algorithmic modifications such as a circular-shaped sampling region and unified description are proposed to minimize area and energy consumption while maintaining feature extraction quality. A matched-throughput accelerator employs fully-unrolled filters and single-stream descriptor enabled by algorithm-architecture co-optimization, which requires lower clock frequency for the given throughput requirement and reduces hardware cost of description processing elements. Due to the large number of FIFO blocks, a robust low-power FIFO architecture for the ultra-low voltage (ULV) regime is also proposed. This approach leverages shift-latch delay elements and balanced-leakage readout technique to achieve 62% energy savings and 37% delay reduction. We apply these techniques to a feature extraction accelerator that can process 30 fps VGA video in real time and is fabricated in 28 nm LP CMOS technology. The design consumes 2.7 mW with a clock frequency of 27 MHz at Vdd = 470 mV, providing 3.5× better energy efficiency than previous state-of-the-art while extracting features from entire image.",
"title": ""
},
{
"docid": "37a0c6ac688c7d7f2dd622ebbe3ec184",
"text": "Prior research shows that directly applying phrase-based SMT on lexical tokens to migrate Java to C# produces much semantically incorrect code. A key limitation is the use of sequences in phrase-based SMT to model and translate source code with well-formed structures. We propose mppSMT, a divide-and-conquer technique to address that with novel training and migration algorithms using phrase-based SMT in three phases. First, mppSMT treats a program as a sequence of syntactic units and maps/translates such sequences in two languages to one another. Second, with a syntax-directed fashion, it deals with the tokens within syntactic units by encoding them with semantic symbols to represent their data and token types. This encoding via semantic symbols helps better migration of API usages. Third, the lexical tokens corresponding to each sememe are mapped or migrated. The resulting sequences of tokens are merged together to form the final migrated code. Such divide-and-conquer and syntax-direction strategies enable phrase-based SMT to adapt well to syntactical structures in source code, thus, improving migration accuracy. Our empirical evaluation on several real-world systems shows that 84.8 -- 97.9% and 70 -- 83% of the migrated methods are syntactically and semantically correct, respectively. 26.3 -- 51.2% of total migrated methods are exactly matched to the human-written C# code in the oracle. Compared to Java2CSharp, a rule-based migration tool, it achieves higher semantic accuracy from 6.6 -- 57.7% relatively. Importantly, it does not require manual labeling for training data or manual definition of rules.",
"title": ""
},
{
"docid": "cf2a2d940f45a35404abdf47961b140b",
"text": "This paper discusses a fuzzy model for multi-level human emotions recognition by computer systems through keyboard keystrokes, mouse and touch-screen interactions. This model can also be used to detect the other possible emotions at the time of recognition. Accuracy measurements of human emotions by the fuzzy model are discussed through two methods; the first is accuracy analysis and the second is false positive rate analysis. This fuzzy model detects more emotions, but on the other hand, for some of emotions, a lower accuracy was obtained with the comparison with the non-fuzzy human emotions detection methods. This system was trained and tested by Support Vector Machine (SVM) to recognize the users’ emotions. Overall, this model represents a closer similarity between human brain detection of emotions and computer systems. Key-Words: fuzzy emotions, multi-level emotions, human emotion recognition, human computer interaction.",
"title": ""
},
{
"docid": "777e3818dfeb25358dedd6f740e20411",
"text": "Chronic obstructive pulmonary, pneumonia, asthma, tuberculosis, lung cancer diseases are the most important chest diseases. These chest diseases are important health problems in the world. In this study, a comparative chest diseases diagnosis was realized by using multilayer, probabilistic, learning vector quantization, and generalized regression neural networks. The chest diseases dataset were prepared by using patient’s epicrisis reports from a chest diseases hospital’s database. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "771100d86f7bebba569f84e6bbb0b89f",
"text": "The business model concept is characterized by numerous fields of application which are promising in business practice. Consequently, research on business models has attracted increasing attention in the scientific world. However, for a successful utilization, the widely-criticized lack of theoretical consensus in this field of research has to be overcome. Thus, this paper conducted a comprehensive and up-to-date literature analysis examining 30 relevant literature sources focusing mainly on business model research. To achieve this, the analysis was based on a classification framework containing 17 evaluation criteria. Hereby, a systematic and objective penetration of the research area could be achieved. Moreover, existing research gaps as well as the most important fields to be addressed in future research could be revealed.",
"title": ""
},
{
"docid": "a3699449c25183625c30f7e3db1f0053",
"text": "There are cultural barriers to collaborative effort between literary scholars and computational linguists. In this work, we discuss some of these problems in the context of our ongoing research project, an exploration of free indirect discourse in Virginia Woolf’s To The Lighthouse, ultimately arguing that the advantages of taking each field out of its “comfort zone” justifies the inherent difficulties.",
"title": ""
},
{
"docid": "34d7f848427052a1fc5f565a24f628ec",
"text": "This is the solutions manual (web-edition) for the book Pattern Recognition and Machine Learning (PRML; published by Springer in 2006). It contains solutions to the www exercises. This release was created September 8, 2009. Future releases with corrections to errors will be published on the PRML web-site (see below). The authors would like to express their gratitude to the various people who have provided feedback on earlier releases of this document. In particular, the \" Bishop Reading Group \" , held in the Visual Geometry Group at the University of Oxford provided valuable comments and suggestions. The authors welcome all comments, questions and suggestions about the solutions as well as reports on (potential) errors in text or formulae in this document; please send any such feedback to",
"title": ""
},
{
"docid": "122fe53f1e745480837a23b68e62540a",
"text": "The images degraded by fog suffer from poor contrast. In order to remove fog effect, a Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method is presented in this paper. This method establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray-level. It can limit the noise while enhancing the image contrast. In our method, firstly, the original image is converted from RGB to HSI. Secondly, the intensity component of the HSI image is processed by CLAHE. Finally, the HSI image is converted back to RGB image. To evaluate the effectiveness of the proposed method, we experiment with a color image degraded by fog and apply the edge detection to the image. The results show that our method is effective in comparison with traditional methods. KeywordsCLAHE, fog, degraded, remove, color image, HSI, edge detection.",
"title": ""
},
{
"docid": "9f6da52c8ea3ba605ecbed71e020d31a",
"text": "With the exponential growth of information being transmitted as a result of various networks, the issues related to providing security to transmit information have considerably increased. Mathematical models were proposed to consolidate the data being transmitted and to protect the same from being tampered with. Work was carried out on the application of 1D and 2D cellular automata (CA) rules for data encryption and decryption in cryptography. A lot more work needs to be done to develop suitable algorithms and 3D CA rules for encryption and description of 3D chaotic information systems. Suitable coding for the algorithms are developed and the results are evaluated for the performance of the algorithms. Here 3D cellular automata encryption and decryption algorithms are used to provide security of data by arranging plain texts and images into layers of cellular automata by using the cellular automata neighbourhood system. This has resulted in highest order of security for transmitted data.",
"title": ""
},
{
"docid": "f4639c2523687aa0d5bfdd840df9cfa4",
"text": "This established database of manufacturers and thei r design specification, determined the condition and design of the vehicle based on the perception and preference of jeepney drivers and passengers, and compared the pa rts of the jeepney vehicle using Philippine National Standards and international sta ndards. The study revealed that most jeepney manufacturing firms have varied specificati ons with regard to the capacity, dimensions and weight of the vehicle and similar sp ecification on the parts and equipment of the jeepney vehicle. Most of the jeepney drivers an d passengers want to improve, change and standardize the parts of the jeepney vehicle. The p arts of jeepney vehicles have similar specifications compared to the 4 out of 5 mandatory PNS and 22 out 32 UNECE Regulations applicable for jeepney vehicle. It is concluded tha t t e jeepney vehicle can be standardized in terms of design, safety and environmental concerns.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "df7e2e6431ebdaf41eea0b647106ede5",
"text": "We present a novel approach to automatic metaphor identification, that discovers both metaphorical associations and metaphorical expressions in unrestricted text. Our system first performs hierarchical graph factorization clustering (HGFC) of nouns and then searches the resulting graph for metaphorical connections between concepts. It then makes use of the salient features of the metaphorically connected clusters to identify the actual metaphorical expressions. In contrast to previous work, our method is fully unsupervised. Despite this fact, it operates with an encouraging precision (0.69) and recall (0.61). Our approach is also the first one in NLP to exploit the cognitive findings on the differences in organisation of abstract and concrete concepts in the human brain.",
"title": ""
},
{
"docid": "e1050f3c38f0b49893da4dd7722aff71",
"text": "The Berkeley lower extremity exoskeleton (BLEEX) is a load-carrying and energetically autonomous human exoskeleton that, in this first generation prototype, carries up to a 34 kg (75 Ib) payload for the pilot and allows the pilot to walk at up to 1.3 m/s (2.9 mph). This article focuses on the human-in-the-loop control scheme and the novel ring-based networked control architecture (ExoNET) that together enable BLEEX to support payload while safely moving in concert with the human pilot. The BLEEX sensitivity amplification control algorithm proposed here increases the closed loop system sensitivity to its wearer's forces and torques without any measurement from the wearer (such as force, position, or electromyogram signal). The tradeoffs between not having sensors to measure human variables, the need for dynamic model accuracy, and robustness to parameter uncertainty are described. ExoNET provides the physical network on which the BLEEX control algorithm runs. The ExoNET control network guarantees strict determinism, optimized data transfer for small data sizes, and flexibility in configuration. Its features and application on BLEEX are described",
"title": ""
},
{
"docid": "d8b3eb944d373741747eb840a18a490b",
"text": "Natural scenes contain large amounts of geometry, such as hundreds of thousands or even millions of tree leaves and grass blades. Subtle lighting effects present in such environments usually include a significant amount of occlusion effects and lighting variation. These effects are important for realistic renderings of such natural environments; however, plausible lighting and full global illumination computation come at prohibitive costs especially for interactive viewing. As a solution to this problem, we present a simple approximation to integrated visibility over a hemisphere (ambient occlusion) that allows interactive rendering of complex and dynamic scenes. Based on a set of simple assumptions, we show that our method allows the rendering of plausible variation in lighting at modest additional computation and little or no precomputation, for complex and dynamic scenes.",
"title": ""
},
{
"docid": "4add7de7ed94bc100de8119ebd74967e",
"text": "Wireless signal strength is susceptible to the phenomena of interference, jumping, and instability, which often appear in the positioning results based on Wi-Fi field strength fingerprint database technology for indoor positioning. Therefore, a Wi-Fi and PDR (pedestrian dead reckoning) real-time fusion scheme is proposed in this paper to perform fusing calculation by adaptively determining the dynamic noise of a filtering system according to pedestrian movement (straight or turning), which can effectively restrain the jumping or accumulation phenomena of wireless positioning and the PDR error accumulation problem. Wi-Fi fingerprint matching typically requires a quite high computational burden: To reduce the computational complexity of this step, the affinity propagation clustering algorithm is adopted to cluster the fingerprint database and integrate the information of the position domain and signal domain of respective points. An experiment performed in a fourth-floor corridor at the School of Environment and Spatial Informatics, China University of Mining and Technology, shows that the traverse points of the clustered positioning system decrease by 65%–80%, which greatly improves the time efficiency. In terms of positioning accuracy, the average error is 4.09 m through the Wi-Fi positioning method. However, the positioning error can be reduced to 2.32 m after integration of the PDR algorithm with the adaptive noise extended Kalman filter (EKF).",
"title": ""
},
{
"docid": "dd27b4cf6e0c9534f7a0b6e5e9e04b62",
"text": "We study the problem of active learning for multi-class classification on large-scale datasets. In this setting, the existing active learning approaches built upon uncertainty measures are ineffective for discovering unknown regions, and those based on expected error reduction are inefficient owing to their huge time costs. To overcome the above issues, this paper proposes a novel query selection criterion called approximated error reduction (AER). In AER, the error reduction of each candidate is estimated based on an expected impact over all datapoints and an approximated ratio between the error reduction and the impact over its nearby datapoints. In particular, we utilize hierarchical anchor graphs to construct the candidate set as well as the nearby datapoint sets of these candidates. The benefit of this strategy is that it enables a hierarchical expansion of candidates with the increase of labels, and allows us to further accelerate the AER estimation. We finally introduce AER into an efficient semi-supervised classifier for scalable active learning. Experiments on publicly available datasets with the sizes varying from thousands to millions demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "1c931bd85e8985fcdabc0f7b20a1b2ac",
"text": "This paper presents a power factor correction (PFC)-based bridgeless Luo (BL-Luo) converter-fed brushless dc (BLDC) motor drive. A single voltage sensor is used for the speed control of the BLDC motor and PFC at ac mains. The voltage follower control is used for a BL-Luo converter operating in discontinuous inductor current mode. The speed of the BLDC motor is controlled by an approach of variable dc-link voltage, which allows a low-frequency switching of the voltage source inverter for the electronic commutation of the BLDC motor, thus offering reduced switching losses. The proposed BLDC motor drive is designed to operate over a wide range of speed control with an improved power quality at ac mains. The power quality indices thus obtained are under the recommended limits of IEC 61000-3-2. The performance of the proposed drive is validated with test results obtained on a developed prototype of the drive.",
"title": ""
},
{
"docid": "64e573006e2fb142dba1b757b1e4f20d",
"text": "Online learning algorithms often have to operate in the presence of concept drift (i.e., the concepts to be learned can change with time). This paper presents a new categorization for concept drift, separating drifts according to different criteria into mutually exclusive and nonheterogeneous categories. Moreover, although ensembles of learning machines have been used to learn in the presence of concept drift, there has been no deep study of why they can be helpful for that and which of their features can contribute or not for that. As diversity is one of these features, we present a diversity analysis in the presence of different types of drifts. We show that, before the drift, ensembles with less diversity obtain lower test errors. On the other hand, it is a good strategy to maintain highly diverse ensembles to obtain lower test errors shortly after the drift independent on the type of drift, even though high diversity is more important for more severe drifts. Longer after the drift, high diversity becomes less important. Diversity by itself can help to reduce the initial increase in error caused by a drift, but does not provide the faster recovery from drifts in long-term.",
"title": ""
}
] |
scidocsrr
|
ded798f394239383fa25899ccb7e70b1
|
Network Science and Cybersecurity
|
[
{
"docid": "ea9f43aaab4383369680c85a040cedcf",
"text": "Efforts toward automated detection and identification of multistep cyber attack scenarios would benefit significantly from a methodology and language for modeling such scenarios. The Correlated Attack Modeling Language (CAML) uses a modular approach, where a module represents an inference step and modules can be linked together to detect multistep scenarios. CAML is accompanied by a library of predicates, which functions as a vocabulary to describe the properties of system states and events. The concept of attack patterns is introduced to facilitate reuse of generic modules in the attack modeling process. CAML is used in a prototype implementation of a scenario recognition engine that consumes first-level security alerts in real time and produces reports that identify multistep attack scenarios discovered in the alert stream.",
"title": ""
}
] |
[
{
"docid": "c53289d0fd566423a19fa1eedeb01843",
"text": "In this paper, we proposed a novel spam detection method that focused on reducing the false positive error of mislabeling nonspam as spam. First, we used the wrapper-based feature selection method to extract crucial features. Second, the decision tree was chosen as the classifier model with C4.5 as the training algorithm. Third, the cost matrix was introduced to give different weights to two error types, i.e., the false positive and the false negative errors. We define the weight parameter as a to adjust the relative importance of the two error types. Fourth, K-fold cross validation was employed to reduce out-ofsample error. Finally, the binary PSO with mutation operator (MBPSO) was used as the subset search strategy. Our experimental dataset contains 6000 emails, which were collected during the year of 2012. We conducted a Kolmogorov–Smirnov hypothesis test on the capital-run-length related features and found that all the p values were less than 0.001. Afterwards, we found a = 7 was the most appropriate in our model. Among seven meta-heuristic algorithms, we demonstrated the MBPSO is superior to GA, RSA, PSO, and BPSO in terms of classification performance. The sensitivity, specificity, and accuracy of the decision tree with feature selection by MBPSO were 91.02%, 97.51%, and 94.27%, respectively. We also compared the MBPSO with conventional feature selection methods such as SFS and SBS. The results showed that the MBPSO performs better than SFS and SBS. We also demonstrated that wrappers are more effective than filters with regard to classification performance indexes. It was clearly shown that the proposed method is effective, and it can reduce the false positive error without compromising the sensitivity and accuracy values. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6f9ffe5e1633046418ca0bc4f7089b2f",
"text": "This paper presents a new motion planning primitive to be used for the iterative steering of vision-based autonomous vehicles. This primitive is a parameterized quintic spline, denoted as -spline, that allows interpolating an arbitrary sequence of points with overall second-order geometric ( -) continuity. Issues such as completeness, minimality, regularity, symmetry, and flexibility of these -splines are addressed in the exposition. The development of the new primitive is tightly connected to the inversion control of nonholonomic car-like vehicles. The paper also exposes a supervisory strategy for iterative steering that integrates feedback vision data processing with the feedforward inversion control.",
"title": ""
},
{
"docid": "00eeceba7118e7a8a2f68deadc612f14",
"text": "I n the growing fields of wearable robotics, rehabilitation robotics, prosthetics, and walking robots, variable stiffness actuators (VSAs) or adjustable compliant actuators are being designed and implemented because of their ability to minimize large forces due to shocks, to safely interact with the user, and their ability to store and release energy in passive elastic elements. This review article describes the state of the art in the design of actuators with adaptable passive compliance. This new type of actuator is not preferred for classical position-controlled applications such as pick and place operations but is preferred in novel robots where safe human– robot interaction is required or in applications where energy efficiency must be increased by adapting the actuator’s resonance frequency. The working principles of the different existing designs are explained and compared. The designs are divided into four groups: equilibrium-controlled stiffness, antagonistic-controlled stiffness, structure-controlled stiffness (SCS), and mechanically controlled stiffness. In classical robotic applications, actuators are preferred to be as stiff as possible to make precise position movements or trajectory tracking control easier (faster systems with high bandwidth). The biological counterpart is the muscle that has superior functional performance and a neuromechanical control system that is much more advanced at adapting and tuning its parameters. The superior power-to-weight ratio, force-toweight ratio, compliance, and control of muscle, when compared with traditional robotic actuators, are the main barriers for the development of machines that can match the motion, safety, and energy efficiency of human or other animals. One of the key differences of these systems is the compliance or springlike behavior found in biological systems [1]. Although such compliant",
"title": ""
},
{
"docid": "0208d66e905292e1c83cf4af43f2b8aa",
"text": "Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penaltybased DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems. & 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "6f650989dff7b4aaa76f051985c185bf",
"text": "Since Sharir and Pnueli, algorithms for context-sensitivity have been defined in terms of ‘valid’ paths in an interprocedural flow graph. The definition of valid paths requires atomic call and ret statements, and encapsulated procedures. Thus, the resulting algorithms are not directly applicable when behavior similar to call and ret instructions may be realized using non-atomic statements, or when procedures do not have rigid boundaries, such as with programs in low level languages like assembly or RTL. We present a framework for context-sensitive analysis that requires neither atomic call and ret instructions, nor encapsulated procedures. The framework presented decouples the transfer of control semantics and the context manipulation semantics of statements. A new definition of context-sensitivity, called stack contexts, is developed. A stack context, which is defined using trace semantics, is more general than Sharir and Pnueli’s interprocedural path based calling-context. An abstract interpretation based framework is developed to reason about stack-contexts and to derive analogues of calling-context based algorithms using stack-context. The framework presented is suitable for deriving algorithms for analyzing binary programs, such as malware, that employ obfuscations with the deliberate intent of defeating automated analysis. The framework is used to create a context-sensitive version of Venable et al.’s algorithm for detecting obfuscated calls in x86 binaries. Experimental results from comparing context insensitive, Sharir and Pnueli’s callingcontext-sensitive, and stack-context-sensitive versions of the algorithm are presented.",
"title": ""
},
{
"docid": "ceaab471634611c7d98776a7f33662e3",
"text": "Visible Light Communication (VLC) has many advantages such as high-speed data transmission and non-frequency authorization, which provided a good solution for indoor access environment with an effective and energy-saving way. This paper proposes a combination VLC + WiFi based indoor wireless access network structure. Different network architectures are analyzed and optimal scheme is given. Based on the numerical calculation, the handover model is introduced. Finally, a demo system is designed and implemented.",
"title": ""
},
{
"docid": "d563b025b084b53c30afba4211870f2d",
"text": "Collaborative filtering (CF) techniques recommend items to users based on their historical ratings. In real-world scenarios, user interests may drift over time since they are affected by moods, contexts, and pop culture trends. This leads to the fact that a user’s historical ratings comprise many aspects of user interests spanning a long time period. However, at a certain time slice, one user’s interest may only focus on one or a couple of aspects. Thus, CF techniques based on the entire historical ratings may recommend inappropriate items. In this paper, we consider modeling user-interest drift over time based on the assumption that each user has multiple counterparts over temporal domains and successive counterparts are closely related. We adopt the cross-domain CF framework to share the static group-level rating matrix across temporal domains, and let user-interest distribution over item groups drift slightly between successive temporal domains. The derived method is based on a Bayesian latent factor model which can be inferred using Gibbs sampling. Our experimental results show that our method can achieve state-of-the-art recommendation performance as well as explicitly track and visualize user-interest drift over time.",
"title": ""
},
{
"docid": "bcd725162d37d173c9b8eae085d12330",
"text": "This paper describes an approach for a robotic arm to learn new actions through dialogue in a simplified blocks world. In particular, we have developed a threetier action knowledge representation that on one hand, supports the connection between symbolic representations of language and continuous sensorimotor representations of the robot; and on the other hand, supports the application of existing planning algorithms to address novel situations. Our empirical studies have shown that, based on this representation the robot was able to learn and execute basic actions in the blocks world. When a human is engaged in a dialogue to teach the robot new actions, step-by-step instructions lead to better learning performance compared to one-shot instructions.",
"title": ""
},
{
"docid": "e3699de3c4450eb2988cb50d5d75c44e",
"text": "Biomarkers of Alzheimer's disease (AD) are increasingly important. All modern AD therapeutic trials employ AD biomarkers in some capacity. In addition, AD biomarkers are an essential component of recently updated diagnostic criteria for AD from the National Institute on Aging--Alzheimer's Association. Biomarkers serve as proxies for specific pathophysiological features of disease. The 5 most well established AD biomarkers include both brain imaging and cerebrospinal fluid (CSF) measures--cerebrospinal fluid Abeta and tau, amyloid positron emission tomography (PET), fluorodeoxyglucose (FDG) positron emission tomography, and structural magnetic resonance imaging (MRI). This article reviews evidence supporting the position that MRI is a biomarker of neurodegenerative atrophy. Topics covered include methods of extracting quantitative and semiquantitative information from structural MRI; imaging-autopsy correlation; and evidence supporting diagnostic and prognostic value of MRI measures. Finally, the place of MRI in a hypothetical model of temporal ordering of AD biomarkers is reviewed.",
"title": ""
},
{
"docid": "99e71a45374284cbcb28b3dbe69e175d",
"text": "Spatial event detection is an important and challenging problem. Unlike traditional event detection that focuses on the timing of global urgent event, the task of spatial event detection is to detect the spatial regions (e.g. clusters of neighboring cities) where urgent events occur. In this paper, we focus on the problem of spatial event detection using textual information in social media. We observe that, when a spatial event occurs, the topics relevant to the event are often discussed more coherently in cities near the event location than those far away. In order to capture this pattern, we propose a new method called Graph Topic Scan Statistic (Graph-TSS) that corresponds to a generalized log-likelihood ratio test based on topic modeling. We first demonstrate that the detection of spatial event regions under Graph-TSS is NP-hard due to a reduction from classical node-weighted prize-collecting Steiner tree problem (NW-PCST). We then design an efficient algorithm that approximately maximizes the graph topic scan statistic over spatial regions of arbitrary form. As a case study, we consider three applications using Twitter data, including Argentina civil unrest event detection, Chile earthquake detection, and United States influenza disease outbreak detection. Empirical evidence demonstrates that the proposed Graph-TSS performs superior over state-of-the-art methods on both running time and accuracy.",
"title": ""
},
{
"docid": "17dfbb112878f4cf4344c5dff195fa18",
"text": "Hybrid vehicle techniques have been widely studied recently because of their potential to significantly improve the fuel economy and drivability of future ground vehicles. Due to the dualpower-source nature of these vehicles, control strategies based on engineering intuition frequently fail to fully explore the potential of these advanced vehicles. In this paper, we will present a procedure for the design of an approximately optimal power management strategy. The design procedure starts by defining a cost function, such as minimizing a combination of fuel consumption and selected emission species over a driving cycle. Dynamic Programming (DP) is then utilized to find the optimal control actions. Through analysis of the behavior of the DP control actions, approximately optimal rules are extracted, which, unlike DP control signals, are implementable. The performance of the power management control strategy is verified by using the hybrid vehicle model HE-VESIM developed at the Automotive Research Center of the University of Michigan. A trade-off study between fuel economy and emissions was performed. It was found that significant emission reduction can be achieved at the expense of a small increase in fuel consumption. Power Management Strategy for a Parallel Hybrid Electric Truck",
"title": ""
},
{
"docid": "b72faf101696a1c9175bb1117a072135",
"text": "The rapid deployment of smartphones as all-purpose mobile computing systems has led to a wide adoption of wireless communication systems such as Wi-Fi and Bluetooth in mobile scenarios. Both communication systems leak information to the surroundings during operation. This information has been used for tracking and crowd density estimations in literature. However, an estimation of pedestrian flows has not yet been evaluated with respect to a known ground truth and, thus, a reliable adoption in real world scenarios is rather difficult. With this paper, we fill in this gap. Using ground truth provided by the security check process at a major German airport, we discuss the quality and feasibility of pedestrian flow estimations for both WiFi and Bluetooth captures. We present and evaluate three approaches in order to improve the accuracy in comparison to a naive count of captured MAC addresses. Such counts only showed an impractical Pearson correlation of 0.53 for Bluetooth and 0.61 for Wi-Fi compared to ground truth. The presented extended approaches yield a superior correlation of 0.75 in best case. This indicates a strong correlation and an improvement of accuracy. Given these results, the presented approaches allow for a practical estimation of pedestrian flows.",
"title": ""
},
{
"docid": "7ade8142ce50038d2026662c971dfe71",
"text": "We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, namely Lambek’s pregroup semantics. A key observation is that the monoidal category of (finite dimensional) vector spaces, linear maps and the tensor product, as well as any pregroup, are examples of compact closed categories. Since, by definition, a pregroup is a compact closed category with trivial morphisms, its compositional content is reflected within the compositional structure of any non-degenerate compact closed category. The (slightly refined) category of vector spaces enables us to compute the meaning of a compound well-typed sentence from the meaning of its constituents, by ‘lifting’ the type reduction mechanisms of pregroup semantics to the whole category. These sentence meanings live in a single space, independent of the grammatical structure of the sentence. Hence we can use the inner-product to compare meanings of arbitrary sentences. A variation of this procedure which involves constraining the scalars of the vector spaces to the semiring of Booleans results in the well-known Montague semantics.",
"title": ""
},
{
"docid": "5ed719161f832a0c5297d0ab0411f727",
"text": "Cameras and inertial sensors are each good candidates for autonomous vehicle navigation, modeling from video, and other applications that require six-degrees-of-freedom motion estimation. However, these sensors are also good candidates to be deployed together, since each can be used to resolve the ambiguities in estimated motion that result from using the other modality alone. In this paper, we consider the specific problem of estimating sensor motion and other unknowns from image, gyro, and accelerometer measurements, in environments without known fiducials. This paper targets applications where external positions references such as global positioning are not available, and focuses on the use of small and inexpensive inertial sensors, for applications where weight and cost requirements preclude the use of precision inertial navigation systems. We present two algorithms for estimating sensor motion from image and inertial measurements. The first algorithm is a batch method, which produces estimates of the sensor motion, scene structure, and other unknowns using measurements from the entire observation sequence simultaneously. The second algorithm recovers sensor motion, scene structure, and other parameters recursively, and is suitable for use with long or “infinite” sequences, in which no feature",
"title": ""
},
{
"docid": "f249a6089a789e52eeadc8ae16213bc1",
"text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.",
"title": ""
},
{
"docid": "263485ca833637a55f18abcdfff096e2",
"text": "We propose an efficient and parameter-free scoring criterio n, the factorized conditional log-likelihood (̂fCLL), for learning Bayesian network classifiers. The propo sed score is an approximation of the conditional log-likelihood criterion. The approximation is devised in order to guarantee decomposability over the network structure, as w ell as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-the oretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-o f-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show tha t f̂CLL-trained classifiers achieve at least as good accuracy as the best compared classifiers, us ing significantly less computational resources.",
"title": ""
},
{
"docid": "4fa7f7f723c2f2eee4c0e2c294273c74",
"text": "Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domain to estimate breathing and heart rates, and it works well when either individual or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance comparing to traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.",
"title": ""
},
{
"docid": "74ecfe68112ba6309ac355ba1f7b9818",
"text": "We present a novel approach to probabilistic time series forecasting that combines state space models with deep learning. By parametrizing a per-time-series linear state space model with a jointly-learned recurrent neural network, our method retains desired properties of state space models such as data efficiency and interpretability, while making use of the ability to learn complex patterns from raw data offered by deep learning approaches. Our method scales gracefully from regimes where little training data is available to regimes where data from large collection of time series can be leveraged to learn accurate models. We provide qualitative as well as quantitative results with the proposed method, showing that it compares favorably to the state-of-the-art.",
"title": ""
},
{
"docid": "d80ca368563546b1c2a7aa99d97e39d2",
"text": "In this paper we present a short history of logics: from parti cular cases of 2-symbol or numerical valued logic to the general case of n-symbol or num erical valued logic. We show generalizations of 2-valued Boolean logic to fuzzy log ic, also from the Kleene’s and Lukasiewicz’ 3-symbol valued logics or Belnap’s 4ymbol valued logic to the most generaln-symbol or numerical valued refined neutrosophic logic . Two classes of neutrosophic norm ( n-norm) and neutrosophic conorm ( n-conorm) are defined. Examples of applications of neutrosophic logic to physics are listed in the last section. Similar generalizations can be done for n-Valued Refined Neutrosophic Set , and respectively n-Valued Refined Neutrosopjhic Probability .",
"title": ""
},
{
"docid": "d300119f7e25b4252d7212ca42b32fb3",
"text": "Various computational procedures or constraint-based methods for data repairing have been proposed over the last decades to identify errors and, when possible, correct them. However, these approaches have several limitations including the scalability and quality of the values to be used in replacement of the errors. In this paper, we propose a new data repairing approach that is based on maximizing the likelihood of replacement data given the data distribution, which can be modeled using statistical machine learning techniques. This is a novel approach combining machine learning and likelihood methods for cleaning dirty databases by value modification. We develop a quality measure of the repairing updates based on the likelihood benefit and the amount of changes applied to the database. We propose SCARE (SCalable Automatic REpairing), a systematic scalable framework that follows our approach. SCARE relies on a robust mechanism for horizontal data partitioning and a combination of machine learning techniques to predict the set of possible updates. Due to data partitioning, several updates can be predicted for a single record based on local views on each data partition. Therefore, we propose a mechanism to combine the local predictions and obtain accurate final predictions. Finally, we experimentally demonstrate the effectiveness, efficiency, and scalability of our approach on real-world datasets in comparison to recent data cleaning approaches.",
"title": ""
}
] |
scidocsrr
|
718b6586f27cca366834651510d63b53
|
Deep Image Harmonization
|
[
{
"docid": "6008f42e840e85c935bc455e13e03e19",
"text": "Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep learning has shown unique abilities to address hard problems. This motivated us to explore the use of deep neural networks (DNNs) in the context of photo editing. In this article, we formulate automatic photo adjustment in a manner suitable for this approach. We also introduce an image descriptor accounting for the local semantics of an image. Our experiments demonstrate that training DNNs using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on image semantics. We show that this yields results that are qualitatively and quantitatively better than previous work.",
"title": ""
},
{
"docid": "c615480e70f3baa5589d0c620549967a",
"text": "A common task in image editing is to change the colours of a picture to match the desired colour grade of another picture. Finding the correct colour mapping is tricky because it involves numerous interrelated operations, like balancing the colours, mixing the colour channels or adjusting the contrast. Recently, a number of automated tools have been proposed to find an adequate one-to-one colour mapping. The focus in this paper is on finding the best linear colour transformation. Linear transformations have been proposed in the literature but independently. The aim of this paper is thus to establish a common mathematical background to all these methods. Also, this paper proposes a novel transformation, which is derived from the Monge-Kantorovicth theory of mass transportation. The proposed solution is optimal in the sense that it minimises the amount of changes in the picture colours. It favourably compares theoretically and experimentally with other techniques for various images and under various colour spaces.",
"title": ""
},
{
"docid": "0a761fba9fa9246261ca7627ff6afe91",
"text": "Compositing is one of the most commonly performed operations in computer graphics. A realistic composite requires adjusting the appearance of the foreground and background so that they appear compatible; unfortunately, this task is challenging and poorly understood. We use statistical and visual perception experiments to study the realism of image composites. First, we evaluate a number of standard 2D image statistical measures, and identify those that are most significant in determining the realism of a composite. Then, we perform a human subjects experiment to determine how the changes in these key statistics influence human judgements of composite realism. Finally, we describe a data-driven algorithm that automatically adjusts these statistical measures in a foreground to make it more compatible with its background in a composite. We show a number of compositing results, and evaluate the performance of both our algorithm and previous work with a human subjects study.",
"title": ""
}
] |
[
{
"docid": "bb50f0ad981d3f81df6810322da7bd71",
"text": "Scale-model laboratory tests of a surface effect ship (SES) conducted in a near-shore transforming wave field are discussed. Waves approaching a beach in a wave tank were used to simulate transforming sea conditions and a series of experiments were conducted with a 1:30 scale model SES traversing in heads seas. Pitch and heave motion of the vehicle were recorded in support of characterizing the seakeeping response of the vessel in developing seas. The aircushion pressure and the vessel speed were varied over a range of values and the corresponding vehicle responses were analyzed to identify functional dependence on these parameters. The results show a distinct correlation between the air-cushion pressure and the response amplitude of both pitch and heave.",
"title": ""
},
{
"docid": "5364dd1ec4afce5ee01ca8bc0e6d9aed",
"text": "In this paper we present a fuzzy version of SHOIN (D), the corresponding Description Logic of the ontology description language OWL DL. We show that the representation and reasoning capabilities of fuzzy SHOIN (D) go clearly beyond classical SHOIN (D). Interesting features are: (i) concept constructors are based on t-norm, t-conorm, negation and implication; (ii) concrete domains are fuzzy sets; (iii) fuzzy modifiers are allowed; and (iv) entailment and subsumption relationships may hold to some degree in the unit interval [0, 1].",
"title": ""
},
{
"docid": "e118177a0fc9fad704b2be958b01a873",
"text": "Safety stories specify safety requirements, using the EARS (Easy Requirements Specification) format. Software practitioners can use them in agile projects at lower levels of safety criticality to deal effectively with safety concerns.",
"title": ""
},
{
"docid": "e54240e56b80916aab16980f7c7bd320",
"text": "The aim of this study was to establish the optimal cut-off points of the Chen Internet Addiction Scale (CIAS), to screen for and diagnose Internet addiction among adolescents in the community by using the well-established diagnostic criteria of Internet addiction. This survey of 454 adolescents used screening (57/58) and diagnostic (63/64) cut-off points of the CIAS, a self-reported instrument, based on the results of systematic diagnostic interviews by psychiatrists. The area under the curve of the receiver operating characteristic curve revealed that CIAS has good diagnostic accuracy (89.6%). The screening cut-off point had high sensitivity (85.6%) and the diagnostic cut-off point had the highest diagnostic accuracy, classifying 87.6% of participants correctly. Accordingly, the screening point of the CIAS could provide a screening function in two-stage diagnosis, and the diagnostic point could serve as a diagnostic criterion in one-stage massive epidemiologic research.",
"title": ""
},
{
"docid": "d23d93fa41c98c0eafc98594b1a51aa0",
"text": "Water stress caused by water scarcity has a negative impact on the wine industry. Several strategies have been implemented for optimizing water application in vineyards. In this regard, midday stem water potential (SWP) and thermal infrared (TIR) imaging for crop water stress index (CWSI) have been used to assess plant water stress on a vine-by-vine basis without considering the spatial variability. Unmanned Aerial Vehicle (UAV)-borne TIR images are used to assess the canopy temperature variability within vineyards that can be related to the vine water status. Nevertheless, when aerial TIR images are captured over canopy, internal shadow canopy pixels cannot be detected, leading to mixed information that negatively impacts the relationship between CWSI and SWP. This study proposes a methodology for automatic coregistration of thermal and multispectral images (ranging between 490 and 900 nm) obtained from a UAV to remove shadow canopy pixels using a modified scale invariant feature transformation (SIFT) computer vision algorithm and Kmeans++ clustering. Our results indicate that our proposed methodology improves the relationship between CWSI and SWP when shadow canopy pixels are removed from a drip-irrigated Cabernet Sauvignon vineyard. In particular, the coefficient of determination (R²) increased from 0.64 to 0.77. In addition, values of the root mean square error (RMSE) and standard error (SE) decreased from 0.2 to 0.1 MPa and 0.24 to 0.16 MPa, respectively. Finally, this study shows that the negative effect of shadow canopy pixels was higher in those vines with water stress compared with well-watered vines.",
"title": ""
},
{
"docid": "fb6377f3e1d0c9a98017c507eb703365",
"text": "Classification methods from statistical pattern recognition, neural nets, and machine learning were applied to four real-world data sets. Each of these data sets has been previously analyzed and reported in the statistical, medical, or machine learning literature. The data sets are characterized by statisucal uncertainty; there is no completely accurate solution to these problems. Training and testing or resampling techniques are used to estimate the true error rates of the classification methods. Detailed attention is given to the analysis of performance of the neural nets using back propagation. For these problems, which have relatively few hypotheses and features, the machine learning procedures for rule induction or tree induction clearly performed best.",
"title": ""
},
{
"docid": "4a27c9c13896eb50806371e179ccbf33",
"text": "A geographical information system (CIS) is proposed as a suitable tool for mapping the spatial distribution of forest fire danger. Using a region severely affected by forest fires in Central Spain as the study area, topography, meteorological data, fuel models and human-caused risk were mapped and incorporated within a GIS. Three danger maps were generated: probability of ignition, fuel hazard and human risk, and all of them were overlaid in an integrated fire danger map, based upon the criteria established by the Spanish Forest Service. CIS make it possible to improve our knowledge of the geographical distribution of fire danger, which is crucial for suppression planning (particularly when hotshot crews are involved) and for elaborating regional fire defence plans.",
"title": ""
},
{
"docid": "6e90247455ac6a8e23504b1ec422b9f1",
"text": "The paper deals with the wireless sensor-based remote control of mobile robots motion in an unknown environment with obstacles using the Bluetooth wireless transmission and Sun SPOT technology. The Sun SPOT is designed to be a flexible development platform, capable of hosting widely differing application modules. Web technologies are changing the education in robotics. A feature of remote control laboratories is that users can interact with real mobile robot motion processes through the Internet. Motion control of mobile robots is very important research field today, because mobile robots are a interesting subject both in scientific research and practical applications. In this paper the object of the remote control is the Boe-Bot mobile robot from Parallax. This Boe-Bot mobile robot is the simplest, low-cost platform and the most suitable for the small-sized, light, battery-driven autonomous vehicle. The vehicle has two driving wheels and the angular velocities of the two wheels are independently controlled. When the vehicle is moving towards the target in an unknown environment with obstacles, an avoiding strategy is necessary. A remote control program has been implemented.",
"title": ""
},
{
"docid": "704f4681b724a0e4c7c10fd129f3378b",
"text": "We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing. R esum e Nous pr esentons un sch ema totalement polynomial d'approximation pour la mise en boite de rectangles dans une boite de largeur x ee, avec hauteur mi-nimale, qui est un probleme NP-dur classique, de coupes par guillotine. L'al-gorithme donne un placement des rectangles, dont la hauteur est au plus egale a (1 +) (hauteur optimale) et a un temps d'execution polynomial en n et en 1==. Il utilise une reduction au probleme de la mise en boite fractionaire. Abstract We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical N P-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing.",
"title": ""
},
{
"docid": "2fdf6538c561e05741baafe43ec6f145",
"text": "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.",
"title": ""
},
{
"docid": "6c4b59e0e8cc42faea528dc1fe7a09ed",
"text": "Grounded Theory is a powerful research method for collecting and analysing research data. It was ‘discovered’ by Glaser & Strauss (1967) in the 1960s but is still not widely used or understood by researchers in some industries or PhD students in some science disciplines. This paper demonstrates the steps in the method and describes the difficulties encountered in applying Grounded Theory (GT). A fundamental part of the analysis method in GT is the derivation of codes, concepts and categories. Codes and coding are explained and illustrated in Section 3. Merging the codes to discover emerging concepts is a central part of the GT method and is shown in Section 4. Glaser and Strauss’s constant comparison step is applied and illustrated so that the emerging categories can be seen coming from the concepts and leading to the emergent theory grounded in the data in Section 5. However, the initial applications of the GT method did have difficulties. Problems encountered when using the method are described to inform the reader of the realities of the approach. The data used in the illustrative analysis comes from recent IS/IT Case Study research into configuration management (CM) and the use of commercially available computer products (COTS). Why and how the GT approach was appropriate is explained in Section 6. However, the focus is on reporting GT as a research method rather than the results of the Case Study.",
"title": ""
},
{
"docid": "89a9473318537ef1cc1f6166364cbecf",
"text": "The authors propose an interpersonal social-cognitive theory of the self and personality, the relational self, in which knowledge about the self is linked with knowledge about significant others, and each linkage embodies a self-other relationship. Mental representations of significant others are activated and used in interpersonal encounters in the social-cognitive phenomenon of transference (S. M. Andersen & N. S. Glassman, 1996), and this evokes the relational self. Variability in relational selves depends on interpersonal contextual cues, whereas stability derives from the chronic accessibility of significant-other representations. Relational selves function in if-then terms (W. Mischel & Y. Shoda, 1995), in which ifs are situations triggering transference, and thens are relational selves. An individual's repertoire of relational selves is a source of interpersonal patterns involving affect, motivation, self-evaluation, and self-regulation.",
"title": ""
},
{
"docid": "99b151b39c13e7106b680ae7935567fd",
"text": "Pediatricians have an important role not only in early recognition and evaluation of autism spectrum disorders but also in chronic management of these disorders. The primary goals of treatment are to maximize the child's ultimate functional independence and quality of life by minimizing the core autism spectrum disorder features, facilitating development and learning, promoting socialization, reducing maladaptive behaviors, and educating and supporting families. To assist pediatricians in educating families and guiding them toward empirically supported interventions for their children, this report reviews the educational strategies and associated therapies that are the primary treatments for children with autism spectrum disorders. Optimization of health care is likely to have a positive effect on habilitative progress, functional outcome, and quality of life; therefore, important issues, such as management of associated medical problems, pharmacologic and nonpharmacologic intervention for challenging behaviors or coexisting mental health conditions, and use of complementary and alternative medical treatments, are also addressed.",
"title": ""
},
{
"docid": "c67a7eab2370315159200ac65c3fe52b",
"text": "Convolutional neural networks (CNNs) are the core of most state-of-the-art deep learning algorithms specialized for object detection and classification. CNNs are both computationally complex and embarrassingly parallel. Two properties that leave room for potential software and hardware optimizations for embedded systems. Given a programmable hardware accelerator with a CNN oriented custom instructions set, the compiler’s task is to exploit the hardware’s full potential, while abiding with the hardware constraints and maintaining generality to run different CNN models with varying workload properties. Snowflake is an efficient and scalable hardware accelerator implemented on programmable logic devices. It implements a control pipeline for a custom instruction set. The goal of this paper is to present Snowflake’s compiler that generates machine level instructions from Torch7 model description files. The main software design points explored in this work are: model structure parsing, CNN workload breakdown, loop rearrangement for memory bandwidth optimizations and memory access balancing. The performance achieved by compiler generated instructions matches against hand optimized code for convolution layers. Generated instructions also efficiently execute AlexNet and ResNet18 inference on Snowflake. Snowflake with 256 processing units was synthesized on Xilinx’s Zynq XC7Z045 FPGA. At 250 MHz, AlexNet achieved in 93.6 frames/s and 1.2 GB/s of off-chip memory bandwidth, and 21.4 frames/s and 2.2 GB/s for ResNet18. Total on-chip power is 5 W.",
"title": ""
},
{
"docid": "82a0169afe20e2965f7fdd1a8597b7d3",
"text": "Accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which often affect face images more profoundly than changes in identity. The only system that can reliably cope with such variability is a human observer who is familiar with the faces concerned. We modeled human familiarity by using image averaging to derive stable face representations from naturally varying photographs. This simple procedure increased the accuracy of an industry standard face-recognition algorithm from 54% to 100%, bringing the robust performance of a familiar human to an automated system.",
"title": ""
},
{
"docid": "85462fe3cf060d7fa85251d5a7d30d1a",
"text": "Validity of PostureScreen Mobile® in the Measurement of Standing Posture Breanna Cristine Berry Hopkins Department of Exercise Sciences, BYU Master of Science Background: PostureScreen Mobile® is an app created to quickly screen posture using front and side-view photographs. There is currently a lack of evidence that establishes PostureScreen Mobile® (PSM) as a valid measure of posture. Therefore, the purpose of this preliminary study was to document the validity and reliability of PostureScreen Mobile® in assessing static standing posture. Methods: This study was an experimental trial in which the posture of 50 male participants was assessed a total of six times using two different methods: PostureScreen Mobile® and Vicon 3D motion analysis system (VIC). Postural deviations, as measured during six trials of PSM assessments (3 trials with and 3 trials without anatomical markers), were compared to the postural deviations as measured using the VIC as the criterion measure. Measurement of lateral displacement on the x-axis (shift) and rotation on the y-axis (tilt) were made of the head, shoulders, and hips in the frontal plane. Measurement of forward/rearward displacement on the Z-axis (shift) of the head, shoulders, hips, and knees were made in the sagittal plane. Validity was evaluated by comparing the PSM measurements of shift and tilt of each body part to that of the VIC. Reliability was evaluated by comparing the variance of PSM measurements to the variance of VIC measurements. The statistical model employed the Bayesian framework and consisted of the scaled product of the likelihood of the data given the parameters and prior probability densities for each of the parameters. Results: PSM tended to overestimate VIC postural tilt and shift measurements in the frontal plane and underestimate VIC postural shift measurements in the sagittal plane. Use of anatomical markers did not universally improve postural measurements with PSM, and in most cases, the variance of postural measurements using PSM exceeded that of VIC. The patterns in the intraclass correlation coefficients (ICC) suggest high trial-to-trial variation in posture. Conclusions: We conclude that until research further establishes the validity and reliability of the PSM app, it should not be used in research or clinical applications when accurate postural assessments are necessary or when serial measurements of posture will be performed. We suggest that the PSM be used by health and fitness professionals as a screening tool, as described by the manufacturer. Due to the suspected trial-to-trial variation in posture, we question the usefulness of a single postural assessment.",
"title": ""
},
{
"docid": "fcce2e75108497f0e8e37300d6ad335c",
"text": "The authors performed a meta-analysis of studies examining the association between polymorphisms in the 5,10-methylenetetrahydrofolate reductase (MTHFR) gene, including MTHFR C677T and A1298C, and common psychiatric disorders, including unipolar depression, anxiety disorders, bipolar disorder, and schizophrenia. The primary comparison was between homozygote variants and the wild type for MTHFR C677T and A1298C. For unipolar depression and the MTHFR C677T polymorphism, the fixed-effects odds ratio for homozygote variants (TT) versus the wild type (CC) was 1.36 (95% confidence interval (CI): 1.11, 1.67), with no residual between-study heterogeneity (I(2) = 0%)--based on 1,280 cases and 10,429 controls. For schizophrenia and MTHFR C677T, the fixed-effects odds ratio for TT versus CC was 1.44 (95% CI: 1.21, 1.70), with low heterogeneity (I(2) = 42%)--based on 2,762 cases and 3,363 controls. For bipolar disorder and MTHFR C677T, the fixed-effects odds ratio for TT versus CC was 1.82 (95% CI: 1.22, 2.70), with low heterogeneity (I(2) = 42%)-based on 550 cases and 1,098 controls. These results were robust to various sensitively analyses. This meta-analysis demonstrates an association between the MTHFR C677T variant and depression, schizophrenia, and bipolar disorder, raising the possibility of the use of folate in treatment and prevention.",
"title": ""
},
{
"docid": "1ef0a2569a1e6a4f17bfdc742ad30a7f",
"text": "Internet of Things (IoT) is becoming more and more popular. Increasingly, European projects (CityPulse, IoT.est, IoT-i and IERC), standard development organizations (ETSI M2M, oneM2M and W3C) and developers are involved in integrating Semantic Web technologies to Internet of Things. All of them design IoT application uses cases which are not necessarily interoperable with each other. The main innovative research challenge is providing a unified system to build interoperable semantic-based IoT applications. In this paper, to overcome this challenge, we design the Semantic Web of Things (SWoT) generator to assist IoT projects and developers in: (1) building interoperable Semantic Web of Things (SWoT) applications by providing interoperable semantic-based IoT application templates, (2) easily inferring high-level abstractions from sensor measurements thanks to the rules provided by the template, (3) designing domain-specific or inter-domain IoT applications thanks to the interoperable domain knowledge provided by the template, and (4) encouraging to reuse as much as possible the background knowledge already designed. We demonstrate the usefulness of our contribution though three use cases: (1) cloud-based IoT developers, (2) mobile application developers, and (3) assisting IoT projects. A proof-of concept for providing Semantic Web of Things application templates is available at http://www.sensormeasurement.appspot.com/?p=m3api.",
"title": ""
},
{
"docid": "c467edcb0c490034776ba2dc2cde9d9e",
"text": "BACKGROUND\nPostoperative complications of blepharoplasty range from cutaneous changes to vision-threatening emergencies. Some of these can be prevented with careful preoperative evaluation and surgical technique. When complications arise, their significance can be diminished by appropriate management. This article addresses blepharoplasty complications based on the typical postoperative timeframe when they are encountered.\n\n\nMETHODS\nThe authors conducted a review article of major blepharoplasty complications and their treatment.\n\n\nRESULTS\nComplications within the first postoperative week include corneal abrasions and vision-threatening retrobulbar hemorrhage; the intermediate period (weeks 1 through 6) addresses upper and lower eyelid malpositions, strabismus, corneal exposure, and epiphora; and late complications (>6 weeks) include changes in eyelid height and contour along with asymmetries, scarring, and persistent edema.\n\n\nCONCLUSIONS\nA thorough knowledge of potential complications of blepharoplasty surgery is necessary for the practicing aesthetic surgeon. Within this article, current concepts and relevant treatment strategies are reviewed with the use of the most recent and/or appropriate peer-reviewed literature available.",
"title": ""
}
] |
scidocsrr
|
b7f0e6a64c75ba935f0edfacbf295df4
|
Benefits and challenges of three cloud computing service models
|
[
{
"docid": "bfabd63b2b5b3c0b58bb2ed687994b31",
"text": "Magnetotellurics is a geophysics technique for characterisation of geothermal reservoirs, mineral exploration, and other geoscience endeavours that need to sound deeply into the earth -- many kilometres or tens of kilometres. Central to its data processing is an inversion problem which currently takes several weeks on a desktop machine. In our new eScience lab, enabled by cloud computing, we parallelised an existing FORTAN program and embedded the parallel version in a cloud-based web application to improve its usability. A factor-of-five speedup has taken the time for some inversions from weeks down to days and is in use in a pre-fracturing and post-fracturing study of a new geothermal site in South Australia, an area with a high occurrence of hot dry rocks. We report on our experience with Amazon Web Services cloud services and our migration to Microsoft Azure, the collaboration between computer scientists and geophysicists, and the foundation it has laid for future work exploiting cloud data-parallel programming models.",
"title": ""
}
] |
[
{
"docid": "f99670327cc71eeab7bea6ef24d1d5c6",
"text": "Infant cry is a mode of communication, for interacting and drawing attention. The infants cry due to physiological, emotional or some ailment reasons. Cry involves high pitch changes in the signal. In this paper we describe an ‘Infant Cry Sounds Database’ (ICSD), collected especially for the study of likely cause of an infant’s cry. The database consists of infant cry sounds due to six causes: pain, discomfort, emotional need, ailment, environmental factors and hunger/thirst. The ground truth cause of cry is established with the help of two medical experts and parents of the infants. Preliminary analysis is carried out using the sound production features, the instantaneous fundamental frequency and frame energy derived from the cry acoustic signal, using auto correlation and linear prediction (LP) analysis. Spectrograms give the base reference. The infant cry sounds due to pain and discomfort are distinguished. The database should be helpful towards automated diagnosis of the causes of infant cry.",
"title": ""
},
{
"docid": "acb3aaaf79ebc3fc65724e92e4d076aa",
"text": "Lay dispositionism refers to lay people's tendency to use traits as the basic unit of analysis in social perception (L. Ross & R. E. Nisbett, 1991). Five studies explored the relation between the practices indicative of lay dispositionism and people's implicit theories about the nature of personal attributes. As predicted, compared with those who believed that personal attributes are malleable (incremental theorists), those who believed in fixed traits (entity theorists) used traits or trait-relevant information to make stronger future behavioral predictions (Studies 1 and 2) and made stronger trait inferences from behavior (Study 3). Moreover, the relation between implicit theories and lay dispositionism was found in both the United States (a more individualistic culture) and Hong Kong (a more collectivistic culture), suggesting this relation to be generalizable across cultures (Study 4). Finally, an experiment in which implicit theories were manipulated provided preliminary evidence for the possible causal role of implicit theories in lay dispositionism (Study 5).",
"title": ""
},
{
"docid": "4ec3395db1c5fa9ccf13bbc8e25df465",
"text": "This tutorial focuses on dynamic pricing under model uncertainty: a class of problems whose first instance dates back at least 40 years, is relatively simple in structure, is widely considered fundamental, and has numerous manifestations across multiple application domains and academic disciplines. While significant progress has been made throughout the last several decades, including a flurry of recent work, many variants of this problem class remain essentially unsolved. Briefly stated, the problem can be described as sequential pricing when the underlying demand model (or demand curve) is unknown and the market response to any given price is confounded by statistical noise. It will be helpful to hold in mind the following simple problem instance. The decision maker (“seller”) faces demand for a product s/he is selling. At every successive time unit the seller fixes a price for the product, subsequent to which demand is realized. The demand realizations are “noisy” observations of an ambient demand curve which is unbeknownst to the seller. The seller’s objective is to maximize expected cumulative (either discounted or not) profits over the time horizon that governs the interactions with the buyers. This situation is a quintessential example of a trade-off between exploration of the environment (to learn demand characteristics) and exploitation of that knowledge (via pricing) to maximize expected rewards.",
"title": ""
},
{
"docid": "9123ff1c2e6c52bf9a16a6ed4c67f151",
"text": "Domestic induction cookers operation is based on a resonant inverter which supplies medium-frequency currents (20-100 kHz) to an inductor, which heats up the pan. The variable load that is inherent to this application requires the use of a reliable and load-adaptive control algorithm. In addition, a wide output power range is required to get a satisfactory user performance. In this paper, a control algorithm to cover the variety of loads and the output power range is proposed. The main design criteria are efficiency, power balance, acoustic noise, flicker emissions, and user performance. As a result of the analysis, frequency limit and power level limit algorithms are proposed based on square wave and pulse density modulations. These have been implemented in a field-programmable gate array, including output power feedback and mains-voltage zero-cross-detection circuitry. An experimental verification has been performed using a commercial induction heating inverter. This provides a convenient experimental test bench to analyze the viability of the proposed algorithm.",
"title": ""
},
{
"docid": "1724343ef620b75617966eb6bf6b6d8d",
"text": "Interruptions are a common aspect of the work environment of most organizations. Yet little is known about how intemptions and their characteristics, such as frequency of occurrence, influence decision-making performance of individuals. Consequently, this paper reports the results of two experiments investigating the influence of interruptions on individual decision making. Interruptions were found to improve decision-making performance on simple tasks and to lower performance on complex tasks. For complex tasks, the frequency of interruptions and the dissimilarity of content between the primary and interruption tasks was found to exacerbate this effect. The implications of these results for future research and practice are discussed. Subject Areas: Decision Making, Information Overload, and Interruptions.",
"title": ""
},
{
"docid": "f1465822b9586c00a2bc23dda2ee5133",
"text": "Platform as a Service (PaaS) solutions are changing the way that software is produced, distributed, consumed, and priced. PaaS, also known as cloud platform, offer an execution environment based on software platforms. To be competitive on the market, PaaS providers have to be aware of drivers of successful platforms and design or adjust their business models accordingly. Surprisingly, prior research has made little attempt to investigate consumers’ preferences on PaaS that influence developers’ choice on PaaS solutions. This paper examines this understudied issue through a conjoint study. First a comprehensive literature analysis on PaaS has been conducted in order to build the study design on a rigorous foundation. The conducted conjoint survey contained ten attributes together with 26 corresponding attribute levels and has been completed by 103 participants. Based on the results, a prioritized list of customers’ preferences for PaaS has been created.",
"title": ""
},
{
"docid": "c49b6afe877ccd658f5de6e08e12d982",
"text": "This communication presents a small, low-profile planar triple-band microstrip antenna for WLAN/WiMAX applications. The goal of this communication is to combine WLAN and WiMAX communication standards simultaneously into a single device by designing a single antenna that can excite triple-band operation. The designed antenna has a compact size of $19 \\times 25\\;\\text{mm}^{2}$ ($0.152 \\lambda_{0}\\;\\times 0.2 \\lambda_{0}$). The proposed antenna consists of F-shaped slot radiators and a defected ground plane. Since only two F-shaped slots are etched on either sides of the radiator for triple-band operation, the radiator is very compact in size and simple in structure. The antenna shows three distinct bands I from 2.0 to 2.76, II from 3.04 to 4.0, and III from 5.2 to 6.0 GHz, which covers entire WLAN (2.4/5.2/5.8 GHz) and WiMAX (2.5/3.5/5.5) bands. To validate the proposed design, an experimental prototype has been fabricated and tested. Thus, the simulation results along with the measurements show that the antenna can simultaneously operate over WLAN (2.4/5.2/5.8 GHz) and WiMAX (2.5/3.5/5.5 GHz) frequency bands.",
"title": ""
},
{
"docid": "b0e2865aa653e4f9a34c3f214c4b1de5",
"text": "We present in this paper our three system submissions for the POS tagging subtask of the Empirist Shared Task: Our baseline system UdS-retrain extends a standard training dataset with in-domain training data; UdSdistributional and UdS-surface add two different ways of handling OOV words on top of the baseline system by using either distributional information or a combination of surface similarity and language model information. We reach the best performance using the distributional model.",
"title": ""
},
{
"docid": "2b1002037b717f65e97defbf802d5fcd",
"text": "BACKGROUND\nDeletions of chromosome 19 have rarely been reported, with the exception of some patients with deletion 19q13.2 and Blackfan-Diamond syndrome due to haploinsufficiency of the RPS19 gene. Such a paucity of patients might be due to the difficulty in detecting a small rearrangement on this chromosome that lacks a distinct banding pattern. Array comparative genomic hybridisation (CGH) has become a powerful tool for the detection of microdeletions and microduplications at high resolution in patients with syndromic mental retardation.\n\n\nMETHODS AND RESULTS\nUsing array CGH, this study identified three interstitial overlapping 19q13.11 deletions, defining a minimal critical region of 2.87 Mb, associated with a clinically recognisable syndrome. The three patients share several major features including: pre- and postnatal growth retardation with slender habitus, severe postnatal feeding difficulties, microcephaly, hypospadias, signs of ectodermal dysplasia, and cutis aplasia over the posterior occiput. Interestingly, these clinical features have also been described in a previously reported patient with a 19q12q13.1 deletion. No recurrent breakpoints were identified in our patients, suggesting that no-allelic homologous recombination mechanism is not involved in these rearrangements.\n\n\nCONCLUSIONS\nBased on these results, the authors suggest that this chromosomal abnormality may represent a novel clinically recognisable microdeletion syndrome caused by haploinsufficiency of dosage sensitive genes in the 19q13.11 region.",
"title": ""
},
{
"docid": "70dc7fe40f55e2b71b79d71d1119a36c",
"text": "In undergoing this life, many people always try to do and get the best. New knowledge, experience, lesson, and everything that can improve the life will be done. However, many people sometimes feel confused to get those things. Feeling the limited of experience and sources to be better is one of the lacks to own. However, there is a very simple thing that can be done. This is what your teacher always manoeuvres you to do this one. Yeah, reading is the answer. Reading a book as this digital image processing principles and applications and other references can enrich your life quality. How can it be?",
"title": ""
},
{
"docid": "ffa00946c33090f23714d3b9013f6ffb",
"text": "Status epilepticus (SE) is the most extreme form of epilepsy. It describes a prolonged seizure that may occur in patients with previous epilepsy or in acute disorders of the central nervous system. It is one of the most common neurologic emergencies, with an incidence of up to 41 per 100,000 per year and an estimated mortality is 20%. The three major determinants of prognosis are the duration of SE, patient age, and the underlying cause. Common and easily recognized causes of SE include cerebrovascular disorders, brain trauma, infections, and low antiepileptic drug levels in patients with epilepsy. Less common causes present a clinical and diagnostic challenge, but are major determinants of prognosis. Among them, inflammatory causes and inborn errors of metabolism have gained wide interest; recent insights into these causes have contributed to a better understanding of the pathophysiology of SE and its appropriate treatment. This review focuses on the different etiologies of SE and emphasizes the importance of prompt recognition and treatment of the underlying causes.",
"title": ""
},
{
"docid": "742f115d2ba9b9ee8862fe5a0c5497f6",
"text": "This paper targets the design of a high dynamic range lowpower, low-noise pixel readout integrated circuit (ROIC) that handles the infrared (IR) detector’s output signal of the uncooled thermal IR camera. Throughout the paper, both the optics and the IR detector modules of the IR camera are modeled using the analogue hardware description language (AHDL) to enable extracting the proper input signal required for the ROIC design. A capacitive trans-impedance amplifier (CTIA) is selected for design as a column level ROIC. The core of the CTIA is designed for minimum power consumption by operation in the sub-threshold region. In addition, a design of correlated double sampling (CDS) technique is applied to the CTIA to minimize the noise and the offset levels. The presented CTIA design achieves a power consumption of 5.2μW and root mean square (RMS) output noise of 6.9μV. All the circuits were implemented in 0.13μm CMOS process technology. The design rule check (DRC), layout versus schematic (LVS), parasitic extraction (PE), Process-voltage-temperature (PVT) analysis and post-layout simulation are performed for all designed circuits. The postlayout simulation results illustrate enhancement of the power consumption and noise performance compared to other published ROIC designs. Finally, a new widening dynamic range (WDR) technique is applied to the CTIA with the CDS circuit designs to increase the dynamic range (DR).",
"title": ""
},
{
"docid": "ff6ab778ec692f4b8e86da6f573d7d0b",
"text": "Despite the enormous popularity of Online Social Networking sites (OSNs; e.g., Facebook and Myspace), little research in psychology has been done on them. Two studies examining how personality is reflected in OSNs revealed several connections between the Big Five personality traits and self-reported Facebook-related behaviors and observable profile information. For example, extraversion predicted not only frequency of Facebook usage (Study 1), but also engagement in the site, with extraverts (vs. introverts) showing traces of higher levels of Facebook activity (Study 2). As in offline contexts, extraverts seek out virtual social engagement, which leaves behind a behavioral residue in the form of friends lists and picture postings. Results suggest that, rather than escaping from or compensating for their offline personality, OSN users appear to extend their offline personalities into the domains of OSNs.",
"title": ""
},
{
"docid": "013ff6855f65bac088427ec899c236af",
"text": "Triangle meshes have been nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress was made in quadrilateral mesh generation and processing during the last several years. In this survey we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrization, and remeshing.",
"title": ""
},
{
"docid": "dca8b7f7022a139fc14bddd1af2fea49",
"text": "In this study, we investigated the discrimination power of short-term heart rate variability (HRV) for discriminating normal subjects versus chronic heart failure (CHF) patients. We analyzed 1914.40 h of ECG of 83 patients of which 54 are normal and 29 are suffering from CHF with New York Heart Association (NYHA) classification I, II, and III, extracted by public databases. Following guidelines, we performed time and frequency analysis in order to measure HRV features. To assess the discrimination power of HRV features, we designed a classifier based on the classification and regression tree (CART) method, which is a nonparametric statistical technique, strongly effective on nonnormal medical data mining. The best subset of features for subject classification includes square root of the mean of the sum of the squares of differences between adjacent NN intervals (RMSSD), total power, high-frequencies power, and the ratio between low- and high-frequencies power (LF/HF). The classifier we developed achieved sensitivity and specificity values of 79.3% and 100 %, respectively. Moreover, we demonstrated that it is possible to achieve sensitivity and specificity of 89.7% and 100 %, respectively, by introducing two nonstandard features ΔAVNN and ΔLF/HF, which account, respectively, for variation over the 24 h of the average of consecutive normal intervals (AVNN) and LF/HF. Our results are comparable with other similar studies, but the method we used is particularly valuable because it allows a fully human-understandable description of classification procedures, in terms of intelligible “if ... then ...” rules.",
"title": ""
},
{
"docid": "ef785a3eadaa01a7b45d978f63583513",
"text": "This paper presents a laparoscopic grasping tool for minimally invasive surgery with the capability of multiaxis force sensing. The tool is able to sense three-axis Cartesian manipulation force and a single-axis grasping force. The forces are measured by a wrist force sensor located at the distal end of the tool, and two torque sensors at the tool base, respectively. We propose an innovative design of a miniature force sensor achieving structural simplicity and potential cost effectiveness. A prototype is manufactured and experiments are conducted in a simulated surgical environment by using an open platform for surgical robot research, called Raven-II.",
"title": ""
},
{
"docid": "e07198de4fe8ea55f2c04ba5b6e9423a",
"text": "Query expansion (QE) is a well known technique to improve retrieval effectiveness, which expands original queries with extra terms that are predicted to be relevant. A recent trend in the literature is Supervised Query Expansion (SQE), where supervised learning is introduced to better select expansion terms. However, an important but neglected issue for SQE is its efficiency, as applying SQE in retrieval can be much more time-consuming than applying Unsupervised Query Expansion (UQE) algorithms. In this paper, we point out that the cost of SQE mainly comes from term feature extraction, and propose a Two-stage Feature Selection framework (TFS) to address this problem. The first stage is adaptive expansion decision, which determines if a query is suitable for SQE or not. For unsuitable queries, SQE is skipped and no term features are extracted at all, which reduces the most time cost. For those suitable queries, the second stage is cost constrained feature selection, which chooses a subset of effective yet inexpensive features for supervised learning. Extensive experiments on four corpora (including three academic and one industry corpus) show that our TFS framework can substantially reduce the time cost for SQE, while maintaining its effectiveness.",
"title": ""
},
{
"docid": "01ee1036caeb4a64477aa19d0f8a6429",
"text": "In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among the possible uses it allows, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed for producing multi-lingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness and interest are key criteria that should be ensured in the proposed context. From the results obtained, it was observed that although the original tweets were considered as model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if a more personalized and catchy way of reporting news wants to be performed. In contrast, we showed that recent text summarization techniques may be more appropriate, reflecting a balance between indicativeness and interest, even if their content was different from the tweets delivered by the news providers.",
"title": ""
},
{
"docid": "d8c4e6632f90c3dd864be93db881a382",
"text": "Document understanding techniques such as document clustering and multidocument summarization have been receiving much attention recently. Current document clustering methods usually represent the given collection of documents as a document-term matrix and then conduct the clustering process. Although many of these clustering methods can group the documents effectively, it is still hard for people to capture the meaning of the documents since there is no satisfactory interpretation for each document cluster. A straightforward solution is to first cluster the documents and then summarize each document cluster using summarization methods. However, most of the current summarization methods are solely based on the sentence-term matrix and ignore the context dependence of the sentences. As a result, the generated summaries lack guidance from the document clusters. In this article, we propose a new language model to simultaneously cluster and summarize documents by making use of both the document-term and sentence-term matrices. By utilizing the mutual influence of document clustering and summarization, our method makes; (1) a better document clustering method with more meaningful interpretation; and (2) an effective document summarization method with guidance from document clustering. Experimental results on various document datasets show the effectiveness of our proposed method and the high interpretability of the generated summaries.",
"title": ""
},
{
"docid": "f1c210ee9f70db482d134bf544984f77",
"text": "Character segmentation plays an important role in the Arabic optical character recognition (OCR) system, because the letters incorrectly segmented perform to unrecognized character. Accuracy of character recognition depends mainly on the segmentation algorithm used. The domain of off-line handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different segmentation algorithms for off-line Arabic handwriting recognition have been proposed and applied to various types of word images. This paper provides modify segmentation algorithm based on bounding box to improve segmentation accuracy using two main stages: preprocessing stage and segmentation stage. In preprocessing stage, used a set of methods such as noise removal, binarization, skew correction, thinning and slant correction, which retains shape of the character. In segmentation stage, the modify bounding box algorithm is done. In this algorithm a distance analysis use on bounding boxes of two connected components (CCs): main (CCs), auxiliary (CCs). The modified algorithm is presented and taking place according to three cases. Cut points also determined using structural features for segmentation character. The modified bounding box algorithm has been successfully tested on 450 word images of Arabic handwritten words. The results were very promising, indicating the efficiency of the suggested",
"title": ""
}
] |
scidocsrr
|
8e0c7230831102a426b364f0193c9474
|
CUHK & SIAT Submission for THUMOS 15 Action Recognition Challenge
|
[
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
},
{
"docid": "812abd8ee942c352bd2b141e3c88ba21",
"text": "Video based action recognition is one of the important and challenging problems in computer vision research. Bag of visual words model (BoVW) with local features has been very popular for a long time and obtained the state-of-the-art performance on several realistic datasets, such as the HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from local features, which is mainly composed of five steps; (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Although many effort s have been made in each step independently in different scenarios, their effects on action recognition are still unknown. Meanwhile, video data exhibits different views of visual patterns , such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Fusing these descriptors is crucial for boosting the final performance of an action recognition system. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and uncover some good practices to produce a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate and improper choice in one of the steps may counteract the performance improvement of other steps. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called hybrid supervector , by exploring the complementarity of different BoVW frameworks with improved dense trajectories. Using this representation, we obtain impressive results on the three challenging datasets; HMDB51 (61.9%), UCF50 (92.3%), and UCF101 (87.9%). © 2016 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "cc0c1c11d437060e9492a3a1218e1271",
"text": "Graph coloring problems, in which one would like to color the vertices of a given graph with a small number of colors so that no two adjacent vertices receive the same color, arise in many applications, including various scheduling and partitioning problems. In this paper the complexity and performance of algorithms which construct such colorings are investigated. For a graph <italic>G</italic>, let &khgr;(<italic>G</italic>) denote the minimum possible number of colors required to color <italic>G</italic> and, for any graph coloring algorithm <italic>A</italic>, let <italic>A</italic>(<italic>G</italic>) denote the number of colors used by <italic>A</italic> when applied to <italic>G</italic>. Since the graph coloring problem is known to be “NP-complete,” it is considered unlikely that any efficient algorithm can guarantee <italic>A</italic>(<italic>G</italic>) = &khgr;(<italic>G</italic>) for all input graphs. In this paper it is proved that even coming close to khgr;(<italic>G</italic>) with a fast algorithm is hard. Specifically, it is shown that if for some constant <italic>r</italic> < 2 and constant <italic>d</italic> there exists a polynomial-time algorithm <italic>A</italic> which guarantees <italic>A</italic>(<italic>G</italic>) ≤ <italic>r</italic>·&khgr;(<italic>G</italic>) + <italic>d</italic>, then there also exists a polynomial-time algorithm <italic>A</italic> which guarantees <italic>A</italic>(<italic>G</italic>) = &khgr;(<italic>G</italic>).",
"title": ""
},
{
"docid": "012b42c01cebf0840a429ab0e7db2914",
"text": "Silicon single-photon avalanche diodes (SPADs) are nowadays a solid-state alternative to photomultiplier tubes (PMTs) in single-photon counting (SPC) and time-correlated single-photon counting (TCSPC) over the visible spectral range up to 1-mum wavelength. SPADs implemented in planar technology compatible with CMOS circuits offer typical advantages of microelectronic devices (small size, ruggedness, low voltage, low power, etc.). Furthermore, they have inherently higher photon detection efficiency, since they do not rely on electron emission in vacuum from a photocathode as do PMTs, but instead on the internal photoelectric effect. However, PMTs offer much wider sensitive area, which greatly simplifies the design of optical systems; they also attain remarkable performance at high counting rate, and offer picosecond timing resolution with microchannel plate models. In order to make SPAD detectors more competitive in a broader range of SPC and TCSPC applications, it is necessary to face several issues in the semiconductor device design and technology. Such issues will be discussed in the context of the two possible approaches to such a challenge: employing a standard industrial high-voltage CMOS technology or developing a dedicated CMOS-compatible technology. Advances recently attained in the development of SPAD detectors will be outlined and discussed with reference to both single-element detectors and integrated detector arrays.",
"title": ""
},
{
"docid": "818f371f9d6e340c240b278c5290cb0b",
"text": "CVSSearch is a tool that searches for fragments of source code by using CVS comments. CVS is a version control system that is widely used in the open source community [4]. Our search tool takes advantage of the fact that a CVS comment typically describes the lines of code involved in the commit and this description will typically hold for many future versions. In other words, CVSSearch allows one to better search the most recent version of the code by looking at previous versions to better understand the current version.",
"title": ""
},
{
"docid": "0cb237a05e30a4bc419dc374f3a7b55a",
"text": "Question-and-answer (Q&A) websites, such as Yahoo! Answers, Stack Overflow and Quora, have become a popular and powerful platform for Web users to share knowledge on a wide range of subjects. This has led to a rapidly growing volume of information and the consequent challenge of readily identifying high quality objects (questions, answers and users) in Q&A sites. Exploring the interdependent relationships among different types of objects can help find high quality objects in Q&A sites more accurately. In this paper, we specifically focus on the ranking problem of co-ranking questions, answers and users in a Q&A website. By studying the tightly connected relationships between Q&A objects, we can gain useful insights toward solving the co-ranking problem. However, co-ranking multiple objects in Q&A sites is a challenging task: a) With the large volumes of data in Q&A sites, it is important to design a model that can scale well; b) The large-scale Q&A data makes extracting supervised information very expensive. In order to address these issues, we propose an unsupervised Network-based Co-Ranking framework (NCR) to rank multiple objects in Q&A sites. Empirical studies on real-world Yahoo! Answers datasets demonstrate the effectiveness and the efficiency of the proposed NCR method.",
"title": ""
},
{
"docid": "3122b61a0d48888dff488cc41564c820",
"text": "In this study, the ensemble classifier presented by Caruana, Niculescu-Mizil, Crew & Ksikes (2004) is investigated. Their ensemble approach generates thousands of models using a variety of machine learning algorithms and uses a forward stepwise selection to build robust ensembles that can be optimised to an arbitrary metric. On average, the resulting ensemble out-performs the best individual machine learning models. The classifier is implemented in the WEKA machine learning environment, which allows the results presented by the original paper to be validated and the classifier to be extended to multi-class problem domains. The behaviour of different ensemble building strategies is also investigated. The classifier is then applied to the spam filtering domain, where it is tested on three different corpora in an attempt to provide a realistic evaluation of the system. It records similar performance levels to that seen in other problem domains and out-performs individual models and the naive Bayesian filtering technique regularly used by commercial spam filtering solutions. Caruana et al.’s (2004) classifier will typically outperform the best known models in a variety of problems.",
"title": ""
},
{
"docid": "2dad5e4cc93246fd64b576d414fb5a3e",
"text": "Intelligent vehicles use advanced driver assistance systems (ADASs) to mitigate driving risks. There is increasing demand for an ADAS framework that can increase driving safety by detecting dangerous driving behavior from driver, vehicle, and lane attributes. However, because dangerous driving behavior in real-world driving scenarios can be caused by any or a combination of driver, vehicle, and lane attributes, the detection of dangerous driving behavior using conventional approaches that focus on only one type of attribute may not be sufficient to improve driving safety in realistic situations. To facilitate driving safety improvements, the concept of dangerous driving intensity (DDI) is introduced in this paper, and the objective of dangerous driving behavior detection is converted into DDI estimation based on the three attribute types. To this end, we propose a framework, wherein fuzzy sets are optimized using particle swarm optimization for modeling driver, vehicle, and lane attributes and then used to accurately estimate the DDI. The mean opinion scores of experienced drivers are employed to label DDI for a fair comparison with the results of our framework. The experimental results demonstrate that the driver, vehicle, and lane attributes defined in this paper provide useful cues for DDI analysis; furthermore, the results obtained using the framework are in favorable agreement with those obtained in the perception study. The proposed framework can greatly increase driving safety in intelligent vehicles, where most of the driving risk is within the control of the driver.",
"title": ""
},
{
"docid": "0eb3d3c33b62c04ed5d34fc3a38b5182",
"text": "We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.",
"title": ""
},
{
"docid": "b741698d7e4d15cb7f4e203f2ddbce1d",
"text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.",
"title": ""
},
{
"docid": "3189fa20d605bf31c404b0327d74da79",
"text": "We now see an increasing number of self-tracking apps and wearable devices. Despite the vast number of available tools, however, it is still challenging for self-trackers to find apps that suit their unique tracking needs, preferences, and commitments. Furthermore, people are bounded by the tracking tools’ initial design because it is difficult to modify, extend, or mash up existing tools. In this paper, we present OmniTrack, a mobile self-tracking system, which enables self-trackers to construct their own trackers and customize tracking items to meet their individual tracking needs. To inform the OmniTrack design, we first conducted semi-structured interviews (N = 12) and analyzed existing mobile tracking apps (N = 62). We then designed and developed OmniTrack as an Android mobile app, leveraging a semi-automated tracking approach that combines manual and automated tracking methods. We evaluated OmniTrack through a usability study (N = 10) and improved its interfaces based on the feedback. Finally, we conducted a 3-week deployment study (N = 21) to assess if people can capitalize on OmniTrack’s flexible and customizable design to meet their tracking needs. From the study, we showed how participants used OmniTrack to create, revise, and appropriate trackers—ranging from a simple mood tracker to a sophisticated daily activity tracker. We discuss how OmniTrack positively influences and supports self-trackers’ tracking practices over time, and how to further improve OmniTrack by providing more appropriate visualizations and sharable templates, incorporating external contexts, and supporting researchers’ unique data collection needs.",
"title": ""
},
{
"docid": "4eb1e28d62af4a47a2e8dc795b89cc09",
"text": "This paper describes a new computational finance approach. This approach combines pattern recognition techniques with an evolutionary computation kernel applied to financial markets time series in order to optimize trading strategies. Moreover, for pattern matching a template-based approach is used in order to describe the desired trading patterns. The parameters for the pattern templates, as well as, for the decision making rules are optimized using a genetic algorithm kernel. The approach was tested considering actual data series and presents a robust profitable trading strategy which clearly beats the market, S&P 500 index, reducing the investment risk significantly.",
"title": ""
},
{
"docid": "a958ded315a2de150f46c92ac9a5a414",
"text": "Dynamic binary analysis techniques play a central role to study the security of software systems and detect vulnerabilities in a broad range of devices and applications. Over the past decade, a variety of different techniques have been published, often alongside the release of prototype tools to demonstrate their effectiveness. Unfortunately, most of those techniques’ implementations are deeply coupled with their dynamic analysis frameworks and are not easy to integrate in other frameworks. Those frameworks are not designed to expose their internal state or their results to other components. This prevents analysts from being able to combine together different tools to exploit their strengths and tackle complex problems which requires a combination of sophisticated techniques. Fragmentation and isolation are two important problems which too often results in duplicated efforts or in multiple equivalent solutions for the same problem – each based on a different programming language, abstraction model, or execution environment. In this paper, we present avatar2, a dynamic multi-target orchestration framework designed to enable interoperability between different dynamic binary analysis frameworks, debuggers, emulators, and real physical devices. Avatar2 allows the analyst to organize different tools in a complex topology and then “move” the execution of binary code from one system to the other. The framework supports the automated transfer of the internal state of the device/application, as well as the configurable forwarding of input/output and memory accesses to physical peripherals or emulated targets. To demonstrate avatar2 usage and versatility, in this paper we present three very different use cases in which we replicate a PLC rootkit presented at NDSS 2017, we test Firefox combining Angr and GDB, and we record the execution of an embedded device firmware using PANDA and OpenOCD. All tools and the three use cases will be released as open source to help other researchers to replicate our experiments and perform their own analysis tasks with avatar2.",
"title": ""
},
{
"docid": "1204d1695e39bb7897b6771c445d809e",
"text": "The known disorders of cholesterol biosynthesis have expanded rapidly since the discovery that Smith-Lemli-Opitz syndrome is caused by a deficiency of 7-dehydrocholesterol. Each of the six now recognized sterol disorders-mevalonic aciduria, Smith-Lemli-Opitz syndrome, desmosterolosis, Conradi-Hünermann syndrome, CHILD syndrome, and Greenberg dysplasia-has added to our knowledge of the relationship between cholesterol metabolism and embryogenesis. One of the most important lessons learned from the study of these disorders is that abnormal cholesterol metabolism impairs the function of the hedgehog class of embryonic signaling proteins, which help execute the vertebrate body plan during the earliest weeks of gestation. The study of the enzymes and genes in these several syndromes has also expanded and better delineated an important class of enzymes and proteins with diverse structural functions and metabolic actions that include sterol biosynthesis, nuclear transcriptional signaling, regulation of meiosis, and even behavioral modulation.",
"title": ""
},
{
"docid": "36ae2ac184ea03bde05bf1e69c4aa0f7",
"text": "We present a new annotation scheme for normalizing time expressions, such as three days ago, to computer-readable forms, such as 2016-03-07. The annotation scheme addresses several weaknesses of the existing TimeML standard, allowing the representation of time expressions that align to more than one calendar unit (e.g., the past three summers), that are defined relative to events (e.g., three weeks postoperative), and that are unions or intersections of smaller time expressions (e.g., Tuesdays and Thursdays). It achieves this by modeling time expression interpretation as the semantic composition of temporal operators like UNION, NEXT, and AFTER. We have applied the annotation scheme to 34 documents so far, producing 1104 annotations, and achieving inter-annotator agreement of 0.821.",
"title": ""
},
{
"docid": "6b55b99f66500d60e0b72b2a736c46eb",
"text": "This paper discusses the implementation of GNU radio-based software defined radio (SDR) for designing a frequency modulated continuous wave (FMCW) radar to detect stationary and moving targets. The use of SDR system in which its components are implemented by means of software is to reduce cost and complexity in the design and implementation. Whilst the signal processing of FMCW radar is carried out using Matlab R® with triangular linear frequency modulation (LFM) waveform to obtain the target distance and the target relative speed for stationary and moving target, respectively. From the result, it is shown that the radar is successfully implemented using GNU radio-based SDR with the capability in distance target detection of 14.79km for a moving target away from the radar with the relative speed of 50m/s.",
"title": ""
},
{
"docid": "eda607a60321038e75104bf555856d4f",
"text": "Knee injuries occur commonly in sports, limiting field and practice time and performance level. Although injury etiology relates primarily to sports specific activity, female athletes are at higher risk of knee injury than their male counterparts in jumping and cutting sports. Particular pain syndromes such as anterior knee pain and injuries such as noncontact anterior cruciate ligament (ACL) injuries occur at a higher rate in female than male athletes at a similar level of competition. Anterior cruciate ligament injuries can be season or career ending, at times requiring costly surgery and rehabilitation. Beyond real-time pain and functional limitations, previous injury is implicated in knee osteoarthritis occurring later in life. Although anatomical parameters differ between and within the sexes, it is not likely this is the single reason for knee injury rate disparities. Clinicians and researchers have also studied the role of sex hormones and dynamic neuromuscular imbalances in female compared with male athletes in hopes of finding the causes for the increased rate of ACL injury. Understanding gender differences in knee injuries will lead to more effective prevention strategies for women athletes who currently suffer thousands of ACL tears annually. To meet the goal in sports medicine of safely returning an athlete to her sport, our evaluation, assessment, treatments and prevention strategies must reflect not only our knowledge of the structure and innervations of the knee but neuromuscular control in multiple planes and with multiple forces while at play.",
"title": ""
},
{
"docid": "e11b6fd2dcec42e7b726363a869a0d95",
"text": "Future frame prediction in videos is a promising avenue for unsupervised video representation learning. Video frames are naturally generated by the inherent pixel flows from preceding frames based on the appearance and motion dynamics in the video. However, existing methods focus on directly hallucinating pixel values, resulting in blurry predictions. In this paper, we develop a dual motion Generative Adversarial Net (GAN) architecture, which learns to explicitly enforce future-frame predictions to be consistent with the pixel-wise flows in the video through a duallearning mechanism. The primal future-frame prediction and dual future-flow prediction form a closed loop, generating informative feedback signals to each other for better video prediction. To make both synthesized future frames and flows indistinguishable from reality, a dual adversarial training method is proposed to ensure that the futureflow prediction is able to help infer realistic future-frames, while the future-frame prediction in turn leads to realistic optical flows. Our dual motion GAN also handles natural motion uncertainty in different pixel locations with a new probabilistic motion encoder, which is based on variational autoencoders. Extensive experiments demonstrate that the proposed dual motion GAN significantly outperforms stateof-the-art approaches on synthesizing new video frames and predicting future flows. Our model generalizes well across diverse visual scenes and shows superiority in unsupervised video representation learning.",
"title": ""
},
{
"docid": "727e4b745037587df8e9789f978e0db4",
"text": "There is a growing number of courses delivered using elearning environments and their online discussions play an important role in collaborative learning of students. Even in courses with a few number of students, there could be thousands of messages generated in a few months within these forums. Manually evaluating the participation of students in such case is a significant challenge, considering the fact that current e-learning environments do not provide much information regarding the structure of interactions between students. There is a recent line of research on applying social network analysis (SNA) techniques to study these interactions.\n Here we propose to exploit SNA techniques, including community mining, in order to discover relevant structures in social networks we generate from student communications but also information networks we produce from the content of the exchanged messages. With visualization of these discovered relevant structures and the automated identification of central and peripheral participants, an instructor is provided with better means to assess participation in the online discussions. We implemented these new ideas in a toolbox, named Meerkat-ED, which automatically discovers relevant network structures, visualizes overall snapshots of interactions between the participants in the discussion forums, and outlines the leader/peripheral students. Moreover, it creates a hierarchical summarization of the discussed topics, which gives the instructor a quick view of what is under discussion. We believe exploiting the mining abilities of this toolbox would facilitate fair evaluation of students' participation in online courses.",
"title": ""
},
{
"docid": "3dbedb4539ac6438e9befbad366d1220",
"text": "The main focus of this paper is to propose integration of dynamic and multiobjective algorithms for graph clustering in dynamic environments under multiple objectives. The primary application is to multiobjective clustering in social networks which change over time. Social networks, typically represented by graphs, contain information about the relations (or interactions) among online materials (or people). A typical social network tends to expand over time, with newly added nodes and edges being incorporated into the existing graph. We reflect these characteristics of social networks based on real-world data, and propose a suitable dynamic multiobjective evolutionary algorithm. Several variants of the algorithm are proposed and compared. Since social networks change continuously, the immigrant schemes effectively used in previous dynamic optimisation give useful ideas for new algorithms. An adaptive integration of multiobjective evolutionary algorithms outperformed other algorithms in dynamic social networks.",
"title": ""
},
{
"docid": "f6ae47c4b53a3d5493405e8c2095d928",
"text": "Bipartite networks are currently regarded as providing amajor insight into the organization ofmany real-world systems, unveiling themechanisms driving the interactions occurring between distinct groups of nodes. One of themost important issues encounteredwhenmodeling bipartite networks is devising away to obtain a (monopartite) projection on the layer of interest, which preserves asmuch as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any twonodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of nodes similarity requires a proper statistical benchmark, herewe consider a set of four nullmodels, definedwithin the exponential randomgraph framework. Our algorithm outputs amatrix of link-specific p-values, fromwhich a validated projection is straightforwardly obtainable, upon running amultiple hypothesis testing procedure. Finally, we test ourmethod on an economic network (i.e. the countries-productsWorld TradeWeb representation) and a social network (i.e.MovieLens, collecting the users’ ratings of a list ofmovies). In both cases non-trivial communities are detected: while projecting theWorld TradeWeb on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on thefilms layer allows clusters ofmovies whose affinity cannot be fully accounted for by genre similarity to be individuated.",
"title": ""
}
] |
scidocsrr
|
8c90c5f21e5414c5cb264f939c19b11a
|
Personality and coping.
|
[
{
"docid": "778db8037f0c50766a3715f9d9df6147",
"text": "The study of stress and coping points to two concepts central to an understanding of the response to trauma: approach and avoidance. This pair of concepts refers to two basic modes of coping with stress. Approach and avoidance are simply metaphors for cognitive and emotional activity that is oriented either toward or away from threat. An approach-avoidance model of coping is presented in the context of contemporary theoretical approaches to coping. The research literature on coping effectiveness, including evidence from our laboratory, is discussed, and speculations are made about the implications for future research. The study of stress and coping has become quite popular in recent years, particularly in regard to traumatic life events. Although the area is broad and the coping process is complex, there is a striking coherence in much of the literature. This coherence is based on two concepts central to an understanding of coping with trauma: approach and avoidance. In its simplest form, this pair of concepts refers to two basic orientations toward stressful information, or two basic modes of coping with stress. Approach and avoidance are shorthand terms for the cognitive and emotional activity that is oriented either toward or away from threat. In this article we will present the case for utilizing the concepts of approach and avoidance to provide a coherent theoretical structure to our understanding of coping with stress. Several different formulations of the approach-avoidance dimension will be reviewed, followed by a brief review of the coping effectiveness literature. Several studies from our laboratory will be used to illustrate the relationship between coping and outcome. Finally, a general approach-avoidance model of coping will be presented, with suggestions for further research to corroborate or extend the theory. The study of coping with stress has been split into two areas: anticipation of future stressful events and recovery from trauma. These areas have been kept remarkably distinct in both theory and research on coping. Although there are clearly important differences between the two cases, we have chosen not to emphasize this distinction. For any given stress, anticipation and recovery are not always clearly separable; dealing with a trauma involves coming to terms with the event itself and with the threat of recurrence in the future. More important, Correspondence concerning this article should be sent to Susan Roth, Department of Psychology, Duke University, Durham, NC 27706. we have identified the same processes as central to coping in both anticipation and recovery periods. A p p r o a c h A v o i d a n c e Formula t ions The approach-avoidance distinction is not new, having historical roots in psychoanalytic theories of defense and working through (e.g., Freud, 1915/1957), and in views of conflict from the behavioral (e.g., Hovland & Sears, 1938; Miller, 1944) and phenomenological (e.g., Lewin, 1951) traditions. In the more recent literature on coping with stress, approach-avoidance distinction is a core idea. One is struck by the extent to which the concepts of approach and avoidance underlie the personality or individual difference variables studied in the anticipatory threat literature, and also the dimensions of coping studied in traumatic stress reaction research. Table l briefly describes 14 of these coping formulations. 
In the anticipatory threat literature, the repression-sensitization distinction is paradigmatic of the approach-avoidance dimension of individual difference: Repression involves an avoidance of anxiety-arousing stimuli and their consequences and is a general orientation away from threat. Sensitization, on the other hand, is the approach toward anxiety-arousing stimuli and their consequences and is an orientation toward threat. Although, as we shall see, there is no clear-cut evidence regarding the effects of individual differences along the approach and avoidance dimensions, the issues for coping and adaptation seem to be the following: Avoidant strategies seem useful in that they may reduce stress and prevent anxiety from becoming crippling. Approach strategies, on the other hand, allow for appropriate action and/or the possibility for noticing and taking advantage of changes in a situation that might make it more controllable. Approach strategies also allow for ventilation of affect. Individual differences along the approach-avoidance dimension have also been a focus of study in the traumatic stress reaction research. For example, Shontz (1975) discussed fragmentation versus containment in response to illness, whereas McGlashan, Levy, and Carpenter (1975) referred to integration and sealing over as two distinct styles of recovery from schizophrenia. To illustrate, fragmentation is a form of denial in which people split themselves off from their illness, resulting in an unstable self-system. Containment is the incorporation of threat into an integrated self-structure, without overwhelming the self. Horowitz's (1976, 1979) formulation of the approach-avoidance dimension in response to stress is the most fully developed and will be discussed in more detail. 
Table 1. Summary of Approach-Avoidance Coping Formulations (columns: coping formulation; avoidance; approach; measurement). (1) Perceptual defense-perceptual vigilance (Bruner & Postman, 1947): avoidance - relative deficit in perceiving threatening stimuli; approach - relative readiness to perceive threatening stimuli. (2) Avoidance-vigilance (Cohen & Lazarus, 1973; Janis, 1958, 1977, 1982): avoidance - procrastination, giving up of personal responsibility, inadequate search of environmental cues, restricting thought about the stressor, and failure to appraise the situation and make contingency plans; approach - alertness; self-responsibility; thorough, active searching; seeking knowledge; careful appraisal and planning. (3) Repression-sensitization (Bell & Byrne, 1978; Byrne, 1964): avoidance - avoidance of anxiety-arousing stimuli and their consequents, selective inattention and forgetting, and low anxiety; approach - orientation toward anxiety-arousing stimuli and their consequents, selective attention and recall, and high anxiety. (4) Repression-sensitization (Gudjonsson, 1981; Houston & Hodges, 1970): avoidance - low subjective distress plus high electrodermal indicators of distress; approach - high subjective distress plus low electrodermal indicators of distress. (5) Nonvigilant-vigilant (Averill & Rosenn, 1972): avoidance - preferences for unsignaled shock, even when avoidance possible; approach - preference for signaled shock, even when no avoidance possible. (6) Selective inattention-selective attention (Kahnemann, 1973): avoidance - inattention to selected (e.g., threatening) elements of the perceptual field; approach - attention to threatening elements of the perceptual field. (7) Inaccurate-accurate expectations (Johnson & Leventhal, 1974): avoidance - not having accurate information about what to expect regarding a threatening situation; approach - having accurate information about what to expect regarding a threatening situation. (8) Reducers-augmenters (Petrie, 1978): avoidance - ignoring warning signals and information about hazards; tolerance for pain; approach - attending to warning signals; intolerance for pain. (9) Blunting-monitoring (Miller, 1980; Miller & Mangan, 1983): avoidance - seeking distraction, relaxing, denying threat, practicing detachment and intellectualization; approach - vigilance, anxiety, and orientation toward threat. (10) Rejection-attention (Mullen & Suls, 1982): avoidance - orientation away from stressor and one's reactions to it; approach - orientation toward stressor. (11) Sealing over-integration (McGlashan, Levy, & Carpenter, 1975): avoidance - in regard to recovery from a schizophrenic episode, a lack of curiosity about the experience, a shifting of responsibility onto others, a negative view of the episode, an isolation of the episode from the rest of the person's life, and a failure to grow from the experience; approach - curiosity about the experience, taking self-responsibility, positive view of episode, incorporation of episode into the person's life. Measurement (partial): tachistoscope word recognition paradigm; interviews or observer ratings",
"title": ""
}
] |
[
{
"docid": "e90e2a651c54b8510efe00eb1d8e7be0",
"text": "The design simulation, fabrication, and measurement of a 2.4-GHz horizontally polarized omnidirectional planar printed antenna for WLAN applications is presented. The antenna adopts the printed Alford-loop-type structure. The three-dimensional (3-D) EM simulator HFSS is used for design simulation. The designed antenna is fabricated on an FR-4 printed-circuit-board substrate. The measured input standing-wave-ratio (SWR) is less than three from 2.40 to 2.483 GHz. As desired, the horizontal-polarization H-plane pattern is quite omnidirectional and the E-plane pattern is also very close to that of an ideal dipole antenna. Also a comparison with the popular printed inverted-F antenna (PIFA) has been conducted, the measured H-plane pattern of the Alford-loop-structure antenna is better than that of the PIFA when the omnidirectional pattern is desired. Further more, the study of the antenna printed on a simulated PCMCIA card and that inserted inside a laptop PC are also conducted. The HFSS model of a laptop PC housing, consisting of the display, the screen, and the metallic box with the keyboard, is constructed. The effect of the laptop PC housing with different angle between the display and keyboard on the antenna is also investigated. It is found that there is about 15 dB attenuation of the gain pattern (horizontal-polarization field) in the opposite direction of the PCMCIA slot on the laptop PC. Hence, the effect of the large ground plane of the PCMCIA card and the attenuation effect of the laptop PC housing should be taken into consideration for the antenna design for WLAN applications. For the proposed antenna, in addition to be used alone for a horizontally polarized antenna, it can be also a part of a diversity antenna",
"title": ""
},
{
"docid": "84625e28d5545123a4bbd3f5a3154b0e",
"text": "Event recognition from still images is of great importance for image understanding. However, compared with event recognition in videos, there are much fewer research works on event recognition in images. This paper addresses the issue of event recognition from images and proposes an effective method with deep neural networks. Specifically, we design a new architecture, called Object-Scene Convolutional Neural Network (OS-CNN). This architecture is decomposed into object net and scene net, which extract useful information for event understanding from the perspective of objects and scene context, respectively. Meanwhile, we investigate different network architectures for OS-CNN design, and adapt the deep (AlexNet) and very-deep (GoogLeNet) networks to the task of event recognition. Furthermore, we find that the deep and very-deep networks are complementary to each other. Finally, based on the proposed OS-CNN and comparative study of different network architectures, we come up with a solution of five-stream CNN for the track of cultural event recognition at the ChaLearn Looking at People (LAP) challenge 2015. Our method obtains the performance of 85.5% and ranks the 1st place in this challenge.",
"title": ""
},
{
"docid": "17342401ad2d85c8ccd908703cb15234",
"text": "We present a deep generative model, named Monge-Ampère flow, which builds on continuous-time gradient flow arising from the Monge-Ampère equation in optimal transport theory. The generative map from the latent space to the data space follows a dynamical system, where a learnable potential function guides a compressible fluid to flow towards the target density distribution. Training of the model amounts to solving an optimal control problem. The Monge-Ampère flow has tractable likelihoods and supports efficient sampling and inference. One can easily impose symmetry constraints in the generative model by designing suitable scalar potential functions. We apply the approach to unsupervised density estimation of the MNIST dataset and variational calculation of the two-dimensional Ising model at the critical point. This approach brings insights and techniques from Monge-Ampère equation, optimal transport, and fluid dynamics into reversible flow-based generative models.",
"title": ""
},
{
"docid": "48f8c9b99afa5e42592cb9106198e803",
"text": "The recent explosion of interest in the bioactivity of the flavonoids of higher plants is due, at least in part, to the potential health benefits of these polyphenolic components of major dietary constituents. This review article discusses the biological properties of the flavonoids and focuses on the relationship between their antioxidant activity, as hydrogen donating free radical scavengers, and their chemical structures. This culminates in a proposed hierarchy of antioxidant activity in the aqueous phase. The cumulative findings concerning structure-antioxidant activity relationships in the lipophilic phase derive from studies on fatty acids, liposomes, and low-density lipoproteins; the factors underlying the influence of the different classes of polyphenols in enhancing their resistance to oxidation are discussed and support the contention that the partition coefficients of the flavonoids as well as their rates of reaction with the relevant radicals define the antioxidant activities in the lipophilic phase.",
"title": ""
},
{
"docid": "a81f2102488e6d9599a5796b1b6eba57",
"text": "A content based image retrieval system (CBIR) is proposed to assist the dermatologist for diagnosis of skin diseases. First, after collecting the various skin disease images and their text information (disease name, symptoms and cure etc), a test database (for query image) and a train database of 460 images approximately (for image matching) are prepared. Second, features are extracted by calculating the descriptive statistics. Third, similarity matching using cosine similarity and Euclidian distance based on the extracted features is discussed. Fourth, for better results first four images are selected during indexing and their related text information is shown in the text file. Last, the results shown are compared according to doctor’s description and according to image content in terms of precision and recall and also in terms of a self developed scoring system. Keyword: Cosine similarity, Euclidian distance, Precision, Recall, Query image. 1. Basic introduction to cbir CBIR differs from classical information retrieval in that image databases are essentially unstructured, since digitized images consist purely of arrays of pixel intensities, with no inherent meaning. One of the key issues with any kind of image processing is the need to extract useful information from the raw data (such as recognizing the presence of particular shapes or textures) before any kind of reasoning about the image’s contents is possible. An example may make this clear. Many police forces now use automatic face recognition systems. Such systems may be used in one of two ways. Firstly, the image in front of the camera may be compared with a single individual’s database record to verify his or her identity. In this case, only two images are matched, a process few observers would call CBIR[15]. Secondly, the entire database may be searched to find the most closely matching images. This is a genuine example of CBIR. 2. Structure of CBIR model Basic modules and their brief discussion of a CBIR modal is described in the following Figure 1.Content based image retrieval system consists of following modules: Feature Extraction: In this module the features of interest are calculated for image database. Fig.1 Modules of CBIR modal Feature extraction of query image: This module calculates the feature of the query image. Query image can be a part of image database or it may not be a part of image database. Similarity measure: This module compares the feature database of the existing images with the query image on basis of the similarity measure of the interest[2]. Image Database Feature database Feature Extraction Results images Query image Indexing Similarity measure Feature extraction of query image ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 4, No.5 , September 2013 ISSN : 2322-5157 www.ACSIJ.org 89 Copyright (c) 2013 Advances in Computer Science: an International Journal. All Rights Reserved. Indexing: This module performs filtering of images based on their content would provide better indexing and return more accurate results. Retrieval and Result: This module will display the matching images to the user based on indexing of similarity measure. Basic Components of the CBIR system are: Image Database: Database which stores images. It can be normal drive storage or database storage. Feature database: The entire extracted feature are stored in database like mat file, excel sheets etc. 3. Scope of CBIR for skin disease images Skin diseases are well known to be a large family. 
The identification of a certain skin disease is a complex and demanding task for dermatologist. A computer aided system can reduce the work load of the dermatologists, especially when the image database is immense. However, most contemporary work on computer aided analysis skin disease focuses on the detection of malignant melanoma. Thus, the features they used are very limited. The goal of our work is to build a retrieval algorithm for the more general diagnosis of various types of skin diseases. It can be very complex to define the features that can best distinguish between classes and yet be consistent within the same class of skin disease. Image and related Text Database is collected from a demonologist’s websites [17, 18]. There are mainly two kinds of methods for the application of a computer assistant. One is text query. A universally accepted and comprehensive dermatological terminology is created, and then example images are located and viewed using dermatological diagnostic concepts using a partial or complete word search. But the use of only descriptive annotation is too coarse and it is easy to make different types of disease fall into same category. The other method is to use visual features derived from color images of the diseased skin. The ability to perform reliable and consistent clinical research in dermatology hinges not only on the ability to accurately describe and codify diagnostic information, but also complex visual data. Visual patterns and images are at the core of dermatology education, research and practice. Visual features are broadly used in melanoma research, skin classification and segmentation. But there is a lack of tools using content-based skin image retrieval. 4. Problem formulation However, with the emergence of massive image databases, the traditional manual and text based search suffers from the following limitations: Manual annotations require too much time and are expensive to implement. As the number of images in a database grows, the difficulty in finding desired images increases. It is not feasible to manually annotate all attributes of the image content for large number of images. Manual annotations fail to deal with the discrepancy of subjective perception. The phrase, “an image says more than a thousand words,” implies a Content-Based Approach to Medical Image Database Retrieval that the textual description is not sufficient for depicting subjective perception. Typically, a medical image usually contains several objects, which convey specific information. Nevertheless, different interpretations for a pathological area can be made by different radiologists. To capture all knowledge, concepts, thoughts, and feelings for the content of any images is almost impossible. 5. Methodology of work 5.1General approach The general approach of image retrieval systems is based on query by image content. Figure 2 illustrate an overview of the image retrieval modal of skin disease images of proposed work. Fig.2 Overview of the Image query based skin disease image retrieval process FIRST FOUR RESULT IMAGES AND CORRESPONDI NG TEXT INFORMATION SKIN DISEASE IMAGE RETRIVAL SYSTEM IMAGE PRE PROCESSING RELATED SKIN DISEASE IMAGES (TRAIN DATABASE) AND TEXT INFO QUERY IMAGE FROM TEST DATABASE FEEDBACK FROM USER TEST DATABASE TEXT DATABASE TRAIN DATABASE ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 4, No.5 , September 2013 ISSN : 2322-5157 www.ACSIJ.org 90 Copyright (c) 2013 Advances in Computer Science: an International Journal. 
All Rights Reserved. 5.2 Database details : Our train database contains total 460 images (approximately) which are divided into twenty eight classes of skin disease, collected from reputed websites of medical images [17,18]. Test database contains images which are selected as query image. In the present work size of train database and test database is same. All the images are in .JPEG format. Images pixel dimension is set 300X300 by preprocessing. The illumination condition was also unknown for each image. Also, the images were collected with various backgrounds. Text database corresponding to each image contains skin disease name, symptoms, cure, and description of the disease. 5.3 Use Of Descriptive Statistics Parameters for Feature Extraction Statistical texture measures are calculated directly from the original image values, like mean, standard deviation, variance, kurtosis and Skewness [13], which do not consider pixel neighborhood relationships. Statistical measure of randomness that can be used to characterize the texture of the input image. Standard deviation is pixel value analysis feature [11]. First order statistics of the gray level allocation for each image matrix I(x, y) were examined through five commonly used metrics, namely, mean, variance, standard deviation, skewness and kurtosis as descriptive measurements of the overall gray level distribution of an image. Descriptive statistics refers to properties of distributions, such as location, dispersion, and shape [15]. 5.3.1 Location Measure: Location statistics describe where the data is located. Mean : For calculating the mean of element of vector x. ( ) = ( )/ if x is a matrix , compute the mean of each column and return them into a row vector[16]. 5.3.2 Dispersion Measures: Dispersion statistics summarize the scatter or spread of the data. Most of these functions describe deviation from a particular location. For instance, variance is a measure of deviation from the mean, and standard deviation is just the square root of the variance. Variance : For calculating the variance of element of vector x. ( ) = 1/(( − 1) _ ( ) − ( )^2) If x is a matrix , compute the variance of each column and return them into a row vector [16]. Standard Deviation: For calculating the Standard Deviation of element of vector x. ( ) = (1/( − 1) _ ( ( ) − ( ))^2) If x is a matrix , compute the Standard Deviation of each column and return them into a row vector[16]. 5.3.3 Shape Measures: For getting some information about the shape of a distribution using shape statistics. Skewness describes the amount of asymmetry. Kurtosis measures the concentration of data around the peak and in the tails versus the concentration in the flanks. Skewness: For calculating the skewness of element of vector x. ( ) = 1/ ( ) ^ (−3) (( − ( ). ^3) If x is a matrix, return the skewness along the first nonsingleton dimension of the matrix [",
"title": ""
},
{
"docid": "a94d7b1d43bea7fabf39883f239d0c52",
"text": "* Texto original publicado na obra Instituições e Desenvolvimento Econômico (Teixeira; Braga, 2007, p. 03-23). ** Economista sul-coreano especialista em assuntos sobre desenvolvimento econômico. Professor da University of Cambridge, onde atualmente trabalha como redator da Revista Political Economy of Development. Autor da obra Kicking Away the Ladder: Development Strategy in Historical Perspective (2002a). Também é consultor do Banco Mundial, do Banco de Desenvolvimento Asiático e do Banco Europeu de Desenvolvimento. É conhecido por ser uma das grandes influências acadêmicas do economista Rafael Corrêa, atual Presidente do Equador (2007). *** Respectivamente, mestranda pelo curso de Ciências Políticas da Universidade Federal do Rio Grande do Sul, mestranda em Sociologia pela Universidade de São Paulo, mestrando em Sociologia pela Universidade de São Paulo e mestre em Sociologia pela Universidade de São Paulo. Understanding the relationship between institutions and economic development – Some key theoretical issues Compreendendo a relação entre instituições e desenvolvimento econômico – Alguns assuntos teóricoschave*",
"title": ""
},
{
"docid": "41401f698e4776c7622393c5a10e145f",
"text": "Web search engines can greatly benefit from knowledge about a ttributes of entities present in search queries. In this paper, we introduce light ly-supervised methods for extracting entity attributes from natural language tex t. Using these methods, we are able to extract large numbers of attributes of differe nt entities at fairly high precision from a large natural language corpus. We compare o ur methods against a previously proposed pattern-based relation extractor, s h wing that the new methods give considerable improvements over that baseline. We a lso demonstrate that query expansion using extracted attributes improves retri eval performance on underspecified information-seeking queries. 1 Attributes in Web Search Web search engines receive numerous queries requesting inf ormation, often focused on a specific entity, such as a person, place or organization. These queri es are sometimes general requests, such as“bio of George Bush,”or specific requests, such as “new york mayor.” Accurately identifying the entity (new york) or related attributes ( mayor) can improve search results in several ways [1]. For example, knowledge of attributes and entities can identify a query as being a factual request [1, 2]. Query expansion using known attributes of the entity can als o improve results [3]. Additionally, an engine could suggest alternative queries based on attrib utes. If a user searches for just “Craig Ferguson” and “shows” is a known attribute of the entity ”Craig Ferguson”, then an alternative query suggestion could be “Craig Ferguson shows”which may guide the user to more informative results. The widely explored technique of pseudo relevance feedback can also benefit from a known list of entities and attributes [4]. Some view entity and att ribute extraction as a primary building block for the automatic creation of large scale knowledge ba ses aimed at addressing these issues [1]. The first step towards improving search results with attribu es is to create lists of entities and attributes. Towards that end, we propose new algorithms that, beginning with a small seed set of entities and attributes, learn to extract new entities and a ttributes from a large corpus of text. We adopt a bootstrapping approach, where the inputs for our lea rning algorithms are a large unlabeled corpus and the small seed set containing an entity type of int eres , such as seed pairs automatically extracted from query logs [1]. The seed pairs are matched aga inst the corpus to create training instances for the learning algorithms. The algorithms expl oit a wide range of instance features to alleviate the effects of noise and sparseness. The algorith ms produce a large list of entities and associated attributes, which can be directly applied towar ds improving web search. This paper proceeds as follows. We begin with some backgroun d n attribute extraction and web search applications. We then outline our extraction algori thms. Some examples and evaluations of extracted attributes and entities follow.",
"title": ""
},
{
"docid": "1f6bf9c06b7ee774bc08848293b5c94a",
"text": "The success of a virtual learning environment (VLE) depends to a considerable extent on student acceptance and use of such an e-learning system. After critically assessing models of technology adoption, including the Technology Acceptance Model (TAM), TAM2, and the Unified Theory of Acceptance and Usage of Technology (UTAUT), we build a conceptual model to explain the differences between individual students in the level of acceptance and use of a VLE. This model extends TAM2 and includes subjective norm, personal innovativeness in the domain of information technology, and computer anxiety. Data were collected from 45 Chinese participants in an Executive MBA program. After performing satisfactory reliability and validity checks, the structural model was tested with the use of PLS. Results indicate that perceived usefulness has a direct effect on VLE use. Perceived ease of use and subjective norm have only indirect effects via perceived usefulness. Both personal innovativeness and computer anxiety have direct effects on perceived ease of use only. Implications are that program managers in education should not only concern themselves with basic system design but also explicitly address individual differences between VLE users. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2967df08ad0b9987ce2d6cb6006d3e69",
"text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.",
"title": ""
},
{
"docid": "ee38062c7c479cfc9d8e9fc0982a9ae3",
"text": "Integrating data from heterogeneous sources is often modeled as merging graphs. Given two ormore “compatible”, but not-isomorphic graphs, the first step is to identify a graph alignment, where a potentially partial mapping of vertices between two graphs is computed. A significant portion of the literature on this problem only takes the global structure of the input graphs into account. Only more recent ones additionally use vertex and edge attributes to achieve a more accurate alignment. However, these methods are not designed to scale to map large graphs arising in many modern applications. We propose a new iterative graph aligner, gsaNA, that uses the global structure of the graphs to significantly reduce the problem size and align large graphs with a minimal loss of information. Concretely, we show that our proposed technique is highly flexible, can be used to achieve higher recall, and it is orders of magnitudes faster than the current state of the art techniques. ACM Reference format: Abdurrahman Yaşar and Ümit V. Çatalyürek. 2018. An Iterative Global Structure-Assisted Labeled Network Aligner. In Proceedings of Special Interest Group on Knowledge Discovery and Data Mining, London, England, August 18 (SIGKDD’18), 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn",
"title": ""
},
{
"docid": "0c45c5ee2433578fbc29d29820042abe",
"text": "When Andrew John Wiles was 10 years old, he read Eric Temple Bell’s The Last Problem and was so impressed by it that he decided that he would be the first person to prove Fermat’s Last Theorem. This theorem states that there are no nonzero integers a, b, c, n with n > 2 such that an + bn = cn. This object of this paper is to prove that all semistable elliptic curves over the set of rational numbers are modular. Fermat’s Last Theorem follows as a corollary by virtue of work by Frey, Serre and Ribet.",
"title": ""
},
{
"docid": "2c3e6373feb4352a68ec6fd109df66e0",
"text": "A broadband transition design between broadside coupled stripline (BCS) and conductor-backed coplanar waveguide (CBCPW) is proposed and studied. The E-field of CBCPW is designed to be gradually changed to that of BCS via a simple linear tapered structure. Two back-to-back transitions are simulated, fabricated and measured. It is reported that maximum insertion loss of 2.3 dB, return loss of higher than 10 dB and group delay flatness of about 0.14 ns are obtained from 50 MHz to 20 GHz.",
"title": ""
},
{
"docid": "42386bee406c51e568667abec4bc6a5e",
"text": "Digital projection technology has improved significantly in recent years. But, the relationship of cost with respect to available resolution in projectors is still super-linear. In this paper, we present a method that uses projector light modulator panels (e.g. LCD or DMD panels) of resolution n X n to create a perceptually close match to a target higher resolution cn X cn image, where c is a small integer greater than 1. This is achieved by enhancing the resolution using smaller pixels at specific regions of interest like edges.\n A target high resolution image (cn X cn) is first decomposed into (a) a high resolution (cn X cn) but sparse edge image, and (b) a complementary lower resolution (n X n) non-edge image. These images are then projected in a time sequential manner at a high frame rate to create an edge-enhanced image -- an image where the pixel density is not uniform but changes spatially. In 3D ready projectors with readily available refresh rate of 120Hz, such a temporal multiplexing is imperceptible to the user and the edge-enhanced image is perceptually almost identical to the target high resolution image.\n To create the higher resolution edge image, we introduce the concept of optical pixel sharing. This reduces the projected pixel size by a factor of 1/c2 while increasing the pixel density by c2 at the edges enabling true higher resolution edges. Due to the sparsity of the edge pixels in an image we are able to choose a sufficiently large subset of these to be displayed at the higher resolution using perceptual parameters. We present a statistical analysis quantifying the expected number of pixels that will be reproduced at the higher resolution and verify it for different types of images.",
"title": ""
},
{
"docid": "a1581dfaaa165f93f4ef9cd8e31d6d6b",
"text": "With increasing number of web services, providing an end-to-end Quality of Service (QoS) guarantee in responding to user queries is becoming an important concern. Multiple QoS parameters (e.g., response time, latency, throughput, reliability, availability, success rate) are associated with a service, thereby, service composition with a large number of candidate services is a challenging multi-objective optimization problem. In this paper, we study the multi-constrained multi-objective QoS aware web service composition problem and propose three different approaches to solve the same, one optimal, based on Pareto front construction and two other based on heuristically traversing the solution space. We compare the performance of the heuristics against the optimal, and show the effectiveness of our proposals over other classical approaches for the same problem setting, with experiments on WSC-2009 and ICEBE-2005 datasets.",
"title": ""
},
{
"docid": "8a41f5863bc20511bd8e9071ce6af6dd",
"text": "Research has seen considerable achievements concerning translation of natural language patterns into formal queries for Question Answering (QA) based on Knowledge Graphs (KG). One of the main challenges in this research area is about how to identify which property within a Knowledge Graph matches the predicate found in a Natural Language (NL) relation. Current approaches for formal query generation attempt to resolve this problem mainly by first retrieving the named entity from the KG together with a list of its predicates, then filtering out one from all the predicates of the entity. We attempt an approach to directly match an NL predicate to KG properties that can be employed within QA pipelines. In this paper, we specify a systematic approach as well as providing a tool that can be employed to solve this task. Our approach models KB relations with their underlying parts of speech, we then enhance this with extra attributes obtained from Wordnet and Dependency parsing characteristics. From a question, we model a similar representation of query relations. We then define distance measurements between the query relation and the properties representations from the KG to identify which property is referred to by the relation within the query. We report substantive recall values and considerable precision from our evaluation.",
"title": ""
},
{
"docid": "88033862d9fac08702977f1232c91f3a",
"text": "Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.",
"title": ""
},
{
"docid": "3431f92cd0849f4782858834feebec03",
"text": "DeepFashion is a widely used clothing dataset with 50 categories and more than overall 200k images where each image is annotated with fine-grained attributes. This dataset is often used for clothes recognition and although it provides comprehensive annotations, the attributes distribution is unbalanced and repetitive specially for training fine-grained attribute recognition models. In this work, we tailored DeepFashion for fine-grained attribute recognition task by focusing on each category separately. After selecting categories with sufficient number of images for training, we remove very scarce attributes and merge the duplicate ones in each category, then we clean the dataset based on the new list of attributes. We use a bilinear convolutional neural network with pairwise ranking loss function for multi-label fine-grained attribute recognition and show that the new annotations improve the results for such a task. The detailed annotations for each of the selected categories are provided for public use.",
"title": ""
},
{
"docid": "e5b0200c7fffd4ff3934969ff67de5b4",
"text": "We present a proposal-\"The Sampling Hypothesis\"-suggesting that the variability in young children's responses may be part of a rational strategy for inductive inference. In particular, we argue that young learners may be randomly sampling from the set of possible hypotheses that explain the observed data, producing different hypotheses with frequencies that reflect their subjective probability. We test the Sampling Hypothesis with four experiments on 4- and 5-year-olds. In these experiments, children saw a distribution of colored blocks and an event involving one of these blocks. In the first experiment, one block fell randomly and invisibly into a machine, and children made multiple guesses about the color of the block, either immediately or after a 1-week delay. The distribution of guesses was consistent with the distribution of block colors, and the dependence between guesses decreased as a function of the time between guesses. In Experiments 2 and 3 the probability of different colors was systematically varied by condition. Preschoolers' guesses tracked the probabilities of the colors, as should be the case if they are sampling from the set of possible explanatory hypotheses. Experiment 4 used a more complicated two-step process to randomly select a block and found that the distribution of children's guesses matched the probabilities resulting from this process rather than the overall frequency of different colors. This suggests that the children's probability matching reflects sophisticated probabilistic inferences and is not merely the result of a naïve tabulation of frequencies. Taken together the four experiments provide support for the Sampling Hypothesis, and the idea that there may be a rational explanation for the variability of children's responses in domains like causal inference.",
"title": ""
},
{
"docid": "f95f77f81f5a4838f9f3fa2538e9d132",
"text": "Learning analytics tools should be useful, i.e., they should be usable and provide the functionality for reaching the goals attributed to learning analytics. This paper seeks to unite learning analytics and action research. Based on this, we investigate how the multitude of questions that arise during technology-enhanced teaching and learning systematically can be mapped to sets of indicators. We examine, which questions are not yet supported and propose concepts of indicators that have a high potential of positively influencing teachers' didactical considerations. Our investigation shows that many questions of teachers cannot be answered with currently available research tools. Furthermore, few learning analytics studies report about measuring impact. We describe which effects learning analytics should have on teaching and discuss how this could be evaluated.",
"title": ""
}
] |
scidocsrr
|
d8325cba368e1561feb6dc00c4f1233a
|
POS-originated transaction traces as a source of contextual information for risk management systems in EFT transactions
|
[
{
"docid": "9c0cd7c0641a48dcede829a6ac3ed622",
"text": "Association rules are considered to be the best studied models for data mining. In this article, we propose their use in order to extract knowledge so that normal behavior patterns may be obtained in unlawful transactions from transactional credit card databases in order to detect and prevent fraud. The proposed methodology has been applied on data about credit card fraud in some of the most important retail companies in Chile. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e404699c5b86d3a3a47a1f3d745eecc1",
"text": "We apply Artificial Immune Systems(AIS) [4] for credit card fraud detection and we compare it to other methods such as Neural Nets(NN) [8] and Bayesian Nets(BN) [2], Naive Bayes(NB) and Decision Trees(DT) [13]. Exhaustive search and Genetic Algorithm(GA) [7] are used to select optimized parameters sets, which minimizes the fraud cost for a credit card database provided by a Brazilian card issuer. The specifics of the fraud database are taken into account, such as skewness of data and different costs associated with false positives and negatives. Tests are done with holdout sample sets, and all executions are run using Weka [18], a publicly available software. Our results are consistent with the early result of Maes in [12] which concludes that BN is better than NN, and this occurred in all our evaluated tests. Although NN is widely used in the market today, the evaluated implementation of NN is among the worse methods for our database. In spite of a poor behavior if used with the default parameters set, AIS has the best performance when parameters optimized by GA are used.",
"title": ""
},
{
"docid": "55b88b38dbde4d57fddb18d487099fc6",
"text": "The evaluation of algorithms and techniques to implement intrusion detection systems heavily rely on the existence of well designed datasets. In the last years, a lot of efforts have been done toward building these datasets. Yet, there is still room to improve. In this paper, a comprehensive review of existing datasets is first done, making emphasis on their main shortcomings. Then, we present a new dataset that is built with real traffic and up-to-date attacks. The main advantage of this dataset over previous ones is its usefulness for evaluating IDSs that consider long-term evolution and traffic periodicity. Models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it. We discuss all the requirements for a modern IDS evaluation dataset and analyze how the one presented here meets the different needs. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "01490975c291a64b40484f6d37ea1c94",
"text": "Context-aware systems offer entirely new opportunities for application developers and for end users by gathering context data and adapting systems’ behavior accordingly. Especially in combination with mobile devices such mechanisms are of great value and claim to increase usability tremendously. In this paper, we present a layered architectural framework for context-aware systems. Based on our suggested framework for analysis, we introduce various existing context-aware systems focusing on context-aware middleware and frameworks, which ease the development of context-aware applications. We discuss various approaches and analyze important aspects in context-aware computing on the basis of the presented systems.",
"title": ""
},
{
"docid": "66248db37a0dcf8cb17c075108b513b4",
"text": "Since past few years there is tremendous advancement in electronic commerce technology, and the use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In this paper we present the necessary theory to detect fraud in credit card transaction processing using a Hidden Markov Model (HMM). An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected by using an enhancement to it(Hybrid model).In further sections we compare different methods for fraud detection and prove that why HMM is more preferred method than other methods.",
"title": ""
}
] |
[
{
"docid": "5d28756f1dfd9843bc9efc5bcd50f8ca",
"text": "Correctly estimating fingerprint ridge orientation is an important task in fingerprint image processing. A successful orientation estimation algorithm can drastically improve the performance of tasks such as fingerprint enhancement, classification, and singular points extraction. Gradient-based orientation estimation algorithms are widely adopted in academic literature, but they cannot guarantee the correctness of ridge orientations. Even worse, they assign orientations to blocks with singular points. A novel and reliable orientation estimation algorithm is proposed in this paper. This algorithm runs in two phases. The first phase assigns reliable orientations to blocks with parallel structures and marks other blocks with noise, singular points, and minutiae as uncertain. Since most uncertain blocks marked in the first phase do have unique ridge orientations, the second phase of our algorithm restores the orientations of these uncertain blocks from their neighbor blocks orientations. Different from other orientation estimation algorithms, our algorithm leaves the blocks containing singular points and assigns reliable orientations to the other blocks. Detailed examples are given in this paper to show how our algorithm works. We use NIST-4 fingerprint database in our experiment to verify the superiority of our algorithm.",
"title": ""
},
{
"docid": "653fee86af651e13e0d26fed35ef83e4",
"text": "Small ducted fan autonomous vehicles have potential for several applications, especially for missions in urban environments. This paper discusses the use of dynamic inversion with neural network adaptation to provide an adaptive controller for the GTSpy, a small ducted fan autonomous vehicle based on the Micro Autonomous Systems’ Helispy. This approach allows utilization of the entire low speed flight envelope with a relatively poorly understood vehicle. A simulator model is constructed from a force and moment analysis of the vehicle, allowing for a validation of the controller in preparation for flight testing. Data from flight testing of the system is provided.",
"title": ""
},
{
"docid": "8cd8e10e371085a48acc52dc594847bd",
"text": "We analyze in this paper a number of data sets proposed over the last decade or so for the task of paraphrase identification. The goal of the analysis is to identify the advantages as well as shortcomings of the previously proposed data sets. Based on the analysis, we then make recommendations about how to improve the process of creating and using such data sets for evaluating in the future approaches to the task of paraphrase identification or the more general task of semantic similarity. The recommendations are meant to improve our understanding of what a paraphrase is, offer a more fair ground for comparing approaches, increase the diversity of actual linguistic phenomena that future data sets will cover, and offer ways to improve our understanding of the contributions of various modules or approaches proposed for solving the task of paraphrase identification or similar tasks. We also developed a data collection tool, called Data Collector, that proactively targets the collection of paraphrase instances covering linguistic phenomena important to paraphrasing.",
"title": ""
},
{
"docid": "6a3c5a88df65588435f5099166fae043",
"text": "Due to short - but frequent - sessions of smartphone usage, the fast and easy usability of authentication mechanisms in this special environment has a big impact on user acceptance. In this work we propose a user-friendly alternative to common authentication methods (like PINs and patterns). The advantages of the proposed method are its security, fastness, and easy usage, requiring minimal user interaction compared to other authentication techniques currently used on smartphones. The mechanism described uses the presence of a Bluetooth-connected hardware-token to authenticate the user and can easily be implemented on current smartphones. It is based on an authentication protocol which meets the requirements on energy efficiency and limited resources by optimizing the communication effort. A prototype was implemented on an Android smartphone and an MSP430 based MCU. The token allows fast authentication without the need for additional user action. The entire authentication process can be completed in less than one second, the developed software prototype requires no soft- or hardware modifications (like rooting) of the Android phone.",
"title": ""
},
{
"docid": "de1ec3df1fa76e5a419ac8506cd63286",
"text": "It is hard to estimate optical flow given a realworld video sequence with camera shake and other motion blur. In this paper, we first investigate the blur parameterization for video footage using near linear motion elements. We then combine a commercial 3D pose sensor with an RGB camera, in order to film video footage of interest together with the camera motion. We illustrates that this additional camera motion/trajectory channel can be embedded into a hybrid framework by interleaving an iterative blind deconvolution and warping based optical flow scheme. Our method yields improved accuracy within three other state-of-the-art baselines given our proposed ground truth blurry sequences; and several other realworld sequences filmed by our imaging system.",
"title": ""
},
{
"docid": "4ebe344a72053aef8ed19e3da139bb10",
"text": "Construction industry faces a lot of inherent uncertainties and issues. As this industry is plagued by risk, risk management is an important part of the decision-making process of these companies. Risk assessment is the critical procedure of risk management. Despite many scholars and practitioners recognizing the risk assessment models in projects, insufficient attention has been paid by researchers to select the suitable risk assessment model. In general, many factors affect this problem which adheres to uncertain and imprecise data and usually several people are involved in the selection process. Using the fuzzy TOPSIS method, this study provides a rational and systematic process for developing the best model under each of the selection criteria. Decision criteria are obtained from the nominal group technique (NGT). The proposed method can discriminate successfully and clearly among risk assessment methods. The proposed approach is demonstrated using a real case involving an Iranian construction corporation. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
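A rough sketch of the crisp TOPSIS core underlying the fuzzy variant in the preceding passage; the decision matrix, weights, and criteria directions are invented for illustration.

```python
# Minimal crisp TOPSIS core; the paper's fuzzy variant adds fuzzy numbers on top of
# these same steps.  The data and weights below are made up for illustration only.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector-normalise each criterion
    v = m * weights                                    # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus  = np.linalg.norm(v - ideal, axis=1)        # distance to the ideal solution
    d_minus = np.linalg.norm(v - anti,  axis=1)        # distance to the anti-ideal
    return d_minus / (d_plus + d_minus)                # closeness coefficient, higher is better

scores = topsis(
    np.array([[7.0, 0.3, 5.0], [9.0, 0.5, 6.0], [6.0, 0.2, 8.0]]),  # three candidate models
    weights=np.array([0.5, 0.2, 0.3]),
    benefit=np.array([True, False, True]),
)
print(scores.argsort()[::-1])   # ranking of the candidate risk-assessment models
```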
{
"docid": "eb0925da7da4bfaa67c9bba3eb95f76c",
"text": "The determinant of performance in scale-up graph processing on a single system is the speed at which the graph can be fetched from storage: either from disk into memory or from memory into CPU-cache. Algorithms that follow edges perform random accesses to the storage medium for the graph and this can often be the determinant of performance, regardless of the algorithmic complexity or runtime efficiency of the actual algorithm in use. A storage-centric viewpoint would suggest that the solution to this problem lies in recognizing that graphs represent a unique workload and therefore should be treated as such by adopting novel ways to access graph structured data. We approach this problem from two different aspects and this paper details two different efforts in this direction. One approach is specific to graphs stored on SSDs and accelerates random access using a novel prefetcher called RASP. The second approach takes a fresh look at how graphs are accessed and suggests that trading off the low cost of random access for the approach of sequentially streaming a large set of (potentially unrelated) edges can be a winning proposition under certain circumstances: leading to a system for graphs stored on any medium (main-memory, SSD or magnetic disk) called X-stream. RASP and X-stream therefore take - diametrically opposite - storage centric viewpoints of the graph processing problem. After contrasting the approaches and demonstrating the benefit of each, this paper ends with a description of planned future development of an online algorithm that selects between the two approaches, possibly providing the best of both worlds.",
"title": ""
},
{
"docid": "31873424960073962d3d8eba151f6a4b",
"text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.",
"title": ""
},
{
"docid": "38ae190a4a81a33dd818403723505f29",
"text": "We propose a novel deep learning model for joint document-level entity disambiguation, which leverages learned neural representations. Key components are entity embeddings, a neural attention mechanism over local context windows, and a differentiable joint inference stage for disambiguation. Our approach thereby combines benefits of deep learning with more traditional approaches such as graphical models and probabilistic mention-entity maps. Extensive experiments show that we are able to obtain competitive or stateof-the-art accuracy at moderate computational costs.",
"title": ""
},
{
"docid": "62df99e6378e00809796bd60205f8197",
"text": "A number of simple performance measurements on network, CPU and disk speed were done on a dual ARM Cortex-A15 machine running Linux inside a KVM virtual machine that uses virtio disk and networking. Unexpected behaviour was observed in the CPU and memory intensive benchmarks, and in the networking benchmarks. The average overhead of running inside KVM is between zero and 30 percent when the host is lightly loaded (running only the system software and the necessary qemu-system-arm virtualization code), but the relative overhead increases when both host and VM is busy. We conjecture that this is related to the scheduling inside the host Linux.",
"title": ""
},
{
"docid": "23ac5c4adf61fad813869882c4d2e7b6",
"text": "Most network simulators do not support security features. In this paper, we introduce a new security module for OMNET++ that implements the IEEE 802.15.4 security suite. This module, developed using the C++ language, can simulate all devices and sensors that implement the IEEE 802.15.4 standard. The OMNET++ security module is also evaluated in terms of quality of services in the presence of physical hop attacks. Results show that our module is reliable and can safely be used by researchers.",
"title": ""
},
{
"docid": "d99d4bdf1af85c14653c7bbde10eca7b",
"text": "Plants endure a variety of abiotic and biotic stresses, all of which cause major limitations to production. Among abiotic stressors, heavy metal contamination represents a global environmental problem endangering humans, animals, and plants. Exposure to heavy metals has been documented to induce changes in the expression of plant proteins. Proteins are macromolecules directly responsible for most biological processes in a living cell, while protein function is directly influenced by posttranslational modifications, which cannot be identified through genome studies. Therefore, it is necessary to conduct proteomic studies, which enable the elucidation of the presence and role of proteins under specific environmental conditions. This review attempts to present current knowledge on proteomic techniques developed with an aim to detect the response of plant to heavy metal stress. Significant contributions to a better understanding of the complex mechanisms of plant acclimation to metal stress are also discussed.",
"title": ""
},
{
"docid": "d4e4759c183c61acbf09bff91cc75ee5",
"text": "A wide range of defenses have been proposed to harden neural networks against adversarial attacks. However, a pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable? This paper analyzes adversarial examples from a theoretical perspective, and identifies fundamental bounds on the susceptibility of a classifier to adversarial attacks. We show that, for certain classes of problems, adversarial examples are inescapable. Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier’s robustness against adversarial examples.",
"title": ""
},
{
"docid": "715d244ce35578cd24ab28268415469b",
"text": "In the age of information exploding, multi-document summarization is attracting particular attention for the ability to help people get the main ideas in a short time. Traditional extractive methods simply treat the document set as a group of sentences while ignoring the global semantics of the documents. Meanwhile, neural document model is effective on representing the semantic content of documents in low-dimensional vectors. In this paper, we propose a document-level reconstruction framework named DocRebuild, which reconstructs the documents with summary sentences through a neural document model and selects summary sentences to minimize the reconstruction error. We also apply two strategies, sentence filtering and beamsearch, to improve the performance of our method. Experimental results on the benchmark datasets DUC 2006 and DUC 2007 show that DocRebuild is effective and outperforms the state-of-the-art unsupervised algorithms.",
"title": ""
},
{
"docid": "520faa53674eb384e8e892afc84c7ef4",
"text": "Cyber-Physical Systems (CPS), which integrate controls, computing and physical processes are critical infrastructures of any country. They are becoming more vulnerable to cyber attacks due to an increase in computing and network facilities. The increase of monitoring network protocols increases the chances of being attacked. Once an attacker is able to cross the network intrusion detection mechanisms, he can affect the physical operations of the system which may lead to physical damages of components and/or a disaster. Some researchers used constraints of physical processes known as invariants to monitor the system in order to detect cyber attacks or failures. However, invariants generation is lacking in automation. This paper presents a novel method to identify invariants automatically using association rules mining. Through this technique, we show that it is possible to generate a number of invariants that are sometimes hidden from the design layout. Our preliminary study on a secure water treatment plant suggests that this approach is promising.",
"title": ""
},
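A toy sketch of mining candidate invariants as high-support, high-confidence association rules over discretised plant states, in the spirit of the preceding passage; the state names and thresholds are made up.

```python
# Toy sketch: candidate invariants as association rules over discretised sensor/actuator
# states.  The state names, support and confidence thresholds are invented examples.
from itertools import combinations

observations = [
    {"pump_on", "valve_open", "level_rising"},
    {"pump_on", "valve_open", "level_rising"},
    {"pump_off", "valve_closed", "level_falling"},
    {"pump_on", "valve_open", "level_rising"},
]

def mine_rules(obs, min_support=0.5, min_confidence=1.0):
    n = len(obs)
    support = {}
    for size in (1, 2):                                    # itemsets of size 1 and 2 only
        for row in obs:
            for items in combinations(sorted(row), size):
                support[items] = support.get(items, 0) + 1
    rules = []
    for items, cnt in support.items():
        if len(items) == 2 and cnt / n >= min_support:
            a, b = items
            for lhs, rhs in ((a, b), (b, a)):
                conf = cnt / support[(lhs,)]
                if conf >= min_confidence:                 # e.g. pump_on => valve_open
                    rules.append((lhs, rhs, cnt / n, conf))
    return rules

for lhs, rhs, sup, conf in mine_rules(observations):
    print(f"invariant candidate: {lhs} => {rhs} (support={sup:.2f}, confidence={conf:.2f})")
```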
{
"docid": "4e5c9901da9ee977d995dd4fd6b9b6bd",
"text": "kmlonolgpbqJrtsHu qNvwlyxzl{vw|~}ololyp | xolyxoqNv
J lgxgOnyc}g pAqNvwl lgrc p|HqJbxz|r rc|pb4|HYl xzHnzl}o}gpb |p'w|rmlypnoHpb0rpb }zqJOn pyxg |HqJOp c}&olypb%nov4|rrclgpbYlo%ys{|Xq|~qlo noxX}ozz|~}lz rlo|xgp4pb0|~} |3 loqNvwH J xzOpb0| p|HqJbxz|rr|pbw|~lmxzHnolo}o}gpb;}gsH}oqly ¡cqOv rpb }zqJOnm¢~p TrloHYly¤£;r¥qOv4XHv&noxX}ozz|~}lz |YxzH|Ynvwl}]vw|~l zlolyp¦}4nonolo}o}gbrp2 |p4s o lyxzlypbq |xzlo|~}^]p|~q§bxz|r4r|pbw|~lmxzHnolo}o}gpbHu ̈cq©c} Joqhlyp qNvwl]no|~}yl^qNvw|~qaqNvwl}llqOv4~} no|o4qJbxzl qNvwl&rtpbbc}oq§Nn pgHxg |HqJOp#qNvwlys%|xol Xlgrrpb«pxzlonoqJrts¦p r|xJYl2w|X¬g4l&q|Xgrclo}2J }oqh|HqJc}o qJOn};®v }&no|p |~¢l¦cq3 ̄=nybr°q]qh%|p|rsH±ylu bpXlgx}zqh|p|p%]xzl qNvwl«|XgrcqJsLJ&qOv4lo}l |Yxo|Xnov4lo}q HYlyr pYlyxgrtspw0rtpw~bc}oqJOn;zlvw|Nxg 2gp¦qNv c} 4|o4lyxou 3l rr Yl}ngxgNzl;| }g rlxgbrlzo|H}lo |oYxzH|Ynv q |Xq|~qlo rlo|xgp4pb0 rpbbc}oq§On^¢p TrcloHYlgT®v } |oYxzH|Ynv vw|~} ololgp}ovw ¡p2xL| ́p4bolyxLJ&q|~}o¢} qhno|4qJ xol¦pgxg|~q§Np p |nyrlo|xolgx2|p# xzl«xzlonq |~}ov Op cqNvwXq]|%noxo c}l p«wlyxxg |pnoly3μLl¶xzl}lgpwq¶|«Ylq|rlo«no|H}l }oqJ4s%J qOvbc} rclz|xgp4pw0lqNvwHL|YrtOlo qh4|xoq]J;}J4lolznv2qh|HHpb",
"title": ""
},
{
"docid": "a76b0892d32af28833819860ea8bd9ff",
"text": "Understanding how to group a set of binary files into the piece of software they belong to is highly desirable for software profiling, malware detection, or enterprise audits, among many other applications. Unfortunately, it is also extremely challenging: there is absolutely no uniformity in the ways different applications rely on different files, in how binaries are signed, or in the versioning schemes used across different pieces of software. In this paper, we show that, by combining information gleaned from a large number of endpoints (millions of computers), we can accomplish large-scale application identification automatically and reliably. Our approach relies on collecting metadata on billions of files every day, summarizing it into much smaller \"sketches\", and performing approximate k-nearest neighbor clustering on non-metric space representations derived from these sketches. We design and implement our proposed system using Apache Spark, show that it can process billions of files in a matter of hours, and thus could be used for daily processing. We further show our system manages to successfully identify which files belong to which application with very high precision, and adequate recall.",
"title": ""
},
{
"docid": "4ba0e0e1a00bb95d464b6bb38e2c1176",
"text": "An important application for use with multimedia databases is a browsing aid, which allows a user to quickly and efficiently preview selections from either a database or from the results of a database query. Methods for facilitating browsing, though, are necessarily media dependent. We present one such method that produces short, representative samples (or “audio thumbnails”) of selections of popular music. This method attempts to identify the chorus or refrain of a song by identifying repeated sections of the audio waveform. A reduced spectral representation of the selection based on a chroma transformation of the spectrum is used to find repeating patterns. This representation encodes harmonic relationships in a signal and thus is ideal for popular music, which is often characterized by prominent harmonic progressions. The method is evaluated over a sizable database of popular music and found to perform well, with most of the errors resulting from songs that do not meet our structural assumptions.",
"title": ""
},
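A simplified sketch of the thumbnailing idea in the preceding passage: compute a chroma representation and look for strongly similar non-overlapping windows. It assumes the librosa package; the file name, window length, and step sizes are placeholders, and the paper's actual matching procedure is more elaborate.

```python
# Simplified sketch: compare chroma vectors of fixed-length windows and keep the
# segment whose best off-diagonal match is strongest.  File name, window length,
# and step sizes are placeholders, not values from the passage above.
import numpy as np
import librosa

y, sr = librosa.load("song.mp3")                      # placeholder file name
chroma = librosa.feature.chroma_stft(y=y, sr=sr)      # 12 x n_frames chroma matrix
chroma = chroma / (np.linalg.norm(chroma, axis=0, keepdims=True) + 1e-9)

win = 200                                             # segment length in frames (assumed)
n = chroma.shape[1] - win
best_score, best_start = -np.inf, 0
for i in range(0, n, 10):
    a = chroma[:, i:i + win].ravel()
    for j in range(i + win, n, 10):                   # only non-overlapping candidates
        b = chroma[:, j:j + win].ravel()
        score = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        if score > best_score:
            best_score, best_start = score, i         # the repeated segment is a chorus candidate

hop = 512                                             # librosa's default hop length
start_s = best_start * hop / sr
print(f"thumbnail candidate starts at {start_s:.1f}s (similarity {best_score:.2f})")
```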
{
"docid": "239e37736832f6f0de050ed1749ba648",
"text": "An approach for capturing and modeling individual entertainment (“fun”) preferences is applied to users of the innovative Playware playground, an interactive physical playground inspired by computer games, in this study. The goal is to construct, using representative statistics computed from children’s physiological signals, an estimator of the degree to which games provided by the playground engage the players. For this purpose children’s heart rate (HR) signals, and their expressed preferences of how much “fun” particular game variants are, are obtained from experiments using games implemented on the Playware playground. A comprehensive statistical analysis shows that children’s reported entertainment preferences correlate well with specific features of the HR signal. Neuro-evolution techniques combined with feature set selection methods permit the construction of user models that predict reported entertainment preferences given HR features. These models are expressed as artificial neural networks and are demonstrated and evaluated on two Playware games and two control tasks requiring physical activity. The best network is able to correctly match expressed preferences in 64% of cases on previously unseen data (p−value 6 · 10−5). The generality of the methodology, its limitations, its usability as a real-time feedback mechanism for entertainment augmentation and as a validation tool are discussed.",
"title": ""
},
{
"docid": "3b576f0ba86940be5cfcbe7b6aa44af7",
"text": "In this paper, we present an effective method to analyze the recognition confidence of handwritten Chinese character, based on the softmax regression score of a high performance convolutional neural network (CNN). Through careful and thorough statistics of 827,685 testing samples that randomly selected from total 8836 different classes of Chinese characters, we find that the confidence measurement based on CNN is an useful metric to know how reliable the recognition results are. Furthermore, we find by experiments that the recognition confidence can be used to find out similar and confusable character-pairs, to check wrongly or cursively written samples, and even to discover and correct mislabeled samples. Many interesting observation and statistics are given and analyzed in this study.",
"title": ""
}
] |
scidocsrr
|
08327524fb2d5f455ce7150ee8ae685c
|
Framing in Social Media: How the U.S. Congress uses Twitter hashtags to frame political issues
|
[
{
"docid": "843e5fc99df33e280fc4f988b5358987",
"text": "This special issue of Journal of Communication is devoted to theoretical explanations of news framing, agenda setting, and priming effects. It examines if and how the three models are related and what potential relationships between them tell theorists and researchers about the effects of mass media. As an introduction to this effort, this essay provides a very brief review of the three effects and their roots in media-effects research. Based on this overview, we highlight a few key dimensions along which one can compare, framing, agenda setting, and priming. We conclude with a description of the contexts within which the three models operate, and the broader implications that these conceptual distinctions have for the growth of our discipline.",
"title": ""
},
{
"docid": "aaebd4defcc22d6b1e8e617ab7f3ec70",
"text": "In the American political process, news discourse concerning public policy issues is carefully constructed. This occurs in part because both politicians and interest groups take an increasingly proactive approach to amplify their views of what an issue is about. However, news media also play an active role in framing public policy issues. Thus, in this article, news discourse is conceived as a sociocognitive process involving all three players: sources, journalists, and audience members operating in the universe of shared culture and on the basis of socially defined roles. Framing analysis is presented as a constructivist approach to examine news discourse with the primary focus on conceptualizing news texts into empirically operationalizable dimensions—syntactical, script, thematic, and rhetorical structures—so that evidence of the news media's framing of issues in news texts may be gathered. This is considered an initial step toward analyzing the news discourse process as a whole. Finally, an extended empirical example is provided to illustrate the applications of this conceptual framework of news texts.",
"title": ""
}
] |
[
{
"docid": "f2ce4c6d0dfa59cfe600171a122cdc94",
"text": "We describe the methodology that we followed to automatically extract topics corresponding to known events provided by the SNOW 2014 challenge in the context of the SocialSensor project. A data crawling tool and selected filtering terms were provided to all the teams. The crawled data was to be divided in 96 (15-minute) timeslots spanning a 24 hour period and participants were asked to produce a fixed number of topics for the selected timeslots. Our preliminary results are obtained using a methodology that pulls strengths from several machine learning techniques, including Latent Dirichlet Allocation (LDA) for topic modeling and Non-negative Matrix Factorization (NMF) for automated hashtag annotation and for mapping the topics into a latent space where they become less fragmented and can be better related with one another. In addition, we obtain improved topic quality when Copyright c © by the paper’s authors. Copying permitted only for private and academic purposes. In: S. Papadopoulos, D. Corney, L. Aiello (eds.): Proceedings of the SNOW 2014 Data Challenge, Seoul, Korea, 08-04-2014, published at http://ceur-ws.org sentiment detection is performed to partition the tweets based on polarity, prior to topic modeling.",
"title": ""
},
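A rough sketch of the two topic-modeling building blocks named in the preceding passage (LDA and NMF), using scikit-learn; the tweet list, topic count, and preprocessing choices are placeholders, not the challenge setup.

```python
# Sketch of LDA on term counts and NMF on tf-idf, using scikit-learn; the tiny tweet
# list and the number of topics are placeholders for illustration only.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation, NMF

tweets = [
    "stadium crowd celebrates the winning goal",
    "heavy rain floods the city centre overnight",
    "late goal seals the match for the home team",
    "flood warnings issued after overnight rain",
]

# LDA on raw term counts
counts = CountVectorizer(stop_words="english").fit(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts.transform(tweets))

# NMF on tf-idf, often giving crisper topic/hashtag associations
tfidf_vec = TfidfVectorizer(stop_words="english").fit(tweets)
nmf = NMF(n_components=2, init="nndsvd", random_state=0).fit(tfidf_vec.transform(tweets))

def top_terms(model, names, k=4):
    """Top-k weighted terms for each topic/component."""
    return [[names[i] for i in comp.argsort()[-k:][::-1]] for comp in model.components_]

print("LDA topics:", top_terms(lda, counts.get_feature_names_out()))
print("NMF topics:", top_terms(nmf, tfidf_vec.get_feature_names_out()))
```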
{
"docid": "8f9b348eed632aa05a33b7810d7988f6",
"text": "Text classification models are becoming increasingly complex and opaque, however for many applications it is essential that the models are interpretable. Recently, a variety of approaches have been proposed for generating local explanations. While robust evaluations are needed to drive further progress, so far it is unclear which evaluation approaches are suitable. This paper is a first step towards more robust evaluations of local explanations. We evaluate a variety of local explanation approaches using automatic measures based on word deletion. Furthermore, we show that an evaluation using a crowdsourcing experiment correlates moderately with these automatic measures and that a variety of other factors also impact the human judgements.",
"title": ""
},
{
"docid": "cd78dd2ef989917c01a325a460c07223",
"text": "This paper proposes a multi-joint-gripper that achieves envelope grasping for unknown shape objects. Proposed mechanism is based on a chain of Differential Gear Systems (DGS) controlled by only one motor. It also has a Variable Stiffness Mechanism (VSM) that controls joint stiffness to relieve interfering effects suffered from grasping environment and achieve a dexterous grasping. The experiments elucidate that the developed gripper achieves envelop grasping; the posture of the gripper automatically fits the shape of the object with no sensory feedback. And they also show that the VSM effectively works to relieve external interfering. This paper shows the mechanism and experimental results of the second test machine that was developed inheriting the idea of DGS used in the first test machine but has a completely altered VSM.",
"title": ""
},
{
"docid": "18e95e39417fcb4dd6e294a1ad8fcfd7",
"text": "The paper motivates the need to acquire methodological knowledge for involving children as test users in usability testing. It introduces a methodological framework for delineating comparative assessments of usability testing methods for children participants. This framework consists in three dimensions: (1) assessment criteria for usability testing methods, (2) characteristics describing usability testing methods and, finally, (3) characteristics of children that may impact upon the process and the result of usability testing. Two comparative studies are discussed in the context of this framework along with implications for future research. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "f4f972b6925303c330aca99359357b86",
"text": "We consider the problem of fairly allocating indivisible goods, focusing on a recently introduced notion of fairness called maximin share guarantee: each player’s value for his allocation should be at least as high as what he can guarantee by dividing the items into as many bundles as there are players and receiving his least desirable bundle. Assuming additive valuation functions, we show that such allocations may not exist, but allocations guaranteeing each player 2/3 of the above value always exist. These theoretical results have direct practical implications.",
"title": ""
},
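A brute-force sketch that makes the maximin-share definition from the preceding passage concrete; it is exponential in the number of items, and the item values below are invented.

```python
# Brute-force illustration of one player's maximin share (MMS) under additive values:
# the best worst-bundle value over all ways to split the items into n bundles.
# Exponential in the number of items, so only meant to make the definition concrete.
from itertools import product

def maximin_share(values, n_players):
    """values: this player's value for each indivisible item."""
    best = 0
    for assignment in product(range(n_players), repeat=len(values)):
        bundles = [0] * n_players
        for item, bundle in enumerate(assignment):
            bundles[bundle] += values[item]
        best = max(best, min(bundles))      # the player keeps the least desirable bundle
    return best

values = [7, 5, 4, 3, 3, 2]                 # one player's item values (made up)
mms = maximin_share(values, n_players=3)
print(mms)
print(2 / 3 * mms)                          # the paper guarantees allocations at this level
```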
{
"docid": "e0d945110e3ef589481e09d924cedcd5",
"text": "Field of emotional content recognition of speech signals has been gaining increasing interest during recent years. Several emotion recognition systems have been constructed by different researchers for recognition of human emotions in spoken utterances. This paper describes speech emotion recognition based on the previous technologies which uses different methods of feature extraction and different classifiers for the emotion recognition are reviewed. The database for the speech emotion recognition system is the emotional speech samples and the features extracted from these speech samples are the energy, pitch, linear prediction cepstrum coefficient (LPCC), Mel frequency cepstrum coefficient (MFCC). Different wavelet decomposition structures can also used for feature vector extraction. The classifiers are used to differentiate emotions such as anger, happiness, sadness, surprise, fear, neutral state, etc. The classification performance is based on extracted features. Conclusions drawn from performance and limitations of speech emotion recognition system based on different methodologies are also discussed.",
"title": ""
},
{
"docid": "b3097e3831ba5910886528f0cc6f1805",
"text": "This paper describes a lightweight security mechanism for protecting electronic transactions conducted over the mobile platform. In a typical mobile computing environment, one or more of the transacting parties are based on some wireless handheld devices. Electronic transactions conducted over the mobile platform are gaining popularity and it is widely accepted that mobile computing is a natural extension of the wired Internet computing world. However, security over the mobile platform is more critical due to the open nature of wireless networks. Furthermore, security is more difficult to implement on the mobile platform because of the resource limitation of mobile handheld devices. Therefore, security mechanisms for protecting traditional computer communications need to be revisited so as to ensure that electronic transactions involving mobile devices can be secured and implemented in an effective manner. This research is part of our effort in designing security infrastructure for electronic commerce systems, which extend from the wired to the wireless Internet. A lightweight mechanism was designed to meet the security needs in face of the resource constraints. The proposed mechanism is proven to be practical in real deployment environment. q 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f58a1a0d8cc0e2c826c911be4451e0df",
"text": "From an accessibility perspective, voice-controlled, home-based intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screen reader output. To examine the accessibility of off-the-shelf IPAs (e.g., Amazon Echo) and to understand how users with disabilities are making use of these devices, we conducted two exploratory studies. The first, broader study is a content analysis of 346 Amazon Echo reviews that include users with disabilities, while the second study more specifically focuses on users with visual impairments, through interviews with 16 current users of home-based IPAs. Findings show that, although some accessibility challenges exist, users with a range of disabilities are using the Amazon Echo, including for unexpected cases such as speech therapy and support for caregivers. Richer voice-based applications and solutions to support discoverability would be particularly useful to users with visual impairments. These findings should inform future work on accessible voice-based IPAs.",
"title": ""
},
{
"docid": "b01cb0af3dc85c5d62040c6bb0c21011",
"text": "CT scanner technology is continuously evolving, with scan times becoming shorter with each scanner generation. Achieving adequate arterial opacification synchronized with CT data acquisition is becoming increasingly difficult. A fundamental understanding of early arterial contrast medium dynamics is thus of utmost importance for the design of CT scanning and injection protocols for current and future cardiovascular CT applications. Arterial enhancement is primarily controlled by the iodine flux (injection flow rate) and the injection duration versus a patient's cardiac output and local downstream physiology. The technical capabilities of modern CT equipment require precise scan timing. Together with automated tube current modulation and weight-based injection protocols, both radiation exposure and contrast medium enhancement can be individualized.",
"title": ""
},
{
"docid": "8014c32fa820e1e2c54e1004b62dc33e",
"text": "Signature-based malicious code detection is the standard technique in all commercial anti-virus software. This method can detect a virus only after the virus has appeared and caused damage. Signature-based detection performs poorly whe n attempting to identify new viruses. Motivated by the standard signature-based technique for detecting viruses, and a recent successful text classification method, n-grams analysis, we explo re the idea of automatically detecting new malicious code. We employ n-grams analysis to automatically generate signatures from malicious and benign software collections. The n-gramsbased signatures are capable of classifying unseen benign and malicious code. The datasets used are large compared to earlier applications of n-grams analysis.",
"title": ""
},
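A small sketch of byte n-gram signature extraction plus a simple classifier in the spirit of the preceding passage; the file paths, n-gram size, and choice of classifier are placeholders rather than the paper's setup.

```python
# Sketch of byte n-gram feature extraction and a simple presence-based classifier;
# the file paths, n-gram size, vocabulary limit, and classifier are placeholders.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import BernoulliNB

def byte_ngrams(path, n=4, limit=500):
    """Presence features for the most frequent byte n-grams of one binary."""
    data = open(path, "rb").read()
    grams = Counter(data[i:i + n].hex() for i in range(len(data) - n + 1))
    return {g: 1 for g, _ in grams.most_common(limit)}

paths  = ["benign/a.exe", "benign/b.exe", "malware/c.exe", "malware/d.exe"]  # placeholders
labels = [0, 0, 1, 1]                                                        # 1 = malicious

vec = DictVectorizer()
X = vec.fit_transform(byte_ngrams(p) for p in paths)
clf = BernoulliNB().fit(X, labels)

print(clf.predict(vec.transform([byte_ngrams("unknown.exe")])))  # 1 = flagged as malicious
```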
{
"docid": "caa26d9aa26eaf91a1c942c9f116912e",
"text": "We present two recently released opensource taggers: NameTag is a free software for named entity recognition (NER) which achieves state-of-the-art performance on Czech; MorphoDiTa (Morphological Dictionary and Tagger) performs morphological analysis (with lemmatization), morphological generation, tagging and tokenization with state-of-the-art results for Czech and a throughput around 10-200K words per second. The taggers can be trained for any language for which annotated data exist, but they are specifically designed to be efficient for inflective languages, Both tools are free software under LGPL license and are distributed along with trained linguistic models which are free for non-commercial use under the CC BY-NC-SA license. The releases include standalone tools, C++ libraries with Java, Python and Perl bindings and web services.",
"title": ""
},
{
"docid": "9fd82750a7d46911670ba8842a7978c2",
"text": "Some real-world domains are best characterized as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent’s competence. In continual learning, also referred to as lifelong learning, there are no explicit task boundaries or curricula. As learning agents have become more powerful, continual learning remains one of the frontiers that has resisted quick progress. To test continual learning capabilities we consider a challenging 3D domain with an implicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.",
"title": ""
},
{
"docid": "c993d3a77bcd272e8eadc66155ee15e1",
"text": "This paper presents animated pose templates (APTs) for detecting short-term, long-term, and contextual actions from cluttered scenes in videos. Each pose template consists of two components: 1) a shape template with deformable parts represented in an And-node whose appearances are represented by the Histogram of Oriented Gradient (HOG) features, and 2) a motion template specifying the motion of the parts by the Histogram of Optical-Flows (HOF) features. A shape template may have more than one motion template represented by an Or-node. Therefore, each action is defined as a mixture (Or-node) of pose templates in an And-Or tree structure. While this pose template is suitable for detecting short-term action snippets in two to five frames, we extend it in two ways: 1) For long-term actions, we animate the pose templates by adding temporal constraints in a Hidden Markov Model (HMM), and 2) for contextual actions, we treat contextual objects as additional parts of the pose templates and add constraints that encode spatial correlations between parts. To train the model, we manually annotate part locations on several keyframes of each video and cluster them into pose templates using EM. This leaves the unknown parameters for our learning algorithm in two groups: 1) latent variables for the unannotated frames including pose-IDs and part locations, 2) model parameters shared by all training samples such as weights for HOG and HOF features, canonical part locations of each pose, coefficients penalizing pose-transition and part-deformation. To learn these parameters, we introduce a semi-supervised structural SVM algorithm that iterates between two steps: 1) learning (updating) model parameters using labeled data by solving a structural SVM optimization, and 2) imputing missing variables (i.e., detecting actions on unlabeled frames) with parameters learned from the previous step and progressively accepting high-score frames as newly labeled examples. This algorithm belongs to a family of optimization methods known as the Concave-Convex Procedure (CCCP) that converge to a local optimal solution. The inference algorithm consists of two components: 1) Detecting top candidates for the pose templates, and 2) computing the sequence of pose templates. Both are done by dynamic programming or, more precisely, beam search. In experiments, we demonstrate that this method is capable of discovering salient poses of actions as well as interactions with contextual objects. We test our method on several public action data sets and a challenging outdoor contextual action data set collected by ourselves. The results show that our model achieves comparable or better performance compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "60511dbd1dbb4c01881dac736dd7f988",
"text": "The current study reconceptualized self-construal as a social cognitive indicator of self-observation that individuals employ for developing and maintaining social relationship with others. From the social cognitive perspective, this study investigated how consumers’ self-construal can affect consumers’ electronic word of mouth (eWOM) behavior through two cognitive factors (online community engagement self-efficacy and social outcome expectations) in the context of a social networking site. This study conducted an online experiment that directed 160 participants to visit a newly created online community. The results demonstrated that consumers’ relational view became salient when the consumers’ self-construal was primed to be interdependent rather than independent. Further, the results showed that such interdependent self-construal positively influenced consumers’ eWOM behavioral intentions through their community engagement self-efficacy and their social outcome expectations. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a3b1e2499142514614a7ab01d1227827",
"text": "In this paper, we propose a simple but robust scheme to detect denial of service attacks (including distributed denial of service attacks) by monitoring the increase of new IP addresses. Unlike previous proposals for bandwidth attack detection schemes which are based on monitoring the traffic volume, our scheme is very effective for highly distributed denial of service attacks. Our scheme exploits an inherent feature of DDoS attacks, which makes it hard for the attacker to counter this detection scheme by changing their attack signature. Our scheme uses a sequential nonparametric change point detection method to improve the detection accuracy without requiring a detailed model of normal and attack traffic. We demonstrate that we can achieve high detection accuracy on a range of different network packet traces.",
"title": ""
},
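A sketch of the core idea in the preceding passage: count previously unseen source IPs per time window and run a one-sided CUSUM-style change-point test on that count. The window contents and all parameters below are invented for illustration.

```python
# Sketch only: new-IP counting plus a one-sided nonparametric CUSUM detector.
# Window size, baseline, drift, and threshold are invented, not values from the paper.
seen = set()

def new_ip_count(window_ips):
    """Number of source IPs in this window never observed before."""
    fresh = [ip for ip in window_ips if ip not in seen]
    seen.update(window_ips)
    return len(fresh)

def cusum_detector(counts, baseline=3.0, drift=2.0, threshold=15.0):
    """Yield (window_index, alarm) using a one-sided CUSUM statistic."""
    s = 0.0
    for t, c in enumerate(counts):
        s = max(0.0, s + (c - baseline - drift))   # accumulate only positive deviations
        yield t, s > threshold

windows = [
    ["10.0.0.1", "10.0.0.2"],                      # normal traffic
    ["10.0.0.1", "10.0.0.3"],
    [f"172.16.{i}.{i}" for i in range(40)],        # flood of fresh spoofed sources
    [f"172.16.{i}.{i}" for i in range(40, 80)],
]
counts = [new_ip_count(w) for w in windows]
for t, alarm in cusum_detector(counts):
    print(t, "ATTACK" if alarm else "ok")
```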
{
"docid": "22e21aab5d41c84a26bc09f9b7402efa",
"text": "Skeem for their thoughtful comments and suggestions.",
"title": ""
},
{
"docid": "7354d8c1e8253a99cfd62d8f96e57a77",
"text": "In the past few decades, clustering has been widely used in areas such as pattern recognition, data analysis, and image processing. Recently, clustering has been recognized as a primary data mining method for knowledge discovery in spatial databases, i.e. databases managing 2D or 3D points, polygons etc. or points in some d-dimensional feature space. The well-known clustering algorithms, however, have some drawbacks when applied to large spatial databases. First, they assume that all objects to be clustered reside in main memory. Second, these methods are too inefficient when applied to large databases. To overcome these limitations, new algorithms have been developed which are surveyed in this paper. These algorithms make use of efficient query processing techniques provided by spatial database systems.",
"title": ""
},
{
"docid": "f1ada71621322b8f0b4c48130aa79bd5",
"text": "In this paper, we study a set of real-time scheduling problems whose objectives can be expressed as piecewise linear utility functions. This model has very wide applications in scheduling-related problems, such as mixed criticality, response time minimization, and tardiness analysis. Approximation schemes and matrix vectorization techniques are applied to transform scheduling problems into linear constraint optimization with a piecewise linear and concave objective; thus, a neural network-based optimization method can be adopted to solve such scheduling problems efficiently. This neural network model has a parallel structure, and can also be implemented on circuits, on which the converging time can be significantly limited to meet real-time requirements. Examples are provided to illustrate how to solve the optimization problem and to form a schedule. An approximation ratio bound of 0.5 is further provided. Experimental studies on a large number of randomly generated sets suggest that our algorithm is optimal when the set is nonoverloaded, and outperforms existing typical scheduling strategies when there is overload. Moreover, the number of steps for finding an approximate solution remains at the same level when the size of the problem (number of jobs within a set) increases.",
"title": ""
},
{
"docid": "5dd1b35255b3608eafb448ab30a9fbf6",
"text": "Deep-learning-based systems are becoming pervasive in automotive software. So, in the automotive software engineering community, the awareness of the need to integrate deep-learning-based development with traditional development approaches is growing, at the technical, methodological, and cultural levels. In particular, data-intensive deep neural network (DNN) training, using ad hoc training data, is pivotal in the development of software for vehicle functions that rely on deep learning. Researchers have devised a development lifecycle for deep-learning-based development and are participating in an initiative, based on Automotive SPICE (Software Process Improvement and Capability Determination), that's promoting the effective adoption of DNN in automotive software. This article is part of a theme issue on Automotive Software.",
"title": ""
},
{
"docid": "928e127f60953c896d35462215731777",
"text": "Detection of object of a known class is a fundamental problem of computer vision. The appearance of objects can change greatly due to illumination, view point, and articulation. For object classes with large intra-class variation, some divide-and-conquer strategy is necessary. Tree structured classifier models have been used for multi-view multi- pose object detection in previous work. This paper proposes a boosting based learning method, called Cluster Boosted Tree (CBT), to automatically construct tree structured object detectors. Instead of using predefined intra-class sub- categorization based on domain knowledge, we divide the sample space by unsupervised clustering based on discriminative image features selected by boosting algorithm. The sub-categorization information of the leaf nodes is sent back to refine their ancestors' classification functions. We compare our approach with previous related methods on several public data sets. The results show that our approach outperforms the state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
a62bec386c831484a7a86faaa1a94d8f
|
Social Sharing of Emotions on Facebook: Channel Differences, Satisfaction, and Replies
|
[
{
"docid": "06b57ca0fbe0aa7688b69dc1bd3d1cf8",
"text": "This research examines how sociotechnical affordances shape interpretation of disclosure and social judgments on social networking sites. Drawing on the disclosure personalism framework, Study 1 revealed that information unavailability and relational basis underlay personalistic judgments about Facebook disclosures: Perceivers inferred greater message and relational intimacy from disclosures made privately than from those made publicly. Study 2 revealed that perceivers judged intimate disclosures shared publicly as less appropriate than intimate disclosures shared privately, and that perceived disclosure appropriateness accounted for the effects of public versus private contexts on reduced liking for a discloser. Taken together, the results show how sociotechnical affordances shape perceptions of disclosure and relationships, which has implications for understanding relational development and maintenance on SNS.",
"title": ""
},
{
"docid": "42613c6a08ce7d86f81ec51255a1071d",
"text": "Happiness and other emotions spread between people in direct contact, but it is unclear whether massive online social networks also contribute to this spread. Here, we elaborate a novel method for measuring the contagion of emotional expression. With data from millions of Facebook users, we show that rainfall directly influences the emotional content of their status messages, and it also affects the status messages of friends in other cities who are not experiencing rainfall. For every one person affected directly, rainfall alters the emotional expression of about one to two other people, suggesting that online social networks may magnify the intensity of global emotional synchrony.",
"title": ""
},
{
"docid": "0c529c9a9f552f89e0c0ad3e000cbd37",
"text": "In this article, I introduce an emotion paradox: People believe that they know an emotion when they see it, and as a consequence assume that emotions are discrete events that can be recognized with some degree of accuracy, but scientists have yet to produce a set of clear and consistent criteria for indicating when an emotion is present and when it is not. I propose one solution to this paradox: People experience an emotion when they conceptualize an instance of affective feeling. In this view, the experience of emotion is an act of categorization, guided by embodied knowledge about emotion. The result is a model of emotion experience that has much in common with the social psychological literature on person perception and with literature on embodied conceptual knowledge as it has recently been applied to social psychology.",
"title": ""
}
] |
[
{
"docid": "18140fdf4629a1c7528dcd6060f427c3",
"text": "Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.",
"title": ""
},
{
"docid": "f268718ceac79dbf8d0dcda2ea6557ca",
"text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.06.003 ⇑ Corresponding author. E-mail addresses: fred.qi@ieee.org (F. Qi), gmshi@x 1 Principal corresponding author. Depth acquisition becomes inexpensive after the revolutionary invention of Kinect. For computer vision applications, depth maps captured by Kinect require additional processing to fill up missing parts. However, conventional inpainting methods for color images cannot be applied directly to depth maps as there are not enough cues to make accurate inference about scene structures. In this paper, we propose a novel fusion based inpainting method to improve depth maps. The proposed fusion strategy integrates conventional inpainting with the recently developed non-local filtering scheme. The good balance between depth and color information guarantees an accurate inpainting result. Experimental results show the mean absolute error of the proposed method is about 20 mm, which is comparable to the precision of the Kinect sensor. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b105711c0aabde844b46c3912cf78363",
"text": "CONFLICT OF INTEREST\nnone declared.\n\n\nINTRODUCTION\nThe incidence of diabetes type 2 (diabetes mellitus type 2 - DM 2) is rapidly increasing worldwide. Physical inactivity and obesity are the major determinants of the disease. Primary prevention of DM 2 entails health monitoring of people at risk category. People with impaired glycemic control are at high risk for development of DM 2 and enter the intensive supervision program for primary and secondary prevention.\n\n\nOBJECTIVE OF THE RESEARCH\nTo evaluate the impact of metformin and lifestyle modification on glycemia and obesity in patients with prediabetes.\n\n\nPATIENTS AND METHODS\nThe study was conducted on three groups of 20 patients each (total of 60 patients) aged from 45 to 80, with an abnormal glycoregulation and prediabetes. The study did not include patients who already met the diagnostic criteria for the diagnosis of diabetes. During the study period of 6 months, one group was extensively educated on changing lifestyle (healthy nutrition and increased physical activity), the second group was treated with 500 mg metformin twice a day, while the control group was advised about diet and physical activities but different from the first two groups. At beginning of the study, all patients were measured initial levels of blood glucose, HbA1C, BMI (Body Mass Index), body weight and height and waist size. Also the same measurements were taken at the end of the conducted research, 6 months later. For the assessment of diabetes control was conducted fasting plasma glucose (FPG) test and 2 hours after a glucose load, and HbA1C.\n\n\nRESULTS\nAt the beginning of the study the average HbA1C (%) values in three different groups according to the type of intervention (lifestyle changes, metformin, control group) were as follows: (6.4 ± 0.5 mmol / l), (6.5 ± 1.2 mmol / l), (6.7 ± 0.5 mmol / l). At the end of the research, the average HbA1C values were: 6.2 ± 0.3 mmol / l, 6.33 ± 0.5 mmol / l and 6.7 ± 1.4 mmol / l. In the group of patients who received intensive training on changing lifestyle or group that was treated with metformin, the average reduction in blood glucose and HbA1C remained within the reference range and there were no criteria for the diagnosis of diabetes. Unlike the control group, a group that was well educated on changing habits decreased average body weight by 4.25 kg, BMI by 1.3 and waist size by 2.5 cm. Metformin therapy led to a reduction in the average weight of 3.83 kg, BMI of 1.33 and 3.27 for waist size. Changing lifestyle (healthy diet and increased physical activity) has led to a reduction in total body weight in 60% of patients, BMI in 65% of patients, whereas metformin therapy led to a reduction of the total body weight in 50%, BMI in 45% of patients. In the control group, the overall reduction in body weight was observed in 25%, and BMI in 15% of patients.\n\n\nCONCLUSION\nModification of lifestyle, such as diet and increased physical activity or use of metformin may improve glycemic regulation, reduce obesity and prevent or delay the onset of developing DM 2.",
"title": ""
},
{
"docid": "31efc351ebeaf1316c0c99fc2d3f3985",
"text": "One of the roles of accounting is to provide information on business performance, either through financial accounting indicators or otherwise. Theoretical-empirical studies on the relationship between Corporate Financial Performance (CFP) and Corporate Social Performance (CSP) have increased in recent years, indicating the development of this research field. However, the contribution to the theory by empirical studies is made in an incremental manner, given that each study normally focuses on a particular aspect of the theory. Therefore, it is periodically necessary to conduct an analysis to evaluate how the aggregation of empirical studies has contributed to the evolution of the theory. Designing such an analysis was the objective of the present study. The theoretical framework covered the following: stakeholder theory, the relationship between CSP and CFP, good management theory, and slack resource theory. This research covered a 15-year period (1996 to 2010), and the data collection employed a search tool for the following databases: Ebsco, Proquest, and ISI. The sampling process obtained a set of 58 exclusively theoretical-empirical and quantitative articles that test the CSP-CFP relationship. The main results in the theoretical field reinforce the proposed positive relationship between CSP and CFP and good management theory and demonstrate a deficiency in the explanation of the temporal lag in the causal relationship between CSP and CFP as well as deficiencies in the description of the CSP construct. These results suggest future studies to research the temporal lag in the causal relationship between CSP and CFP and the possible reasons that the positive association between CSP and CFP has not been assumed in some empirical studies.",
"title": ""
},
{
"docid": "22ee38911960fc78d893fe92a6e0a820",
"text": "In a knowledge and information society, e-learning has built on the extensive use of advanced information and communication technologies to deliver learning and instruction. In addition, employees who need the training do not have to gather in a place at the same time, and thus it is not necessary for them to travel far away for attending training courses. Furthermore, the flexibility allows employees who perform different jobs or tasks for training courses according to their own scheduling. Since many studies have discussed learning and training of employees and most of them are focused on the learning emotion, learning style, educational content, and technology, there is limited research exploring the relationship between the e-learning and employee’s satisfaction. Therefore, this study aims to explore how to enhance employee’s satisfaction by means of e-learning systems, and what kinds of training or teaching activities are effective to increase their learning satisfaction. We provide a model and framework for assessing the impact of e-learning on employee’s satisfaction which improve learning and teaching outcomes. Findings from the study confirmed the validity of the proposed model for e-learning satisfaction assessment. In addition, the results showed that the four variables technology, educational content, motivation, and attitude significantly influenced employee’s learning satisfaction. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3c118c4f2b418f801faee08050e3a165",
"text": "Unsupervised learning from visual data is one of the most difficult challenges in computer vision. It is essential for understanding how visual recognition works. Learning from unsupervised input has an immense practical value, as huge quantities of unlabeled videos can be collected at low cost. Here we address the task of unsupervised learning to detect and segment foreground objects in single images. We achieve our goal by training a student pathway, consisting of a deep neural network that learns to predict, from a single input image, the output of a teacher pathway that performs unsupervised object discovery in video. Our approach is different from the published methods that perform unsupervised discovery in videos or in collections of images at test time. We move the unsupervised discovery phase during the training stage, while at test time we apply the standard feed-forward processing along the student pathway. This has a dual benefit: firstly, it allows, in principle, unlimited generalization possibilities during training, while remaining fast at testing. Secondly, the student not only becomes able to detect in single images significantly better than its unsupervised video discovery teacher, but it also achieves state of the art results on two current benchmarks, YouTube Objects and Object Discovery datasets. At test time, our system is two orders of magnitude faster than other previous methods.",
"title": ""
},
{
"docid": "143f92f32d578089c1b6eab4379d7b8b",
"text": "Brain anatomical networks are sparse, complex, and have economical small-world properties. We investigated the efficiency and cost of human brain functional networks measured using functional magnetic resonance imaging (fMRI) in a factorial design: two groups of healthy old (N = 11; mean age = 66.5 years) and healthy young (N = 15; mean age = 24.7 years) volunteers were each scanned twice in a no-task or \"resting\" state following placebo or a single dose of a dopamine receptor antagonist (sulpiride 400 mg). Functional connectivity between 90 cortical and subcortical regions was estimated by wavelet correlation analysis, in the frequency interval 0.06-0.11 Hz, and thresholded to construct undirected graphs. These brain functional networks were small-world and economical in the sense of providing high global and local efficiency of parallel information processing for low connection cost. Efficiency was reduced disproportionately to cost in older people, and the detrimental effects of age on efficiency were localised to frontal and temporal cortical and subcortical regions. Dopamine antagonism also impaired global and local efficiency of the network, but this effect was differentially localised and did not interact with the effect of age. Brain functional networks have economical small-world properties-supporting efficient parallel information transfer at relatively low cost-which are differently impaired by normal aging and pharmacological blockade of dopamine transmission.",
"title": ""
},
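The entry above turns wavelet correlations between regions into an undirected graph and then measures global and local efficiency against connection cost. A minimal sketch of that graph-construction and efficiency step is given below, assuming NetworkX and NumPy; the placeholder signal matrix, the 0.3 correlation threshold, and the variable names are illustrative assumptions, not values from the study.

```python
import numpy as np
import networkx as nx

# signals: (n_regions, n_timepoints) band-limited regional time series (placeholder data)
signals = np.random.randn(90, 512)            # 90 cortical/subcortical regions, as in the entry

corr = np.corrcoef(signals)                   # region-by-region correlation matrix
np.fill_diagonal(corr, 0.0)

threshold = 0.3                               # illustrative threshold, not the paper's value
adjacency = (np.abs(corr) >= threshold).astype(int)

G = nx.from_numpy_array(adjacency)
n = G.number_of_nodes()
cost = G.number_of_edges() / (n * (n - 1) / 2)  # fraction of possible connections used

print("global efficiency:", nx.global_efficiency(G))
print("local efficiency:", nx.local_efficiency(G))
print("connection cost:", cost)
```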
{
"docid": "869e01855c8cfb9dc3e64f7f3e73cd60",
"text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.",
"title": ""
},
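The SSVD entry above treats singular vectors as regression coefficients and sparsifies them with penalties. The sketch below shows the core alternating idea for a single rank-one layer with plain soft-thresholding; the fixed penalty values and the simple convergence test are simplifications (the published method uses adaptive penalties and data-driven parameter selection).

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def rank1_sparse_svd(X, lam_u=0.1, lam_v=0.1, n_iter=100, tol=1e-6):
    """One sparse rank-one layer: X is approximated by d * outer(u, v) with sparse u, v."""
    u, _, vt = np.linalg.svd(X, full_matrices=False)
    u, v = u[:, 0], vt[0]                         # warm start from the ordinary SVD
    for _ in range(n_iter):
        v_new = soft_threshold(X.T @ u, lam_v)    # penalized regression for the right vector
        v_new /= np.linalg.norm(v_new) + 1e-12
        u_new = soft_threshold(X @ v_new, lam_u)  # penalized regression for the left vector
        u_new /= np.linalg.norm(u_new) + 1e-12
        converged = np.linalg.norm(u_new - u) < tol and np.linalg.norm(v_new - v) < tol
        u, v = u_new, v_new
        if converged:
            break
    d = u @ X @ v                                 # scale of the rank-one layer
    return d, u, v

d, u, v = rank1_sparse_svd(np.random.randn(50, 40))
```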
{
"docid": "d3c059d0889fc390a91d58aa82980fcc",
"text": "In recent trends industries, organizations and many companies are using personal identification strategies like finger print identification, RFID for tracking attendance and etc. Among of all these personal identification strategies face recognition is most natural, less time taken and high efficient one. It’s has several applications in attendance management systems and security systems. The main strategy involve in this paper is taking attendance in organizations, industries and etc. using face detection and recognition technology. A time period is settled for taking the attendance and after completion of time period attendance will directly stores into storage device mechanically without any human intervention. A message will send to absent student parent mobile using GSM technology. This attendance will be uploaded into web server using Ethernet. This raspberry pi 2 module is used in this system to achieve high speed of operation. Camera is interfaced to one USB port of raspberry pi 2. Eigen faces algorithm is used for face detection and recognition technology. Eigen faces algorithm is less time taken and high effective than other algorithms like viola-jones algorithm etc. the attendance will directly stores in storage device like pen drive that is connected to one of the USB port of raspberry pi 2. This system is most effective, easy and less time taken for tracking attendance in organizations with period wise without any human intervention.",
"title": ""
},
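The attendance entry above depends on the Eigenfaces algorithm for recognition. A compact sketch of the underlying PCA projection and nearest-neighbour matching on flattened images follows; the image size, the number of components, and the random stand-in data are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def train_eigenfaces(faces, n_components=20):
    """faces: (n_samples, h*w) matrix of flattened, aligned grayscale face images."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # principal face-space axes
    eigenfaces = vt[:n_components]
    weights = centered @ eigenfaces.T                          # training faces in face space
    return mean_face, eigenfaces, weights

def recognize(face, mean_face, eigenfaces, weights, labels):
    w = (face - mean_face) @ eigenfaces.T                      # project the probe face
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]                   # nearest neighbour's identity

# Illustrative usage with random stand-in data; real input would come from the camera
train = np.random.rand(30, 64 * 64)
labels = np.array([i // 3 for i in range(30)])                 # 10 people, 3 images each
mean_face, eigenfaces, weights = train_eigenfaces(train)
print(recognize(train[4], mean_face, eigenfaces, weights, labels))
```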
{
"docid": "439485763ec50c6a1e843f98950e4b7d",
"text": "Currently the large surplus of glycerol formed as a by-product during the production of biodiesel offered an abundant and low cost feedstock. Researchers showed a surge of interest in using glycerol as renewable feedstock to produce functional chemicals. This Minireview focuses on recent developments in the conversion of glycerol into valueadded products, including citric acid, lactic acid, 1,3-dihydroxyacetone (DHA), 1,3-propanediol (1,3-PD), dichloro-2propanol (DCP), acrolein, hydrogen, and ethanol etc. The versatile new applications of glycerol in the everyday life and chemical industry will improve the economic viability of the biodiesel industry.",
"title": ""
},
{
"docid": "b197cc7af64421f7256cea5acd4fff3c",
"text": "During natural multimodal communication, we speak, gesture, gaze and move in a powerful flow of communication that bears little resemblance to the discrete keyboard and mouse clicks entered sequentially in a graphical user interface (GUI). A profound shift is occurring toward embracing users’ natural behavior as the c nter of the humancomputer interface. Multimodal interfaces are being developed that permit our highly skilled and coordinated communicative behaviors to control system interactions in a more transparent interface experience than ever before. Our voice, hands, and whole body together, once augmented by sensors such as microphones and cameras, now are the ultimate transparent and mobile multimodal input devices.",
"title": ""
},
{
"docid": "ee9c0e79b29fbe647e3e0ccb168532b5",
"text": "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15%, 7% and 12% respectively in mAP.",
"title": ""
},
{
"docid": "96be7a58f4aec960e2ad2273dea26adb",
"text": "Because time series are a ubiquitous and increasingly prevalent type of data, there has been much research effort devoted to time series data mining recently. As with all data mining problems, the key to effective and scalable algorithms is choosing the right representation of the data. Many high level representations of time series have been proposed for data mining. In this work, we introduce a new technique based on a bit level approximation of the data. The representation has several important advantages over existing techniques. One unique advantage is that it allows raw data to be directly compared to the reduced representation, while still guaranteeing lower bounds to Euclidean distance. This fact can be exploited to produce faster exact algorithms for similarly search. In addition, we demonstrate that our new representation allows time series clustering to scale to much larger datasets.",
"title": ""
},
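The entry above introduces a bit-level approximation of time series that can still be compared against the raw data. A toy version of the encoding is sketched below: each point is clipped to one bit indicating whether it lies above the series mean, and packed representations can be compared cheaply. The Hamming comparison here is only illustrative; the lower-bounding distance used for exact search in the paper is defined differently.

```python
import numpy as np

def clip_to_bits(series):
    """Bit-level representation: 1 where the value is above the series mean, else 0."""
    bits = (series > series.mean()).astype(np.uint8)
    return np.packbits(bits)                      # one bit per point instead of one float

def bit_distance(packed_a, packed_b):
    """Hamming distance between two packed bit strings (illustrative comparison only)."""
    return int(np.unpackbits(np.bitwise_xor(packed_a, packed_b)).sum())

a = np.sin(np.linspace(0, 10, 256)) + 0.1 * np.random.randn(256)
b = np.cos(np.linspace(0, 10, 256))
print(bit_distance(clip_to_bits(a), clip_to_bits(b)))
```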
{
"docid": "1facd226c134b22f62613073deffce60",
"text": "We present two experiments examining the impact of navigation techniques on users' navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment with 24 participants compared the effect of egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path lengths and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users' spatial memory immediately after a navigation task. However, an additional second experiment with 8 participants revealed such a significant increase in performance of long-term spatial memory: The results of a recall task administered after a 15-minute distractor task indicate a significant advantage of 27% for egocentric body movements in spatial memory. Furthermore, a questionnaire about the subjects' workload revealed that the physical demand of the egocentric navigation was significantly higher but there was less mental demand.",
"title": ""
},
{
"docid": "400be1fdbd0f1aebfb0da220fd62e522",
"text": "Understanding users' interactions with highly subjective content---like artistic images---is challenging due to the complex semantics that guide our preferences. On the one hand one has to overcome `standard' recommender systems challenges, such as dealing with large, sparse, and long-tailed datasets. On the other, several new challenges present themselves, such as the need to model content in terms of its visual appearance, or even social dynamics, such as a preference toward a particular artist that is independent of the art they create. In this paper we build large-scale recommender systems to model the dynamics of a vibrant digital art community, Behance, consisting of tens of millions of interactions (clicks and 'appreciates') of users toward digital art. Methodologically, our main contributions are to model (a) rich content, especially in terms of its visual appearance; (b) temporal dynamics, in terms of how users prefer 'visually consistent' content within and across sessions; and (c) social dynamics, in terms of how users exhibit preferences both towards certain art styles, as well as the artists themselves.",
"title": ""
},
{
"docid": "fa292adbad54c22fce27afbc5467efad",
"text": "This paper presents the results of a case study on the impacts of implementing Enterprise Content Management Systems (ECMSs) in an organization. It investigates how these impacts are influenced by the functionalities of an ECMS and by the nature of the ECMS-supported processes. The results confirm that both factors do influence the impacts. Further, the results indicate that the implementation of an ECMS can change the nature of ECMS-supported processes. It is also demonstrated that the functionalities of an ECMS need to be aligned with the nature of the processes of the implementing organization. This finding confirms previous research from the Workflow Management domain and extends it to the ECM domain. Finally, the case study results show that implementing an ECMS to support rather ‘static’ processes can be expected to cause more and stronger impacts than the support of ‘flexible’ processes.",
"title": ""
},
{
"docid": "2a55dd98b47bd6b79b5e1d441d23c683",
"text": "This case study explores how a constructivist-based instructional design helped adult learners learn in an online learning environment. Two classes of adult learners pursuing professional development and registered in a webbased course were studied. The data consisted of course documents, submitted artefacts, surveys, interviews, in-class observations, and online observations. The study found that the majority of the learners were engaged in two facets of learning. On the one hand, the instructional activities requiring collaboration and interaction helped the learners support one another’s learning, from which most claimed to have benefited. On the other hand, the constructivistbased course assisted many learners to develop a sense of becoming more responsible, self-directed learners. Overall, the social constructivist style of instructional strategy seems promising to facilitate adult learning, which not only helps change learners’ perceptions of the online learning, but also assists them to learn in a more collaborative, authentic and responsible way. The study, however, also disclosed that in order to maintain high-quality learning, appropriate assessment plans and adequate facilitation must be particularly reinforced. A facilitation model is thus suggested. Introduction With the rising prevalence of the Internet, technological media for teaching and learning are becoming increasingly interactive, widely distributed and collaborative (Bonk, Hara, Dennen, Malikowski & Supplee, 2000; Chang, 2003). A collaborative, interactive, constructivist online learning environment, as opposed to a passive learning environment, is found to be better able to help students learn more actively and effectively (Murphy, Mahoney, Chen, Mendoza-Diaz & Yang, 2005). Online learning provides learners, especially adult learners, with an opportunity and flexibility for learning at Note: The research was sponsored by the National Science Council, NSC-95-2520-S-271-001. British Journal of Educational Technology Vol 41 No 5 2010 706–720 doi:10.1111/j.1467-8535.2009.00965.x © 2009 The Author. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. any time and in any place. As lifelong learning is considered both an economic and a social and individual interest (White, 2007), how to assist general adult learners to learn more practically and persistently through the online learning environment is of great interest. The purpose of this study is to explore whether and how nondegreepursuing adult learners benefit from engaging in a constructivist-based online course. This study first briefly reviews the notion of constructivist learning, and then the characteristics of adult learners and adult learning, followed by discussing online instructional strategies designed based on constructivist principles. Two online courses offered for adult learners are investigated to address the research questions. In addition to reporting the findings, a facilitation model for improving the constructivist-based online course geared towards adult learners is also provided at the end. The concept of constructivist learning Constructivist learning arose from Piagetian and Vygotskian perspectives (Palincsar, 1998), emphasising the impact of constructed knowledge on the individual’s active, reflective thinking. 
While Piaget focused more on individual cognitive constructivism, Vygotsky stressed that sociocultural systems have a major impact on an individual's learning (Siegler, 1998). According to social constructivist theory, knowledge is socially situated and is constructed through reflection on one's own thoughts and experiences, as well as other learners' ideas. Dewey (1938) believed that individual development is dependent upon the existing social environmental context and argued that students should learn from the genuine world through continuous interaction with others. Lave and Wenger (1991) asserted that learning is socially situated with members' active participation in their routine, patterned activities. A constructivist, dialogical instructional approach should focus on learning about 'why' and learning about 'how', rather than conducting learning itself (Scott, 2001). In the constructivist learning environment, students are encouraged to actively engage in learning: to discuss, argue, negotiate ideas, and to collaboratively solve problems; teachers design and provide the learning context and facilitate learning activities (Palincsar). Because of their rich life and employment experience, the social, situated nature of learning through practices appears particularly authentic and appropriate for adult learners. Adult learners and adult learning The success of adult learning greatly depends upon individuals' maturation and experiences (Mezirow, 1991, 1997). Wang, Sierra and Folger (2003) contended that the focus of adult learning is on assisting them to become independent thinkers, rather than passive knowledge receivers. However, like younger students, adult learners also need motivation to sustain their learning, particularly those less engaged working adults (Priest, 2000). To achieve this, the course curriculum must be tailored to individual adult's learning needs, interests, abilities and experiences (Lindeman, 1926). Learners may learn more effectively when instructional activities are designed in accordance with their personal needs, characteristics and, most importantly, their life context (Knowles, 1990). Knowles (1986) proposed the concept of contract learning as the fundamental platform for organising individual adult learning. The idea of contract learning hinges on individual learners planning their own learning based on their learning needs, prior experiences, interests, goals and self-competence. The progress of the learning contract is based upon the learners' successfully comprehending what they have learned so far (Scott, 2001). When learners set up their own learning objectives and learning outcomes through the learning contract process, they will better understand their learning style and will have better access to the desired course content (Boyer, 2003). 
Instructional strategies for facilitating constructivist online learning To implement a constructivist-based online course, various instructional strategies have been implemented, such as requiring students to engage in collaborative, contextualised learning by simulating and assuming an authentic role that is real in the authentic society (Auyeung, 2004; Maor, 2003; Martens, Bastiaens & Kirschner, 2007); setting a collective goal and a shared vision to motivate students' participation and contribution levels (Gilbert & Driscoll, 2002); and requiring students to be in charge of a discussion of their teamwork (Harmon & Jones, 2000). Some online facilitators required students to plan their own learning goals, set their learning pace, and develop the methodology to achieve the set goals (Boyer, 2003; Kochtanek & Hein, 2000). While learners are expected to assume more responsibility for their learning, the role of online facilitators is crucial (Kochtanek & Hein). A number of online educators suggest that the facilitation tasks include providing feedback to learners and a summary of or specific comments on the discussed issues at the end of class discussions (eg, Graham, Cagiltay, Lim, Craner & Duffy, 2001; Maor), and intervening and promoting students' participation in the discussion when it becomes stagnant (eg, Auyeung, 2004; Maor). Encouraging students to provide timely responses and feedback to class members helps boost the students' sense of participation and learning in online learning communities (Gilbert & Driscoll, 2002; Hill, Raven & Han, 2002; Wegerif, 1998), which further helps boost students' achievement (Moller, Harvey, Downs & Godshalk, 2000). Some online facilitators reinforced students' interaction and engagement by laying out clear assessment specifications and setting aside a high percentage of the grade to the class-level online discussion activity (Maor). To facilitate online discussion activities, Murphy et al (2005) proposed a constructivist model, which involves three levels of facilitation: (1) the instructor's mentoring (guiding the learners to develop cognitive and metacognitive skills), (2) teaching assistants' (TA) coaching (monitoring learners in developing task management skills), and (3) learner facilitators' moderation (facilitating required learning activities). Salmon (2002) proposed a five-stage model to facilitate online teaching and learning, in which varied facilitation skills and instructional activities are recommended in different learning stages. The five stages are: (1) access and motivation (setting up the system, welcoming and encouraging), (2) socialisation (establishing cultural, social learning environments), (3) information exchange (facilitating, supporting use of course materials), (4) knowledge construction (conferencing, moderating process), and (5) development (helping achieve personal goals) stages. When designing social constructivist pedagogy for adult learners, Huang (2002) suggested that six instructional principles be considered: interactive learning (interacting with the instructor and peers, rather 
than engaging in isolated learning), collaborative learning (engaging in collaborative knowledge construction, social negotiation, and reflection), facilitating learning (providing a safe, positive learning environment for sharing ideas and thoughts), authentic learning (connecting learning content to real-life experiences), student-centred learning (emphasising self-directed, experiential learning), and high-quality learning",
"title": ""
},
{
"docid": "e22564e88d82b91e266b0a118bd2ec91",
"text": "Non-lethal dose of 70% ethanol extract of the Nerium oleander dry leaves (1000 mg/kg body weight) was subcutaneously injected into male and female mice once a week for 9 weeks (total 10 doses). One day after the last injection, final body weight gain (relative percentage to the initial body weight) had a tendency, in both males and females, towards depression suggesting a metabolic insult at other sites than those involved in myocardial function. Multiple exposure of the mice to the specified dose failed to express a significant influence on blood parameters (WBC, RBC, Hb, HCT, PLT) as well as myocardium. On the other hand, a lethal dose (4000 mg/kg body weight) was capable of inducing progressive changes in myocardial electrical activity ending up in cardiac arrest. The electrocardiogram abnormalities could be brought about by the expected Na+, K(+)-ATPase inhibition by the cardiac glycosides (cardenolides) content of the lethal dose.",
"title": ""
},
{
"docid": "1397da68ae48927176f68dbc05ea7591",
"text": "This paper describes an efficient and robust hybrid parallel solver ‘‘the SPIKE algorithm’’ for narrow-banded linear systems. Two versions of SPIKE with their built-in-options are described in detail: the Recursive SPIKE version for handling non-diagonally dominant systems and the Truncated SPIKE version for diagonally dominant ones. These SPIKE schemes can be used either as direct solvers, or as preconditioners for outer iterative schemes. Both versions are faster than the direct solvers in ScaLAPACK on parallel computing platforms, and quite competitive in terms of achieved accuracy for handling systems that are dense within the band. 2005 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
bdf5d8534c1506caa106b9f2d5220471
|
When disfluency is--and is not--a desirable difficulty: the influence of typeface clarity on metacognitive judgments and memory.
|
[
{
"docid": "15e4cfb84801e86211709a8d24979eaa",
"text": "The English Lexicon Project is a multiuniversity effort to provide a standardized behavioral and descriptive data set for 40,481 words and 40,481 nonwords. It is available via the Internet at elexicon.wustl.edu. Data from 816 participants across six universities were collected in a lexical decision task (approximately 3400 responses per participant), and data from 444 participants were collected in a speeded naming task (approximately 2500 responses per participant). The present paper describes the motivation for this project, the methods used to collect the data, and the search engine that affords access to the behavioral measures and descriptive lexical statistics for these stimuli.",
"title": ""
}
] |
[
{
"docid": "68a3f9fb186289f343b34716b2e087f6",
"text": "User interface (UI) is one of the most important components of a mobile app and strongly influences users' perception of the app. However, UI design tasks are typically manual and time-consuming. This paper proposes a novel approach to (semi)-automate those tasks. Our key idea is to develop and deploy advanced deep learning models based on recurrent neural networks (RNN) and generative adversarial networks (GAN) to learn UI design patterns from millions of currently available mobile apps. Once trained, those models can be used to search for UI design samples given user-provided descriptions written in natural language and generate professional-looking UI designs from simpler, less elegant design drafts.",
"title": ""
},
{
"docid": "a12edc868d121a1fee3a1f1b100ebd31",
"text": "This paper designs the central finite-dimensional H1 filter for linear stochastic systems with integral-quadratically bounded deterministic disturbances, that is suboptimal for a given threshold g with respect to a modified Bolza–Meyer quadratic criterion including the attenuation control term with the opposite sign. The original H1 filtering problem for a linear stochastic system is reduced to the corresponding mean-square H2 filtering problem, using the technique proposed in Doyle (1989) [1]. In the example, the designed filter is applied to estimation of the pitch and yaw angles of a two degrees of freedom (2DOF) helicopter. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1313fbdd0721b58936a05da5080239df",
"text": "Bug tracking systems are valuable assets for managing maintenance activities. They are widely used in open-source projects as well as in the software industry. They collect many different kinds of issues: requests for defect fixing, enhancements, refactoring/restructuring activities and organizational issues. These different kinds of issues are simply labeled as \"bug\" for lack of a better classification support or of knowledge about the possible kinds.\n This paper investigates whether the text of the issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities.\n We show that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of issues. Results from empirical studies performed on issues for Mozilla, Eclipse, and JBoss indicate that issues can be classified with between 77% and 82% of correct decisions.",
"title": ""
},
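The entry above reports that standard text classifiers can separate defect-fixing issues from other kinds of maintenance requests. A minimal sketch of such a pipeline with scikit-learn is shown below; the tiny in-line examples and the TF-IDF plus logistic regression combination are illustrative assumptions rather than the exact setup of the study, which also evaluated alternating decision trees and naive Bayes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy issue reports: 1 = defect fixing (bug), 0 = other maintenance activity
texts = [
    "NullPointerException when saving a file with a unicode name",
    "Crash on startup after upgrading to the latest build",
    "Please add a dark theme to the settings dialog",
    "Refactor the persistence layer to use the new API",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["Exception thrown while parsing the config file"]))
```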
{
"docid": "10b4acbca5b5cb210ec7e96be0d406a6",
"text": "By design, word embeddings are unable to model the dynamic nature of words’ semantics, i.e., the property of words to correspond to potentially different meanings depending on the context in which they appear. To address this limitation, dozens of specialized word embedding techniques have been proposed. However, despite the popularity of research on this topic, very few evaluation benchmarks exist that specifically focus on the dynamic semantics of words. In this paper we show that existing models have surpassed the performance ceiling for the standard de facto dataset, i.e., the Stanford Contextual Word Similarity. To address the lack of a suitable benchmark, we put forward a large-scale Word in Context dataset, called WiC, based on annotations curated by experts, for generic evaluation of contextsensitive word embeddings. WiC is released in https://pilehvar.github.io/wic/.",
"title": ""
},
{
"docid": "2e98d7c876aa4875cc2048b687f97cdf",
"text": "In this paper, we present a pH sensing bandage constructed with pH sensing smart threads for chronic wound monitoring. The bandage is integrated with custom CMOS readout electronics for wireless monitoring and data transmission and is capable of continuously monitoring wound pH. Threads exhibit pH sensitivity of 54mV/pH and reach their steady state value within 2 minutes.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb",
"text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.",
"title": ""
},
{
"docid": "176636edbd9458b7b87c1bb511e4ed51",
"text": "Numerous indigenous healing traditions around the world employ plants with psychoactive effects to facilitate divination and other spiritual healing rituals. Southern Africa has often been considered to have relatively few psychoactive plant species of cultural importance, and little has been published on the subject. This paper reports on 85 species of plants that are used for divination by southern Bantu-speaking people. Of these, 39 species (45 %) have other reported psychoactive uses, and a number have established hallucinogenic activity. These findings indicate that psychoactive plants have an important role in traditional healing practices in southern Africa.",
"title": ""
},
{
"docid": "23bd32e6901fa9f6a7080322bb645610",
"text": "Congenital adrenal hyperplasia (CAH) due to 21-hydroxylase deficiency results in excess androgen production which can lead to early epiphyseal fusion and short stature. Prader-Willi syndrome (PWS) is a genetic disorder resulting from a defect on chromosome 15 due to paternal deletion, maternal uniparental disomy, or imprinting defect. Ninety percent of patients with PWS have short stature. In this article we report a patient with simple-virilizing CAH and PWS who was overtreated with glucocorticoids for CAH and not supplemented with growth hormone for PWS, resulting in a significantly short adult height.",
"title": ""
},
{
"docid": "a56e8dc2bbafb4ec1cf9cb2f7085ab95",
"text": "Performance of deep submicron VLSI is being increasingly dominated by the interconnects due to decreasing wire pitch and increasing die size. Additionally, heterogeneous integration of different technologies in one single chip is becoming increasingly desirable, for which planar (2-D) ICs may not be suitable. This paper analyzes the limitations of the existing interconnect technologies and design methodologies and presents a novel 3-dimensional (3-D) chip design strategy that exploits the vertical dimension to alleviate the interconnect related problems and to facilitate heterogeneous integration of technologies to realize a System-on a-Chip (SoC) design. A comprehensive analytical treatment of these 3-D ICs has been presented and it has been shown that by simply dividing a planar chip into separate blocks, each occupying a separate physical level interconnected by short and vertical inter-layer interconnects (VILICs), significant improvement in performance and reduction in wire-limited chip area can be achieved, without the aid of any other circuit or design innovations. A scheme to optimize the interconnect distribution among different interconnect tiers is presented and the effect of transferring the repeaters to upper Si layers has been quantified in this analysis for a two-layer 3-D chip. Furthermore, one of the major concerns in 3-D ICs arising due to power dissipation problems has been analyzed and an analytical model has been presented to estimate the temperatures of the different active layers. It is demonstrated that advancement in heat sinking technology will be necessary in order to extract maximum performance from these chips. Implications of 3D device architecture on several design issues have also been discussed with especial attention to SoC design strategies. Finally, some of the promising technologies for manufacturing 3-D ICs have been outlined.",
"title": ""
},
{
"docid": "d9f7d78b6e1802a17225db13edd033f6",
"text": "The edit distance between two character strings can be defined as the minimum cost of a sequence of editing operations which transforms one string into the other. The operations we admit are deleting, inserting and replacing one symbol at a time, with possibly different costs for each of these operations. The problem of finding the longest common subsequence of two strings is a special case of the problem of computing edit distances. We describe an algorithm for computing the edit distance between two strings of length n and m, n > m, which requires O(n * max( 1, m/log n)) steps whenever the costs of edit operations are integral multiples of a single positive real number and the alphabet for the strings is finite. These conditions are necessary for the algorithm to achieve the time bound.",
"title": ""
},
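The entry above defines edit distance as the minimum cost of deletions, insertions and replacements, and then gives a sub-quadratic algorithm for it. The standard O(nm) dynamic program below illustrates the cost model itself; the faster algorithm from the abstract is not reproduced here, and the unit costs are placeholders.

```python
def edit_distance(a, b, cost_del=1, cost_ins=1, cost_rep=1):
    """Minimum-cost sequence of deletions, insertions and replacements turning a into b."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * cost_del
    for j in range(1, m + 1):
        dp[0][j] = j * cost_ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j] + cost_del,                               # delete a[i-1]
                dp[i][j - 1] + cost_ins,                               # insert b[j-1]
                dp[i - 1][j - 1] + (0 if a[i - 1] == b[j - 1] else cost_rep),
            )
    return dp[n][m]

print(edit_distance("kitten", "sitting"))   # 3 with unit costs
```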
{
"docid": "e5ba31570f503ff8285f299615d84394",
"text": "Most ontologies are application ontologies, that are not reusable and are difficult to link together as they are too specific. Reference ontology is able to contribute significantly in reducing the issue of ontology applications specificity. Particularly considering higher education domain, we think that a reference ontology dedicated to this knowledge area, can be regarded as a valuable tool for several stakeholders interested in analyzing the system of higher education as a whole, especially in a context of academic systems diversity all over the world. Motivated by this potential application and even more, we decided to build a reference ontology called HERO ontology, which stands for “Higher Education Reference Ontology”. In this paper we explain HERO ontology building process from requirements specification until ontology evaluation using NeOn methodology.",
"title": ""
},
{
"docid": "b8a9b4ed7319f11198791a178cb17d7f",
"text": "Semantic relation classification remains a challenge in natural language processing. In this paper, we introduce a hierarchical recurrent neural network that is capable of extracting information from raw sentences for relation classification. Our model has several distinctive features: (1) Each sentence is divided into three context subsequences according to two annotated nominals, which allows the model to encode each context subsequence independently so as to selectively focus as on the important context information; (2) The hierarchical model consists of two recurrent neural networks (RNNs): the first one learns context representations of the three context subsequences respectively, and the second one computes semantic composition of these three representations and produces a sentence representation for the relationship classification of the two nominals. (3) The attention mechanism is adopted in both RNNs to encourage the model to concentrate on the important information when learning the sentence representations. Experimental results on the SemEval-2010 Task 8 dataset demonstrate that our model is comparable to the state-of-the-art without using any hand-crafted features.",
"title": ""
},
{
"docid": "34c4882192deb6e8324d6c5828cef3f2",
"text": "The Indian banking sector has undergone rapid transformation since 1991—the year India started a series of economic reforms. As part of the reforms, along with public sector banks, private sector banks started operations in the country. Due to the rapid reforms of the banking sector and the increased competition, Indian banks are now featured prominently on the global stage. Estimates indicate that 22 Indian banks are in the list of top-1000 banks and 5 of them feature in the list of top-500 banks (Singh 2007). BANK, the bank studied by us, is one of these top banks.",
"title": ""
},
{
"docid": "1aeca45f1934d963455698879b1e53e8",
"text": "A sedentary lifestyle is a contributing factor to chronic diseases, and it is often correlated with obesity. To promote an increase in physical activity, we created a social computer game, Fish'n'Steps, which links a player’s daily foot step count to the growth and activity of an animated virtual character, a fish in a fish tank. As further encouragement, some of the players’ fish tanks included other players’ fish, thereby creating an environment of both cooperation and competition. In a fourteen-week study with nineteen participants, the game served as a catalyst for promoting exercise and for improving game players’ attitudes towards physical activity. Furthermore, although most player’s enthusiasm in the game decreased after the game’s first two weeks, analyzing the results using Prochaska's Transtheoretical Model of Behavioral Change suggests that individuals had, by that time, established new routines that led to healthier patterns of physical activity in their daily lives. Lessons learned from this study underscore the value of such games to encourage rather than provide negative reinforcement, especially when individuals are not meeting their own expectations, to foster long-term behavioral change.",
"title": ""
},
{
"docid": "f182fdd2f5bae84b5fc38284f83f0c27",
"text": "We adopted an approach based on an LSTM neural network to monitor and detect faults in industrial multivariate time series data. To validate the approach we created a Modelica model of part of a real gasoil plant. By introducing hacks into the logic of the Modelica model, we were able to generate both the roots and causes of fault behavior in the plant. Having a self-consistent data set with labeled faults, we used an LSTM architecture with a forecasting error threshold to obtain precision and recall quality metrics. The dependency of the quality metric on the threshold level is considered. An appropriate mechanism such as “one handle” was introduced for filtering faults that are outside of the plant operator field of interest.",
"title": ""
},
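The entry above detects faults by training an LSTM forecaster and flagging points whose forecasting error exceeds a threshold (the "one handle"). A minimal sketch of that loop is given below, assuming TensorFlow/Keras; the synthetic signal, window width, network size, and the mean-plus-three-sigma threshold are illustrative assumptions, not the configuration used on the gasoil plant model.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, width):
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X[..., None], y

series = np.sin(np.linspace(0, 60, 3000)) + 0.05 * np.random.randn(3000)  # stand-in signal
X, y = make_windows(series, width=32)
split = int(0.8 * len(X))                        # train on the first 80%, assumed fault-free

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(32, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=3, batch_size=64, verbose=0)

resid = np.abs(model.predict(X, verbose=0).ravel() - y)     # one-step forecasting errors
threshold = resid[:split].mean() + 3 * resid[:split].std()  # the adjustable "handle"
faults = np.where(resid > threshold)[0]                     # indices flagged as anomalous
```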
{
"docid": "f2e10c5118cc736a942f201ddfbdf524",
"text": "Numerical sediment quality guidelines (SQGs) for freshwater ecosystems have previously been developed using a variety of approaches. Each approach has certain advantages and limitations which influence their application in the sediment quality assessment process. In an effort to focus on the agreement among these various published SQGs, consensus-based SQGs were developed for 28 chemicals of concern in freshwater sediments (i.e., metals, polycyclic aromatic hydrocarbons, polychlorinated biphenyls, and pesticides). For each contaminant of concern, two SQGs were developed from the published SQGs, including a threshold effect concentration (TEC) and a probable effect concentration (PEC). The resultant SQGs for each chemical were evaluated for reliability using matching sediment chemistry and toxicity data from field studies conducted throughout the United States. The results of this evaluation indicated that most of the TECs (i.e., 21 of 28) provide an accurate basis for predicting the absence of sediment toxicity. Similarly, most of the PECs (i.e., 16 of 28) provide an accurate basis for predicting sediment toxicity. Mean PEC quotients were calculated to evaluate the combined effects of multiple contaminants in sediment. Results of the evaluation indicate that the incidence of toxicity is highly correlated to the mean PEC quotient (R(2) = 0.98 for 347 samples). It was concluded that the consensus-based SQGs provide a reliable basis for assessing sediment quality conditions in freshwater ecosystems.",
"title": ""
},
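The entry above scores sediment samples by dividing each measured concentration by the corresponding PEC and averaging the quotients across contaminants. The snippet below illustrates that calculation; the guideline numbers and measured values are placeholders, not the published consensus values, which should be taken from the paper.

```python
# Placeholder guideline values (mg/kg dry weight); consult the paper for the actual PECs
pec = {"cadmium": 5.0, "lead": 130.0, "total_pahs": 23.0}
measured = {"cadmium": 6.1, "lead": 90.0, "total_pahs": 30.5}   # one example sediment sample

quotients = {chem: measured[chem] / pec[chem] for chem in pec}
mean_pec_quotient = sum(quotients.values()) / len(quotients)

print("PEC quotients:", quotients)
print("mean PEC quotient:", round(mean_pec_quotient, 2))
# Higher mean PEC quotients correspond to a higher incidence of observed toxicity.
```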
{
"docid": "d0803e4a3f417f58b317f296a876e332",
"text": "In recent years ad hoc parallel data processing has emerged to be one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds. Major Cloud computing companies have started to integrate frameworks for parallel data processing in their product portfolio, making it easy for customers to access these services and to deploy their programs. However, the processing frameworks which are currently used have been designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. Consequently, the allocated compute resources may be inadequate for big parts of the submitted job and unnecessarily increase processing time and cost. In this paper, we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both, task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during the job execution. Based on this new framework, we perform extended evaluations of MapReduce-inspired processing jobs on an IaaS cloud system and compare the results to the popular data processing framework Hadoop.",
"title": ""
},
{
"docid": "48dbd48a531867486b2d018442f64ebb",
"text": "The purpose of this paper is to analyze the extent to which the use of social media can support customer knowledge management (CKM) in organizations relying on a traditional bricks-and-mortar business model. The paper uses a combination of qualitative case study and netnography on Starbucks, an international coffee house chain. Data retrieved from varied sources such as newspapers, newswires, magazines, scholarly publications, books, and social media services were textually analyzed. Three major findings could be culled from the paper. First, Starbucks deploys a wide range of social media tools for CKM that serve as effective branding and marketing instruments for the organization. Second, Starbucks redefines the roles of its customers through the use of social media by transforming them from passive recipients of beverages to active contributors of innovation. Third, Starbucks uses effective strategies to alleviate customers’ reluctance for voluntary knowledge sharing, thereby promoting engagement in social media. The scope of the paper is limited by the window of the data collection period. Hence, the findings should be interpreted in the light of this constraint. The lessons gleaned from the case study suggest that social media is not a tool exclusive to online businesses. It can be a potential game-changer in supporting CKM efforts even for traditional businesses. This paper represents one of the earliest works that analyzes the use of social media for CKM in an organization that relies on a traditional bricks-and-mortar business model.",
"title": ""
}
] |
scidocsrr
|
041d33229d6728d7be6a6529e4c55b55
|
A Fully Tunable Two-Pole Bandpass Filter
|
[
{
"docid": "cdb7380ca1a4b5a8059e3e4adc6b7ea2",
"text": "In this paper, tunable microstrip bandpass filters with two adjustable transmission poles and compensable coupling are proposed. The fundamental structure is based on a half-wavelength (λ/2) resonator with a center-tapped open-stub. Microwave varactors placed at various internal nodes separately adjust the filter's center frequency and bandwidth over a wide tuning range. The constant absolute bandwidth is achieved at different center frequencies by maintaining the distance between the in-band transmission poles. Meanwhile, the coupling strength could be compensable by tuning varactors that are side and embedding loaded in the parallel coupled microstrip lines (PCMLs). As a demonstrator, a second-order filter with seven tuning varactors is implemented and verified. A frequency range of 0.58-0.91 GHz with a 1-dB bandwidth tuning from 115 to 315 MHz (i.e., 12.6%-54.3% fractional bandwidth) is demonstrated. Specifically, the return loss of passbands with different operating center frequencies can be achieved with same level, i.e., about 13.1 and 11.6 dB for narrow and wide passband responses, respectively. To further verify the etch-tolerance characteristics of the proposed prototype filter, another second-order filter with nine tuning varactors is proposed and fabricated. The measured results exhibit that the tunable fitler with the embedded varactor-loaded PCML has less sensitivity to fabrication tolerances. Meanwhile, the passband return loss can be achieved with same level of 20 dB for narrow and wide passband responses, respectively.",
"title": ""
}
] |
[
{
"docid": "8d6cb15882c3a08ce8e2726ed65bf3cb",
"text": "Natural language processing systems (NLP) that extract clinical information from textual reports were shown to be effective for limited domains and for particular applications. Because an NLP system typically requires substantial resources to develop, it is beneficial if it is designed to be easily extendible to multiple domains and applications. This paper describes multiple extensions of an NLP system called MedLEE, which was originally developed for the domain of radiological reports of the chest, but has subsequently been extended to mammography, discharge summaries, all of radiology, electrocardiography, echocardiography, and pathology.",
"title": ""
},
{
"docid": "4d964a5cfd5b21c6196a31f4b204361d",
"text": "Edge detection is a fundamental tool in the field of image processing. Edge indicates sudden change in the intensity level of image pixels. By detecting edges in the image, one can preserve its features and eliminate useless information. In the recent years, especially in the field of Computer Vision, edge detection has been emerged out as a key technique for image processing. There are various gradient based edge detection algorithms such as Robert, Prewitt, Sobel, Canny which can be used for this purpose. This paper reviews all these gradient based edge detection techniques and provides comparative analysis. MATLAB/Simulink is used as a simulation tool. System is designed by configuring ISE Design suit with MATLAB. Hardware Description Language (HDL) is generated using Xilinx System Generator. HDL code is synthesized and implemented using Field Programmable Gate Array (FPGA).",
"title": ""
},
{
"docid": "7aa9a5f9bde62b5aafb30cbd9c79f9e9",
"text": "Congestion in traffic is a serious issue. In existing system signal timings are fixed and they are independent of traffic density. Large red light delays leads to traffic congestion. In this paper, IoT based traffic control system is implemented in which signal timings are updated based on the vehicle counting. This system consists of WI-FI transceiver module it transmits the vehicle count of the current system to the next traffic signal. Based on traffic density of previous signal it controls the signals of the next signal. The system is based on raspberry-pi and Arduino. Image processing of traffic video is done in MATLAB with simulink support. The whole vehicle counting is performed by raspberry pi.",
"title": ""
},
{
"docid": "5c13817c65175a6a3f7f4302981ae562",
"text": "This paper evaluates and compares four volume renderi algorithms that have become rather popular for rendering datase described on uniform rectilinear grids: raycasting, splatting shear-warp, and hardware-assisted 3D texture-mapping. In ord to assess both the strengths and the weaknesses of these algori in a wide variety of scenarios, a set of real-life benchmark datase with different characteristics was carefully selected. In the rende ing, all algorithm-independent image synthesis parameters, su as viewing matrix, transfer functions, and optical model, were ke constant to enable a fair comparison of the rendering results. Bo image quality and computational complexity were evaluated a compared, with the aim of providing both researchers and prac tioners with guidelines on which algorithm is most suited in whic scenario. Our analysis also indicates the current weaknesses each algorithm’s pipeline, and possible solutions to these as w as pointers for future research are offered.",
"title": ""
},
{
"docid": "cbcb20173f4e012253c51020932e75a6",
"text": "We investigate methods for combining multiple selfsupervised tasks—i.e., supervised tasks where data can be collected without manual labeling—in order to train a single visual representation. First, we provide an apples-toapples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for “harmonizing” network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks—even via a na¨ýve multihead architecture—always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.",
"title": ""
},
{
"docid": "fdbfc5bf8af1478e919153fb6cde64f3",
"text": "Software development is conducted in increasingly dynamic business environments. Organizations need the capability to develop, release and learn from software in rapid parallel cycles. The abilities to continuously deliver software, to involve users, and to collect and prioritize their feedback are necessary for software evolution. In 2014, we introduced Rugby, an agile process model with workflows for continuous delivery and feedback management, and evaluated it in university projects together with industrial clients.\n Based on Rugby's release management workflow we identified the specific needs for project-based organizations developing mobile applications. Varying characteristics and restrictions in projects teams in corporate environments impact both process and infrastructure. We found that applicability and acceptance of continuous delivery in industry depend on its adaptability. To address issues in industrial projects with respect to delivery process, infrastructure, neglected testing and continuity, we extended Rugby's workflow and made it tailorable.\n Eight projects at Capgemini, a global provider of consulting, technology and outsourcing services, applied a tailored version of the workflow. The evaluation of these projects shows anecdotal evidence that the application of the workflow significantly reduces the time required to build and deliver mobile applications in industrial projects, while at the same time increasing the number of builds and internal deliveries for feedback.",
"title": ""
},
{
"docid": "d1515b3c475989e3c3584e02c0d5c329",
"text": "Sexting has received increasing scholarly and media attention. Especially, minors’ engagement in this behaviour is a source of concern. As adolescents are highly sensitive about their image among peers and prone to peer influence, the present study implemented the prototype willingness model in order to assess how perceptions of peers engaging in sexting possibly influence adolescents’ willingness to send sexting messages. A survey was conducted among 217 15to 19-year-olds. A total of 18% of respondents had engaged in sexting in the 2 months preceding the study. Analyses further revealed that the subjective norm was the strongest predictor of sexting intention, followed by behavioural willingness and attitude towards sexting. Additionally, the more favourable young people evaluated the prototype of a person engaging in sexting and the higher they assessed their similarity with this prototype, the more they were willing to send sexting messages. Differences were also found based on gender, relationship status and need for popularity. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1367527934bacc04443965406aea1a11",
"text": "The physis, or growth plate, is a complex disc-shaped cartilage structure that is responsible for longitudinal bone growth. In this study, a multi-scale computational approach was undertaken to better understand how physiological loads are experienced by chondrocytes embedded inside chondrons when subjected to moderate strain under instantaneous compressive loading of the growth plate. Models of representative samples of compressed bone/growth-plate/bone from a 0.67 mm thick 4-month old bovine proximal tibial physis were subjected to a prescribed displacement equal to 20% of the growth plate thickness. At the macroscale level, the applied compressive deformation resulted in an overall compressive strain across the proliferative-hypertrophic zone of 17%. The microscale model predicted that chondrocytes sustained compressive height strains of 12% and 6% in the proliferative and hypertrophic zones, respectively, in the interior regions of the plate. This pattern was reversed within the outer 300 μm region at the free surface where cells were compressed by 10% in the proliferative and 26% in the hypertrophic zones, in agreement with experimental observations. This work provides a new approach to study growth plate behavior under compression and illustrates the need for combining computational and experimental methods to better understand the chondrocyte mechanics in the growth plate cartilage. While the current model is relevant to fast dynamic events, such as heel strike in walking, we believe this approach provides new insight into the mechanical factors that regulate bone growth at the cell level and provides a basis for developing models to help interpret experimental results at varying time scales.",
"title": ""
},
{
"docid": "caaab1ca0175a6387b1a0c7be7803513",
"text": "Probably the most promising breakthroughs in vehicular safety will emerge from intelligent, Advanced Driving Assistance Systems (i-ADAS). Influential research institutions and large vehicle manufacturers work in lockstep to create advanced, on-board safety systems by means of integrating the functionality of existing systems and developing innovative sensing technologies. In this contribution, we describe a portable and scalable vehicular instrumentation designed for on-road experimentation and hypothesis verification in the context of designing i-ADAS prototypes.",
"title": ""
},
{
"docid": "e658fe9b94b044fcb62ab23426b26922",
"text": "Twitter as an information dissemination tool has proved to be instrumental in generating user curated content in short spans of time. Tweeting usually occurs when reacting to events, speeches, about a service or product. This in some cases comes with its fair share of blame on varied aspects in reference to say an event. Our work in progress details how we plan to collect the informal texts, clean them and extract features for blame detection. We are interested in augmenting Recurrent Neural Networks (RNN) with selfdeveloped association rules in getting the most out of the data for training and evaluation. We aim to test the performance of our approach using human-induced terror-related tweets corpus. It is possible tailoring the model to fit natural disaster scenarios.",
"title": ""
},
{
"docid": "5d04dd7d174cc1b1517035d26785c70f",
"text": "Folksonomies have become a powerful tool to describe, discover, search, and navigate online resources (e.g., pictures, videos, blogs) on the Social Web. Unlike taxonomies and ontologies, which impose a hierarchical categorisation on content, folksonomies directly allow end users to freely create and choose the categories (in this case, tags) that best describe a piece of information. However, the freedom afforded to users comes at a cost: as tags are defined informally, the retrieval of information becomes more challenging. Different solutions have been proposed to help users discover content in this highly dynamic setting. However, they have proved to be effective only for users who have already heavily used the system (active users) and who are interested in popular items (i.e., items tagged by many other users). In this thesis we explore principles to help both active users and more importantly new or inactive users (cold starters) to find content they are interested in even when this content falls into the long tail of medium-to-low popularity items (cold start items). We investigate the tagging behaviour of users on content and show how the similarities between users and tags can be used to produce better recommendations. We then analyse how users create new content on social tagging websites and show how preferences of only a small portion of active users (leaders), responsible for the vast majority of the tagged content, can be used to improve the recommender system’s scalability. We also investigate the growth of the number of users, items and tags in the system over time. We then show how this information can be used to decide whether the benefits of an update of the data structures modelling the system outweigh the corresponding cost. In this work we formalize the ideas introduced above and we describe their implementation. To demonstrate the improvements of our proposal in recommendation efficacy and efficiency, we report the results of an extensive evaluation conducted on three different social tagging websites: CiteULike, Bibsonomy and MovieLens. Our results demonstrate that our approach achieves higher accuracy than state-of-the-art systems for cold start users and for users searching for cold start items. Moreover, while accuracy of our technique is comparable to other techniques for active users, the computational cost that it requires is much smaller. In other words our approach is more scalable and thus more suitable for large and quickly growing settings.",
"title": ""
},
{
"docid": "6f3a902ed5871a95f6b5adf197684748",
"text": "BACKGROUND\nThe choice of antimicrobials for initial treatment of peritoneal dialysis (PD)-related peritonitis is crucial for a favorable outcome. There is no consensus about the best therapy; few prospective controlled studies have been published, and the only published systematic reviews did not report superiority of any class of antimicrobials. The objective of this review was to analyze the results of PD peritonitis treatment in adult patients by employing a new methodology, the proportional meta-analysis.\n\n\nMETHODS\nA review of the literature was conducted. There was no language restriction. Studies were obtained from MEDLINE, EMBASE, and LILACS. The inclusion criteria were: (a) case series and RCTs with the number of reported patients in each study greater than five, (b) use of any antibiotic therapy for initial treatment (e.g., cefazolin plus gentamicin or vancomycin plus gentamicin), for Gram-positive (e.g., vancomycin or a first generation cephalosporin), or for Gram-negative rods (e.g., gentamicin, ceftazidime, and fluoroquinolone), (c) patients with PD-related peritonitis, and (d) studies specifying the rates of resolution. A proportional meta-analysis was performed on outcomes using a random-effects model, and the pooled resolution rates were calculated.\n\n\nRESULTS\nA total of 64 studies (32 for initial treatment and negative culture, 28 reporting treatment for Gram-positive rods and 24 reporting treatment for Gram-negative rods) and 21 RCTs met all inclusion criteria (14 for initial treatment and negative culture, 8 reporting treatment for Gram-positive rods and 8 reporting treatment for Gram-negative rods). The pooled resolution rate of ceftazidime plus glycopeptide as initial treatment (pooled proportion = 86% [95% CI 0.82-0.89]) was significantly higher than first generation cephalosporin plus aminoglycosides (pooled proportion = 66% [95% CI 0.57-0.75]) and significantly higher than glycopeptides plus aminoglycosides (pooled proportion = 75% [95% CI 0.69-0.80]. Other comparisons of regimens used for either initial treatment, treatment for Gram-positive rods or Gram-negative rods did not show statistically significant differences.\n\n\nCONCLUSION\nWe showed that the association of a glycopeptide plus ceftazidime is superior to other regimens for initial treatment of PD peritonitis. This result should be carefully analyzed and does not exclude the necessity of monitoring the local microbiologic profile in each dialysis center to choice the initial therapeutic protocol.",
"title": ""
},
{
"docid": "004f04189c9414a58b102d93a4c4cec3",
"text": "Identification of the influential clinical symptoms and laboratory features that help in the diagnosis of dengue fever (DF) in early phase of the illness would aid in designing effective public health management and virological surveillance strategies. Keeping this as our main objective, we develop in this paper a new computational intelligence-based methodology that predicts the diagnosis in real time, minimizing the number of false positives and false negatives. Our methodology consists of three major components: 1) a novel missing value imputation procedure that can be applied on any dataset consisting of categorical (nominal) and/or numeric (real or integer); 2) a wrapper-based feature selection method with genetic search for extracting a subset of most influential symptoms that can diagnose the illness; and 3) an alternating decision tree method that employs boosting for generating highly accurate decision rules. The predictive models developed using our methodology are found to be more accurate than the state-of-the-art methodologies used in the diagnosis of the DF.",
"title": ""
},
{
"docid": "7a9151fd4563e07a3abb0b187d537caa",
"text": "It is unknown what kind of biases modern in the wild face datasets have because of their lack of annotation. A direct consequence of this is that total recognition rates alone only provide limited insight about the generalization ability of a Deep Convolutional Neural Networks (DCNNs). We propose to empirically study the effect of different types of dataset biases on the generalization ability of DCNNs. Using synthetically generated face images, we study the face recognition rate as a function of interpretable parameters such as face pose and light. The proposed method allows valuable details about the generalization performance of different DCNN architectures to be observed and compared. In our experiments, we find that: 1) Indeed, dataset bias has a significant influence on the generalization performance of DCNNs. 2) DCNNs can generalize surprisingly well to unseen illumination conditions and large sampling gaps in the pose variation. 3) Using the presented methodology we reveal that the VGG-16 architecture outperforms the AlexNet architecture at face recognition tasks because it can much better generalize to unseen face poses, although it has significantly more parameters. 4) We uncover a main limitation of current DCNN architectures, which is the difficulty to generalize when different identities to not share the same pose variation. 5) We demonstrate that our findings on synthetic data also apply when learning from real-world data. Our face image generator is publicly available to enable the community to benchmark other DCNN architectures.",
"title": ""
},
{
"docid": "2f245ca6c15b5b7ac97191baa6a55aff",
"text": "How objects are assigned to components in a distributed system can have a significant impact on performance and resource usage. Social Hash is a framework for producing, serving, and maintaining assignments of objects to components so as to optimize the operations of large social networks, such as Facebook’s Social Graph. The framework uses a two-level scheme to decouple compute-intensive optimization from relatively low-overhead dynamic adaptation. The optimization at the first level occurs on a slow timescale, and in our applications is based on graph partitioning in order to leverage the structure of the social network. The dynamic adaptation at the second level takes place frequently to adapt to changes in access patterns and infrastructure, with the goal of balancing component loads. We demonstrate the effectiveness of Social Hash with two real applications. The first assigns HTTP requests to individual compute clusters with the goal of minimizing the (memory-based) cache miss rate; Social Hash decreased the cache miss rate of production workloads by 25%. The second application assigns data records to storage subsystems with the goal of minimizing the number of storage subsystems that need to be accessed on multiget fetch requests; Social Hash cut the average response time in half on production workloads for one of the storage systems at Facebook.",
"title": ""
},
{
"docid": "2bddeff754c6a21ffdfc644205d349be",
"text": "With a sampled light field acquired from a plenoptic camera, several low-resolution views of the scene are available from which to infer depth. Unlike traditional multiview stereo, these views may be highly aliased due to the sparse sampling lattice in space, which can lead to reconstruction errors. We first analyse the conditions under which aliasing is a problem, and discuss the trade-offs for different parameter choices in plenoptic cameras. We then propose a method to compensate for the aliasing, whilst fusing the information from the multiple views to correctly recover depth maps. We show results on synthetic and real data, demonstrating the effectiveness of our method.",
"title": ""
},
{
"docid": "5a5ae4ab9b802fe6d5481f90a4aa07b7",
"text": "High-dimensional pattern classification was applied to baseline and multiple follow-up MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI) participants with mild cognitive impairment (MCI), in order to investigate the potential of predicting short-term conversion to Alzheimer's Disease (AD) on an individual basis. MCI participants that converted to AD (average follow-up 15 months) displayed significantly lower volumes in a number of grey matter (GM) regions, as well as in the white matter (WM). They also displayed more pronounced periventricular small-vessel pathology, as well as an increased rate of increase of such pathology. Individual person analysis was performed using a pattern classifier previously constructed from AD patients and cognitively normal (CN) individuals to yield an abnormality score that is positive for AD-like brains and negative otherwise. The abnormality scores measured from MCI non-converters (MCI-NC) followed a bimodal distribution, reflecting the heterogeneity of this group, whereas they were positive in almost all MCI converters (MCI-C), indicating extensive patterns of AD-like brain atrophy in almost all MCI-C. Both MCI subgroups had similar MMSE scores at baseline. A more specialized classifier constructed to differentiate converters from non-converters based on their baseline scans provided good classification accuracy reaching 81.5%, evaluated via cross-validation. These pattern classification schemes, which distill spatial patterns of atrophy to a single abnormality score, offer promise as biomarkers of AD and as predictors of subsequent clinical progression, on an individual patient basis.",
"title": ""
},
{
"docid": "39e6ddd04b7fab23dbbeb18f2696536e",
"text": "Moving IoT components from the cloud onto edge hosts helps in reducing overall network traffic and thus minimizes latency. However, provisioning IoT services on the IoT edge devices presents new challenges regarding system design and maintenance. One possible approach is the use of software-defined IoT components in the form of virtual IoT resources. This, in turn, allows exposing the thing/device layer and the core IoT service layer as collections of micro services that can be distributed to a broad range of hosts.\n This paper presents the idea and evaluation of using virtual resources in combination with a permission-based blockchain for provisioning IoT services on edge hosts.",
"title": ""
},
{
"docid": "ab7184c576396a1da32c92093d606a53",
"text": "Power electronics has progressively gained an important status in power generation, distribution, and consumption. With more than 70% of electricity processed through power electronics, recent research endeavors to improve the reliability of power electronic systems to comply with more stringent constraints on cost, safety, and availability in various applications. This paper serves to give an overview of the major aspects of reliability in power electronics and to address the future trends in this multidisciplinary research direction. The ongoing paradigm shift in reliability research is presented first. Then, the three major aspects of power electronics reliability are discussed, respectively, which cover physics-of-failure analysis of critical power electronic components, state-of-the-art design for reliability process and robustness validation, and intelligent control and condition monitoring to achieve improved reliability under operation. Finally, the challenges and opportunities for achieving more reliable power electronic systems in the future are discussed.",
"title": ""
},
{
"docid": "83305a3f13a943b1226cf92375c30ab4",
"text": "The recent availability of Intel Haswell processors marks the transition of hardware transactional memory from research toys to mainstream reality. DBX is an in-memory database that uses Intel's restricted transactional memory (RTM) to achieve high performance and good scalability across multi-core machines. The main limitation (and also key to practicality) of RTM is its constrained working set size: an RTM region that reads or writes too much data will always be aborted. The design of DBX addresses this challenge in several ways. First, DBX builds a database transaction layer on top of an underlying shared-memory store. The two layers use separate RTM regions to synchronize shared memory access. Second, DBX uses optimistic concurrency control to separate transaction execution from its commit. Only the commit stage uses RTM for synchronization. As a result, the working set of the RTMs used scales with the meta-data of reads and writes in a database transaction as opposed to the amount of data read/written. Our evaluation using TPC-C workload mix shows that DBX achieves 506,817 transactions per second on a 4-core machine.",
"title": ""
}
] |
scidocsrr
|
cedd9af9536d4cece0f4638eaef71e68
|
The WebNLG Challenge: Generating Text from RDF Data
|
[
{
"docid": "cd45dd9d63c85bb0b23ccb4a8814a159",
"text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. •Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: •Prefer recall over precision. •Prefer word choice over word order. •Prefer correct translations of content words over function words. •Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases. Visualization",
"title": ""
}
] |
[
{
"docid": "3bb4d0f44ed5a2c14682026090053834",
"text": "A Meander Line Antenna (MLA) for 2.45 GHz is proposed. This research focuses on the optimum value of gain and reflection coefficient. Therefore, the MLA's parametric studies is discussed which involved the number of turn, width of feed (W1), length of feed (LI) and vertical length partial ground (L3). As a result, the studies have significantly achieved MLA's gain and reflection coefficient of 3.248dB and -45dB respectively. The MLA also resembles the monopole antenna behavior of Omni-directional radiation pattern. Measured and simulated results are presented. The proposed antenna has big potential to be implemented for WLAN device such as optical mouse application.",
"title": ""
},
{
"docid": "ea92c6f33192b1209060b1a84d987b5d",
"text": "The anatomy of the clitoris is described in human anatomy textbooks. Some researchers have proposal and divulged a new anatomical terminology for the clitoris. This paper is a revision of the anatomical terms proposed by Helen O'Connell, Emmanuele Jannini, and Odile Buisson. Gynecologists, sexual medicine experts, and sexologists should spread certainties for all women, not hypotheses or personal opinions, they should use scientific terminology: clitoral/vaginal/uterine orgasm, G/A/C/U spot orgasm, and female ejaculation, are terms that should not be used by sexologists, women, and mass media. Clitoral bulbs, clitoral or clitoris-urethrovaginal complex, urethrovaginal space, periurethral glans, Halban's fascia erogenous zone, vaginal anterior fornix erogenous zone, genitosensory component of the vagus nerve, and G-spot, are terms used by some sexologists, but they are not accepted or shared by experts in human anatomy. Sexologists should define have sex, make love, the situation in which the orgasm happens in both partners with or without a vaginal intercourse.",
"title": ""
},
{
"docid": "79aa4b2c2215a677b92429d6c90410d0",
"text": "Intruders computers, who are spread across the Internet have become a major threat in our world, The researchers proposed a number of techniques such as (firewall, encryption) to prevent such penetration and protect the infrastructure of computers, but with this, the intruders managed to penetrate the computers. IDS has taken much of the attention of researchers, IDS monitors the resources computer and sends reports on the activities of any anomaly or strange patterns The aim of this paper is to explain the stages of the evolution of the idea of IDS and its importance to researchers and research centres, security, military and to examine the importance of intrusion detection systems and categories , classifications, and where can put IDS to reduce the risk to the network.",
"title": ""
},
{
"docid": "ee8b20f685d4c025e1d113a676728359",
"text": "Two experiments were conducted to evaluate the effects of increasing concentrations of glycerol in concentrate diets on total tract digestibility, methane (CH4) emissions, growth, fatty acid profiles, and carcass traits of lambs. In both experiments, the control diet contained 57% barley grain, 14.5% wheat dried distillers grain with solubles (WDDGS), 13% sunflower hulls, 6.5% beet pulp, 6.3% alfalfa, and 3% mineral-vitamin mix. Increasing concentrations (7, 14, and 21% dietary DM) of glycerol in the dietary DM were replaced for barley grain. As glycerol was added, alfalfa meal and WDDGS were increased to maintain similar concentrations of CP and NDF among diets. In Exp.1, nutrient digestibility and CH4 emissions from 12 ram lambs were measured in a replicated 4 × 4 Latin square experiment. In Exp. 2, lamb performance was evaluated in 60 weaned lambs that were blocked by BW and randomly assigned to 1 of the 4 dietary treatments and fed to slaughter weight. In Exp. 1, nutrient digestibility and CH4 emissions were not altered (P = 0.15) by inclusion of glycerol in the diets. In Exp.2, increasing glycerol in the diet linearly decreased DMI (P < 0.01) and tended (P = 0.06) to reduce ADG, resulting in a linearly decreased final BW. Feed efficiency was not affected by glycerol inclusion in the diets. Carcass traits and total SFA or total MUFA proportions of subcutaneous fat were not affected (P = 0.77) by inclusion of glycerol, but PUFA were linearly decreased (P < 0.01). Proportions of 16:0, 10t-18:1, linoleic acid (18:2 n-6) and the n-6/n-3 ratio were linearly reduced (P < 0.01) and those of 18:0 (stearic acid), 9c-18:1 (oleic acid), linearly increased (P < 0.01) by glycerol. When included up to 21% of diet DM, glycerol did not affect nutrient digestibility or CH4 emissions of lambs fed barley based finishing diets. Glycerol may improve backfat fatty acid profiles by increasing 18:0 and 9c-18:1 and reducing 10t-18:1 and the n-6/n-3 ratio.",
"title": ""
},
{
"docid": "f4b6f3b281a420999b60b38c245113a6",
"text": "There is growing interest in using intranasal oxytocin (OT) to treat social dysfunction in schizophrenia and bipolar disorders (i.e., psychotic disorders). While OT treatment results have been mixed, emerging evidence suggests that OT system dysfunction may also play a role in the etiology of metabolic syndrome (MetS), which appears in one-third of individuals with psychotic disorders and associated with increased mortality. Here we examine the evidence for a potential role of the OT system in the shared risk for MetS and psychotic disorders, and its prospects for ameliorating MetS. Using several studies to demonstrate the overlapping neurobiological profiles of metabolic risk factors and psychiatric symptoms, we show that OT system dysfunction may be one common mechanism underlying MetS and psychotic disorders. Given the critical need to better understand metabolic dysregulation in these disorders, future OT trials assessing behavioural and cognitive outcomes should additionally include metabolic risk factor parameters.",
"title": ""
},
{
"docid": "b6f4bd15f7407b56477eb2cfc4c72801",
"text": "In this study, we present several image segmentation techniques for various image scales and modalities. We consider cellular-, organ-, and whole organism-levels of biological structures in cardiovascular applications. Several automatic segmentation techniques are presented and discussed in this work. The overall pipeline for reconstruction of biological structures consists of the following steps: image pre-processing, feature detection, initial mask generation, mask processing, and segmentation post-processing. Several examples of image segmentation are presented, including patient-specific abdominal tissues segmentation, vascular network identification and myocyte lipid droplet micro-structure reconstruction.",
"title": ""
},
{
"docid": "e94f453a3301ca86bed19162ad1cb6e1",
"text": "Linux scheduling is based on the time-sharing technique already introduced in the section \"CPU's Time Sharing\" in Chapter 5, Timing Measurements: several processes are allowed to run \"concurrently,\" which means that the CPU time is roughly divided into \"slices,\" one for each runnable process.[1] Of course, a single processor can run only one process at any given instant. If a currently running process is not terminated when its time slice or quantum expires, a process switch may take place. Time-sharing relies on timer interrupts and is thus transparent to processes. No additional code needs to be inserted in the programs in order to ensure CPU time-sharing.",
"title": ""
},
{
"docid": "50c931cc73cbb3336d24707dcb5e938a",
"text": "Endochondral ossification, the mechanism responsible for the development of the long bones, is dependent on an extremely stringent coordination between the processes of chondrocyte maturation in the growth plate, vascular expansion in the surrounding tissues, and osteoblast differentiation and osteogenesis in the perichondrium and the developing bone center. The synchronization of these processes occurring in adjacent tissues is regulated through vigorous crosstalk between chondrocytes, endothelial cells and osteoblast lineage cells. Our knowledge about the molecular constituents of these bidirectional communications is undoubtedly incomplete, but certainly some signaling pathways effective in cartilage have been recognized to play key roles in steering vascularization and osteogenesis in the perichondrial tissues. These include hypoxia-driven signaling pathways, governed by the hypoxia-inducible factors (HIFs) and vascular endothelial growth factor (VEGF), which are absolutely essential for the survival and functioning of chondrocytes in the avascular growth plate, at least in part by regulating the oxygenation of developing cartilage through the stimulation of angiogenesis in the surrounding tissues. A second coordinating signal emanating from cartilage and regulating developmental processes in the adjacent perichondrium is Indian Hedgehog (IHH). IHH, produced by pre-hypertrophic and early hypertrophic chondrocytes in the growth plate, induces the differentiation of adjacent perichondrial progenitor cells into osteoblasts, thereby harmonizing the site and time of bone formation with the developmental progression of chondrogenesis. Both signaling pathways represent vital mediators of the tightly organized conversion of avascular cartilage into vascularized and mineralized bone during endochondral ossification.",
"title": ""
},
{
"docid": "41716df763f68b8f5f3dc9dcdad286fc",
"text": "When SUCNR1/GPR91-expressing macrophages are activated by inflammatory signals, they change their metabolism and accumulate succinate. In this study, we show that during this activation, macrophages release succinate into the extracellular milieu. They simultaneously up-regulate GPR91, which functions as an autocrine and paracrine sensor for extracellular succinate to enhance IL-1β production. GPR91-deficient mice lack this metabolic sensor and show reduced macrophage activation and production of IL-1β during antigen-induced arthritis. Succinate is abundant in synovial fluids from rheumatoid arthritis (RA) patients, and these fluids elicit IL-1β release from macrophages in a GPR91-dependent manner. Together, we reveal a GPR91/succinate-dependent feed-forward loop of macrophage activation and propose GPR91 antagonists as novel therapeutic principles to treat RA.",
"title": ""
},
{
"docid": "1c1830e8e5154566ed03972d300906db",
"text": "Filicide is the killing of a child by his or her parent. Despite the disturbing nature of these crimes, a study of filicide classification can provide insight into their causes. Furthermore, a study of filicide classification provides information essential to accurate death certification. We report a rare case of familial filicide in which twin sisters both attempted to kill their respective children. We then suggest a detailed classification of filicide subtypes that provides a framework of motives and precipitating factors leading to filicide. We identify 16 subtypes of filicide, each of which is sufficiently characteristic to warrant a separate category. We describe in some detail the characteristic features of these subtypes. A knowledge of filicide subtypes contributes to interpretation of difficult cases. Furthermore, to protect potential child homicide victims, it is necessary to know how and why they are killed. Epidemiologic studies using filicide subtypes as their basis could provide information leading to strategies for prevention.",
"title": ""
},
{
"docid": "9246700eca378427ea2ea3c20a4377b3",
"text": "This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number iterations which depends only poly-logarithmically on dimension (i.e., it is almost “dimension-free”). The convergence rate of this procedure matches the wellknown convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.",
"title": ""
},
{
"docid": "eabeed186d3ca4a372f5f83169d44e57",
"text": "In disciplines as diverse as social network analysis and neuroscience, many large graphs are believed to be composed of loosely connected smaller graph primitives, whose structure is more amenable to analysis We propose a robust, scalable, integrated methodology for community detection and community comparison in graphs. In our procedure, we first embed a graph into an appropriate Euclidean space to obtain a low-dimensional representation, and then cluster the vertices into communities. We next employ nonparametric graph inference techniques to identify structural similarity among these communities. These two steps are then applied recursively on the communities, allowing us to detect more fine-grained structure. We describe a hierarchical stochastic blockmodel—namely, a stochastic blockmodel with a natural hierarchical structure—and establish conditions under which our algorithm yields consistent estimates of model parameters and motifs, which we define to be stochastically similar groups of subgraphs. Finally, we demonstrate the effectiveness of our algorithm in both simulated and real data. Specifically, we address the problem of locating similar sub-communities in a partially reconstructed Drosophila connectome and in the social network Friendster.",
"title": ""
},
{
"docid": "6b6099ee6f04f1b490b7e483de3087ff",
"text": "International Electrotechnical Commission (IEC) standard 61850 proposes the Ethernet-based communication networks for protection and automation within the power substation. Major manufacturers are currently developing products for the process bus in compliance with IEC 61850 part 9-2. For the successful implementation of the IEC 61850-9-2 process bus, it is important to analyze the performance of time-critical messages for the substation protection and control functions. This paper presents the performance evaluation of the IEC 61850-9-2 process bus for a typical 345 kV/230 kV substation by studying the time-critical sampled value messages delay and loss by using the OPNET simulation tool in the first part of this paper. In the second part, this paper presents a corrective measure to address the issues with the several sampled value messages lost and/or delayed by proposing the sampled value estimation algorithm for any digital substation relaying. Finally, the proposed sampled value estimation algorithm has been examined for various power system scenarios with the help of PSCAD/EMTDC and MATLAB simulation tools.",
"title": ""
},
{
"docid": "df5ce1a194802b0f6dac28d1a05bb08e",
"text": "This paper presents a 77-GHz CMOS frequency-modulated continuous-wave (FMCW) frequency synthesizer with the capability of reconfigurable chirps. The frequency-sweep range and sweep time of the chirp signals can be reconfigured every cycle such that the frequency-hopping random chirp signal can be realized for an FMCW radar transceiver. The frequency synthesizer adopts the fractional-N phase-locked-loop technique and is fully integrated in TSMC 65-nm digital CMOS technology. The silicon area of the synthesizer is 0.65 mm × 0.45 mm and it consumes 51.3 mW of power. The measured output phase noise of the synthesizer is -85.1 dBc/Hz at 1-MHz offset and the root-mean-square modulation frequency error is smaller than 73 kHz.",
"title": ""
},
{
"docid": "7252372bdacaa69b93e52a7741c8f4c2",
"text": "This paper introduces a novel type of actuator that is investigated by ESA for force-reflection to a wearable exoskeleton. The actuator consists of a DC motor that is relocated from the joint by means of Bowden cable transmissions. The actuator shall support the development of truly ergonomic and compact wearable man-machine interfaces. Important Bowden cable transmission characteristics are discussed, which dictate a specific hardware design for such an actuator. A first prototype is shown, which was used to analyze these basic characteristics of the transmissions and to proof the overall actuation concept. A second, improved prototype is introduced, which is currently used to investigate the achievable performance as a master actuator in a master-slave control with force-feedback. Initial experimental results are presented, which show good actuator performance in a 4 channel control scheme with a slave joint. The actuator features low movement resistance in free motion and can reflect high torques during hard contact situations. High contact stability can be achieved. The actuator seems therefore well suited to be implemented into the ESA exoskeleton for space-robotic telemanipulation",
"title": ""
},
{
"docid": "5213ed67780b194a609220677b9c1dd4",
"text": "Cardiovascular diseases (CVD) are initiated by endothelial dysfunction and resultant expression of adhesion molecules for inflammatory cells. Inflammatory cells secrete cytokines/chemokines and growth factors and promote CVD. Additionally, vascular cells themselves produce and secrete several factors, some of which can be useful for the early diagnosis and evaluation of disease severity of CVD. Among vascular cells, abundant vascular smooth muscle cells (VSMCs) secrete a variety of humoral factors that affect vascular functions in an autocrine/paracrine manner. Among these factors, we reported that CyPA (cyclophilin A) is secreted mainly from VSMCs in response to Rho-kinase activation and excessive reactive oxygen species (ROS). Additionally, extracellular CyPA augments ROS production, damages vascular functions, and promotes CVD. Importantly, a recent study in ATVB demonstrated that ambient air pollution increases serum levels of inflammatory cytokines. Moreover, Bell et al reported an association of air pollution exposure with high-density lipoprotein (HDL) cholesterol and particle number. In a large, multiethnic cohort study of men and women free of prevalent clinical CVD, they found that higher concentrations of PM2.5 over a 3-month time period was associated with lower HDL particle number, and higher annual concentrations of black carbon were associated with lower HDL cholesterol. Together with the authors’ previous work on biomarkers of oxidative stress, they provided evidence for potential pathways that may explain the link between air pollution exposure and acute cardiovascular events. The objective of this review is to highlight the novel research in the field of biomarkers for CVD.",
"title": ""
},
{
"docid": "a497cb84141c7db35cd9a835b11f33d2",
"text": "Ubiquitous nature of online social media and ever expending usage of short text messages becomes a potential source of crowd wisdom extraction especially in terms of sentiments therefore sentiment classification and analysis is a significant task of current research purview. Major challenge in this area is to tame the data in terms of noise, relevance, emoticons, folksonomies and slangs. This works is an effort to see the effect of pre-processing on twitter data for the fortification of sentiment classification especially in terms of slang word. The proposed method of pre-processing relies on the bindings of slang words on other coexisting words to check the significance and sentiment translation of the slang word. We have used n-gram to find the bindings and conditional random fields to check the significance of slang word. Experiments were carried out to observe the effect of proposed method on sentiment classification which clearly indicates the improvements in accuracy of classification. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Twelfth International Multi-Conference on Information Processing-2016 (IMCIP-2016).",
"title": ""
},
{
"docid": "76f4d1051bcb75156f4fcf402b1ebf27",
"text": "Slowly but surely, Alzheimer's disease (AD) patients lose their memory and their cognitive abilities, and even their personalities may change dramatically. These changes are due to the progressive dysfunction and death of nerve cells that are responsible for the storage and processing of information. Although drugs can temporarily improve memory, at present there are no treatments that can stop or reverse the inexorable neurodegenerative process. But rapid progress towards understanding the cellular and molecular alterations that are responsible for the neuron's demise may soon help in developing effective preventative and therapeutic strategies.",
"title": ""
},
{
"docid": "d1aa525575e33c587d86e89566c21a49",
"text": "This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.",
"title": ""
},
{
"docid": "9110970e05ed5f5365d613f6f8f2c8ba",
"text": "Abstrak –The objective of this paper is a new MeanMedian filtering for denoising extremely corrupted images by impulsive noise. Whenever an image is converted from one form to another, some of degradation occurs at the output. Improvement in the quality of these degraded images can be achieved by the application of Restoration and /or Enhancement techniques. Noise removing is one of the categories of Enhancement. Removing noise from the original signal is still a challenging problem. Mean filtering fails to effectively remove heavy tailed noise & performance poorly in the presence of signal dependent noise. The successes of median filters are edge preservation and efficient attenuation of impulsive noise. An important shortcoming of the median filter is that the output is one of the samples in the input window. Based on this mixture distributions are proposed to effectively remove impulsive noise characteristics. Finally, the results of comparative analysis of mean-median algorithm with mean, median filters for impulsive noise removal show a high efficiency of this approach relatively to other ones.",
"title": ""
}
] |
scidocsrr
|
dc9e73ad88b78a880060bf6e18c43a81
|
Covariance Tracking using Model Update Based on Lie Algebra
|
[
{
"docid": "1d61e1eb5275444c6a2a3f8ad5c2865a",
"text": "We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore,we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance fetures is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. European Conference on Computer Vision (ECCV) This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2006 201 Broadway, Cambridge, Massachusetts 02139 Region Covariance: A Fast Descriptor for Detection and Classification Oncel Tuzel, Fatih Porikli, and Peter Meer 1 Computer Science Department, 2 Electrical and Computer Engineering Department, Rutgers University, Piscataway, NJ 08854 {otuzel, meer}@caip.rutgers.edu 3 Mitsubishi Electric Research Laboratories, Cambridge, MA 02139 {fatih}@merl.com Abstract. We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. We describe a new region descriptor and apply it to two problems, object detection and texture classification. 
The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.",
"title": ""
}
] |
[
{
"docid": "9b6191f96f096035429583e8799a2eb2",
"text": "Recognition of food images is challenging due to their diversity and practical for health care on foods for people. In this paper, we propose an automatic food image recognition system for 85 food categories by fusing various kinds of image features including bag-of-features~(BoF), color histogram, Gabor features and gradient histogram with Multiple Kernel Learning~(MKL). In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. In the experiment, we have achieved the 62.52% classification rate for 85 food categories.",
"title": ""
},
{
"docid": "101e93562935c799c3c3fa62be98bf09",
"text": "This paper presents a technical approach to robot learning of motor skills which combines active intrinsically motivated learning with imitation learning. Our architecture, called SGIM-D, allows efficient learning of high-dimensional continuous sensorimotor inverse models in robots, and in particular learns distributions of parameterised motor policies that solve a corresponding distribution of parameterised goals/tasks. This is made possible by the technical integration of imitation learning techniques within an algorithm for learning inverse models that relies on active goal babbling. After reviewing social learning and intrinsic motivation approaches to action learning, we describe the general framework of our algorithm, before detailing its architecture. In an experiment where a robot arm has to learn to use a flexible fishing line , we illustrate that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation and benefits from human demonstration properties to learn how to produce varied outcomes in the environment, while developing more precise control policies in large spaces.",
"title": ""
},
{
"docid": "83fba4d122d9c13c4492dfce9c8d8e89",
"text": "We propose two metrics to demonstrate the impact integrating human-computer interaction (HCI) activities in software engineering (SE) processes. User experience metric (UXM) is a product metric that measures the subjective and ephemeral notion of the user’s experience with a product. Index of integration (IoI) is a process metric that measures how integrated the HCI activities were with the SE process. Both metrics have an organizational perspective and can be applied to a wide range of products and projects. Attempt was made to keep the metrics light-weight. While the main motivation behind proposing the two metrics was to establish a correlation between them and thereby demonstrate the effectiveness of the process, several other applications are emerging. The two metrics were evaluated with three industry projects and reviewed by four faculty members from a university and modified based on the feedback.",
"title": ""
},
{
"docid": "79910e1dadf52be1b278d2e57d9bdb9e",
"text": "Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real-time.",
"title": ""
},
{
"docid": "79fdfee8b42fe72a64df76e64e9358bc",
"text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.",
"title": ""
},
{
"docid": "8d8e7327f79b256b1ee9dac9a2573b55",
"text": "The objective of this work is set-based face recognition, i.e. to decide if two sets of images of a face are of the same person or not. Conventionally, the set-wise feature descriptor is computed as an average of the descriptors from individual face images within the set. In this paper, we design a neural network architecture that learns to aggregate based on both “visual” quality (resolution, illumination), and “content” quality (relative importance for discriminative classification). To this end, we propose a Multicolumn Network (MN) that takes a set of images (the number in the set can vary) as input, and learns to compute a fix-sized feature descriptor for the entire set. To encourage high-quality representations, each individual input image is first weighted by its “visual” quality, determined by a self-quality assessment module, and followed by a dynamic recalibration based on “content” qualities relative to the other images within the set. Both of these qualities are learnt implicitly during training for setwise classification. Comparing with the previous state-of-the-art architectures trained with the same dataset (VGGFace2), our Multicolumn Networks show an improvement of between 2-6% on the IARPA IJB face recognition benchmarks, and exceed the state of the art for all methods on these benchmarks.",
"title": ""
},
{
"docid": "c253083ab44c842819059ad64781d51d",
"text": "RGB-D data is getting ever more interest from the research community as both cheap cameras appear in the market and the applications of this type of data become more common. A current trend in processing image data is the use of convolutional neural networks (CNNs) that have consistently beat competition in most benchmark data sets. In this paper we investigate the possibility of transferring knowledge between CNNs when processing RGB-D data with the goal of both improving accuracy and reducing training time. We present experiments that show that our proposed approach can achieve both these goals.",
"title": ""
},
{
"docid": "13a23fe61319bc82b8b3e88ea895218c",
"text": "A new generation of robots is being designed for human occupied workspaces where safety is of great concern. This research demonstrates the use of a capacitive skin sensor for collision detection. Tests demonstrate that the sensor reduces impact forces and can detect and characterize collision events, providing information that may be used in the future for force reduction behaviors. Various parameters that affect collision severity, including interface friction, interface stiffness, end tip velocity and joint stiffness irrespective of controller bandwidth are also explored using the sensor to provide information about the contact force at the site of impact. Joint stiffness is made independent of controller bandwidth limitations using passive torsional springs of various stiffnesses. Results indicate a positive correlation between peak impact force and joint stiffness, skin friction and interface stiffness, with implications for future skin and robot link designs and post-collision behaviors.",
"title": ""
},
{
"docid": "121a8470fcbf121e5f4c42594c6d24fe",
"text": "Research has consistently found that school students who do not identify as self-declared completely heterosexual are at increased risk of victimization by bullying from peers. This study examined heterosexual and nonheterosexual university students' involvement in both traditional and cyber forms of bullying, as either bullies or victims. Five hundred twenty-eight first-year university students (M=19.52 years old) were surveyed about their sexual orientation and their bullying experiences over the previous 12 months. The results showed that nonheterosexual young people reported higher levels of involvement in traditional bullying, both as victims and perpetrators, in comparison to heterosexual students. In contrast, cyberbullying trends were generally found to be similar for heterosexual and nonheterosexual young people. Gender differences were also found. The implications of these results are discussed in terms of intervention and prevention of the victimization of nonheterosexual university students.",
"title": ""
},
{
"docid": "af0cfa757d5e419f4e0d00da20e2db8a",
"text": "Vertebrate CpG islands (CGIs) are short interspersed DNA sequences that deviate significantly from the average genomic pattern by being GC-rich, CpG-rich, and predominantly nonmethylated. Most, perhaps all, CGIs are sites of transcription initiation, including thousands that are remote from currently annotated promoters. Shared DNA sequence features adapt CGIs for promoter function by destabilizing nucleosomes and attracting proteins that create a transcriptionally permissive chromatin state. Silencing of CGI promoters is achieved through dense CpG methylation or polycomb recruitment, again using their distinctive DNA sequence composition. CGIs are therefore generically equipped to influence local chromatin structure and simplify regulation of gene activity.",
"title": ""
},
{
"docid": "756acd9371f7f0c30b10b55742d93730",
"text": "Pseudo-Relevance Feedback (PRF) is an important general technique for improving retrieval effectiveness without requiring any user effort. Several state-of-the-art PRF models are based on the language modeling approach where a query language model is learned based on feedback documents. In all these models, feedback documents are represented with unigram language models smoothed with a collection language model. While collection language model-based smoothing has proven both effective and necessary in using language models for retrieval, we use axiomatic analysis to show that this smoothing scheme inherently causes the feedback model to favor frequent terms and thus violates the IDF constraint needed to ensure selection of discriminative feedback terms. To address this problem, we propose replacing collection language model-based smoothing in the feedback stage with additive smoothing, which is analytically shown to select more discriminative terms. Empirical evaluation further confirms that additive smoothing indeed significantly outperforms collection-based smoothing methods in multiple language model-based PRF models.",
"title": ""
},
{
"docid": "17c278b8ab68aada7284bbcbc6e765a5",
"text": "This paper proposes a novel control scheme of single-phase-to-three-phase pulsewidth-modulation (PWM) converters for low-power three-phase induction motor drives, where a single-phase half-bridge PWM rectifier and a two-leg inverter are used. With this converter topology, the number of switching devices is reduced to six from ten in the case of full-bridge rectifier and three-leg inverter systems. In addition, the source voltage sensor is eliminated with a state observer, which controls the deviation between the model current and the system current to be zero. A simple scalar voltage modulation method is used for a two-leg inverter, and a new technique to eliminate the effect of the dc-link voltage ripple on the inverter output current is proposed. Although the converter topology itself is of lower cost than the conventional one, it retains the same functions such as sinusoidal input current, unity power factor, dc-link voltage control, bidirectional power flow, and variable-voltage and variable-frequency output voltage. The experimental results for the V/f control of 3-hp induction motor drives controlled by a digital signal processor TMS320C31 chip have verified the effectiveness of the proposed scheme",
"title": ""
},
{
"docid": "535ebbee465f6a009a2a85c47115a51b",
"text": "Online social networks (OSNs) are increasingly threatened by social bots which are software-controlled OSN accounts that mimic human users with malicious intentions. A social botnet refers to a group of social bots under the control of a single botmaster, which collaborate to conduct malicious behavior while mimicking the interactions among normal OSN users to reduce their individual risk of being detected. We demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter and also trace-driven simulations. We also propose the corresponding countermeasures and evaluate their effectiveness. Our results can help understand the potentially detrimental effects of social botnets and help OSNs improve their bot(net) detection systems.",
"title": ""
},
{
"docid": "599e203a8090cc45b6dc2263567f2a5f",
"text": "We present an approach to example-based stylization of 3D renderings that better preserves the rich expressiveness of hand-created artwork. Unlike previous techniques, which are mainly guided by colors and normals, our approach is based on light propagation in the scene. This novel type of guidance can distinguish among context-dependent illumination effects, for which artists typically use different stylization techniques, and delivers a look closer to realistic artwork. In addition, we demonstrate that the current state of the art in guided texture synthesis produces artifacts that can significantly decrease the fidelity of the synthesized imagery, and propose an improved algorithm that alleviates them. Finally, we demonstrate our method's effectiveness on a variety of scenes and styles, in applications like interactive shading study or autocompletion.",
"title": ""
},
{
"docid": "abd2756b3804895a7e96a2a401d73395",
"text": "Procedural noise is a fundamental tool in Computer Graphics. However, designing noise patterns is hard. In this paper, we present Gabor noise by example, a method to estimate the parameters of bandwidth-quantized Gabor noise, a procedural noise function that can generate noise with an arbitrary power spectrum, from exemplar Gaussian textures, a class of textures that is completely characterized by their power spectrum. More specifically, we introduce (i) bandwidth-quantized Gabor noise, a generalization of Gabor noise to arbitrary power spectra that enables robust parameter estimation and efficient procedural evaluation; (ii) a robust parameter estimation technique for quantized-bandwidth Gabor noise, that automatically decomposes the noisy power spectrum estimate of an exemplar into a sparse sum of Gaussians using non-negative basis pursuit denoising; and (iii) an efficient procedural evaluation scheme for bandwidth-quantized Gabor noise, that uses multi-grid evaluation and importance sampling of the kernel parameters. Gabor noise by example preserves the traditional advantages of procedural noise, including a compact representation and a fast on-the-fly evaluation, and is mathematically well-founded.",
"title": ""
},
{
"docid": "68c6e72469ceba84ef71d47f1836dd21",
"text": "Recurrent neural networks, particularly the long short-term memory networks, are extremely appealing for sequence-tosequence learning tasks. Despite their great success, they typically suffer from a fundamental shortcoming: they are prone to generate unbalanced targets with good prefixes but bad suffixes, and thus performance suffers when dealing with long sequences. We propose a simple yet effective approach to overcome this shortcoming. Our approach relies on the agreement between a pair of target-directional LSTMs, which generates more balanced targets. In addition, we develop two efficient approximate search methods for agreement that are empirically shown to be almost optimal in terms of sequence-level losses. Extensive experiments were performed on two standard sequence-to-sequence transduction tasks: machine transliteration and grapheme-to-phoneme transformation. The results show that the proposed approach achieves consistent and substantial improvements, compared to six state-of-the-art systems. In particular, our approach outperforms the best reported error rates by a margin (up to 9% relative gains) on the grapheme-to-phoneme task. Our toolkit is publicly available on https://github.com/lemaoliu/Agtarbidir. Recurrent neural networks (RNNs) (Mikolov et al. 2010), particularly Long Short-term Memory networks (LSTMs)1 (Hochreiter and Schmidhuber 1997; Graves 2013), provide a universal and powerful solution for various tasks that have traditionally required carefully designed, task-specific solutions. On classification tasks (Graves and Schmidhuber 2008; Tai, Socher, and Manning 2015), they can readily summarize an unbounded context which is difficult for tranditional solutions, and this leads to more reliable prediciton. They have advantages over traditional solutions on a more general and challenging tasks such as sequence-to-sequence learning (Sutskever, Vinyals, and Le 2014), where a series of local but dependent predictions are required. RNNs make use of the contextual information for the entire source sequence and also critically are able to exploit the entire sequence of previous predictions. On various sequence-tosequence transduction tasks, RNNs have been shown to be comparable to the state-of-the-art (Bahdanau, Cho, and BenCopyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Throughout this paper, an LSTM network denote a particular RNN with LSTM hidden units. Figure 1: Illustraction of the fundamental shortcoming of an LSTM in decoding. gio 2015; Meng et al. 2015) or superior (Jean et al. 2015; Luong et al. 2015). Despite their sucesses on sequence-to-sequnce learning, RNNs suffer from a fundamental and crucial shortcoming, which has surprisingly been overlooked. When making predictions (in decoding), an LSTM needs to encode the previous local predictions as a part of the contextual information. If some of previous predictions are incorrect, the context for subsequent predictions might include some noises, which undermine the quality of subsequent predicitons, as shown in Figure 1. In the figure, larger fonts indicate greater confidence in the predicted target character. The prediction at t = 7 uses a context consisting of the input and all previous predictions. Since at t = 5 the prediction is incorrect, i.e. it should be ‘R’ (the green character in the reference) instead of ‘L’, it leads to an incorrect prediction at t = 7. 
In this way, an LSTM is more likely to generate an unbalanced sequence deteriorating in quality as the target sequence is generated. A statistical analysis on the real prediction results from an LSTM was performed in order to motivate the work reported here. The analysis supports our hypothesis, and found that on test examples longer than 10 characters, the precision of predictions for the first two characters was higher than 77%, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16)",
"title": ""
},
{
"docid": "abeccd593d90415c843385fe6ef7608f",
"text": "A1 Functional advantages of cell-type heterogeneity in neural circuits Tatyana O. Sharpee A2 Mesoscopic modeling of propagating waves in visual cortex Alain Destexhe A3 Dynamics and biomarkers of mental disorders Mitsuo Kawato F1 Precise recruitment of spiking output at theta frequencies requires dendritic h-channels in multi-compartment models of oriens-lacunosum/moleculare hippocampal interneurons Vladislav Sekulić, Frances K. Skinner F2 Kernel methods in reconstruction of current sources from extracellular potentials for single cells and the whole brains Daniel K. Wójcik, Chaitanya Chintaluri, Dorottya Cserpán, Zoltán Somogyvári F3 The synchronized periods depend on intracellular transcriptional repression mechanisms in circadian clocks. Jae Kyoung Kim, Zachary P. Kilpatrick, Matthew R. Bennett, Kresimir Josić O1 Assessing irregularity and coordination of spiking-bursting rhythms in central pattern generators Irene Elices, David Arroyo, Rafael Levi, Francisco B. Rodriguez, Pablo Varona O2 Regulation of top-down processing by cortically-projecting parvalbumin positive neurons in basal forebrain Eunjin Hwang, Bowon Kim, Hio-Been Han, Tae Kim, James T. McKenna, Ritchie E. Brown, Robert W. McCarley, Jee Hyun Choi O3 Modeling auditory stream segregation, build-up and bistability James Rankin, Pamela Osborn Popp, John Rinzel O4 Strong competition between tonotopic neural ensembles explains pitch-related dynamics of auditory cortex evoked fields Alejandro Tabas, André Rupp, Emili Balaguer-Ballester O5 A simple model of retinal response to multi-electrode stimulation Matias I. Maturana, David B. Grayden, Shaun L. Cloherty, Tatiana Kameneva, Michael R. Ibbotson, Hamish Meffin O6 Noise correlations in V4 area correlate with behavioral performance in visual discrimination task Veronika Koren, Timm Lochmann, Valentin Dragoi, Klaus Obermayer O7 Input-location dependent gain modulation in cerebellar nucleus neurons Maria Psarrou, Maria Schilstra, Neil Davey, Benjamin Torben-Nielsen, Volker Steuber O8 Analytic solution of cable energy function for cortical axons and dendrites Huiwen Ju, Jiao Yu, Michael L. Hines, Liang Chen, Yuguo Yu O9 C. elegans interactome: interactive visualization of Caenorhabditis elegans worm neuronal network Jimin Kim, Will Leahy, Eli Shlizerman O10 Is the model any good? Objective criteria for computational neuroscience model selection Justas Birgiolas, Richard C. Gerkin, Sharon M. Crook O11 Cooperation and competition of gamma oscillation mechanisms Atthaphon Viriyopase, Raoul-Martin Memmesheimer, Stan Gielen O12 A discrete structure of the brain waves Yuri Dabaghian, Justin DeVito, Luca Perotti O13 Direction-specific silencing of the Drosophila gaze stabilization system Anmo J. Kim, Lisa M. Fenk, Cheng Lyu, Gaby Maimon O14 What does the fruit fly think about values? A model of olfactory associative learning Chang Zhao, Yves Widmer, Simon Sprecher,Walter Senn O15 Effects of ionic diffusion on power spectra of local field potentials (LFP) Geir Halnes, Tuomo Mäki-Marttunen, Daniel Keller, Klas H. Pettersen,Ole A. Andreassen, Gaute T. Einevoll O16 Large-scale cortical models towards understanding relationship between brain structure abnormalities and cognitive deficits Yasunori Yamada O17 Spatial coarse-graining the brain: origin of minicolumns Moira L. Steyn-Ross, D. Alistair Steyn-Ross O18 Modeling large-scale cortical networks with laminar structure Jorge F. Mejias, John D. 
Murray, Henry Kennedy, Xiao-Jing Wang O19 Information filtering by partial synchronous spikes in a neural population Alexandra Kruscha, Jan Grewe, Jan Benda, Benjamin Lindner O20 Decoding context-dependent olfactory valence in Drosophila Laurent Badel, Kazumi Ohta, Yoshiko Tsuchimoto, Hokto Kazama P1 Neural network as a scale-free network: the role of a hub B. Kahng P2 Hemodynamic responses to emotions and decisions using near-infrared spectroscopy optical imaging Nicoladie D. Tam P3 Phase space analysis of hemodynamic responses to intentional movement directions using functional near-infrared spectroscopy (fNIRS) optical imaging technique Nicoladie D.Tam, Luca Pollonini, George Zouridakis P4 Modeling jamming avoidance of weakly electric fish Jaehyun Soh, DaeEun Kim P5 Synergy and redundancy of retinal ganglion cells in prediction Minsu Yoo, S. E. Palmer P6 A neural field model with a third dimension representing cortical depth Viviana Culmone, Ingo Bojak P7 Network analysis of a probabilistic connectivity model of the Xenopus tadpole spinal cord Andrea Ferrario, Robert Merrison-Hort, Roman Borisyuk P8 The recognition dynamics in the brain Chang Sub Kim P9 Multivariate spike train analysis using a positive definite kernel Taro Tezuka P10 Synchronization of burst periods may govern slow brain dynamics during general anesthesia Pangyu Joo P11 The ionic basis of heterogeneity affects stochastic synchrony Young-Ah Rho, Shawn D. Burton, G. Bard Ermentrout, Jaeseung Jeong, Nathaniel N. Urban P12 Circular statistics of noise in spike trains with a periodic component Petr Marsalek P14 Representations of directions in EEG-BCI using Gaussian readouts Hoon-Hee Kim, Seok-hyun Moon, Do-won Lee, Sung-beom Lee, Ji-yong Lee, Jaeseung Jeong P15 Action selection and reinforcement learning in basal ganglia during reaching movements Yaroslav I. Molkov, Khaldoun Hamade, Wondimu Teka, William H. Barnett, Taegyo Kim, Sergey Markin, Ilya A. Rybak P17 Axon guidance: modeling axonal growth in T-Junction assay Csaba Forro, Harald Dermutz, László Demkó, János Vörös P19 Transient cell assembly networks encode persistent spatial memories Yuri Dabaghian, Andrey Babichev P20 Theory of population coupling and applications to describe high order correlations in large populations of interacting neurons Haiping Huang P21 Design of biologically-realistic simulations for motor control Sergio Verduzco-Flores P22 Towards understanding the functional impact of the behavioural variability of neurons Filipa Dos Santos, Peter Andras P23 Different oscillatory dynamics underlying gamma entrainment deficits in schizophrenia Christoph Metzner, Achim Schweikard, Bartosz Zurowski P24 Memory recall and spike frequency adaptation James P. Roach, Leonard M. Sander, Michal R. Zochowski P25 Stability of neural networks and memory consolidation preferentially occur near criticality Quinton M. Skilling, Nicolette Ognjanovski, Sara J. Aton, Michal Zochowski P26 Stochastic Oscillation in Self-Organized Critical States of Small Systems: Sensitive Resting State in Neural Systems Sheng-Jun Wang, Guang Ouyang, Jing Guang, Mingsha Zhang, K. Y. Michael Wong, Changsong Zhou P27 Neurofield: a C++ library for fast simulation of 2D neural field models Peter A. Robinson, Paula Sanz-Leon, Peter M. Drysdale, Felix Fung, Romesh G. Abeysuriya, Chris J. 
Rennie, Xuelong Zhao P28 Action-based grounding: Beyond encoding/decoding in neural code Yoonsuck Choe, Huei-Fang Yang P29 Neural computation in a dynamical system with multiple time scales Yuanyuan Mi, Xiaohan Lin, Si Wu P30 Maximum entropy models for 3D layouts of orientation selectivity Joscha Liedtke, Manuel Schottdorf, Fred Wolf P31 A behavioral assay for probing computations underlying curiosity in rodents Yoriko Yamamura, Jeffery R. Wickens P32 Using statistical sampling to balance error function contributions to optimization of conductance-based models Timothy Rumbell, Julia Ramsey, Amy Reyes, Danel Draguljić, Patrick R. Hof, Jennifer Luebke, Christina M. Weaver P33 Exploration and implementation of a self-growing and self-organizing neuron network building algorithm Hu He, Xu Yang, Hailin Ma, Zhiheng Xu, Yuzhe Wang P34 Disrupted resting state brain network in obese subjects: a data-driven graph theory analysis Kwangyeol Baek, Laurel S. Morris, Prantik Kundu, Valerie Voon P35 Dynamics of cooperative excitatory and inhibitory plasticity Everton J. Agnes, Tim P. Vogels P36 Frequency-dependent oscillatory signal gating in feed-forward networks of integrate-and-fire neurons William F. Podlaski, Tim P. Vogels P37 Phenomenological neural model for adaptation of neurons in area IT Martin Giese, Pradeep Kuravi, Rufin Vogels P38 ICGenealogy: towards a common topology of neuronal ion channel function and genealogy in model and experiment Alexander Seeholzer, William Podlaski, Rajnish Ranjan, Tim Vogels P39 Temporal input discrimination from the interaction between dynamic synapses and neural subthreshold oscillations Joaquin J. Torres, Fabiano Baroni, Roberto Latorre, Pablo Varona P40 Different roles for transient and sustained activity during active visual processing Bart Gips, Eric Lowet, Mark J. Roberts, Peter de Weerd, Ole Jensen, Jan van der Eerden P41 Scale-free functional networks of 2D Ising model are highly robust against structural defects: neuroscience implications Abdorreza Goodarzinick, Mohammad D. Niry, Alireza Valizadeh P42 High frequency neuron can facilitate propagation of signal in neural networks Aref Pariz, Shervin S. Parsi, Alireza Valizadeh P43 Investigating the effect of Alzheimer’s disease related amyloidopathy on gamma oscillations in the CA1 region of the hippocampus Julia M. Warburton, Lucia Marucci, Francesco Tamagnini, Jon Brown, Krasimira Tsaneva-Atanasova P44 Long-tailed distributions of inhibitory and excitatory weights in a balanced network with eSTDP and iSTDP Florence I. Kleberg, Jochen Triesch P45 Simulation of EMG recording from hand muscle due to TMS of motor cortex Bahar Moezzi, Nicolangelo Iannella, Natalie Schaworonkow, Lukas Plogmacher, Mitchell R. Goldsworthy, Brenton Hordacre, Mark D. McDonnell, Michael C. Ridding, Jochen Triesch P46 Structure and dynamics of axon network formed in primary cell culture Martin Zapotocky, Daniel Smit, Coralie Fouquet, Alain Trembleau P47 Efficient signal processing and sampling in random networks that generate variability Sakyasingha Dasgupta, Isao Nishikawa, Kazuyuki Aihara, Taro Toyoizumi P48 Modeling the effect of riluzole on bursting in respiratory neural networks Daniel T",
"title": ""
},
{
"docid": "ae72dc57784a9b3bb05dea9418e28914",
"text": "This study explores Internet addiction among some of the Taiwan's college students. Also covered are a discussion of the Internet as a form of addiction, and related literature on this issue. This study used the Uses and Grati®cations theory and the Play theory in mass communication. Nine hundred and ten valid surveys were collected from 12 universities and colleges around Taiwan. The results indicated that Internet addiction does exist among some of Taiwan's college students. In particular, 54 students were identi®ed as Internet addicts. It was found that Internet addicts spent almost triple the number of hours connected to the Internet as compare to non-addicts, and spent signi®cantly more time on BBSs, the WWW, e-mail and games than non-addicts. The addict group found the Internet entertaining, interesting, interactive, and satisfactory. The addict group rated Internet impacts on their studies and daily life routines signi®cantly more negatively than the non-addict group. The study also found that the most powerful predictor of Internet addiction is the communication pleasure score, followed by BBS use hours, sex, satisfaction score, and e-mail-use hours. 7 2000 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "90125582272e3f16a34d5d0c885f573a",
"text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.",
"title": ""
}
] |
scidocsrr
|
5786f0d5addf93608bc4c2b80ffd8e7e
|
Harvesting Wireless Power: Survey of Energy-Harvester Conversion Efficiency in Far-Field, Wireless Power Transfer Systems
|
[
{
"docid": "ba3f3ca8a34e1ea6e54fe9dde673b51f",
"text": "This paper proposes a high-efficiency dual-band on-chip rectifying antenna (rectenna) at 35 and 94 GHz for wireless power transmission. The rectenna is designed in slotline (SL) and finite-width ground coplanar waveguide (FGCPW) transmission lines in a CMOS 0.13-μm process. The rectenna comprises a high gain linear tapered slot antenna (LTSA), an FGCPW to SL transition, a bandpass filter, and a full-wave rectifier. The LTSA achieves a VSWR=2 fractional bandwidth of 82% and 41%, and a gain of 7.4 and 6.5 dBi at the frequencies of 35 and 94 GHz. The measured power conversion efficiencies are 53% and 37% in free space at 35 and 94 GHz, while the incident radiation power density is 30 mW/cm2 . The fabricated rectenna occupies a compact size of 2.9 mm2.",
"title": ""
}
] |
[
{
"docid": "1f095f22ac95b995cfe2c2d9ccf54be6",
"text": "Q-learning can be used to learn a control policy that maximises a scalar reward through interaction with the environment. Qlearning is commonly applied to problems with discrete states and actions. We describe a method suitable for control tasks which require continuous actions, in response to continuous states. The system consists of a neural network coupled with a novel interpolator. Simulation results are presented for a non-holonomic control task. Advantage Learning, a variation of Q-learning, is shown enhance learning speed and reliability",
"title": ""
},
{
"docid": "76105ede3908516cebd3bb84ad965be0",
"text": "897 don't know the coin used for each set of tosses. However, if we had some way of completing the data (in our case, guessing correctly which coin was used in each of the five sets), then we could reduce parameter estimation for this problem with incomplete data to maximum likelihood estimation with complete data. One iterative scheme for obtaining completions could work as follows: starting from some initial parameters, θ ˆ ˆ ˆ = θ Α ,θ Β (t) (t) (t) (), determine for each of the five sets whether coin A or coin B was more likely to have generated the observed flips (using the current parameter estimates). Then, assume these completions (that is, guessed coin assignments) to be correct, and apply the regular maximum likelihood estimation procedure to get θ ˆ(t+1). Finally, repeat these two steps until convergence. As the estimated model improves, so too will the quality of the resulting completions. The expectation maximization algorithm is a refinement on this basic idea. Rather than picking the single most likely completion of the missing coin assignments on each iteration, the expectation maximization algorithm computes probabilities for each possible completion of the missing data, using the current parameters θ ˆ(t). These probabilities are used to create a weighted training set consisting of all possible completions of the data. Finally, a modified version of maximum likelihood estimation that deals with weighted training examples provides new parameter estimates, θ ˆ(t+1). By using weighted training examples rather than choosing the single best completion, the expectation maximization algorithm accounts for the confidence of the model in each completion of the data (Fig. 1b). In summary, the expectation maximiza-tion algorithm alternates between the steps z = (z 1 , z 2 ,…, z 5), where x i ∈ {0,1,…,10} is the number of heads observed during the ith set of tosses, and z i ∈ {A,B} is the identity of the coin used during the ith set of tosses. Parameter estimation in this setting is known as the complete data case in that the values of all relevant random variables in our model (that is, the result of each coin flip and the type of coin used for each flip) are known. Here, a simple way to estimate θ A and θ B is to return the observed proportions of heads for each coin: (1) θ Α ˆ = # of heads using …",
"title": ""
},
{
"docid": "09deba1b4b2dd95b821a4f5de68c7f7b",
"text": "BACKGROUND\nStudies have shown that a significant proportion of people with epilepsy use complementary and alternative medicine (CAM). CAM use is known to vary between different ethnic groups and cultural contexts; however, little attention has been devoted to inter-ethnic differences within the UK population. We studied the use of biomedicine, complementary and alternative medicine, and ethnomedicine in a sample of people with epilepsy of South Asian origin living in the north of England.\n\n\nMETHODS\nInterviews were conducted with 30 people of South Asian origin and 16 carers drawn from a sampling frame of patients over 18 years old with epilepsy, compiled from epilepsy registers and hospital databases. All interviews were tape-recorded, translated if required and transcribed. A framework approach was adopted to analyse the data.\n\n\nRESULTS\nAll those interviewed were taking conventional anti-epileptic drugs. Most had also sought help from traditional South Asian practitioners, but only two people had tried conventional CAM. Decisions to consult a traditional healer were taken by families rather than by individuals with epilepsy. Those who made the decision to consult a traditional healer were usually older family members and their motivations and perceptions of safety and efficacy often differed from those of the recipients of the treatment. No-one had discussed the use of traditional therapies with their doctor. The patterns observed in the UK mirrored those reported among people with epilepsy in India and Pakistan.\n\n\nCONCLUSION\nThe health care-seeking behaviour of study participants, although mainly confined within the ethnomedicine sector, shared much in common with that of people who use global CAM. The appeal of traditional therapies lay in their religious and moral legitimacy within the South Asian community, especially to the older generation who were disproportionately influential in the determination of treatment choices. As a second generation made up of people of Pakistani origin born in the UK reach the age when they are the influential decision makers in their families, resort to traditional therapies may decline. People had long experience of navigating plural systems of health care and avoided potential conflict by maintaining strict separation between different sectors. Health care practitioners need to approach these issues with sensitivity and to regard traditional healers as potential allies, rather than competitors or quacks.",
"title": ""
},
{
"docid": "b5896a60a00d40eac55ed604120b59f2",
"text": "A Proactive Recommender System (PRS) actively pushes recommendations to users when the current context seems appropriate. Despite the advantages of PRSs, especially in the mobile scenario where users could be provided with relevant items on-the-fly when needed, the area of PRSs is still unexplored with many challenges. In particular, it is crucial to identify the relevant items for the target users as well as to determine the right context for pushing these items, since otherwise the user acceptance, and therefore system success, will be negatively impacted. In this paper, we propose a new model that scores each item on two dimensions, preference fit and context fit, to proactively push relevant items to the target user in the right context. Furthermore, we present the preliminary design of a prototype of a mobile Point of Interest (POI) recommender which will be implemented in order to evaluate the practicality and effectiveness of our proposed model.",
"title": ""
},
{
"docid": "3724a800d0c802203835ef9f68a87836",
"text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and messagesignaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.",
"title": ""
},
{
"docid": "629648968e2b378f46fa19ae6a343e70",
"text": "BACKGROUND\nAustralia was one of the first countries to introduce a publicly funded national human papillomavirus (HPV) vaccination program that commenced in April 2007, using the quadrivalent HPV vaccine targeting 12- to 13-year-old girls on an ongoing basis. Two-year catch-up programs were offered to 14- to 17- year-old girls in schools and 18- to 26-year-old women in community-based settings. We present data from the school-based program on population-level vaccine effectiveness against cervical abnormalities in Victoria, Australia.\n\n\nMETHODS\nData for women age-eligible for the HPV vaccination program were linked between the Victorian Cervical Cytology Registry and the National HPV Vaccination Program Register to create a cohort of screening women who were either vaccinated or unvaccinated. Entry into the cohort was 1 April 2007 or at first Pap test for women not already screening. Vaccine effectiveness (VE) and hazard ratios (HR) for cervical abnormalities by vaccination status between 1 April 2007 and 31 December 2011 were calculated using proportional hazards regression.\n\n\nRESULTS\nThe study included 14,085 unvaccinated and 24,871 vaccinated women attending screening who were eligible for vaccination at school, 85.0% of whom had received three doses. Detection rates of histologically confirmed high-grade (HG) cervical abnormalities and high-grade cytology (HGC) were significantly lower for vaccinated women (any dose) (HG 4.8 per 1,000 person-years, HGC 11.9 per 1,000 person-years) compared with unvaccinated women (HG 6.4 per 1,000 person-years, HGC 15.3 per 1,000 person-years) HR 0.72 (95% CI 0.58 to 0.91) and HR 0.75 (95% CI 0.65 to 0.87), respectively. The HR for low-grade (LG) cytological abnormalities was 0.76 (95% CI 0.72 to 0.80). VE adjusted a priori for age at first screening, socioeconomic status and remoteness index, for women who were completely vaccinated, was greatest for CIN3+/AIS at 47.5% (95% CI 22.7 to 64.4) and 36.4% (95% CI 9.8 to 55.1) for women who received any dose of vaccine, and was negatively associated with age. For women who received only one or two doses of vaccine, HRs for HG histology were not significantly different from 1.0, although the number of outcomes was small.\n\n\nCONCLUSION\nA population-based HPV vaccination program in schools significantly reduced cervical abnormalities for vaccinated women within five years of implementation, with the greatest vaccine effectiveness observed for the youngest women.",
"title": ""
},
{
"docid": "b1a656d86ed4c9469f8d2a04186ff8bc",
"text": "The wealth of social information presented on Facebook is astounding. While these affordances allow users to keep up-to-date, they also produce a basis for social comparison and envy on an unprecedented scale. Even though envy may endanger users’ life satisfaction and lead to platform avoidance, no study exists uncovering this dynamics. To close this gap, we build on responses of 584 Facebook users collected as part of two independent studies. In study 1, we explore the scale, scope, and nature of envy incidents triggered by Facebook. In study 2, the role of envy feelings is examined as a mediator between intensity of passive following on Facebook and users’ life satisfaction. Confirming full mediation, we demonstrate that passive following exacerbates envy feelings, which decrease life satisfaction. From a provider’s perspective, our findings signal that users frequently perceive Facebook as a stressful environment, which may, in the long-run, endanger platform sustainability.",
"title": ""
},
{
"docid": "8c9823e26dba4b8aa3dcf0f967e529e1",
"text": "Anthropologists and archaeologists have paid little attention to the origin of music and musicality — far less than for either language or ‘art’. While art has been seen as an index of cognitive complexity and language as an essential tool of communication, music has suffered from our perception that it is an epiphenomenal ‘leisure activity’, and archaeologically inaccessible to boot. Nothing could be further from the truth, according to Steven Mithen; music is integral to human social life, he argues, and we can investigate its ancestry with the same rich range of analyses — neurological, physiological, ethnographic, linguistic, ethological and even archaeological — which have been deployed to study language.",
"title": ""
},
{
"docid": "d06393c467e19b0827eea5f86bbf4e98",
"text": "This paper presents the results of a systematic review of existing literature on the integration of agile software development with user-centered design approaches. It shows that a common process model underlies such approaches and discusses which artifacts are used to support the collaboration between designers and developers.",
"title": ""
},
{
"docid": "2d73a7ab1e5a784d4755ed2fe44078db",
"text": "Over the last years, many papers have been published about how to use machine learning for classifying postings on microblogging platforms like Twitter, e.g., in order to assist users to reach tweets that interest them. Typically, the automatic classification results are then evaluated against a gold standard classification which consists of either (i) the hashtags of the tweets' authors, or (ii) manual annotations of independent human annotators. In this paper, we show that there are fundamental differences between these two kinds of gold standard classifications, i.e., human annotators are more likely to classify tweets like other human annotators than like the tweets' authors. Furthermore, we discuss how these differences may influence the evaluation of automatic classifications, like they may be achieved by Latent Dirichlet Allocation (LDA). We argue that researchers who conduct machine learning experiments for tweet classification should pay particular attention to the kind of gold standard they use. One may even argue that hashtags are not appropriate as a gold standard for tweet classification.",
"title": ""
},
{
"docid": "f8e4db50272d14f026d0956ac25d39d6",
"text": "Automated estimation of the allocation of a driver's visual attention could be a critical component of future advanced driver assistance systems. In theory, vision-based tracking of the eye can provide a good estimate of gaze location. But in practice, eye tracking from video is challenging because of sunglasses, eyeglass reflections, lighting conditions, occlusions, motion blur, and other factors. Estimation of head pose, on the other hand, is robust to many of these effects but can't provide as fine-grained of a resolution in localizing the gaze. For the purpose of keeping the driver safe, it's sufficient to partition gaze into regions. In this effort, a proposed system extracts facial features and classifies their spatial configuration into six regions in real time. The proposed method achieves an average accuracy of 91.4 percent at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study.",
"title": ""
},
{
"docid": "51a37ec1069dceb1a532235f4702682f",
"text": "Abstract— This paper presented the design of four-port network directional coupler at X-band frequency (8.2-12.4 GHz) by using substrate integrated waveguide (SIW) technique. SIW appears few years back which provides an excellent platform in order to design millimeter-wave circuits such as filter, antenna, resonator, coupler and power divider. It offers great compensation for smaller size and can be easily integrated with other planar circuits. The fabrication process can simply be done by using standard Printed Circuit Board (PCB) process where the cost of the manufacturing process will be reduced compared to the conventional waveguide. The directional coupler basically implemented at radar, satellite and point-to-point radio. The simulations for this SIW directional coupler design shows good performances with low insertion loss, low return loss, broad operational bandwidth and have high isolation. Keyword-Bandwidth, Coupling, Directional coupler, Four-port network, Isolation",
"title": ""
},
{
"docid": "fc4ea7391c1500851ec0d37beed4cd90",
"text": "As a crucial operation, routing plays an important role in various communication networks. In the context of data and sensor networks, routing strategies such as shortest-path, multi-path and potential-based (“all-path”) routing have been developed. Existing results in the literature show that the shortest path and all-path routing can be obtained from L1 and L2 flow optimization, respectively. Based on this connection between routing and flow optimization in a network, in this paper we develop a unifying theoretical framework by considering flow optimization with mixed (weighted) L1/L2-norms. We obtain a surprising result: as we vary the trade-off parameter θ, the routing graphs induced by the optimal flow solutions span from shortest-path to multi-path to all-path routing-this entire sequence of routing graphs is referred to as the routing continuum. We also develop an efficient iterative algorithm for computing the entire routing continuum. Several generalizations are also considered, with applications to traffic engineering, wireless sensor networks, and network robustness analysis.",
"title": ""
},
{
"docid": "87fefd773ea10a006dbc9b76f4f1e4c1",
"text": "An underwater robotic assistant could help a human diver by illuminating work areas, fetching tools from the surface, or monitoring the diver's activity for abnormal behavior. However, in order for basic Underwater Human-Robot Interaction (UHRI) to be successful, the robotic assistant has to first be able to detect and track the diver. This paper discusses the detection and tracking of a diver with a high-frequency forward-looking sonar. The first step in the diver detection involves utilizing classical 2D image processing techniques to segment moving objects in the sonar image. The moving objects are then passed through a blob detection algorithm, and then the blob clusters are processed by the cluster classification process. Cluster classification is accomplished by matching observed cluster trajectories with trained Hidden Markov Models (HMM), which results in a cluster being classified as either a diver or clutter. Real-world results show that a moving diver can be autonomously distinguished from stationary objects in a noisy sonar image and tracked.",
"title": ""
},
{
"docid": "9095b7af97f9ff8a4258aa89b0ded6b6",
"text": "Data augmentation is the process of generating samples by transforming training data, with the target of improving the accuracy and robustness of classifiers. In this paper, we propose a new automatic and adaptive algorithm for choosing the transformations of the samples used in data augmentation. Specifically, for each sample, our main idea is to seek a small transformation that yields maximal classification loss on the transformed sample. We employ a trust-region optimization strategy, which consists of solving a sequence of linear programs. Our data augmentation scheme is then integrated into a Stochastic Gradient Descent algorithm for training deep neural networks. We perform experiments on two datasets, and show that that the proposed scheme outperforms random data augmentation algorithms in terms of accuracy and robustness, while yielding comparable or superior results with respect to existing selective sampling approaches.",
"title": ""
},
{
"docid": "966d7821eb78330693e8b9b4498cdade",
"text": "Accidents and inflicted trauma account for 33% and 5-8% of childhood deaths, respectively. Injuries secondary to falling televisions have been reported in the clinical literature. However, descriptions of such injuries at autopsy are limited. The severity and patterns of injury may mimic those considered ''typical'' of inflicted trauma. Thus, integration of data from clinical, scene investigation, and autopsy is necessary for determination of the cause and manner of death. We present autopsy findings from two cases which illustrate injuries sustained from falling televisions. Findings common to both cases include subscalpular hemorrhages, skull fractures, subdural hemorrhages, brain injuries, and optic nerve sheath hemorrhages. The first case showed postsurgical changes secondary to evacuation of a posterior fossa hematoma; three-dimensional reconstruction of the admission computed tomography scan demonstrated the extent of the preintervention skull fractures. In addition, the second case showed a right epidural hematoma. Only case two showed retinal hemorrhage.",
"title": ""
},
{
"docid": "957a3970611470b611c024ed3b558115",
"text": "SHARE is a unique panel database of micro data on health, socio-economic status and social and family networks covering most of the European Union and Israel. To date, SHARE has collected three panel waves (2004, 2006, 2010) of current living circumstances and retrospective life histories (2008, SHARELIFE); 6 additional waves are planned until 2024. The more than 150 000 interviews give a broad picture of life after the age of 50 years, measuring physical and mental health, economic and non-economic activities, income and wealth, transfers of time and money within and outside the family as well as life satisfaction and well-being. The data are available to the scientific community free of charge at www.share-project.org after registration. SHARE is harmonized with the US Health and Retirement Study (HRS) and the English Longitudinal Study of Ageing (ELSA) and has become a role model for several ageing surveys worldwide. SHARE's scientific power is based on its panel design that grasps the dynamic character of the ageing process, its multidisciplinary approach that delivers the full picture of individual and societal ageing, and its cross-nationally ex-ante harmonized design that permits international comparisons of health, economic and social outcomes in Europe and the USA.",
"title": ""
},
{
"docid": "1931717eae1b7b952f18ff9df92ede67",
"text": "The task of implicit discourse relation classification has received increased attention in recent years, including two CoNNL shared tasks on the topic. Existing machine learning models for the task train on sections 2-21 of the PDTB and test on section 23, which includes a total of 761 implicit discourse relations. In this paper, we’d like to make a methodological point, arguing that the standard test set is too small to draw conclusions about whether the inclusion of certain features constitute a genuine improvement, or whether one got lucky with some properties of the test set, and argue for the adoption of cross validation for the discourse relation classification task by the community.",
"title": ""
},
{
"docid": "751e95c13346b18714c5ce5dcb4d1af2",
"text": "Purpose – The purpose of this paper is to propose how to minimize the risks of implementing business process reengineering (BPR) by measuring readiness. For this purpose, the paper proposes an assessment approach for readiness in BPR efforts based on the critical success and failure factors. Design/methodology/approach – A relevant literature review, which investigates success and failure indicators in BPR efforts is carried out and a new categorized list of indicators are proposed. This is a base for conducting a survey to measure the BPR readiness, which has been run in two companies and compared based on a diamond model. Findings – In this research, readiness indicators are determined based on critical success and failure factors. The readiness indicators include six categories. The first five categories, egalitarian leadership, collaborative working environment, top management commitment, supportive management, and use of information technology are positive indicators. The sixth category, resistance to change has a negative role. This paper reports survey results indicating BPR readiness in two Iranian companies. After comparing the position of the two cases, the paper offers several guidelines for amplifying the success points and decreasing failure points and hence, increasing the rate of success. Originality/value – High-failure rate of BPR has been introduced as a main barrier in reengineering processes. In addition, it makes a fear, which in turn can be a failure factor. This paper tries to fill the gap in the literature on decreasing risk in BPR projects by introducing a BPR readiness assessment approach. In addition, the proposed questionnaire is generic and can be utilized in a facilitated manner.",
"title": ""
}
] |
scidocsrr
|
57b6c18791bd7a8a4191f708e404409a
|
Battery optimization in smartphones for remote health monitoring systems to enhance user adherence
|
[
{
"docid": "ae9fb1b7ff6821dd29945f768426d7fc",
"text": "Congestive heart failure (CHF) is a leading cause of death in the United States affecting approximately 670,000 individuals. Due to the prevalence of CHF related issues, it is prudent to seek out methodologies that would facilitate the prevention, monitoring, and treatment of heart disease on a daily basis. This paper describes WANDA (Weight and Activity with Blood Pressure Monitoring System); a study that leverages sensor technologies and wireless communications to monitor the health related measurements of patients with CHF. The WANDA system is a three-tier architecture consisting of sensors, web servers, and back-end databases. The system was developed in conjunction with the UCLA School of Nursing and the UCLA Wireless Health Institute to enable early detection of key clinical symptoms indicative of CHF-related decompensation. This study shows that CHF patients monitored by WANDA are less likely to have readings fall outside a healthy range. In addition, WANDA provides a useful feedback system for regulating readings of CHF patients.",
"title": ""
},
{
"docid": "74227709f4832c3978a21abb9449203b",
"text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.",
"title": ""
}
] |
[
{
"docid": "aa7114bf0038f2ab4df6908ed7d28813",
"text": "Sematch is an integrated framework for the development, evaluation and application of semantic similarity for Knowledge Graphs. The framework provides a number of similarity tools and datasets, and allows users to compute semantic similarity scores of concepts, words, and entities, as well as to interact with Knowledge Graphs through SPARQL queries. Sematch focuses on knowledge-based semantic similarity that relies on structural knowledge in a given taxonomy (e.g. depth, path length, least common subsumer), and statistical information contents. Researchers can use Sematch to develop and evaluate semantic similarity metrics and exploit these metrics in applications. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7c9cd59a4bb14f678c57ad438f1add12",
"text": "This paper proposes a new ensemble method built upon a deep neural network architecture. We use a set of meteorological models for rain forecast as base predictors. Each meteorological model is provided to a channel of the network and, through a convolution operator, the prediction models are weighted and combined. As a result, the predicted value produced by the ensemble depends on both the spatial neighborhood and the temporal pattern. We conduct some computational experiments in order to compare our approach to other ensemble methods widely used for daily rainfall prediction. The results show that our architecture based on ConvLSTM networks is a strong candidate to solve the problem of combining predictions in a spatiotemporal context.",
"title": ""
},
{
"docid": "208acb6756248e1f7603d4866a8e5f26",
"text": "Meeting citizens’ requirements economically and efficiently is the most important objective of Smart Cities. As a matter of fact, they are considered a key concept both for future Internet and information and communications technology. It is expected that a wide range of services will be made available for residential users (e.g. intelligent transportation systems, e-government, e-banking, e-commerce and smart management of energy demand), public administration entities, public safety and civil protection agencies and so on with increased quality, lower costs and reduced environmental impact. In order to achieve these ambitious objectives, new technologies should be developed such as non-invasive sensing, highly parallel processing, smart grids and mobile broadband communications. This paper considers the communication aspects of Smart City applications, specifically, the role of the latest developments of Long-Term Evolution-Advanced standard, which forecast the increase of broadband coverage by means of small cells. We shall demonstrate that the novel concept of small cell fully meets the emerging communication and networking requirements of future Smart Cities. To this aim, a feasible network architecture for future Smart Cities, based on small cells, will be discussed in the framework of a future smarter and user-centric perspective of forthcoming 4G mobile technologies. Copyright © 2013 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "a8cb644c1a7670670299d33c1e1e53d3",
"text": "In Java, C or C++, attempts to dereference the null value result in an exception or a segmentation fault. Hence, it is important to identify those program points where this undesired behaviour might occur or prove the other program points (and possibly the entire program) safe. To that purpose, null-pointer analysis of computer programs checks or infers non-null annotations for variables and object fields. With few notable exceptions, null-pointer analyses currently use run-time checks or are incorrect or only verify manually provided annotations. In this paper, we use abstract interpretation to build and prove correct a first, flow and context-sensitive static null-pointer analysis for Java bytecode (and hence Java) which infers non-null annotations. It is based on Boolean formulas, implemented with binary decision diagrams. For better precision, it identifies instance or static fields that remain always non-null after being initialised. Our experiments show this analysis faster and more precise than the correct null-pointer analysis by Hubert, Jensen and Pichardie. Moreover, our analysis deals with exceptions, which is not the case of most others; its formulation is theoretically clean and its implementation strong and scalable. We subsequently improve that analysis by using local reasoning about fields that are not always non-null, but happen to hold a non-null value when they are accessed. This is a frequent situation, since programmers typically check a field for non-nullness before its access. We conclude with an example of use of our analyses to infer null-pointer annotations which are more precise than those that other inference tools can achieve.",
"title": ""
},
{
"docid": "03ae01e41526a1548a17fe1f92499a24",
"text": "Barcodes enable automated work processes without human intervention, and are widely deployed because they are fast and accurate, eliminate many errors and often save time and money. In order to increase the data capacity of barcodes, two dimensional (2D) code were developed; the main challenges of 2D codes lie in their need to store more information and more character types without compromising their practical efficiency. This paper proposes the High Capacity Colored Two Dimensional (HCC2D) code, a new 2D code which aims at increasing the space available for data, while preserving the strong reliability and robustness properties of QR. The use of colored modules in HCC2D poses some new and non-trivial computer vision challenges. We developed a prototype of HCC2D, which realizes the entire Print&Scan process. The performance of HCC2D was evaluated considering different operating scenarios and data densities. HCC2D was compared to other barcodes, such as QR and Microsoft's HCCB; the experiment results showed that HCC2D codes obtain data densities close to HCCB and strong robustness similar to QR.",
"title": ""
},
{
"docid": "66b104459bdfc063cf7559c363c5802f",
"text": "We present a new local strategy to solve incremental learning tasks. Applied to Support Vector Machines based on local kernel, it allows to avoid re-learning of all the parameters by selecting a working subset where the incremental learning is performed. Automatic selection procedure is based on the estimation of generalization error by using theoretical bounds that involve the margin notion. Experimental simulation on three typical datasets of machine learning give promising results.",
"title": ""
},
{
"docid": "e2a6b7730198cbea992947a8d2814ba8",
"text": "Some individuals have a greater capacity than others to carry out sophisticated information processing about emotions and emotion-relevant stimuli and to use this information as a guide to thinking and behavior. The authors have termed this set of abilities emotional intelligence (EI). Since the introduction of the concept, however, a schism has developed in which some researchers focus on EI as a distinct group of mental abilities, and other researchers instead study an eclectic mix of positive traits such as happiness, self-esteem, and optimism. Clarifying what EI is and is not can help the field by better distinguishing research that is truly pertinent to EI from research that is not. EI--conceptualized as an ability--is an important variable both conceptually and empirically, and it shows incremental validity for predicting socially relevant outcomes.",
"title": ""
},
{
"docid": "71e6994bf56ed193a3a04728c7022a45",
"text": "To evaluate timing and duration differences in airway protection and esophageal opening after oral intubation and mechanical ventilation for acute respiratory distress syndrome (ARDS) survivors versus age-matched healthy volunteers. Orally intubated adult (≥ 18 years old) patients receiving mechanical ventilation for ARDS were evaluated for swallowing impairments via a videofluoroscopic swallow study (VFSS) during usual care. Exclusion criteria were tracheostomy, neurological impairment, and head and neck cancer. Previously recruited healthy volunteers (n = 56) served as age-matched controls. All subjects were evaluated using 5-ml thin liquid barium boluses. VFSS recordings were reviewed frame-by-frame for the onsets of 9 pharyngeal and laryngeal events during swallowing. Eleven patients met inclusion criteria, with a median (interquartile range [IQR]) intubation duration of 14 (9, 16) days, and VFSSs completed a median of 5 (4, 13) days post-extubation. After arrival of the bolus in the pharynx, ARDS patients achieved maximum laryngeal closure a median (IQR) of 184 (158, 351) ms later than age-matched, healthy volunteers (p < 0.001) and it took longer to achieve laryngeal closure with a median (IQR) difference of 151 (103, 217) ms (p < 0.001), although there was no significant difference in duration of laryngeal closure. Pharyngoesophageal segment opening was a median (IQR) of − 116 (− 183, 1) ms (p = 0.004) shorter than in age-matched, healthy controls. Evaluation of swallowing physiology after oral endotracheal intubation in ARDS patients demonstrates slowed pharyngeal and laryngeal swallowing timing, suggesting swallow-related muscle weakness. These findings may highlight specific areas for further evaluation and potential therapeutic intervention to reduce post-extubation aspiration.",
"title": ""
},
{
"docid": "35dbef4cc4b8588d451008b8156f326f",
"text": "Raman spectroscopy is a powerful tool for studying the biochemical composition of tissues and cells in the human body. We describe the initial results of a feasibility study to design and build a miniature, fiber optic probe incorporated into a standard hypodermic needle. This probe is intended for use in optical biopsies of solid tissues to provide valuable information of disease type, such as in the lymphatic system, breast, or prostate, or of such tissue types as muscle, fat, or spinal, when identifying a critical injection site. The optical design and fabrication of this probe is described, and example spectra of various ex vivo samples are shown.",
"title": ""
},
{
"docid": "655ebc05eafbca9b9079224d1013e8fc",
"text": "This paper examines the degree of stability in the structure of the corporate elite network in the US during the 1980s and 1990s. Several studies have documented that board-toboard ties serve as a mechanism for the diffusion of corporate practices, strategies, and structures; thus, the overall structure of the network can shape the nature and rate of aggregate corporate change. But upheavals in the nature of corporate governance and nearly complete turnover in the firms and directors at the core of the network since 1980 prompt a reassessment of the network’s topography.We find that the aggregate connectivity of the network is remarkably stable and appears to be an intrinsic property of the interlock network, resilient to major changes in corporate governance.After a brief review of elite studies in the US, we take advantage of the recent advances in the theoretical and methodological tools for analyzing network structures to examine the network properties of the directors and companies in 1982, 1990, and 1999. We use concepts from smallworld analysis to explain our finding that the structure of the corporate elite is resilient to macro and micro changes affecting corporate governance.",
"title": ""
},
{
"docid": "2693030e6575cb7faec59aaec6387e2c",
"text": "Human Resource (HR) applications can be used to provide fair and consistent decisions, and to improve the effectiveness of decision making processes. Besides that, among the challenge for HR professionals is to manage organization talents, especially to ensure the right person for the right job at the right time. For that reason, in this article, we attempt to describe the potential to implement one of the talent management tasks i.e. identifying existing talent by predicting their performance as one of HR application for talent management. This study suggests the potential HR system architecture for talent forecasting by using past experience knowledge known as Knowledge Discovery in Database (KDD) or Data Mining. This article consists of three main parts; the first part deals with the overview of HR applications, the prediction techniques and application, the general view of Data mining and the basic concept of talent management in HRM. The second part is to understand the use of Data Mining technique in order to solve one of the talent management tasks, and the third part is to propose the potential HR system architecture for talent forecasting. Keywords—HR Application, Knowledge Discovery in Database (KDD), Talent Forecasting.",
"title": ""
},
{
"docid": "f88c6e0c818266f685cee72b8af5f341",
"text": "Flying animals with flapping wings may best exemplify the astonishing ability of natural selection on design optimization by excelling both stability and maneuverability at insect/hummingbird scale. Flapping Wing Micro Air Vehicle (FWMAV) holds great promise in bridging the performance gap between engineering system and their natural counterparts. Designing and constructing such a system is a challenging problem under stringent size, weight and power (SWaP) constraints. In this work, we presented a systematic approach for design optimization and integration for a hummingbird inspired FWMAV. Our formulation covers aspects of actuation, dynamics, flight stability and control, which was validated by experimental data for both rigid and flexible wings, ranging from low to high wing loading. The optimization yields prototypes with onboard sensors, electronics, and computation units. The prototype flaps at 30Hz to 40Hz, with 7.5 to 12 grams of system weight and 12 to 20 grams of maximum lift. Liftoff was demonstrated with added payloads. Flapping wing platforms with different requirements and scales can now be designed and optimized with minor modifications of proposed formulation.",
"title": ""
},
{
"docid": "45c917e024842ff7e087e4c46a05be25",
"text": "A centrifugal pump that employs a bearingless motor with 5-axis active control has been developed. In this paper, a novel bearingless canned motor pump is proposed, and differences from the conventional structure are explained. A key difference between the proposed and conventional bearingless canned motor pumps is the use of passive magnetic bearings; in the proposed pump, the amount of permanent magnets (PMs) is reduced by 30% and the length of the rotor is shortened. Despite the decrease in the total volume of PMs, the proposed structure can generate large suspension forces and high torque compared with the conventional design by the use of the passive magnetic bearings. In addition, levitation and rotation experiments demonstrated that the proposed motor is suitable for use as a bearingless canned motor pump.",
"title": ""
},
{
"docid": "2d04a311815c8fef8728e4a992d3efac",
"text": "The amidase activities of two Aminobacter sp. strains (DSM24754 and DSM24755) towards the aryl-substituted substrates phenylhydantoin, indolylmethyl hydantoin, D,L-6-phenyl-5,6-dihydrouracil (PheDU) and para-chloro-D,L-6-phenyl-5,6-dihydrouracil were compared. Both strains showed hydantoinase and dihydropyrimidinase activity by hydrolyzing all substrates to the corresponding N-carbamoyl-α- or N-carbamoyl-β-amino acids. However, carbamoylase activity and thus a further degradation of these products to α- and β-amino acids was not detected. Additionally, the genes coding for a dihydropyrimidinase and a carbamoylase of Aminobacter sp. DSM24754 were elucidated. For Aminobacter sp. DSM24755 a dihydropyrimidinase gene flanked by two genes coding for putative ABC transporter proteins was detected. The deduced amino acid sequences of both dihydropyrimidinases are highly similar to the well-studied dihydropyrimidinase of Sinorhizobium meliloti CECT4114. The latter enzyme is reported to accept substituted hydantoins and dihydropyrimidines as substrates. The deduced amino acid sequence of the carbamoylase gene shows a high similarity to the very thermostable enzyme of Pseudomonas sp. KNK003A.",
"title": ""
},
{
"docid": "47f2a5a61677330fc85ff6ac700ac39f",
"text": "We present CHALET, a 3D house simulator with support for navigation and manipulation. CHALET includes 58 rooms and 10 house configuration, and allows to easily create new house and room layouts. CHALET supports a range of common household activities, including moving objects, toggling appliances, and placing objects inside closeable containers. The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment.",
"title": ""
},
{
"docid": "948295ca3a97f7449548e58e02dbdd62",
"text": "Neural computations are often compared to instrument-measured distance or duration, and such relationships are interpreted by a human observer. However, neural circuits do not depend on human-made instruments but perform computations relative to an internally defined rate-of-change. While neuronal correlations with external measures, such as distance or duration, can be observed in spike rates or other measures of neuronal activity, what matters for the brain is how such activity patterns are utilized by downstream neural observers. We suggest that hippocampal operations can be described by the sequential activity of neuronal assemblies and their internally defined rate of change without resorting to the concept of space or time.",
"title": ""
},
{
"docid": "c8d2092150e1e50232a5bc3847520d19",
"text": "Thermoregulation disorders are associated with Body temperature fluctuation. Both hyper- and hypothermia are evidence of an ongoing pathological process. Contralateral symmetry in the Body heat spread is considered normal, while asymmetry, if above a certain level, implies an underlying pathology. Infrared thermography (IRT) is employed in many medical fields including ophthalmology. The earliest attempts of eye surface temperature evaluation were made in the 19th century. Over the last 50 years, different authors have been using this method to assess ocular adnexa, however, the technique remains insufficiently studied. The reported IRT data is often contradictory, which may be due to heterogeneity (in terms of severity) of patient groups and disparities between research parameters.",
"title": ""
},
{
"docid": "d339f7d94334a2ccc256c29c63fd936f",
"text": "The random waypoint model is a frequently used mobility model for simulation–based studies of wireless ad hoc networks. This paper investigates the spatial node distribution that results from using this model. We show and interpret simulation results on a square and circular system area, derive an analytical expression of the expected node distribution in one dimension, and give an approximation for the two–dimensional case. Finally, the concept of attraction areas and a modified random waypoint model, the random borderpoint model, is analyzed by simulation.",
"title": ""
},
{
"docid": "aae42d6671c810cf07c088e0d91234b6",
"text": "Cooperative technological solutions for Distributed Denial-of-Service (DDoS) attacks are already available, yet organizations in the best position to implement them lack incentive to do so, and the victims of DDoS attacks cannot find effective methods to motivate them. In this article we discuss two components of the technological solutions to DDoS attacks: cooperative filtering and cooperative traffic smoothing by caching. We then analyze the broken incentive chain in each of these technological solutions. As a remedy, we propose usage-based pricing and Capacity Provision Networks, which enable victims to disseminate enough incentive along attack paths to stimulate cooperation against DDoS attacks.",
"title": ""
}
] |
scidocsrr
|
531eb001433723ecc73ba8e7bbcaf96b
|
Frontiers of biomedical text mining: current progress
|
[
{
"docid": "ccb5a426e9636186d2819f34b5f0d5e8",
"text": "MOTIVATION\nThe discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge on individual relations like e.g. physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases.\n\n\nRESULTS\nWe developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx on a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted approximately 150,000 relations with an estimated performance of both 80% precision and 80% recall.\n\n\nAVAILABILITY\nThe used natural language preprocessing tools are free for use for academic research. Test sets and relation term lists are available from our website (http://www.bio.ifi.lmu.de/publications/RelEx/).",
"title": ""
}
] |
[
{
"docid": "c1bfef951e9775f6ffc949c5110e1bd1",
"text": "In the interest of more systematically documenting the early signs of autism, and of testing specific hypotheses regarding their underlying neurodevelopmental substrates, we have initiated a longitudinal study of high-risk infants, all of whom have an older sibling diagnosed with an autistic spectrum disorder. Our sample currently includes 150 infant siblings, including 65 who have been followed to age 24 months, who are the focus of this paper. We have also followed a comparison group of low-risk infants. Our measures include a novel observational scale (the first, to our knowledge, that is designed to assess autism-specific behavior in infants), a computerized visual orienting task, and standardized measures of temperament, cognitive and language development. Our preliminary results indicate that by 12 months of age, siblings who are later diagnosed with autism may be distinguished from other siblings and low-risk controls on the basis of: (1) several specific behavioral markers, including atypicalities in eye contact, visual tracking, disengagement of visual attention, orienting to name, imitation, social smiling, reactivity, social interest and affect, and sensory-oriented behaviors; (2) prolonged latency to disengage visual attention; (3) a characteristic pattern of early temperament, with marked passivity and decreased activity level at 6 months, followed by extreme distress reactions, a tendency to fixate on particular objects in the environment, and decreased expression of positive affect by 12 months; and (4) delayed expressive and receptive language. We discuss these findings in the context of various neural networks thought to underlie neurodevelopmental abnormalities in autism, including poor visual orienting. Over time, as we are able to prospectively study larger numbers and to examine interrelationships among both early-developing behaviors and biological indices of interest, we hope this work will advance current understanding of the neurodevelopmental origins of autism.",
"title": ""
},
{
"docid": "da4c4b0ba4e42b578b380286fab6bbb8",
"text": "For many years, scholars and investment professionals have argued that value strategies outperform the market. These value strategies call for buying stocks that have low prices relative to earnings, dividends, book assets, or other measures of fundamental value. While there is some agreement that value strategies produce higher returns, the interpretation of why they do so is more controversial. This article provides evidence that value strategies yield higher returns because these strategies exploit the suboptimal behavior of the typical investor and not because these strategies are fundamentally riskier. FORMANY YEARS, SCHOLARS and investment professionals have argued that value strategies outperform the market (Graham and Dodd (1934) and Dreman (1977)). These value strategies call for buying stocks that have low prices relative to earnings, dividends, historical prices, book assets, or other measures of value. In recent years, value strategies have attracted academic attention as well. Basu (19771, Jaffe, Keim, and Westerfield (1989), Chan, Hamao, and Lakonishok (1991), and Fama and French (1992) show that stocks with high earnings/price ratios earn higher returns. De Bondt and Thaler (1985, 1987) argue that extreme losers outperform the market over the subsequent several years. Despite considerable criticism (Chan (1988) and Ball and Kothari (1989)), their analysis has generally stood up to the tests (Chopra, Lakonishok, and Ritter (1992)). Rosenberg, Reid, and Lanstein (1984) show that stocks with high book relative to market values of equity outperform the market. Further work (Chan, Hamao, and Lakonishok (1991) 'Lakonishok is from the University of Illinois, Shleifer is from Harvard University, and Vishny is from the University of Chicago. We are indebted to Gil Beebower, Fischer Black, Stephen Brown, K. C. Chan, Louis Chan, Eugene Fama, Kenneth French, Bob Haugen, Jay Ritter, Rene Stulz, and two anonymous referees for helpful comments and to Han Qu for outstanding research assistance. This article has been presented at the Bcrlrcley Program in Finance, University of California (Berkeley), the Center for Research in Securities Prices Conference, the University of Chicago, the University of Illinois, the Massachusetts Institute of Technology, the National Bureau of Economic Research (Asset Pricing and Behavioral Finance Groups), New York University, Pensions and Investments Conference, the Institute for Quantitative Research in Finance (United States and Europe), Society of Quantitative Analysts, Stanford University, the University of Toronto, and Tel Aviv University. The research was supported by the National Science Foundation, Bradley Foundation, Russell Sage Foundation, the National Bureau of Economic Research Asset Management Research Advisory Group, and the National Center for Supercomputing Applications, University of Illinois. The Journal of Finance and Fama and French (1992)) has both extended and refined these results. Finally, Chan, Hamao, and Lakonishok (1991) show that a high ratio of cash flow to price also predicts higher returns. Interestingly, many of these results have been obtained for both the United States and Japan. Certain types of value strategies, then, appear to have beaten the market. While there is some agreement that value strategies have produced superior returns, the interpretation of why they have done so is more controversial. 
Value strategies might produce higher returns because they are contrarian to \"naive\"' strategies followed by other investors. These naive strategies might range from extrapolating past earnings growth too far into the future, to assuming a trend in stock prices, to overreacting to good or bad news, or to simply equating a good investment with a well-run company irrespective of price. Regardless of the reason, some investors tend to get overly excited about stocks that have done very well in the past and buy them up, so that these \"glamour\" stocks become overpriced. Similarly, they overreact to stocks that have done very badly, oversell them, and these out-of-favor \"value\" stocks become underpriced. Contrarian investors bet against such naive investors. Because contrarian strategies invest disproportionately in stocks that are underpriced and underinvest in stocks that are overpriced, they outperform the market (see De Bondt and Thaler (1985) and Haugen (1994)). An alternative explanation of why value strategies have produced superior returns, argued most forcefully by Fama and French (1992), is that they are fundamentally riskier. That is, investors in value stocks, such as high bookto-market stocks, tend to bear higher fundamental risk of some sort, and their higher average returns are simply compensation for this risk. This argument is also used by critics of De Bondt and Thaler (Chan (1988) and Ball and Kothari (1989)) to dismiss their overreaction story. Whether value strategies have produced higher returns because they are contrarian to naive strategies or because they are fundamentally riskier remains an open question. In this article, we try to shed further light on the two potential explanations for why value strategies work. We do so along two dimensions. First, we examine more closely the predictions of the contrarian model. In particular, one natural version of the contrarian model argues that the overpriced glamour stocks are those which, first, have performed well in the past, and second, are expected by the market to perform well in the future. Similarly, the underpriced out-of-favor or value stocks are those that have performed poorly in the past and are expected to continue to perform poorly. Value strategies that bet against those investors who extrapolate past performance too far into the future produce superior returns. In principle, this version of the contrarian model is testable because past performance and expectation of future performance are two distinct and separately measurable characteristics of glamour and value. In this article, past performance is measured using 'What we call \"naive strategies\" are also sometimes referred to as \"popular models\" (Shiller (1984)) and \"noise\" (Black (1986)). Contrarian Investment, Extrapolation, and Risk 1543 information on past growth in sales, earnings, and cash flow, and expected performance is measured by multiples of price to current earnings and cash flow. We examine the most obvious implication of the contrarian model, namely that value stocks outperform glamour stocks. We start with simple onevariable classifications of glamour and value stocks that rely in most cases on measures of either past growth or expected future growth. We then move on to classifications in which glamour and value are defined using both past growth and expected future growth. In addition, we compare past, expected, and future growth rates of glamour and value stocks. 
Our version of the contrarian model predicts that differences in expected future growth rates are linked to past growth and overestimate actual future growth differences between glamour and value firms. We find that a wide range of value strategies have produced higher returns, and that the pattern of past, expected, and actual future growth rates is consistent with the contrarian model. The second question we ask is whether value stocks are indeed fundamentally riskier than glamour stocks. To be fundamentally riskier, value stocks must underperform glamour stocks with some frequency, and particularly in the states of the world when the marginal utility of wealth is high. This view of risk motivates our tests. We look at the frequency of superior (and inferior) performance of value strategies, as well as at their performance in bad states of the world, such as extreme down markets and economic recessions. We also look at the betas and standard deviations of value and glamour strategies. We find little, if any, support for the view that value strategies are fundamentally riskier. Our results raise the obvious question of how the higher expected returns on value strategies could have continued if such strategies are not fundamentally riskier? We present some possible explanations that rely both on behavioral strategies favored by individual investors and on agency problems plaguing institutional investors. The next section of the article briefly discusses our methodology. Section I1 examines a variety of simple classification schemes for glamour and value stocks based on the book-to-market ratio, the cash flow-to-price ratio, the earnings-to-price ratio, and past growth in sales. Section I1 shows that all of these simple value strategies have produced superior returns and motivates our subsequent use of combinations of measures of past and expected growth. Section I11 then examines the performance of value strategies that are defined using both past growth and current multiples. These two-dimensional value strategies outperform glamour strategies by approximately 10 to 11 percent per year. Moreover, the superior performance of value stocks relative to glamour stocks persists when we restrict our attention to the largest 50 percent or largest 20 percent of stocks by market capitalization. Section IV provides evidence that contrarian strategies work because they exploit expectational errors implicit in stock prices. Specifically, the differences in expected growth rates between glamour and value stocks implicit in their 1544 The Journal of Finance relative valuation multiples significantly overestimate actual future growth rate differences. Section V examines risk characteristics of value strategies and provides evidence that, over longer horizons, value strategies have outperformed glamour strategies quite consistently and have done particularly well in \"bad\" states of the world. This evidence provides no support for the hypothesis that value strategies are fundamentally riskier. Finally, Section VI attempts to interpret our findings.",
"title": ""
},
{
"docid": "de3789fe0dccb53fe8555e039fde1bc6",
"text": "Estimating consumer surplus is challenging because it requires identification of the entire demand curve. We rely on Uber’s “surge” pricing algorithm and the richness of its individual level data to first estimate demand elasticities at several points along the demand curve. We then use these elasticity estimates to estimate consumer surplus. Using almost 50 million individuallevel observations and a regression discontinuity design, we estimate that in 2015 the UberX service generated about $2.9 billion in consumer surplus in the four U.S. cities included in our analysis. For each dollar spent by consumers, about $1.60 of consumer surplus is generated. Back-of-the-envelope calculations suggest that the overall consumer surplus generated by the UberX service in the United States in 2015 was $6.8 billion.",
"title": ""
},
{
"docid": "3665fcef99cb1c45a6833ba04a7eb7ac",
"text": "The increasing understanding of the advantages offered by fish and insect-like locomotion is creating a demand for muscle-like materials capable of mimicking nature's mechanisms. Actuator materials that employ voltage, field, light, or temperature driven dimensional changes to produce forces and displacements are suggesting new approaches to propulsion and maneuverability. Fundamental properties of these new materials are presented, and examples of potential undersea applications are examined in order to assist those involved in device design and in actuator research to evaluate the current status and the developing potential of these artificial muscle technologies. Technologies described are based on newly explored materials developed over the past decade, and also on older materials whose properties are not widely known. The materials are dielectric elastomers, ferroelectric polymers, liquid crystal elastomers, thermal and ferroelectric shape memory alloys, ionic polymer/metal composites, conducting polymers, and carbon nanotubes. Relative merits and challenges associated with the artificial muscle technologies are elucidated in two case studies. A summary table provides a quick guide to all technologies that are discussed.",
"title": ""
},
{
"docid": "58ea96e65ce2f767064a32b1e9f60338",
"text": "We present an approach to the problem of real-time identification of vehicle motion models based on fitting, on a continuous basis, parametrized slip models to observed behavior. Our approach is unique in that we generate parametric models capturing the dynamics of systematic error (i.e. slip) and then predict trajectories for arbitrary inputs on arbitrary terrain. The integrated error dynamics are linearized with respect to the unknown parameters to produce an observer relating errors in predicted slip to errors in the parameters. An Extended Kalman filter is used to identify this model on-line. The filter forms innovations based on residual differences between the motion originally predicted using the present model and the motion ultimately experienced by the vehicle. Our results show that the models converge in a few seconds and they reduce prediction error for even benign maneuvers where errors might be expected to be small already. Results are presented for both a skid-steered and an Ackerman steer vehicle.",
"title": ""
},
{
"docid": "70bee569e694c92b79bd5e7dc586cbdc",
"text": "Synchronous reluctance machines (SynRM) have been used widely in industries for instance, in ABB's new VSD product package based on SynRM technology. It is due to their unique merits such as high efficiency, fast dynamic response, and low cost. However, considering the major requirements for traction applications such as high torque and power density, low torque ripple, wide speed range, proper size, and capability of meeting a specific torque envelope, this machine is still under investigation to be developed for traction applications. Since the choice of motor for traction is generally determined by manufacturers with respect to three dominant factors: cost, weight, and size, the SynRM can be considered a strong alternative due to its high efficiency and lower cost. Hence, the machine's proper size estimation is a major step of the design process before attempting the rotor geometry design. This is crucial in passenger vehicles in which compactness is a requirement and the size and weight are indeed the design limitations. This paper presents a methodology for sizing a SynRM. The electric and magnetic parameters of the proposed machine in conjunction with the core dimensions are calculated. Then, the proposed method's validity and evaluation are done using FE analysis.",
"title": ""
},
{
"docid": "38247a496f72b778a99927417f6d3695",
"text": "Human faces are neither exactly Lambertian nor entirely convex and hence most models in literature which make the Lambertian assumption, fall short when dealing with specularities and cast shadows. In this paper, we present a novel anti-symmetric tensor spline (a spline for tensor-valued functions) based method for the estimation of the Apparent BRDF (ABRDF) field for human faces that seamlessly accounts for specularities and cast shadows. Furthermore, unlike other methods, it does not require any 3D information to build the model and can work with as few as 9 images. In order to validate the accuracy of our anti-symmetric tensor spline model, we present a novel approximation of the ABRDF using a continuous mixture of single-lobed spherical functions. We demonstrate the effectiveness of our anti-symmetric tensor-spline model in comparison to other popular models in the literature, by presenting extensive results for face relighting and face recognition using the Extended Yale B database.",
"title": ""
},
{
"docid": "4162c6bbaac397ff24e337fa4af08abd",
"text": "We present a new model called LATTICERNN, which generalizes recurrent neural networks (RNNs) to process weighted lattices as input, instead of sequences. A LATTICERNN can encode the complete structure of a lattice into a dense representation, which makes it suitable to a variety of problems, including rescoring, classifying, parsing, or translating lattices using deep neural networks (DNNs). In this paper, we use LATTICERNNs for a classification task: each lattice represents the output from an automatic speech recognition (ASR) component of a spoken language understanding (SLU) system, and we classify the intent of the spoken utterance based on the lattice embedding computed by a LATTICERNN. We show that making decisions based on the full ASR output lattice, as opposed to 1-best or n-best hypotheses, makes SLU systems more robust to ASR errors. Our experiments yield improvements of 13% over a baseline RNN system trained on transcriptions and 10% over an nbest list rescoring system for intent classification.",
"title": ""
},
{
"docid": "7b19a4e0f756a25bd468798bb9711422",
"text": "Object perception in 3-D is a highly challenging problem in computer vision. The major concern in these tasks involves object occlusion, different object poses, appearance and limited perception of the environment by individual sensors in terms of range measurements. In this particular project, our goal is improving 3D perception of the environment by using fusion from lidars and cameras with focus to autonomous driving. The main reason for using lidars and cameras are to combine the complementary information from each of the modalities for efficient feature set extraction that leads to improved perception.",
"title": ""
},
{
"docid": "251d81d261b531c832b8dcf8ec3575ae",
"text": "The goal of this study was to examine the use of pornographic materials by sex offenders during the commission of their crimes. A sample of 561 sex offenders was examined. There were 181 offenders against children, 144 offenders against adults, 223 incest offenders, 8 exhibitionists, and 5 miscellaneous cases. All but four cases were men. A total of 96 (17%) offenders had used pornography at the time of their offenses. More offenders against children than against adults used pornography in the offenses. Of the users, 55% showed pornographic materials to their victims and 36% took pictures, mostly of child victims. Nine cases were involved in the distribution of pornography. Results showed that pornography plays only a minor role in the commission of sexual offenses, however the current findings raise a major concern that pornography use in the commission of sexual crimes primarily involved child victims.",
"title": ""
},
{
"docid": "d4f2cf2b793a83bfb488d58842db5ea5",
"text": "If your letter had praised everything of mine, I would not have been as pleased as I am by your attempt to disprove and reject certain points. I regard this as a mark of friendship and the other as one of adulation. But in return I ask you to listen with an open mind to my rebuttal. For what you say, if it were allowed to pass without any reply from me, would be too one-sided. It is always a source of satisfaction to come across an article where one is cited often, especially by two scholars who have contributed so much to advance the study of subjective well-being. Of course, my happiness would have been greater had the references been favorable, rather than unfavorable. (Hereafter I use happiness and satisfaction interchangeably.) I take it that the Hagerty-Veenhoven (hereafter H-V) article (2003) is a rebuttal of my 1995 paper (Easterlin 1995), because there is only one reference to time series results of studies by other scholars done in the almost 10-year period since publication of my article. Indeed, I believe I detect an echo of a similar critique by one of the authors of my 1974 article (cf. Easterlin 1974 and Veenhoven 1991; for comments on the latter, see Easterlin 2004 forthcoming). 3 Apparently the editor and referee(s) of this Journal also viewed the H-V paper as a comment on my 1995 article; otherwise it would be hard to explain the absence of the customary literature review and reconciliation of new and disparate results with those of prior work. It seems appropriate, therefore, to offer a few comments in response, especially since the conclusions of the H-V article will no doubt be cited often as substantially different from my own when, in fact, they are not. I will focus on the time series analysis in the section \" Descriptive Statistics of Happiness and Income \" (pp. 11-18) which I take to be the heart of their article. Until one is sure about the data, methodology, and results of the time series analysis, hypothesis testing is superfluous. 1 THE UNITED STATES I was quite surprised to find the one country whose data I thought I knew fairly well to be among the seven for whom a significant positive correlation is reported between happiness and income. I had found no significant relationship between happiness and time over a period in which GDP …",
"title": ""
},
{
"docid": "ebafef08b98f0581210749c570504599",
"text": "In this paper we examine the effect of receptive field designs on classification accuracy in the commonly adopted pipeline of image classification. While existing algorithms usually use manually defined spatial regions for pooling, we show that learning more adaptive receptive fields increases performance even with a significantly smaller codebook size at the coding layer. To learn the optimal pooling parameters, we adopt the idea of over-completeness by starting with a large number of receptive field candidates, and train a classifier with structured sparsity to only use a sparse subset of all the features. An efficient algorithm based on incremental feature selection and retraining is proposed for fast learning. With this method, we achieve the best published performance on the CIFAR-10 dataset, using a much lower dimensional feature space than previous methods.",
"title": ""
},
{
"docid": "3c80aa753cac4bebd8c6808a361973c7",
"text": "We develop a computer-assisted method for the discovery of insightful conceptualizations, in the form of clusterings (i.e., partitions) of input objects. Each of the numerous fully automated methods of cluster analysis proposed in statistics, computer science, and biology optimize a different objective function. Almost all are well defined, but how to determine before the fact which one, if any, will partition a given set of objects in an \"insightful\" or \"useful\" way for a given user is unknown and difficult, if not logically impossible. We develop a metric space of partitions from all existing cluster analysis methods applied to a given dataset (along with millions of other solutions we add based on combinations of existing clusterings) and enable a user to explore and interact with it and quickly reveal or prompt useful or insightful conceptualizations. In addition, although it is uncommon to do so in unsupervised learning problems, we offer and implement evaluation designs that make our computer-assisted approach vulnerable to being proven suboptimal in specific data types. We demonstrate that our approach facilitates more efficient and insightful discovery of useful information than expert human coders or many existing fully automated methods.",
"title": ""
},
{
"docid": "c5f614aa960dcec670fac661ec5fa467",
"text": "We describe a dataset of several thousand calibrated, time-stamped, geo-referenced, high dynamic range color images, acquired under uncontrolled, variable illumination conditions in an outdoor region spanning several hundred meters. The image data is grouped into several regions which have little mutual inter-visibility. For each group, the calibration data is globally consistent on average to roughly five centimeters and 0 1°, or about four pixels of epipolar registration. All image, feature and calibration data is available for interactive inspection and downloading at http://city.lcs.mit.edu/data. Calibrated imagery is of fundamental interest in a variety of applications. We have made this data available in the belief that researchers in computer graphics, computer vision, photogrammetry and digital cartography will find it of value as a test set for their own image registration algorithms, as a calibrated image set for applications such as image-based rendering, metric 3D reconstruction, and appearance recovery, and as input for existing GIS applications.",
"title": ""
},
{
"docid": "55b2465349e4965a35b4c894c5545afb",
"text": "Context-awareness is a key concept in ubiquitous computing. But to avoid developing dedicated context-awareness sub-systems for specific application areas there is a need for more generic programming frameworks. Such frameworks can help the programmer to develop and deploy context-aware applications faster. This paper describes the Java Context-Awareness Framework – JCAF, which is a Java-based context-awareness infrastructure and programming API for creating context-aware computer applications. The paper presents the design principles behind JCAF, its runtime architecture, and its programming API. The paper presents some applications of using JCAF in three different applications and discusses lessons learned from using JCAF.",
"title": ""
},
{
"docid": "8d73ecdcbebed67393d31095d8a72ee0",
"text": "This paper presents a method for autonomous recharging of a mobile robot, a necessity for achieving long-term robotic activity without human intervention. A recharging station is designed consisting of a stationary docking station and a docking mechanism mounted to an ER-1 Evolution Robotics robot. The docking station and docking mechanism serve as a dual-power source, providing a mechanical and electrical connection between the recharging system of the robot and a laptop placed on it. Docking strategy algorithms use vision based navigation. The result is a significantly low-cost, high-entrance angle tolerant system. Iterative improvements to the system, to resist environmental perturbations and implement obstacle avoidance, ultimately resulted in a docking success rate of 100 percent over 50 trials.",
"title": ""
},
{
"docid": "c182be9222690ffe1c94729b2b79d8ed",
"text": "A balanced level of muscle strength between the different parts of the scapular muscles is important in optimizing performance and preventing injuries in athletes. Emerging evidence suggests that many athletes lack balanced strength in the scapular muscles. Evidence-based recommendations are important for proper exercise prescription. This study determines scapular muscle activity during strengthening exercises for scapular muscles performed at low and high intensities (Borg CR10 levels 3 and 8). Surface electromyography (EMG) from selected scapular muscles was recorded during 7 strengthening exercises and expressed as a percentage of the maximal EMG. Seventeen women (aged 24-55 years) without serious disorders participated. Several of the investigated exercises-press-up, prone flexion, one-arm row, and prone abduction at Borg 3 and press-up, push-up plus, and one-arm row at Borg 8-predominantly activated the lower trapezius over the upper trapezius (activation difference [Δ] 13-30%). Likewise, several of the exercises-push-up plus, shoulder press, and press-up at Borg 3 and 8-predominantly activated the serratus anterior over the upper trapezius (Δ18-45%). The middle trapezius was activated over the upper trapezius by one-arm row and prone abduction (Δ21-30%). Although shoulder press and push-up plus activated the serratus anterior over the lower trapezius (Δ22-33%), the opposite was true for prone flexion, one-arm row, and prone abduction (Δ16-54%). Only the press-up and push-up plus activated both the lower trapezius and the serratus anterior over the upper trapezius. In conclusion, several of the investigated exercises both at low and high intensities predominantly activated the serratus anterior and lower and middle trapezius, respectively, over the upper trapezius. These findings have important practical implications for exercise prescription for optimal shoulder function. For example, both workers with neck pain and athletes at risk of shoulder impingement (e.g., overhead sports) should perform push-up plus and press-ups to specifically strengthen the serratus anterior and lower trapezius.",
"title": ""
},
{
"docid": "3b5340113d583b138834119614046151",
"text": "This paper presents the recent advancements in the control of multiple-degree-of-freedom hydraulic robotic manipulators. A literature review is performed on their control, covering both free-space and constrained motions of serial and parallel manipulators. Stability-guaranteed control system design is the primary requirement for all control systems. Thus, this paper pays special attention to such systems. An objective evaluation of the effectiveness of different methods and the state of the art in a given field is one of the cornerstones of scientific research and progress. For this purpose, the maximum position tracking error <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math></inline-formula> and a performance indicator <inline-formula><tex-math notation=\"LaTeX\">$\\rho$ </tex-math></inline-formula> (the ratio of <inline-formula><tex-math notation=\"LaTeX\">$|e|_{\\rm max}$</tex-math> </inline-formula> with respect to the maximum velocity) are used to evaluate and benchmark different free-space control methods in the literature. These indicators showed that stability-guaranteed nonlinear model based control designs have resulted in the most advanced control performance. In addition to stable closed-loop control, lack of energy efficiency is another significant challenge in hydraulic robotic systems. This paper pays special attention to these challenges in hydraulic robotic systems and discusses their reciprocal contradiction. Potential solutions to improve the system energy efficiency without control performance deterioration are discussed. Finally, for hydraulic robotic systems, open problems are defined and future trends are projected.",
"title": ""
},
{
"docid": "aa84af0f609f2593e4e8c33d3f2bd91c",
"text": "Massively Multiplayer Online Role Playing Games (MMORPGs) create large virtual communities. Online gaming shows potential not just for entertaining, but also for education. The aim of this research project is to investigate the use of commercial MMORPGs to support second language teaching. MMORPGs offer a digital safe space in which students can communicate by using their target language with global players. This qualitative research based on ethnography and action research investigates the students’ experiences of language learning and performing while they play in the MMORPGs. Research was conducted in both the ‘real’ and ‘virtual’ worlds. In the real world the researcher observes the interaction with the MMORPGs by the students through actual discussion, and screen video captures while they are playing. In the virtual world, the researcher takes on the role of a character in the MMORPG enabling the researcher to get an inside point of view of the students and their own MMORPG characters. This latter approach also uses action research to allow the researcher to provide anonymous/private support to the students including in-game instruction, confidence building, and some support of language issues in a safe and friendly way. Using action research with MMORPGs in the real world facilitates a number of opportunities for learning and teaching including opportunities to practice language and individual and group experiences of communicating with other native/ second language speakers for the students. The researcher can also develop tutorial exercises and discussion for teaching plans based on the students’ experiences with the MMORPGs. The results from this research study demonstrate that MMORPGs offer a safe, fun, informal and effective learning space for supporting language teaching. Furthermore the use of MMORPGs help the students’ confidence in using their second language and provide additional benefits such as a better understanding of the culture and use of language in different contexts.",
"title": ""
},
{
"docid": "5f8f9a407c42a6a3c6c269c22d36f684",
"text": "This paper proposes a coarse-fine dual-loop architecture for the digital low drop-out (LDO) regulators with fast transient response and more than 200-mA load capacity. In the proposed scheme, the output voltage is coregulated by two loops, namely, the coarse loop and the fine loop. The coarse loop adopts a fast current-mirror flash analog to digital converter and supplies high output current to enhance the transient performance, while the fine loop delivers low output current and helps reduce the voltage ripples and improve the regulation accuracies. Besides, a digital controller is implemented to prevent contentions between the two loops. Fabricated in a 28-nm Samsung CMOS process, the proposed digital LDO achieves maximum load up to 200 mA when the input and the output voltages are 1.1 and 0.9 V, respectively, with a chip area of 0.021 mm2. The measured output voltage drop of around 120 mV is observed for a load step of 180 mA.",
"title": ""
}
] |
scidocsrr
|
b915a3d4289c57ae8b2054d18bc8475e
|
Fully Connected Object Proposals for Video Segmentation
|
[
{
"docid": "3ae5e7ac5433f2449cd893e49f1b2553",
"text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.",
"title": ""
}
] |
[
{
"docid": "a98f643c2a0e40a767f5ef57b0152adb",
"text": "Techniques for recognizing high-level events in consumer videos on the Internet have many applications. Systems that produced state-of-the-art recognition performance usually contain modules requiring extensive computation, such as the extraction of the temporal motion trajectories, which cannot be deployed on large-scale datasets. In this paper, we provide a comprehensive study on efficient methods in this area and identify technical options for super fast event recognition in Internet videos. We start from analyzing a multimodal baseline that has produced good performance on popular benchmarks, by systematically evaluating each component in terms of both computational cost and contribution to recognition accuracy. After that, we identify alternative features, classifiers, and fusion strategies that can all be efficiently computed. In addition, we also provide a study on the following interesting question: for event recognition in Internet videos, what is the minimum number of visual and audio frames needed to obtain a comparable accuracy to that of using all the frames? Results on two rigorously designed datasets indicate that similar results can be maintained by using only a small portion of the visual frames. We also find that, different from the visual frames, the soundtracks contain little redundant information and thus sampling is always harmful. Integrating all the findings, our suggested recognition system is 2,350-fold faster than a baseline approach with even higher recognition accuracies. It recognizes 20 classes on a 120-second video sequence in just 1.78 seconds, using a regular desktop computer.",
"title": ""
},
{
"docid": "f78534a09317be5097963d068c6af2cd",
"text": "Example-based single image super-resolution (SISR) methods use external training datasets and have recently attracted a lot of interest. Self-example based SISR methods exploit redundant non-local self-similar patterns in natural images and because of that are more able to adapt to the image at hand to generate high quality super-resolved images. In this paper, we propose to combine the advantages of example-based SISR and self-example based SISR. A novel hierarchical random forests based super-resolution (SRHRF) method is proposed to learn statistical priors from external training images. Each layer of random forests reduce the estimation error due to variance by aggregating prediction models from multiple decision trees. The hierarchical structure further boosts the performance by pushing the estimation error due to bias towards zero. In order to further adaptively improve the super-resolved image, a self-example random forests (SERF) is learned from an image pyramid pair constructed from the down-sampled SRHRF generated result. Extensive numerical results show that the SRHRF method enhanced using SERF (SRHRF+) achieves the state-of-the-art performance on natural images and yields substantially superior performance for image with rich self-similar patterns.",
"title": ""
},
{
"docid": "028eb67d71987c33c4a331cf02c6ff00",
"text": "We explore the feasibility of using crowd workers from Amazon Mechanical Turk to identify and rank sidewalk accessibility issues from a manually curated database of 100 Google Street View images. We examine the effect of three different interactive labeling interfaces (Point, Rectangle, and Outline) on task accuracy and duration. We close the paper by discussing limitations and opportunities for future work.",
"title": ""
},
{
"docid": "afadbcb8c025ad6feca693c05ce7b43f",
"text": "A data structure that implements a mergeable double-ended priority queue, namely therelaxed min-max heap, is presented. A relaxed min-max heap ofn items can be constructed inO(n) time. In the worst case, operationsfind_min() andfind_max() can be performed in constant time, while each of the operationsmerge(),insert(),delete_min(),delete_max(),decrease_key(), anddelete_key() can be performed inO(logn) time. Moreover,insert() hasO(1) amortized running time. If lazy merging is used,merge() will also haveO(1) worst-case and amortized time. The relaxed min-max heap is the first data structure that achieves these bounds using only two pointers (puls one bit) per item.",
"title": ""
},
{
"docid": "65baa2316024ca738f566a53818fc626",
"text": "The proper usage and creation of transfer functions for time-varying data sets is an often ignored problem in volume visualization. Although methods and guidelines exist for time-invariant data, little formal study for the timevarying case has been performed. This paper examines this problem, and reports the study that we have conducted to determine how the dynamic behavior of time-varying data may be captured by a single or small set of transfer functions. The criteria which dictate when more than one transfer function is needed were also investigated. Four data sets with different temporal characteristics were used for our study. Results obtained using two different classes of methods are discussed, along with lessons learned. These methods, including a new multiresolution opacity map approach, can be used for semi-automatic generation of transfer functions to explore large-scale time-varying data sets.",
"title": ""
},
{
"docid": "52ef7357fa379b7eede3c4ceee448a81",
"text": "(Note: This is a completely revised version of the article that was originally published in ACM Crossroads, Volume 13, Issue 4. Revisions were needed because of major changes to the Natural Language Toolkit project. The code in this version of the article will always conform to the very latest version of NLTK (v2.0b9 as of November 2010). Although the code is always tested, it is possible that a bug or two may have been introduced in the code during the course of this revision. If you find any, please report them to the author. If you are still using version 0.7 of the toolkit for some reason, please refer to http://www.acm.org/crossroads/xrds13-4/natural_language.html).",
"title": ""
},
{
"docid": "5409b6586b89bd3f0b21e7984383e1e1",
"text": "The dream of creating artificial devices that reach or outperform human intelligence is many centuries old. In this talk I present an elegant parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. The theory reduces all conceptual AI problems to pure computational questions. The necessary and sufficient ingredients are Bayesian probability theory; algorithmic information theory; universal Turing machines; the agent framework; sequential decision theory; and reinforcement learning, which are all important subjects in their own right. I also present some recent approximations, implementations, and applications of this modern top-down approach to AI. Marcus Hutter 3 Universal Artificial Intelligence Overview Goal: Construct a single universal agent that learns to act optimally in any environment. State of the art: Formal (mathematical, non-comp.) definition of such an agent. Accomplishment: Well-defines AI. Formalizes rational intelligence. Formal “solution” of the AI problem in the sense of ... =⇒ Reduces the conceptional AI problem to a (pure) computational problem. Evidence: Mathematical optimality proofs and some experimental results. Marcus Hutter 4 Universal Artificial Intelligence",
"title": ""
},
{
"docid": "8f177b79f0b89510bd84e1f503b5475f",
"text": "We propose a distributed cooperative framework among base stations (BS) with load balancing (dubbed as inter-BS for simplicity) for improving energy efficiency of OFDMA-based cellular access networks. Proposed inter-BS cooperation is formulated following the principle of ecological self-organization. Based on the network traffic, BSs mutually cooperate for distributing traffic among themselves and thus, the number of active BSs is dynamically adjusted for energy savings. For reducing the number of inter-BS communications, a three-step measure is taken by using estimated load factor (LF), initializing the algorithm with only the active BSs and differentiating neighboring BSs according to their operating modes for distributing traffic. An exponentially weighted moving average (EWMA)-based technique is proposed for estimating the LF in advance based on the historical data. Various selection schemes for finding the best BSs to distribute traffic are also explored. Furthermore, we present an analytical formulation for modeling the dynamic switching of BSs. A thorough investigation under a wide range of network settings is carried out in the context of an LTE system. Results demonstrate a significant enhancement in network energy efficiency yielding a much higher savings than the compared schemes. Moreover, frequency of inter-BS correspondences can be reduced by over 80%.",
"title": ""
},
{
"docid": "c42aaf64a6da2792575793a034820dcb",
"text": "Psychologists and psychiatrists commonly rely on self-reports or interviews to diagnose or treat behavioral addictions. The present study introduces a novel source of data: recordings of the actual problem behavior under investigation. A total of N = 58 participants were asked to fill in a questionnaire measuring problematic mobile phone behavior featuring several questions on weekly phone usage. After filling in the questionnaire, all participants received an application to be installed on their smartphones, which recorded their phone usage for five weeks. The analyses revealed that weekly phone usage in hours was overestimated; in contrast, numbers of call and text message related variables were underestimated. Importantly, several associations between actual usage and being addicted to mobile phones could be derived exclusively from the recorded behavior, but not from self-report variables. The study demonstrates the potential benefit to include methods of psychoinformatics in the diagnosis and treatment of problematic mobile phone use.",
"title": ""
},
{
"docid": "31045b2c3709102abe66906a0e8ae706",
"text": "Tandem mass spectrometry fragments a large number of molecules of the same peptide sequence into charged molecules of prefix and suffix peptide subsequences and then measures mass/charge ratios of these ions. The de novo peptide sequencing problem is to reconstruct the peptide sequence from a given tandem mass spectral data of k ions. By implicitly transforming the spectral data into an NC-spectrum graph G (V, E) where /V/ = 2k + 2, we can solve this problem in O(/V//E/) time and O(/V/2) space using dynamic programming. For an ideal noise-free spectrum with only b- and y-ions, we improve the algorithm to O(/V/ + /E/) time and O(/V/) space. Our approach can be further used to discover a modified amino acid in O(/V//E/) time. The algorithms have been implemented and tested on experimental data.",
"title": ""
},
{
"docid": "4e19a7342ff32f82bc743f40b3395ee3",
"text": "The face image is the most accessible biometric modality which is used for highly accurate face recognition systems, while it is vulnerable to many different types of presentation attacks. Face anti-spoofing is a very critical step before feeding the face image to biometric systems. In this paper, we propose a novel two-stream CNN-based approach for face anti-spoofing, by extracting the local features and holistic depth maps from the face images. The local features facilitate CNN to discriminate the spoof patches independent of the spatial face areas. On the other hand, holistic depth map examine whether the input image has a face-like depth. Extensive experiments are conducted on the challenging databases (CASIA-FASD, MSU-USSA, and Replay Attack), with comparison to the state of the art.",
"title": ""
},
{
"docid": "0b1b4c8d501c3b1ab350efe4f2249978",
"text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. The trajectory tracking control scheme is validated using an iRobot Packbot's parameteric model estimated from experimental data.",
"title": ""
},
{
"docid": "ec4dae5e2aa5a5ef67944d82a6324c9d",
"text": "Parallel collection processing based on second-order functions such as map and reduce has been widely adopted for scalable data analysis. Initially popularized by Google, over the past decade this programming paradigm has found its way in the core APIs of parallel dataflow engines such as Hadoop's MapReduce, Spark's RDDs, and Flink's DataSets. We review programming patterns typical of these APIs and discuss how they relate to the underlying parallel execution model. We argue that fixing the abstraction leaks exposed by these patterns will reduce the cost of data analysis due to improved programmer productivity. To achieve that, we first revisit the algebraic foundations of parallel collection processing. Based on that, we propose a simplified API that (i) provides proper support for nested collection processing and (ii) alleviates the need of certain second-order primitives through comprehensions -- a declarative syntax akin to SQL. Finally, we present a metaprogramming pipeline that performs algebraic rewrites and physical optimizations which allow us to target parallel dataflow engines like Spark and Flink with competitive performance.",
"title": ""
},
{
"docid": "b100ca202f99e3ee086cd61f01349a30",
"text": "This paper is concerned with inertial-sensor-based tracking of the gravitation direction in mobile devices such as smartphones. Although this tracking problem is a classical one, choosing a good state-space for this problem is not entirely trivial. Even though for many other orientation related tasks a quaternion-based representation tends to work well, for gravitation tracking their use is not always advisable. In this paper we present a convenient linear quaternion-free state-space model for gravitation tracking. We also discuss the efficient implementation of the Kalman filter and smoother for the model. Furthermore, we propose an adaption mechanism for the Kalman filter which is able to filter out shot-noises similarly as has been proposed in context of adaptive and robust Kalman filtering. We compare the proposed approach to other approaches using measurement data collected with a smartphone.",
"title": ""
},
{
"docid": "aeac0766cc4e29fa0614649279970276",
"text": "Over the last two releases SQL Server has integrated two specialized engines into the core system: the Apollo column store engine for analytical workloads and the Hekaton in-memory engine for high-performance OLTP workloads. There is an increasing demand for real-time analytics, that is, for running analytical queries and reporting on the same system as transaction processing so as to have access to the freshest data. SQL Server 2016 will include enhancements to column store indexes and in-memory tables that significantly improve performance on such hybrid workloads. This paper describes four such enhancements: column store indexes on inmemory tables, making secondary column store indexes on diskbased tables updatable, allowing B-tree indexes on primary column store indexes, and further speeding up the column store scan operator.",
"title": ""
},
{
"docid": "e44d7f7668590726def631c5ec5f5506",
"text": "Today thanks to low cost and high performance DSP's, Kalman filtering (KF) becomes an efficient candidate to avoid mechanical sensors in motor control. We present in this work experimental results by using a steady state KF method to estimate the speed and rotor position for hybrid stepper motor. With this method the computing time is reduced. The Kalman gain is pre-computed from numerical simulation and introduced as a constant in the real time algorithm. The load torque is also on-line estimated by the same algorithm. At start-up the initial rotor position is detected by the impulse current method.",
"title": ""
},
{
"docid": "a0071f44de7741eb914c1fdb0e21026d",
"text": "This study examined relationships between mindfulness and indices of happiness and explored a fivefactor model of mindfulness. Previous research using this mindfulness model has shown that several facets predicted psychological well-being (PWB) in meditating and non-meditating individuals. The current study tested the hypothesis that the prediction of PWB by mindfulness would be augmented and partially mediated by self-compassion. Participants were 27 men and 96 women (mean age = 20.9 years). All completed self-report measures of mindfulness, PWB, personality traits (NEO-PI-R), and self-compassion. Results show that mindfulness is related to psychologically adaptive variables and that self-compassion is a crucial attitudinal factor in the mindfulness–happiness relationship. Findings are interpreted from the humanistic perspective of a healthy personality. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "018d855cdd9a5e95beba0ae39dddf4ce",
"text": "Citation Agrawal, Ajay K., Catalini, Christian, and Goldfarb, Avi. \"Some Simple Economics of Crowdfunding.\" Innovation Policy and the Economy 2013, ed. Josh Lerner and Scott Stern, Univeristy of Chicago Press, 2014, 1-47. © 2014 National Bureau of Economic Research Innovation Policy and the Economy As Published http://press.uchicago.edu/ucp/books/book/distributed/I/bo185081 09.html Publisher University of Chicago Press",
"title": ""
},
{
"docid": "61998885a181e074eadd41a2f067f697",
"text": "Introduction. Opinion mining has been receiving increasing attention from a broad range of scientific communities since early 2000s. The present study aims to systematically investigate the intellectual structure of opinion mining research. Method. Using topic search, citation expansion, and patent search, we collected 5,596 bibliographic records of opinion mining research. Then, intellectual landscapes, emerging trends, and recent developments were identified. We also captured domain-level citation trends, subject category assignment, keyword co-occurrence, document co-citation network, and landmark articles. Analysis. Our study was guided by scientometric approaches implemented in CiteSpace, a visual analytic system based on networks of co-cited documents. We also employed a dual-map overlay technique to investigate epistemological characteristics of the domain. Results. We found that the investigation of algorithmic and linguistic aspects of opinion mining has been of the community’s greatest interest to understand, quantify, and apply the sentiment orientation of texts. Recent thematic trends reveal that practical applications of opinion mining such as the prediction of market value and investigation of social aspects of product feedback have received increasing attention from the community. Conclusion. Opinion mining is fast-growing and still developing, exploring the refinements of related techniques and applications in a variety of domains. We plan to apply the proposed analytics to more diverse domains and comprehensive publication materials to gain more generalized understanding of the true structure of a science.",
"title": ""
},
{
"docid": "61ecbc652cf9f57136e8c1cd6fed2fb0",
"text": "Recent advancements in digital technology have attracted the interest of educators and researchers to develop technology-assisted inquiry-based learning environments in the domain of school science education. Traditionally, school science education has followed deductive and inductive forms of inquiry investigation, while the abductive form of inquiry has previously been sparsely explored in the literature related to computers and education. We have therefore designed a mobile learning application ‘ThinknLearn’, which assists high school students in generating hypotheses during abductive inquiry investigations. The M3 evaluation framework was used to investigate the effectiveness of using ‘ThinknLearn’ to facilitate student learning. The results indicated in this paper showed improvements in the experimental group’s learning performance as compared to a control group in pre-post tests. In addition, the experimental group also maintained this advantage during retention tests as well as developing positive attitudes toward mobile learning. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
4a4dcac3e74c7460b6acb1a84d617cbd
|
Self-supervised Learning of Motion Capture
|
[
{
"docid": "046710d2b22adeec4a8ebc3656e274be",
"text": "This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30% on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.",
"title": ""
},
{
"docid": "c002b17f95a154ab394fd345dbfd2fdb",
"text": "This paper presents a method to estimate 3D human pose and body shape from monocular videos. While recent approaches infer the 3D pose from silhouettes and landmarks, we exploit properties of optical flow to temporally constrain the reconstructed motion. We estimate human motion by minimizing the difference between computed flow fields and the output of our novel flow renderer. By just using a single semi-automatic initialization step, we are able to reconstruct monocular sequences without joint annotation. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. Fig. 1: Following our main idea we compute the optical flow between two consecutive frames and match it to an optical flow field estimated by our proposed optical flow renderer. From left to right: input frame, color-coded observed flow, estimated flow, resulting pose.",
"title": ""
},
{
"docid": "33de1981b2d9a0aa1955602006d09db9",
"text": "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"title": ""
},
{
"docid": "98e557f291de3b305a91e47f59a9ed34",
"text": "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frameto-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the reprojection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfMNet extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.",
"title": ""
}
] |
[
{
"docid": "e352b5d4bfe4557b27e6caaddbc4da61",
"text": "This paper presents ILGM (the Infant Learning to Grasp Model), the first computational model of infant grasp learning that is constrained by the infant motor development literature. By grasp learning we mean learning how to make motor plans in response to sensory stimuli such that open-loop execution of the plan leads to a successful grasp. The open-loop assumption is justified by the behavioral evidence that early grasping is based on open-loop control rather than on-line visual feedback. Key elements of the infancy period, namely elementary motor schemas, the exploratory nature of infant motor interaction, and inherent motor variability are captured in the model. In particular we show, through computational modeling, how an existing behavior (reaching) yields a more complex behavior (grasping) through interactive goal-directed trial and error learning. Our study focuses on how the infant learns to generate grasps that match the affordances presented by objects in the environment. ILGM was designed to learn execution parameters for controlling the hand movement as well as for modulating the reach to provide a successful grasp matching the target object affordance. Moreover, ILGM produces testable predictions regarding infant motor learning processes and poses new questions to experimentalists.",
"title": ""
},
{
"docid": "65771b7a20b2002c9a6c4b4e1a9aa2c0",
"text": "Understanding the effect of blur is an important problem in unconstrained visual analysis. We address this problem in the context of image-based recognition by a fusion of image-formation models and differential geometric tools. First, we discuss the space spanned by blurred versions of an image and then, under certain assumptions, provide a differential geometric analysis of that space. More specifically, we create a subspace resulting from convolution of an image with a complete set of orthonormal basis functions of a prespecified maximum size (that can represent an arbitrary blur kernel within that size), and show that the corresponding subspaces created from a clean image and its blurred versions are equal under the ideal case of zero noise and some assumptions on the properties of blur kernels. We then study the practical utility of this subspace representation for the problem of direct recognition of blurred faces by viewing the subspaces as points on the Grassmann manifold and present methods to perform recognition for cases where the blur is both homogenous and spatially varying. We empirically analyze the effect of noise, as well as the presence of other facial variations between the gallery and probe images, and provide comparisons with existing approaches on standard data sets.",
"title": ""
},
{
"docid": "d8eee79312660f4da03a29372fc87d7e",
"text": "Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children’s Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.1",
"title": ""
},
{
"docid": "e9fe2ce5cb1c264c8abb22a97aec0a69",
"text": "In this paper we report on the latest developments in biomimetic flow-sensors based on the flow sensitive mechano-sensors of crickets. Crickets have one form of acoustic sensing evolved in the form of mechano receptive sensory hairs. These filiform hairs are highly perceptive to low-frequency sound with energy sensitivities close to thermal threshold. Arrays of artificial hair sensors have been fabricated using a surface micromachining technology to form suspended silicon nitride membranes and double-layer SU-8 processing to form 1 mm long hairs. Previously, we have shown that these hairs are sensitive to low-frequency sound, using a laser vibrometer setup to detect the movements of the nitride membranes. We have now realized readout electronics to detect the movements capacitively, using electrodes integrated on the membranes.",
"title": ""
},
{
"docid": "ca73a14417bbd9e56a155cd2309af20f",
"text": "At frequencies below 1 GHz, vegetation is becoming transparent, the more so the lower the frequency. Tree clutter on the other hand tends to be as strong as in the microwave regime as at frequencies above 200 MHz. Below 100 MHz, i.e. in the VHF band, tree clutter levels are significantly smaller. Foliage penetration SAR is feasible at both UHF and VHF but has to overcome significant challenges. For one, resolution must be high, viz. of meter order at VHF and submeter order for UHF. In both cases resolution of wavelength order is thus called for, requiring special processing methods which will be discussed here. Secondly, the signal-to-noise budget is critical due to the severe radio frequency interference below 1 GHz. In fact SAR operation at these frequencies is not feasible unless there are some means to identify and remove the RF1. Thirdly, for SAR surveillance the target detection method is crucial. VHF resolution is too low to make any target recognition scheme effective as a means to reduce clutter false alarms. At UHF, even though resolution can be made high, intense forest clutter level creates a very difficult environment for target discrimination. These concerns and their remedies are discussed in the paper.",
"title": ""
},
{
"docid": "24a4fb7f87d6ee75aa26aeb6b77f68bb",
"text": "Networked learning is much more ambitious than previous approaches of ICT-support in education. It is therefore more difficult to evaluate the effectiveness and efficiency of the networked learning activities. Evaluation of learners’ interactions in networked learning environments is a difficult, resource and expertise demanding task. Educators participating in online learning environments, have very little support by integrated tools to evaluate students’ activities and identify learners’ online browsing behavior and interactions. As a consequence, educators are in need for non-intrusive and automatic ways to get feedback from learners’ progress in order to better follow their learning process and appraise the online course effectiveness. They also need specialized tools for authoring, delivering, gathering and analysing data for evaluating the learning effectiveness of networked learning courses. Thus, the aim of this paper is to propose a new set of services for the evaluator and lecturer so that he/she can easily evaluate the learners’ progress and produce evaluation reports based on learners’ behaviour within a Learning Management System. These services allow the evaluator to easily track down the learners’ online behavior at specific milestones set up, gather feedback in an automatic way and present them in a comprehensive way. The innovation of the proposed set of services lies on the effort to adopt/adapt some of the web usage mining techniques combining them with the use of semantic description of networked learning tasks",
"title": ""
},
{
"docid": "019d465534b9229c2a97f1727c400832",
"text": "OBJECTIVE\nResearch on learning from feedback has produced ambiguous guidelines for feedback design--some have advocated minimal feedback, whereas others have recommended more extensive feedback that highly supported performance. The objective of the current study was to investigate how individual differences in cognitive resources may predict feedback requirements and resolve previous conflicted findings.\n\n\nMETHOD\nCognitive resources were controlled for by comparing samples from populations with known differences, older and younger adults.To control for task demands, a simple rule-based learning task was created in which participants learned to identify fake Windows pop-ups. Pop-ups were divided into two categories--those that required fluid ability to identify and those that could be identified using crystallized intelligence.\n\n\nRESULTS\nIn general, results showed participants given higher feedback learned more. However, when analyzed by type of task demand, younger adults performed comparably with both levels of feedback for both cues whereas older adults benefited from increased feedbackfor fluid ability cues but from decreased feedback for crystallized ability cues.\n\n\nCONCLUSION\nOne explanation for the current findings is feedback requirements are connected to the cognitive abilities of the learner-those with higher abilities for the type of demands imposed by the task are likely to benefit from reduced feedback.\n\n\nAPPLICATION\nWe suggest the following considerations for feedback design: Incorporate learner characteristics and task demands when designing learning support via feedback.",
"title": ""
},
{
"docid": "819d077913e6a956fc57241f81a73df3",
"text": "Humans are avid consumers of visual content. Every day, people watch videos, play games, and share photos on social media. However, there is an asymmetry—while everybody is able to consume visual data, only a chosen few are talented enough to express themselves visually. For the rest of us, most attempts at creating realistic visual content end up quickly “falling off” what we could consider to be natural images. In this thesis, we investigate several machine learning approaches for preserving visual realism while creating and manipulating photographs. We use these methods as training wheels for visual content creation. These methods not only help users easily synthesize realistic photos but also enable previously not possible visual effects.",
"title": ""
},
{
"docid": "4dd2fc66b1a2f758192b02971476b4cc",
"text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.",
"title": ""
},
{
"docid": "c6cdc9a18c1e3dc0c58331fc6995c42e",
"text": "There is no universal gold standard classification system for mandibular condylar process fractures. A clinically relevant mandibular condyle classification system should be easy to understand, and be easy to recall, for implementation into the management of a condylar fracture. An accurate appreciation of the location of the mandibular condylar fracture assists with the determination of either an operative or nonoperative management regimen.",
"title": ""
},
{
"docid": "84a0d6ed5da2fafb025202a4c15875f7",
"text": "Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point could and social networks. Current filters in graph CNNs are built for fixed and shared graph structure. However, for most real data, the graph structures varies in both size and connectivity. The paper proposes a generalized and flexible graph CNN taking data of arbitrary graph structure as input. In that way a task-driven adaptive graph is learned for each graph data while training. To efficiently learn the graph, a distance metric learning is proposed. Extensive experiments on nine graph-structured datasets have demonstrated the superior performance improvement on both convergence speed and predictive accuracy.",
"title": ""
},
{
"docid": "946a5835970a54c748031f2c9945a661",
"text": "There is a general move in the aerospace industry to increase the amount of electrically powered equipment on future aircraft. This is generally referred to as the \"more electric aircraft\" and brings on a number of technical challenges that need to be addressed and overcome. Recent advancements in power electronics technology are enabling new systems to be developed and applied to aerospace applications. The growing trend is to connect the AC generator to the aircraft engine via a direct connection or a fixed ratio transmission thus, resulting in the generator providing a variable frequency supply. This move offers benefits to the aircraft such as reducing the weight and improving the reliability. Many aircraft power systems are now operating with a variable frequency over a typical range of 350 Hz to 800 Hz which varies with the engine speed[1,2]. This paper presents the results from a simple scheme for an adaptive control algorithm which could be suitable for use with an electric actuator (or other) aircraft load. The design of this system poses significant challenges due to the nature of the load range and supply frequency variation and requires many features such as: 1) Small input current harmonics to minimize losses., 2) Minimum size and weight to maximize portability and power density. Details will be given on the design methodology and simulation results obtained.",
"title": ""
},
{
"docid": "913478fa2a53363c4d8dc6212c960cbf",
"text": "The rapidly growing world energy use has already raised concerns over supply difficulties, exhaustion of energy resources and heavy environmental impacts (ozone layer depletion, global warming, climate change, etc.). The global contribution from buildings towards energy consumption, both residential and commercial, has steadily increased reaching figures between 20% and 40% in developed countries, and has exceeded the other major sectors: industrial and transportation. Growth in population, increasing demand for building services and comfort levels, together with the rise in time spent inside buildings, assure the upward trend in energy demand will continue in the future. For this reason, energy efficiency in buildings is today a prime objective for energy policy at regional, national and international levels. Among building services, the growth in HVAC systems energy use is particularly significant (50% of building consumption and 20% of total consumption in the USA). This paper analyses available information concerning energy consumption in buildings, and particularly related to HVAC systems. Many questions arise: Is the necessary information available? Which are the main building types? What end uses should be considered in the breakdown? Comparisons between different countries are presented specially for commercial buildings. The case of offices is analysed in deeper detail. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b9f11c9ba4792e112bd0bf8500e01ded",
"text": "We describe a database of static images and video clips of human faces and people that is useful for testing algorithms for face and person recognition, head/eye tracking, and computer graphics modeling of natural human motions. For each person there are nine static \"facial mug shots\" and a series of video streams. The videos include a \"moving facial mug shot,\" a facial speech clip, one or more dynamic facial expression clips, two gait videos, and a conversation video taken at a moderate distance from the camera. Complete data sets are available for 284 subjects and duplicate data sets, taken subsequent to the original set, are available for 229 subjects.",
"title": ""
},
{
"docid": "7be3de98485a50c1ee56d808ad18e0c5",
"text": "All natural cognitive systems, and, in particular, our own, gradually forget previously learned information. Consequently, plausible models of human cognition should exhibit similar patterns of gradual forgetting old information as new information is acquired. Only rarely (see Box 3) does new learning in natural cognitive systems co pletely disrupt or erase previously learned information. In other words, natural cognitive systems do not, in general, forget catastrophically. Unfortunately, however, this is precisely what occurs under certain circumstances in distributed connectionist networks. It turns out that the very features that give these networks their much-touted abilities to generalize, to function in the presence of degraded input, etc., are the root cause of catastrophic forgetting. The challenge is how to keep the advantages of distributed connectionist networks while avoiding the problem of catastrophic forgetting. In this article, we examine the causes, consequences and numerous solutions to the problem of catastrophic forgetting in neural networks. We consider how the brain might have overcome this problem and explore the consequences of this solution. Introduction By the end of the 1980’s many of the early problems with connectionist networks, such as their difficulties with sequence-learning and the profoundly stimulus-response nature of supervised learning algorithms such as error backpropagation had been largely solved. However, as these problems were being solved, another was discovered by McCloskey and Cohen and Ratcliff . They suggested that there might be a fundamental imitation to this type of distributed architecture, in the same way that Minsky and Papert 3 had shown twenty years before that there were certain fundamental limitations to what a perceptron 4,5 could do. They observed that under certain conditions, the process of learning a new set of patterns suddenly and completely erased a network’s knowledge of what it had already learned. They referred to this phenomenon as catastrophic interference (or catastrophic forgetting) and suggested that the underlying reason for this difficulty was the very thing — a single set of shared weights — that gave the networks their remarkable abilities to generalize and degrade gracefully. Catastrophic interference is a radical manifestation of a more general problem for connectionist models of memory — in fact, for any model of memory —, the so-called “stability-plasticity” problem. 6,7 The problem is how to design a system that is simultaneously sensitive to, but not radically disrupted by, new input. In this article we will focus primarily on a particular, widely used class of distributed neural network architectures — namely, those with a single set of shared (or partially shared) multiplicative weights. While this defines a very broad class of networks, this definition is certainly not exhaustive. In the remainder of this article we will discuss the numerous attempts over the last decade to solve this problem within the context of this type of network.",
"title": ""
},
{
"docid": "7e736d4f906a28d4796fe7ac404b5f94",
"text": "The internal program representation chosen for a software development environment plays a critical role in the nature of that environment. A form should facilitate implementation and contribute to the responsiveness of the environment to the user. The program dependence graph (PDG) may be a suitable internal form. It allows programs to be sliced in linear time for debugging and for use by language-directed editors. The slices obtained are more accurate than those obtained with existing methods because I/O is accounted for correctly and irrelevant statements on multi-statement lines are not displayed. The PDG may be interpreted in a data driven fashion or may have highly optimized (including vectorized) code produced from it. It is amenable to incremental data flow analysis, improving response time to the user in an interactive environment and facilitating debugging through data flow anomaly detection. It may also offer a good basis for software complexity metrics, adding to the completeness of an environment based on it.",
"title": ""
},
{
"docid": "a3772746888956cf78e56084f74df0bf",
"text": "Emerging interest of trading companies and hedge funds in mining social web has created new avenues for intelligent systems that make use of public opinion in driving investment decisions. It is well accepted that at high frequency trading, investors are tracking memes rising up in microblogging forums to count for the public behavior as an important feature while making short term investment decisions. We investigate the complex relationship between tweet board literature (like bullishness, volume, agreement etc) with the financial market instruments (like volatility, trading volume and stock prices). We have analyzed Twitter sentiments for more than 4 million tweets between June 2010 and July 2011 for DJIA, NASDAQ-100 and 11 other big cap technological stocks. Our results show high correlation (upto 0.88 for returns) between stock prices and twitter sentiments. Further, using Granger’s Causality Analysis, we have validated that the movement of stock prices and indices are greatly affected in the short term by Twitter discussions. Finally, we have implemented Expert Model Mining System (EMMS) to demonstrate that our forecasted returns give a high value of R-square (0.952) with low Maximum Absolute Percentage Error (MaxAPE) of 1.76% for Dow Jones Industrial Average (DJIA). We introduce a novel way to make use of market monitoring elements derived from public mood to retain a portfolio within limited risk state (highly improved hedging bets) during typical market conditions.",
"title": ""
},
{
"docid": "ecb06a681f7d14fc690376b4c5a630af",
"text": "Diverse proprietary network appliances increase both the capital and operational expense of service providers, meanwhile causing problems of network ossification. Network function virtualization (NFV) is proposed to address these issues by implementing network functions as pure software on commodity and general hardware. NFV allows flexible provisioning, deployment, and centralized management of virtual network functions. Integrated with SDN, the software-defined NFV architecture further offers agile traffic steering and joint optimization of network functions and resources. This architecture benefits a wide range of applications (e.g., service chaining) and is becoming the dominant form of NFV. In this survey, we present a thorough investigation of the development of NFV under the software-defined NFV architecture, with an emphasis on service chaining as its application. We first introduce the software-defined NFV architecture as the state of the art of NFV and present relationships between NFV and SDN. Then, we provide a historic view of the involvement from middlebox to NFV. Finally, we introduce significant challenges and relevant solutions of NFV, and discuss its future research directions by different application domains.",
"title": ""
},
{
"docid": "dbb3ffab2b2a8619ccfdef04be155496",
"text": "Online discussion communities play an important role in the development of relationships and the transfer of knowledge within and across organizations. Their underlying technologies enhance these processes by providing infrastructures through which group-based communication can occur. Community administrators often make decisions about technologies with the goal of enhancing the user experience, but the impact of such decisions on how a community develops must also be considered. To shed light on this complex and underresearched phenomenon, we offer a model of key latent constructs influenced by technology choices and possible causal paths by which they have dynamic effects on communities. Two important community characteristics that can be impacted are community size (number of members) and community resilience (membership that is willing to remain involved with the community in spite of variability and change in the topics discussed). To model community development, we build on attraction–selection–attrition (ASA) theory, introducing two new concepts: participation costs (how much time and effort are required to engage with content provided in a community) and topic consistency cues (how strongly a community signals that topics that may appear in the future will be consistent with what it has hosted in the past). We use the proposed ASA theory of online communities (OCASA) to develop a simulation model of community size and resilience that affirms some conventional wisdom and also has novel and counterintuitive implications. Analysis of the model leads to testable new propositions about the causal paths by which technology choices affect the emergence of community size and community resilience, and associated implications for community sustainability. 1",
"title": ""
},
{
"docid": "af12d1794a65cb3818f1561384e069b2",
"text": " Multi-Criteria Decision Making (MCDM) methods have evolved to accommodate various types of applications. Dozens of methods have been developed, with even small variations to existing methods causing the creation of new branches of research. This paper performs a literature review of common Multi-Criteria Decision Making methods, examines the advantages and disadvantages of the identified methods, and explains how their common applications relate to their relative strengths and weaknesses. The analysis of MCDM methods performed in this paper provides a clear guide for how MCDM methods should be used in particular situations.",
"title": ""
}
] |
scidocsrr
|
251f643a1520bee0962922d0a60bab59
|
An Integrated UAV Navigation System Based on Aerial Image Matching
|
[
{
"docid": "5157063545b7ec7193126951c3bdb850",
"text": "This paper presents an integrated system for navigation parameter estimation using sequential aerial images, where navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model (DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching the algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.",
"title": ""
},
{
"docid": "08bd4d2c48ebde047a8b36ce72fe61b6",
"text": "S imultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association , and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Auomation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms. Part II of this tutorial (this article), surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsifica-tion in information form, partitioned updates, and submapping methods. A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance based methods, and multihypothesis techniques. The third development discussed in this tutorial is …",
"title": ""
}
] |
[
{
"docid": "0dbad8ca53615294bc25f7a2d8d41d99",
"text": "Faceted search is becoming a popular method to allow users to interactively search and navigate complex information spaces. A faceted search system presents users with key-value metadata that is used for query refinement. While popular in e-commerce and digital libraries, not much research has been conducted on which metadata to present to a user in order to improve the search experience. Nor are there repeatable benchmarks for evaluating a faceted search engine. This paper proposes the use of collaborative filtering and personalization to customize the search interface to each user's behavior. This paper also proposes a utility based framework to evaluate the faceted interface. In order to demonstrate these ideas and better understand personalized faceted search, several faceted search algorithms are proposed and evaluated using the novel evaluation methodology.",
"title": ""
},
{
"docid": "6844c0ab63ee51775f311bd63d05a455",
"text": "In a first step toward the development of an efficient and accurate protocol to estimate amino acids' pKa's in proteins, we present in this work how to reproduce the pKa's of alcohol and thiol based residues (namely tyrosine, serine, and cysteine) in aqueous solution from the knowledge of the experimental pKa's of phenols, alcohols, and thiols. Our protocol is based on the linear relationship between computed atomic charges of the anionic form of the molecules (being either phenolates, alkoxides, or thiolates) and their respective experimental pKa values. It is tested with different environment approaches (gas phase or continuum solvent-based approaches), with five distinct atomic charge models (Mulliken, Löwdin, NPA, Merz-Kollman, and CHelpG), and with nine different DFT functionals combined with 16 different basis sets. Moreover, the capability of semiempirical methods (AM1, RM1, PM3, and PM6) to also predict pKa's of thiols, phenols, and alcohols is analyzed. From our benchmarks, the best combination to reproduce experimental pKa's is to compute NPA atomic charge using the CPCM model at the B3LYP/3-21G and M062X/6-311G levels for alcohols (R(2) = 0.995) and thiols (R(2) = 0.986), respectively. The applicability of the suggested protocol is tested with tyrosine and cysteine amino acids, and precise pKa predictions are obtained. The stability of the amino acid pKa's with respect to geometrical changes is also tested by MM-MD and DFT-MD calculations. Considering its strong accuracy and its high computational efficiency, these pKa prediction calculations using atomic charges indicate a promising method for predicting amino acids' pKa in a protein environment.",
"title": ""
},
{
"docid": "d2e434f472b60e17ab92290c78706945",
"text": "In recent years, a variety of review-based recommender systems have been developed, with the goal of incorporating the valuable information in user-generated textual reviews into the user modeling and recommending process. Advanced text analysis and opinion mining techniques enable the extraction of various types of review elements, such as the discussed topics, the multi-faceted nature of opinions, contextual information, comparative opinions, and reviewers’ emotions. In this article, we provide a comprehensive overview of how the review elements have been exploited to improve standard content-based recommending, collaborative filtering, and preference-based product ranking techniques. The review-based recommender system’s ability to alleviate the well-known rating sparsity and cold-start problems is emphasized. This survey classifies state-of-the-art studies into two principal branches: review-based user profile building and review-based product profile building. In the user profile sub-branch, the reviews are not only used to create term-based profiles, but also to infer or enhance ratings. Multi-faceted opinions can further be exploited to derive the weight/value preferences that users place on particular features. In another sub-branch, the product profile can be enriched with feature opinions or comparative opinions to better reflect its assessment quality. The merit of each branch of work is discussed in terms of both algorithm development and the way in which the proposed algorithms are evaluated. In addition, we discuss several future trends based on the survey, which may inspire investigators to pursue additional studies in this area.",
"title": ""
},
{
"docid": "26b5d72d3135623765b389c8a2f40625",
"text": "Data preprocessing is a fundamental part of any machine learning application and frequently the most time-consuming aspect when developing a machine learning solution. Preprocessing for deep learning is characterized by pipelines that lazily load data and perform data transformation, augmentation, batching and logging. Many of these functions are common across applications but require different arrangements for training, testing or inference. Here we introduce a novel software framework named nuts-flow/ml that encapsulates common preprocessing operations as components, which can be flexibly arranged to rapidly construct efficient preprocessing pipelines for deep learning.",
"title": ""
},
{
"docid": "8bf1b97320a6b7319e4b36dfc11b6c7b",
"text": "In recent years, virtual reality exposure therapy (VRET) has become an interesting alternative for the treatment of anxiety disorders. Research has focused on the efficacy of VRET in treating anxiety disorders: phobias, panic disorder, and posttraumatic stress disorder. In this systematic review, strict methodological criteria are used to give an overview of the controlled trials regarding the efficacy of VRET in patients with anxiety disorders. Furthermore, research into process variables such as the therapeutic alliance and cognitions and enhancement of therapy effects through cognitive enhancers is discussed. The implications for implementation into clinical practice are considered.",
"title": ""
},
{
"docid": "7e38ba11e394acd7d5f62d6a11253075",
"text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.",
"title": ""
},
{
"docid": "4301af5b0c7910480af37f01847fb1fe",
"text": "Cross-modal retrieval is a very hot research topic that is imperative to many applications involving multi-modal data. Discovering an appropriate representation for multi-modal data and learning a ranking function are essential to boost the cross-media retrieval. Motivated by the assumption that a compositional cross-modal semantic representation (pairs of images and text) is more attractive for cross-modal ranking, this paper exploits the existing image-text databases to optimize a ranking function for cross-modal retrieval, called deep compositional cross-modal learning to rank (C2MLR). In this paper, C2MLR considers learning a multi-modal embedding from the perspective of optimizing a pairwise ranking problem while enhancing both local alignment and global alignment. In particular, the local alignment (i.e., the alignment of visual objects and textual words) and the global alignment (i.e., the image-level and sentence-level alignment) are collaboratively utilized to learn the multi-modal embedding common space in a max-margin learning to rank manner. The experiments demonstrate the superiority of our proposed C2MLR due to its nature of multi-modal compositional embedding.",
"title": ""
},
{
"docid": "85f126fe22e74e3f5b1f1ad3adec0036",
"text": "Debate is open as to whether social media communities resemble real-life communities, and to what extent. We contribute to this discussion by testing whether established sociological theories of real-life networks hold in Twitter. In particular, for 228,359 Twitter profiles, we compute network metrics (e.g., reciprocity, structural holes, simmelian ties) that the sociological literature has found to be related to parts of one’s social world (i.e., to topics, geography and emotions), and test whether these real-life associations still hold in Twitter. We find that, much like individuals in real-life communities, social brokers (those who span structural holes) are opinion leaders who tweet about diverse topics, have geographically wide networks, and express not only positive but also negative emotions. Furthermore, Twitter users who express positive (negative) emotions cluster together, to the extent of having a correlation coefficient between one’s emotions and those of friends as high as 0.45. Understanding Twitter’s social dynamics does not only have theoretical implications for studies of social networks but also has practical implications, including the design of self-reflecting user interfaces that make people aware of their emotions, spam detection tools, and effective marketing campaigns.",
"title": ""
},
{
"docid": "d0a6ca9838f8844077fdac61d1d75af1",
"text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-",
"title": ""
},
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "7a7e0363ca4ad5c83a571449f53834ca",
"text": "Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on L1 norm when outliers occur.",
"title": ""
},
{
"docid": "2fc645ec4f9fe757be65f3f02b803b50",
"text": "Multicast communication plays a crucial role in Mobile Adhoc Networks (MANETs). MANETs provide low cost, self configuring devices for multimedia data communication in military battlefield scenarios, disaster and public safety networks (PSN). Multicast communication improves the network performance in terms of bandwidth consumption, battery power and routing overhead as compared to unicast for same volume of data communication. In recent past, a number of multicast routing protocols (MRPs) have been proposed that tried to resolve issues and challenges in MRP. Multicast based group communication demands dynamic construction of efficient and reliable route for multimedia data communication during high node mobility, contention, routing and channel overhead. This paper gives an insight into the merits and demerits of the currently known research techniques and provides a better environment to make reliable MRP. It presents a ample study of various Quality of Service (QoS) techniques and existing enhancement in mesh based MRPs. Mesh topology based MRPs are classified according to their enhancement in routing mechanism and QoS modification on On-Demand Multicast Routing Protocol (ODMRP) protocol to improve performance metrics. This paper covers the most recent, robust and reliable QoS and Mesh based MRPs, classified based on their operational features, with their advantages and limitations, and provides comparison of their performance parameters.",
"title": ""
},
{
"docid": "1cc81fa2fbfc2a47eb07bb7ef969d657",
"text": "Wind Turbines (WT) are one of the fastest growing sources of power production in the world today and there is a constant need to reduce the costs of operating and maintaining them. Condition monitoring (CM) is a tool commonly employed for the early detection of faults/failures so as to minimise downtime and maximize productivity. This paper provides a review of the state-of-the-art in the CM of wind turbines, describing the different maintenance strategies, CM techniques and methods, and highlighting in a table the various combinations of these that have been reported in the literature. Future research opportunities in fault diagnostics are identified using a qualitative fault tree analysis. Crown Copyright 2012 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b8b7abcef8e23f774bd4e74067a27e6f",
"text": "This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems. The overall conclusion is that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware. Copyright 1989 Digital Equipment Corporation d i g i t a l Western Research Laboratory 100 Hamilton Avenue Palo Alto, California 94301 USA",
"title": ""
},
{
"docid": "6168c4c547dca25544eedf336e369d95",
"text": "Big Data means a very large amount of data and includes a range of methodologies such as big data collection, processing, storage, management, and analysis. Since Big Data Text Mining extracts a lot of features and data, clustering and classification can result in high computational complexity and the low reliability of the analysis results. In particular, a TDM (Term Document Matrix) obtained through text mining represents term-document features but features a sparse matrix. In this paper, the study focuses on selecting a set of optimized features from the corpus. A Genetic Algorithm (GA) is used to extract terms (features) as desired according to term importance calculated by the equation found. The study revolves around feature selection method to lower computational complexity and to increase analytical performance.We designed a new genetic algorithm to extract features in text mining. TF-IDF is used to reflect document-term relationships in feature extraction. Through the repetitive process, features are selected as many as the predetermined number. We have conducted clustering experiments on a set of spammail documents to verify and to improve feature selection performance. And we found that the proposal FSGA algorithm shown better performance of Text Clustering and Classification than using all of features.",
"title": ""
},
{
"docid": "9a4dab93461185ea98ccea7733081f73",
"text": "This article discusses two standards operating on principles of cognitive radio in television white space (TV WS) frequencies 802.22and 802.11af. The comparative analysis of these systems will be presented and the similarities as well as the differences among these two perspective standards will be discussed from the point of view of physical (PHY), medium access control (MAC) and cognitive layers.",
"title": ""
},
{
"docid": "8549f04362f52ddec78e48dd6e1cadce",
"text": "In recent years both the number and the size of organisational databases have increased rapidly. However, although available processing power has also grown, the increase in stored data has not necessarily led to a corresponding increase in useful information and knowledge. This has led to a growing interest in the development of tools capable of harnessing the increased processing power available to better utilise the potential of stored data. The terms “Knowledge Discovery in Databases” and “Data Mining” have been adopted for a field of research dealing with the automatic discovery of knowledge implicit within databases. Data mining is useful in situations where the volume of data is either too large or too complicated for manual processing or, to a lesser extent, where human experts are unavailable to provide knowledge. The success already attained by a wide range of data mining applications has continued to prompt further investigation into alternative data mining techniques and the extension of data mining to new domains. This paper surveys, from the standpoint of the database systems community, current issues in data mining research by examining the architectural and process models adopted by knowledge discovery systems, the different types of discovered knowledge, the way knowledge discovery systems operate on different data types, various techniques for knowledge discovery and the ways in which discovered knowledge is used.",
"title": ""
},
{
"docid": "00daf995562570c89901ca73e23dd29d",
"text": "As advances in technology make payloads and instruments for space missions smaller, lighter, and more power efficient, a niche market is emerging from the university community to perform rapidly developed, low-cost missions on very small spacecraft - micro, nano, and picosatellites. Among this class of spacecraft, are CubeSats, with a basic form of 10 times 10 times 10 cm, weighing a maximum of 1kg. In order to serve as viable alternative to larger spacecraft, small satellite platforms must provide the end user with access to space and similar functionality to mainstream missions. However, despite recent advances, small satellites have not been able to reach their full potential. Without launch vehicles dedicated to launching small satellites as primary payloads, launch opportunities only exist in the form of co-manifest or secondary payload missions, with launches often subsidized by the government. In addition, power, size, and mass constraints create additional hurdles for small satellites. To date, the primary method of increasing a small satellite's capability has been focused on miniaturization of technology. The CubeSat Program embraces this approach, but has also focused on developing an infrastructure to offset unavoidable limitations caused by the constraints of small satellite missions. The main components of this infrastructure are: an extensive developer community, standards for spacecraft and launch vehicle interfaces, and a network of ground stations. This paper will focus on the CubeSat Program, its history, and the philosophy behind the various elements that make it a practical an enabling alternative for access to space.",
"title": ""
},
{
"docid": "ebf8c89f326b0c1e9b0d2f565b5b30a6",
"text": "OBJECTIVE\nTo identify the cross-national prevalence of psychotic symptoms in the general population and to analyze their impact on health status.\n\n\nMETHOD\nThe sample was composed of 256,445 subjects (55.9% women), from nationally representative samples of 52 countries worldwide participating in the World Health Organization's World Health Survey. Standardized and weighted prevalence of psychotic symptoms were calculated in addition to the impact on health status as assessed by functioning in multiple domains.\n\n\nRESULTS\nOverall prevalences for specific symptoms ranged from 4.80% (SE = 0.14) for delusions of control to 8.37% (SE = 0.20) for delusions of reference and persecution. Prevalence figures varied greatly across countries. All symptoms of psychosis produced a significant decline in health status after controlling for potential confounders. There was a clear change in health impact between subjects not reporting any symptom and those reporting at least one symptom (effect size of 0.55).\n\n\nCONCLUSIONS\nThe prevalence of the presence of at least one psychotic symptom has a wide range worldwide varying as much as from 0.8% to 31.4%. Psychotic symptoms signal a problem of potential public health concern, independent of the presence of a full diagnosis of psychosis, as they are common and are related to a significant decrement in health status. The presence of at least one psychotic symptom is related to a significant poorer health status, with a regular linear decrement in health depending on the number of symptoms.",
"title": ""
},
{
"docid": "27c2c015c6daaac99b34d00845ec646c",
"text": "Virtual worlds, such as Second Life and Everquest, have grown into virtual game communities that have economic potential. In such communities, virtual items are bought and sold between individuals for real money. The study detailed in this paper aims to identify, model and test the individual determinants for the decision to purchase virtual items within virtual game communities. A comprehensive understanding of these key determinants will enable researchers to further the understanding of player behavior towards virtual item transactions, which are an important aspect of the economic system within virtual games and often raise one of the biggest challenges for game community operators. A model will be developed via a mixture of new constructs and established theories, including the theory of planned behavior (TPB), the technology acceptance model (TAM), trust theory and unified theory of acceptance and use of technology (UTAUT). For this purpose the research uses a sequential, multi-method approach in two phases: combining the use of inductive, qualitative data from focus groups and expert interviews in phase one; and deductive, quantitative survey data in phase two. The final model will hopefully provide an impetus to further research in the area of virtual game community transaction behavior. The paper rounds off with a discussion of further research challenges in this area over the next seven years.",
"title": ""
}
] |
scidocsrr
|
8689795501a0356b68f3008a6ea9aeef
|
SHILLING ATTACK DETECTION IN RECOMMENDER SYSTEMS USING CLASSIFICATION TECHNIQUES
|
[
{
"docid": "5e5681f0bc44eebce176a806d30c37c9",
"text": "Shilling attackers apply biased rating profiles to recommender systems for manipulating online product recommendations. Although many studies have been devoted to shilling attack detection, few of them can handle the hybrid shilling attacks that usually happen in practice, and the studies for real-life applications are rarely seen. Moreover, little attention has yet been paid to modeling both labeled and unlabeled user profiles, although there are often a few labeled but numerous unlabeled users available in practice. This paper presents a Hybrid Shilling Attack Detector, or HySAD for short, to tackle these problems. In particular, HySAD introduces MC-Relief to select effective detection metrics, and Semi-supervised Naive Bayes (SNB_lambda) to precisely separate Random-Filler model attackers and Average-Filler model attackers from normal users. Thorough experiments on MovieLens and Netflix datasets demonstrate the effectiveness of HySAD in detecting hybrid shilling attacks, and its robustness for various obfuscated strategies. A real-life case study on product reviews of Amazon.cn is also provided, which further demonstrates that HySAD can effectively improve the accuracy of a collaborative-filtering based recommender system, and provide interesting opportunities for in-depth analysis of attacker behaviors. These, in turn, justify the value of HySAD for real-world applications.",
"title": ""
}
] |
[
{
"docid": "cfb7a8e268662a4e442dc33c8978585b",
"text": "Air Traffic Control (ATC) plays a crucial role in the modern air transportation system. As a decentralized system, every control sector in the ATC network system needs to use all sorts of available information to manage local air traffic in a safe, smooth and cost-efficient way. A key issue is: how each individual ATC sector should use global traffic information to make local ATC decisions, such that the global air traffic, not just the local, can be improved. This paper reports a simulation study on ATC strategies aiming to address the above issue. The coming-in traffic to sectors is the focus, and the ATC strategy means how to define and apply various local ATC rules, such as first-come-first-served rule, to the coming-in traffic according to the global traffic information. A simplified ATC network model is set up and a software simulation system is then developed. The simulation results reveal that, even for a same set of ATC rules, a bad strategy of applying them can cause heavy traffic congestion, while a good strategy can significantly reduce delays, improve safety, and increase efficiency of using airspace.",
"title": ""
},
{
"docid": "5172a41cd749c7b2f6eed3a7e25969dd",
"text": "Missing values in inputs, outputs cannot be handled by the original data envelopment analysis (DEA) models. In this paper we introduce an approach based on interval DEA that allows the evaluation of the units with missing values along with the other units with available crisp data. The missing values are replaced by intervals in which the unknown values are likely to belong. The constant bounds of the intervals, depending on the application, can be estimated by using statistical or experiential techniques. For the units with missing values, the proposed models are able to identify an upper and a lower bound of their efficiency scores. The efficiency analysis is further extended by estimating new values for the initial interval bounds that may turn the unit to an efficient one. The proposed methodology is illustrated by an application which evaluates the efficiency of a set of secondary public schools in Greece, a number of which appears to have missing values in some inputs and outputs. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "92a00453bc0c2115a8b37e5acc81f193",
"text": "Choosing the appropriate software development methodology is something which continues to occupy the minds of many IT professionals. The introduction of “Agile” development methodologies such as XP and SCRUM held the promise of improved software quality and reduced delivery times. Combined with a Lean philosophy, there would seem to be potential for much benefit. While evidence does exist to support many of the Lean/Agile claims, we look here at how such methodologies are being adopted in the rigorous environment of safety-critical embedded software development due to its high regulation. Drawing on the results of a systematic literature review we find that evidence is sparse for Lean/Agile adoption in these domains. However, where it has been trialled, “out-of-the-box” Agile practices do not seem to fully suit these environments but rather tailored Agile versions combined with more planbased practices seem to be making inroads.",
"title": ""
},
{
"docid": "af85d7541ecd30d95236bb8779b7c9ab",
"text": "The paper presents a Markov chain-based method for automatic written language identification. Given a training document in a specific language, each word can be represented as a Markov chain of letters. Using the entire training document regarded as a set of Markov chains, the set of initial and transition probabilities can be calculated and referred to as a Markov model for that language. Given an unknown language string, the maximum likelihood decision rule was used to identify language. Experimental results showed that the proposed method achieved lower error rate and faster identification speed than the current n-gram method.",
"title": ""
},
{
"docid": "103951fcfead2de24396e7ad81ec0221",
"text": "Numerous applications in scientific, medical, and military areas demand robust, compact, sensitive, and fast ultraviolet (UV) detection. Our (Al)GaN photodiodes pose high avalanche gain and single-photon detection efficiency that can measure up to these requirements. Inherit advantage of back-illumination in our devices offers an easier integration and layout packaging via flip-chip hybridization for UV focal plane arrays that may find uses from space applications to hostile-agent detection. Thanks to the recent (Al)GaN material optimization, III-Nitrides, known to have fast carrier dynamics and short relaxation times, are employed in (Al)GaN based superlattices that absorb in near-infrared regime. In this work, we explain the origins of our high performance UV APDs, and employ our (Al)GaN material knowledge for intersubband applications. We also discuss the extension of this material engineering into the far infrared, and even the terahertz (THz) region.",
"title": ""
},
{
"docid": "c451c09ca5535cce49d4fa5d0df7318f",
"text": "This paper features the kinematic analysis of a SCORBOT-ER Vplus robot arm which is used for doing successful robotic manipulation task in its workspace. The SCORBOT-ER Vplus is a 5-dof vertical articulated robot and all the joints are revolute [1]. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Harbenterg (D-H) model of representation is used to model robot links and joints in this study along with 4x4 homogeneous matrix. SCORBOT-ER Vplus is a dependable and safe robotic system designed for laboratory and training applications. This versatile system allows students to gain theoretical and practical experience in robotics, automation and control systems. The MATLAB 8.0 is used to solve this mathematical model for a set of joint parameter.",
"title": ""
},
{
"docid": "2f84b44cdce52068b7e692dad7feb178",
"text": "Two stage PCR has been used to introduce single amino acid substitutions into the EF hand structures of the Ca(2+)-activated photoprotein aequorin. Transcription of PCR products, followed by cell free translation of the mRNA, allowed characterisation of recombinant proteins in vitro. Substitution of D to A at position 119 produced an active photoprotein with a Ca2+ affinity reduced by a factor of 20 compared to the wild type recombinant aequorin. This recombinant protein will be suitable for measuring Ca2+ inside the endoplasmic reticulum, the mitochondria, endosomes and the outside of live cells.",
"title": ""
},
{
"docid": "e6dcae244f91dc2d7e843d9860ac1cfd",
"text": "After Disney's Michael Eisner, Miramax's Harvey Weinstein, and Hewlett-Packard's Carly Fiorina fell from their heights of power, the business media quickly proclaimed thatthe reign of abrasive, intimidating leaders was over. However, it's premature to proclaim their extinction. Many great intimidators have done fine for a long time and continue to thrive. Their modus operandi runs counter to a lot of preconceptions about what it takes to be a good leader. They're rough, loud, and in your face. Their tactics include invading others' personal space, staging tantrums, keeping people guessing, and possessing an indisputable command of facts. But make no mistake--great intimidators are not your typical bullies. They're driven by vision, not by sheer ego or malice. Beneath their tough exteriors and sharp edges are some genuine, deep insights into human motivation and organizational behavior. Indeed, these leaders possess political intelligence, which can make the difference between paralysis and successful--if sometimes wrenching--organizational change. Like socially intelligent leaders, politically intelligent leaders are adept at sizing up others, but they notice different things. Those with social intelligence assess people's strengths and figure out how to leverage them; those with political intelligence exploit people's weaknesses and insecurities. Despite all the obvious drawbacks of working under them, great intimidators often attract the best and brightest. And their appeal goes beyond their ability to inspire high performance. Many accomplished professionals who gravitate toward these leaders want to cultivate a little \"inner intimidator\" of their own. In the author's research, quite a few individuals reported having positive relationships with intimidating leaders. In fact, some described these relationships as profoundly educational and even transformational. So before we throw out all the great intimidators, the author argues, we should stop to consider what we would lose.",
"title": ""
},
{
"docid": "0ce06f95b1dafcac6dad4413c8b81970",
"text": "User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations is useful in real applications.",
"title": ""
},
{
"docid": "49703dde57425d5db0affbf59d3ebe2e",
"text": "The art of memory (ars memoriae) used since classical times includes using a well-known scene to associate each view or part of the scene with a different item in a speech. This memory technique is also known as the \"method of loci.\" The new theory is proposed that this type of memory is implemented in the CA3 region of the hippocampus where there are spatial view cells in primates that allow a particular view to be associated with a particular object in an event or episodic memory. Given that the CA3 cells with their extensive recurrent collateral system connecting different CA3 cells, and associative synaptic modifiability, form an autoassociation or attractor network, the spatial view cells with their approximately Gaussian view fields become linked in a continuous attractor network. As the view space is traversed continuously (e.g., by self-motion or imagined self-motion across the scene), the views are therefore successively recalled in the correct order, with no view missing, and with low interference between the items to be recalled. Given that each spatial view has been associated with a different discrete item, the items are recalled in the correct order, with none missing. This is the first neuroscience theory of ars memoriae. The theory provides a foundation for understanding how a key feature of ars memoriae, the ability to use a spatial scene to encode a sequence of items to be remembered, is implemented. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "9d530fbbdb4448175f655b6cc8b4d539",
"text": "Cognitive big data: survey and review on big data research and its implications. What is really ‘new’ in big data? Artur Lugmayr Björn Stockleben Christoph Scheib Mathew Mailaparampil Article information: To cite this document: Artur Lugmayr Björn Stockleben Christoph Scheib Mathew Mailaparampil , (2017),\" Cognitive big data: survey and review on big data research and its implications. What is really ‘new’ in big data? \", Journal of Knowledge Management, Vol. 21 Iss 1 pp. Permanent link to this document: http://dx.doi.org/10.1108/JKM-07-2016-0307",
"title": ""
},
{
"docid": "b34beab849a50ff04a948f277643fb74",
"text": "To cite: Hirai T, Koster M. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/ bcr-2013-009759 DESCRIPTION A 22-year-old man with a history of intravenous heroin misuse, presented with 1 week of fatigue and fever. Blood cultures were positive for methicillin-sensitive Staphylococcus aureus. Physical examination showed multiple painful 1– 2 mm macular rashes on the palm and soles bilaterally (figures 1 and 2). Splinter haemorrhages (figure 3) and conjunctival petechiae (figure 4) were also noted. A transoesophageal echocardiogram demonstrated a 16-mm vegetation on the mitral valve (figure 5). Vegitations >10 mm in diameter and infection involving the mitral valve are independently associated with an increased risk of embolisation. However, he decided medical management after extensive discussion and was treated with intravenous nafcillin for 6 weeks. He returned 8 weeks later with acute shortness of breath and evidence of a perforated mitral valve for which he subsequently underwent a successful mitral valve repair with an uneventful recovery.",
"title": ""
},
{
"docid": "578b2b86a50f1b2e43f9efe0233b492a",
"text": "Perceived racism contributes to persistent health stress leading to health disparities. African American/Black persons (BPs) believe subtle, rather than overt, interpersonal racism is increasing. Sue and colleagues describe interpersonal racism as racial microaggressions: \"routine\" marginalizing indignities by White persons (WPs) toward BPs that contribute to health stress. In this narrative, exploratory study, Black adults (n = 10) were asked about specific racial microaggressions; they all experienced multiple types. Categorical and narrative analysis captured interpretations, strategies, and health stress attributions. Six iconic narratives contextualized health stress responses. Diverse mental and physical symptoms were attributed to racial microaggressions. Few strategies in response had positive outcomes. Future research includes development of coping strategies for BPs in these interactions, exploration of WPs awareness of their behaviors, and preventing racial microaggressions in health encounters that exacerbate health disparities.",
"title": ""
},
{
"docid": "60a92a659fbfe0c81da9a6902e062455",
"text": "Public knowledge of crime and justice is largely derived from the media. This paper examines the influence of media consumption on fear of crime, punitive attitudes and perceived police effectiveness. This research contributes to the literature by expanding knowledge on the relationship between fear of crime and media consumption. This study also contributes to limited research on the media’s influence on punitive attitudes, while providing a much-needed analysis of the relationship between media consumption and satisfaction with the police. Employing OLS regression, the results indicate that respondents who are regular viewers of crime drama are more likely to fear crime. However, the relationship is weak. Furthermore, the results indicate that gender, education, income, age, perceived neighborhood problems and police effectiveness are statistically related to fear of crime. In addition, fear of crime, income, marital status, race, and education are statistically related to punitive attitudes. Finally, age, fear of crime, race, and perceived neighborhood problems are statistically related to perceived police effectiveness.",
"title": ""
},
{
"docid": "4d2bfda62140962af079817fc7dbd43e",
"text": "Online health communities and support groups are a valuable source of information for users suffering from a physical or mental illness. Users turn to these forums for moral support or advice on specific conditions, symptoms, or side effects of medications. This paper describes and studies the linguistic patterns of a community of support forum users over time focused on the used of anxious related words. We introduce a methodology to identify groups of individuals exhibiting linguistic patterns associated with anxiety and the correlations between this linguistic pattern and other word usage. We find some evidence that participation in these groups does yield positive effects on their users by reducing the frequency of anxious related word used over time.",
"title": ""
},
{
"docid": "c1f5f0df64dd0be18ca01efb90bb2909",
"text": "A number of machine learning algorithms are using a metric, or a distance, in order to compare individuals. The Euclidean distance is usually employed, but it may be more efficient to learn a parametric distance such as Mahalanobis metric. Learning such a metric is a hot topic since more than ten years now, and a number of methods have been proposed to efficiently learn it. However, the nature of the problem makes it quite difficult for large scale data, as well as data for which classes overlap. This paper presents a simple way of improving accuracy and scalability of any iterative metric learning algorithm, where constraints are obtained prior to the algorithm. The proposed approach relies on a loss-dependent weighted selection of constraints that are used for learning the metric. Using the corresponding dedicated loss function, the method clearly allows to obtain better results than state-of-the-art methods, both in terms of accuracy and time complexity. Some experimental results on real world, and potentially large, datasets are demonstrating the effectiveness of our proposition. Keywords—Active learning, boosting, constraint selection, Mahalanobis distance, metric learning",
"title": ""
},
{
"docid": "b381b859cececfd094e4e11663d481a6",
"text": "Social media are increasingly implemented in work organizations as tools for communication among employees. It is important that we develop an understanding of how they enable and constrain the communicative activities through which work is accomplished because it is these very dynamics that constitute and perpetuate organizations. We begin by offering a definition of enterprise social media and providing a rough historical account of the various avenues through which these technologies have entered and continue to enter the workplace. We also review areas of research covered by papers in this special issue and papers on enterprise social media published elsewhere to take stock of the current state of out knowledge and to propose directions for future research.",
"title": ""
},
{
"docid": "960c2ad0a058e526901d23c9d301701c",
"text": "Preliminary notes High-rise buildings are designed and constructed by use of modern materials and integral structural systems which are not usual for typical buildings. The existing seismic regulations act as a limiting factor and cannot cover specific behaviour of these buildings. Considering the increasing trend in their construction worldwide, additional investigations are necessary, particularly for structures in seismically active areas. It is necessary to elaborate official codes which will clearly prescribe methods, procedures and criteria for analysis and design of such type of structures. The main goal of the paper is to present a review of the existing structural systems, design recommendations and guidelines for high-rises worldwide, as well as selected results from seismic performance of 44 stories RC high-rise building which is a unique experience coming from design and construction of the four high-rise buildings in Skopje (Macedonia).",
"title": ""
},
{
"docid": "eee9bbc4e57981813a45114061ef01ec",
"text": "Although Marx-bank connection of avalanche transistors is widely used in applications requiring high-voltage nanosecond and subnanosecond pulses, the physical mechanisms responsible for the voltage-ramp-initiated switching of a single transistor in the Marx chain remain unclear. It is shown here by detailed comparison of experiments with physical modeling that picosecond switching determined by double avalanche injection in the collector-base diode gives way to formation and shrinkage of the collector field domain typical of avalanche transistors under the second breakdown. The latter regime, characterized by a lower residual voltage, becomes possible despite a short-connected emitter and base, thanks to the 2-D effects.",
"title": ""
},
{
"docid": "998bf65b2e95db90eb9fab8e13b47ff6",
"text": "Recently, deep neural networks (DNNs) have been regarded as the state-of-the-art classification methods in a wide range of applications, especially in image classification. Despite the success, the huge number of parameters blocks its deployment to situations with light computing resources. Researchers resort to the redundancy in the weights of DNNs and attempt to find how fewer parameters can be chosen while preserving the accuracy at the same time. Although several promising results have been shown along this research line, most existing methods either fail to significantly compress a well-trained deep network or require a heavy fine-tuning process for the compressed network to regain the original performance. In this paper, we propose the Block Term networks (BT-nets) in which the commonly used fully-connected layers (FC-layers) are replaced with block term layers (BT-layers). In BT-layers, the inputs and the outputs are reshaped into two low-dimensional high-order tensors, then block-term decomposition is applied as tensor operators to connect them. We conduct extensive experiments on benchmark datasets to demonstrate that BT-layers can achieve a very large compression ratio on the number of parameters while preserving the representation power of the original FC-layers as much as possible. Specifically, we can get a higher performance while requiring fewer parameters compared with the tensor train method.",
"title": ""
}
] |
scidocsrr
|
cc831b4cdc07726953e9b1963b010dd7
|
Modelling Class Noise with Symmetric and Asymmetric Distributions
|
[
{
"docid": "c117bb1f7a25c44cbd0d75b7376022f6",
"text": "Data noise is present in many machine learning problems domains, some of these are well studied but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples withnoisy labels. The approach allows to associate with each example a probability of the label being flipped. We utilise an expectation maximization (EM) algorithm for updating the probabilities. The E-step uses class conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We demonstrate the feasibility of the approach on two real-world data-sets.",
"title": ""
}
] |
[
{
"docid": "35ce8c11fa7dd22ef0daf9d0bd624978",
"text": "Out-of-vocabulary (OOV) words represent an important source of error in large vocabulary continuous speech recognition (LVCSR) systems. These words cause recognition failures, which propagate through pipeline systems impacting the performance of downstream applications. The detection of OOV regions in the output of a LVCSR system is typically addressed as a binary classification task, where each region is independently classified using local information. In this paper, we show that jointly predicting OOV regions, and including contextual information from each region, leads to substantial improvement in OOV detection. Compared to the state-of-the-art, we reduce the missed OOV rate from 42.6% to 28.4% at 10% false alarm rate.",
"title": ""
},
{
"docid": "105faef95b27fc0852d95dc0b2306950",
"text": "An amalgamated concept of Internet of m-health Things (m-IoT) has been introduced recently and defined as a new concept that matches the functionalities of m-health and IoT for a new and innovative future (4G health) applications. It is well know that diabetes is a major chronic disease problem worldwide with major economic and social impact. To-date there have not been any studies that address the potential of m-IoT for non-invasive glucose level sensing with advanced opto-physiological assessment technique and diabetes management. In this paper we address the potential benefits of using m-IoT in non-invasive glucose level sensing and the potential m-IoT based architecture for diabetes management. We expect to achieve intelligent identification and management in a heterogeneous connectivity environment from the mobile healthcare perspective. Furthermore this technology will enable new communication connectivity routes between mobile patients and care services through innovative IP based networking architectures.",
"title": ""
},
{
"docid": "718a38a546de2dba3233607d7652c94a",
"text": "In modern power converter circuits, freewheeling diode snappy recovery phenomenon (voltage snap-off) can ultimately destroy the insulated gate bipolar transistor (IGBT) during turn-on and cause a subsequent circuit failure. In this paper, snappy recovery of modern fast power diodes is investigated with the aid of semiconductor device simulation tools, and experimental test results. The work presented here confirms that the reverse recovery process can by expressed by means of diode capacitive effects which influence the reverse recovery characteristics and determine if the diode exhibits soft or snappy recovery behavior. From the experimental and simulation results, a clear view is obtained for the physical process, causes and device/circuit conditions at which snap-off occurs. The analysis is based on the effect of both device and external operating parameters on the excess minority carrier distributions before and during the reverse recovery transient period.",
"title": ""
},
{
"docid": "3cfc860fde33aa93840358a6764a73a2",
"text": "Renal cysts are commonly encountered in clinical practice. Although most cysts found on routine imaging studies are benign, there must be an index of suspicion to exclude a neoplastic process or the presence of a multicystic disorder. This article focuses on the more common adult cystic diseases, including simple and complex renal cysts, autosomal-dominant polycystic kidney disease, and acquired cystic kidney disease.",
"title": ""
},
{
"docid": "2f471c24ccb38e70627eba6383c003e0",
"text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.",
"title": ""
},
{
"docid": "4e7888845f5c139f109caea7b604cb91",
"text": "Elderly or disabled people usually need augmented nursing attention both in home and clinical environments, especially to perform bathing activities. The development of an assistive robotic bath system, which constitutes a central motivation of this letter, would increase the independence and safety of this procedure, ameliorating in this way the everyday life for this group of people. In general terms, the main goal of this letter is to enable natural, physical human–robot interaction, involving human-friendly and user-adaptive online robot motion planning and interaction control. For this purpose, we employ imitation learning using a leader–follower framework called coordinate change dynamic movement primitives (CC-DMP), in order to incorporate the expertise of professional carers for bathing sequences. In this letter, we propose a vision-based washing system, combining CC-DMP framework with a perception-based controller, to adapt the motion of robot's end effector on moving and deformable surfaces, such as a human body part. The controller guarantees globally uniformly asymptotic convergence to the leader movement primitive while ensuring avoidance of restricted areas, such as sensitive skin body areas. We experimentally tested our approach on a setup including the humanoid robot ARMAR-III and a Kinect v2 camera. The robot executes motions learned from the publicly available KIT whole-body human motion database, achieving good tracking performance in challenging interactive task scenarios.",
"title": ""
},
{
"docid": "f925550d3830944b8649266292eae3fd",
"text": "In the recent years antenna design appears as a mature field of research. It really is not the fact because as the technology grows with new ideas, fitting expectations in the antenna design are always coming up. A Ku-band patch antenna loaded with notches and slit has been designed and simulated using Ansoft HFSS 3D electromagnetic simulation tool. Multi-frequency band operation is obtained from the proposed microstrip antenna. The design was carried out using Glass PTFE as the substrate and copper as antenna material. The designed antennas resonate at 15GHz with return loss over 50dB & VSWR less than 1, on implementing different slots in the radiating patch multiple frequencies resonate at 12.2GHz & 15.00GHz (Return Loss -27.5, -37.73 respectively & VSWR 0.89, 0.24 respectively) and another resonate at 11.16 GHz, 15.64GHz & 17.73 GHz with return loss -18.99, -23.026, -18.156 dB respectively and VSWR 1.95, 1.22 & 2.1 respectively. All the above designed band are used in the satellite application for non-geostationary orbit (NGSO) and fixed-satellite services (FSS) providers to operate in various segments of the Ku-band.",
"title": ""
},
{
"docid": "f502fe9a9758a03758620aeaf8bbeb57",
"text": "Empirical data on design processes were obtained from a set of protocol studies of nine experienced industrial designers, whose designs were evaluated on overall quality and on a variety of aspects including creativity. From the protocol data we identify aspects of creativity in design related to the formulation of the design problem and to the concept of originality. We also apply our observations to a model of creative design as the coevolution of problem/solution spaces, and confirm the general validity of the model. We propose refinements to the co-evolution model, and suggest relevant new concepts of ‘default’ and ‘surprise’ problem/solution spaces.",
"title": ""
},
{
"docid": "26e79793addc4750dcacc0408764d1e1",
"text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.",
"title": ""
},
{
"docid": "219461d5edbcf1c71c3fe7eb70028c65",
"text": "Sparse matrixlization, an innovative programming style for MATLAB, is introduced and used to develop an efficient software package, iFEM, on adaptive finite element methods. In this novel coding style, the sparse matrix and its operation is used extensively in the data structure and algorithms. Our main algorithms are written in one page long with compact data structure following the style “Ten digit, five seconds, and one page” proposed by Trefethen. The resulting code is simple, readable, and efficient. A unique strength of iFEM is the ability to perform three dimensional local mesh refinement and two dimensional mesh coarsening which are not available in existing MATLAB packages. Numerical examples indicate that iFEM can solve problems with size 105 unknowns in few seconds in a standard laptop. iFEM can let researchers considerably reduce development time than traditional programming methods.",
"title": ""
},
{
"docid": "4ef27b194f8446065e6d336f649c0e40",
"text": "Vector space representations of words capture many aspects of word similarity, but such methods tend to produce vector spaces in which antonyms (as well as synonyms) are close to each other. For spectral clustering using such word embeddings, words are points in a vector space where synonyms are linked with positive weights, while antonyms are linked with negative weights. We present a new signed spectral normalized graph cut algorithm, signed clustering, that overlays existing thesauri upon distributionally derived vector representations of words, so that antonym relationships between word pairs are represented by negative weights. Our signed clustering algorithm produces clusters of words that simultaneously capture distributional and synonym relations. By using randomized spectral decomposition (Halko et al., 2011) and sparse matrices, our method is both fast and scalable. We validate our clusters using datasets containing human judgments of word pair similarities and show the benefit of using our word clusters for sentiment prediction.",
"title": ""
},
{
"docid": "71a06e2fe758f0c05266cf7f1d41ca8a",
"text": "As deals are becoming more complex, and as technology, and the people supporting it, are becoming key drivers of merger and acquisition processes, planning of information and communication technologies in early stages of the integration process is vital to the realization of benefits of an Merger & Acquisition process. This statement is substantiated through review of literature from academics as well as practitioners, and case exemplifications of the financial service organization, the Nordea Group.",
"title": ""
},
{
"docid": "b829049a8abf47f8f13595ca54eaa009",
"text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network- based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus, allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.",
"title": ""
},
{
"docid": "2732a453418db4fcdfe5c657e7d5371b",
"text": "This study assesses the pre-service teachers' self-reported future intentions to use technology in Singapore and Malaysia. A survey was employed to validate items from past research. Using the Technology Acceptance Model (TAM) as a research framework, 495 pre-service teachers from Singapore and Malaysia responded to an 11-item questionnaires containing four constructs: intention to use (ITU), attitude towards computer use (ATCU), perceived usefulness (PU), and perceived ease of use (PEU). Structural equation modelling (SEM) was employed as the main method of analysis in this study. A multi-group analysis of invariance was performed on the two samples. The results show that configural and metric invariance were fully supported while scalar and factor variance invariance were partially supported, suggesting that the 11-item measure of the TAM may be robust across cultures and that the factor loading pattern and factor loadings appeared to be equivalent across the cultures examined. While all the paths in the structural model were significant, the variance accounted for in the dependent variable (ITU) was much larger in the Malaysian sample relative to the Singaporean sample. Keyword: technology, pre-service, teacher, Malaysia, Singapore",
"title": ""
},
{
"docid": "4929e1f954519f0976ec54e9ed8c2c37",
"text": "Software support for making effective pen-based applications is currently rudimentary. To facilitate the creation of such applications, we have developed SATIN, a Java-based toolkit designed to support the creation of applications that leverage the informal nature of pens. This support includes a scenegraph for manipulating and rendering objects; support for zooming and rotating objects, switching between multiple views of an object, integration of pen input with interpreters, libraries for manipulating ink strokes, widgets optimized for pens, and compatibility with Java's Swing toolkit. SATIN includes a generalized architecture for handling pen input, consisting of recognizers, interpreters, and multi-interpreters. In this paper, we describe the functionality and architecture of SATIN, using two applications built with SATIN as examples.",
"title": ""
},
{
"docid": "abbfcc25780c42b5acdae2716cb28891",
"text": "There are few multidimensional measures of individual differences in motivation available. The Assessment of Individual Motives-Questionnaire assesses 15 putative dimensions of motivation. The dimensions are based on evolutionary theory and preliminary evidence suggests the motive scales have good psychometric properties. The scales are reliable and there is evidence of their consensual validity (convergence of self-other ratings) and behavioral validity (relationships with self-other reported behaviors of social importance). Additional validity research is necessary, however, especially with respect to current models of personality. The present study tested two general and 24 specific hypotheses based on proposed evolutionary advantages/disadvantages and fitness benefits/costs of the five-factor model of personality together with the new motive scales in a sample of 424 participants (M age=28.8 yr., SD=14.6). Results were largely supportive of the hypotheses. These results support the validity of new motive dimensions and increase understanding of the five-factor model of personality.",
"title": ""
},
{
"docid": "88ab27740e5c957993fd70f0bf6ac841",
"text": "We examine the problem of discrete stock price prediction using a synthesis of linguistic, financial and statistical techniques to create the Arizona Financial Text System (AZFinText). The research within this paper seeks to contribute to the AZFinText system by comparing AZFinText’s predictions against existing quantitative funds and human stock pricing experts. We approach this line of research using textual representation and statistical machine learning methods on financial news articles partitioned by similar industry and sector groupings. Through our research, we discovered that stocks partitioned by Sectors were most predictable in measures of Closeness, Mean Squared Error (MSE) score of 0.1954, predicted Directional Accuracy of 71.18% and a Simulated Trading return of 8.50% (compared to 5.62% for the S&P 500 index). In direct comparisons to existing market experts and quantitative mutual funds, our system’s trading return of 8.50% outperformed well-known trading experts. Our system also performed well against the top 10 quantitative mutual funds of 2005, where our system would have placed fifth. When comparing AZFinText against only those quantitative funds that monitor the same securities, AZFinText had a 2% higher return than the best performing quant fund.",
"title": ""
},
{
"docid": "3765aae3bd550c2ab5b4b32e1a969c71",
"text": "This paper develops a novel algorithm, termed <italic>SPARse Truncated Amplitude flow</italic> (SPARTA), to reconstruct a sparse signal from a small number of magnitude-only measurements. It deals with what is also known as sparse phase retrieval (PR), which is <italic>NP-hard</italic> in general and emerges in many science and engineering applications. Upon formulating sparse PR as an amplitude-based nonconvex optimization task, SPARTA works iteratively in two stages: In stage one, the support of the underlying sparse signal is recovered using an analytically well-justified rule, and subsequently a sparse orthogonality-promoting initialization is obtained via power iterations restricted on the support; and in the second stage, the initialization is successively refined by means of hard thresholding based gradient-type iterations. SPARTA is a simple yet effective, scalable, and fast sparse PR solver. On the theoretical side, for any <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math></inline-formula>-dimensional <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math></inline-formula>-sparse (<inline-formula> <tex-math notation=\"LaTeX\">$k\\ll n$</tex-math></inline-formula>) signal <inline-formula><tex-math notation=\"LaTeX\"> $\\boldsymbol {x}$</tex-math></inline-formula> with minimum (in modulus) nonzero entries on the order of <inline-formula> <tex-math notation=\"LaTeX\">$(1/\\sqrt{k})\\Vert \\boldsymbol {x}\\Vert _2$</tex-math></inline-formula>, SPARTA recovers the signal exactly (up to a global unimodular constant) from about <inline-formula><tex-math notation=\"LaTeX\">$k^2\\log n$ </tex-math></inline-formula> random Gaussian measurements with high probability. Furthermore, SPARTA incurs computational complexity on the order of <inline-formula><tex-math notation=\"LaTeX\">$k^2n\\log n$</tex-math> </inline-formula> with total runtime proportional to the time required to read the data, which improves upon the state of the art by at least a factor of <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math></inline-formula>. Finally, SPARTA is robust against additive noise of bounded support. Extensive numerical tests corroborate markedly improved recovery performance and speedups of SPARTA relative to existing alternatives.",
"title": ""
},
{
"docid": "5b9c12c1d65ab52d1a7bb6575c6c0bb1",
"text": "The purpose of image enhancement is to process an acquired image for better contrast and visibility of features of interest for visual examination as well as subsequent computer-aided analysis and diagnosis. Therefore, we have proposed an algorithm for medical images enhancement. In the study, we used top-hat transform, contrast limited histogram equalization and anisotropic diffusion filter methods. The system results are quite satisfactory for many different medical images like lung, breast, brain, knee and etc.",
"title": ""
},
{
"docid": "570e48e839bd2250473d4332adf2b53f",
"text": "Autologous stem cell transplant can be a curative therapy to restore normal hematopoiesis after myeloablative treatments in patients with malignancies. Aim: To evaluate the effect of rehabilitation program for caregivers about patients’ post autologous bone marrow transplantation Research Design: A quasi-experimental design was used. Setting: The study was conducted in Sheikh Zayed Specialized Hospital at Oncology Outpatient Clinic of Bone Marrow Transplantation Unit. Sample: A purposive sample comprised; a total number of 60 patients, their age ranged from 21 to 50 years, free from any other chronic disease and the caregivers are living with the patients in the same home. Tools: Two tools were used for data collection. First tool: An interviewing autologous bone marrow transplantation questionnaire for the patients and their caregivers was divided into five parts; Including: Socio-demographic data, knowledge of caregivers regarding autologous bone marrow transplant and side effect of chemotherapy, family caregivers’ practices according to their providing care related to post bone marrow transplantation, signs and symptoms, activities of daily living for patients and home environmental sanitation for the patients. Second tool: deals with physical examination assessment of the patients from head to toe. Results: 61.7% of patients aged 30˂40 years, and 68.3 % were female. Regarding the type of relationship with the patients, 48.3% were the mother, 58.3% of patients who underwent autologous bone marrow transplantation had a sanitary environment and there were highly statistically significant differences between caregivers’ knowledge and practices pre/post program. Conclusion: There were highly statistically significant differences between family caregivers' total knowledge, their practices, as well as their total caregivers’ knowledge, practices and patients’ independency level pre/post rehabilitation program. . Recommendations: Counseling for family caregivers of patients who underwent autologous bone marrow transplantation and carrying out rehabilitation program for the patients and their caregivers to be performed properly during the rehabilitation period at caner hospitals such as 57357 Hospital and The National Cancer Institute in Cairo.",
"title": ""
}
] |
scidocsrr
|
ae03aaac2c276548ea14fe521a93fd48
|
Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation
|
[
{
"docid": "18288c42186b7fec24a5884454e69989",
"text": "This article addresses the problem of multichannel audio source separation. We propose a framework where deep neural networks (DNNs) are used to model the source spectra and combined with the classical multichannel Gaussian model to exploit the spatial information. The parameters are estimated in an iterative expectation-maximization (EM) fashion and used to derive a multichannel Wiener filter. We present an extensive experimental study to show the impact of different design choices on the performance of the proposed technique. We consider different cost functions for the training of DNNs, namely the probabilistically motivated Itakura-Saito divergence, and also Kullback-Leibler, Cauchy, mean squared error, and phase-sensitive cost functions. We also study the number of EM iterations and the use of multiple DNNs, where each DNN aims to improve the spectra estimated by the preceding EM iteration. Finally, we present its application to a speech enhancement problem. The experimental results show the benefit of the proposed multichannel approach over a single-channel DNN-based approach and the conventional multichannel nonnegative matrix factorization-based iterative EM algorithm.",
"title": ""
},
{
"docid": "884121d37d1b16d7d74878fb6aff9cdb",
"text": "All models are wrong, but some are useful. 2 Acknowledgements The authors of this guide would like to thank David Warde-Farley, Guillaume Alain and Caglar Gulcehre for their valuable feedback. Special thanks to Ethan Schoonover, creator of the Solarized color scheme, 1 whose colors were used for the figures. Feedback Your feedback is welcomed! We did our best to be as precise, informative and up to the point as possible, but should there be anything you feel might be an error or could be rephrased to be more precise or com-prehensible, please don't refrain from contacting us. Likewise, drop us a line if you think there is something that might fit this technical report and you would like us to discuss – we will make our best effort to update this document. Source code and animations The code used to generate this guide along with its figures is available on GitHub. 2 There the reader can also find an animated version of the figures.",
"title": ""
}
] |
[
{
"docid": "527c4c17aadb23a991d85511004a7c4f",
"text": "Accurate and robust recognition and prediction of traffic situation plays an important role in autonomous driving, which is a prerequisite for risk assessment and effective decision making. Although there exist a lot of works dealing with modeling driver behavior of a single object, it remains a challenge to make predictions for multiple highly interactive agents that react to each other simultaneously. In this work, we propose a generic probabilistic hierarchical recognition and prediction framework which employs a two-layer Hidden Markov Model (TLHMM) to obtain the distribution of potential situations and a learning-based dynamic scene evolution model to sample a group of future trajectories. Instead of predicting motions of a single entity, we propose to get the joint distribution by modeling multiple interactive agents as a whole system. Moreover, due to the decoupling property of the layered structure, our model is suitable for knowledge transfer from simulation to real world applications as well as among different traffic scenarios, which can reduce the computational efforts of training and the demand for a large data amount. A case study of highway ramp merging scenario is demonstrated to verify the effectiveness and accuracy of the proposed framework.",
"title": ""
},
{
"docid": "72d38fa8fc9ff402b3ee422a9967e537",
"text": "With the continuing growth of modern communications technology, demand for image transmission and storage is increasing rapidly. Advances in computer technology for mass storage and digital processing have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. In this paper a large variety of algorithms for image data compression are considered. Starting with simple techniques of sampling and pulse code modulation (PCM), state of the art algorithms for two-dimensional data transmission are reviewed. Topics covered include differential PCM (DPCM) and predictive coding, transform coding, hybrid coding, interframe coding, adaptive techniques, and applications. Effects of channel errors and other miscellaneous related topics are also considered. While most of the examples and image models have been specialized for visual images, the techniques discussed here could be easily adapted more generally for multidimensional data compression. Our emphasis here is on fundamentals of the various techniques. A comprehensive bibliography with comments is included for a reader interested in further details of the theoretical and experimental results discussed here.",
"title": ""
},
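Among the techniques surveyed in the abstract above, DPCM/predictive coding is simple enough to illustrate directly. The following sketch encodes a 1-D signal (for example, one image row) by quantizing the difference between each sample and its previous-sample prediction; the quantizer step size and the toy data are arbitrary assumptions.

```python
import numpy as np

def dpcm_encode(signal, step=4):
    """Previous-sample predictor with uniform quantization of the residual."""
    codes = np.empty(len(signal), dtype=int)
    prediction = 0.0
    for i, s in enumerate(signal):
        residual = s - prediction
        codes[i] = int(round(residual / step))
        # Reconstruct as the decoder would, so encoder and decoder stay in sync.
        prediction = prediction + codes[i] * step
    return codes

def dpcm_decode(codes, step=4):
    recon = np.empty(len(codes), dtype=float)
    prediction = 0.0
    for i, c in enumerate(codes):
        prediction = prediction + c * step
        recon[i] = prediction
    return recon

row = np.array([100, 102, 105, 110, 120, 118, 115], dtype=float)  # one image row
codes = dpcm_encode(row)
print(codes, dpcm_decode(codes))
```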
{
"docid": "ee94e50dd200a7fe35d87e7eeeeff9af",
"text": "The amount of digital information that is created and used is progressively rising along with the growth of sophisticated hardware and software. In addition, real-world data come in a diversity of forms and can be tremendously bulky. This has augmented the need for powerful algorithms that can deduce and dig out appealing facts and useful information from these data. Text Mining (TM), which is a very complex process; has been successfully used for this purpose. Text mining alternately referred to as text data mining, more or less equivalent to text analytics, can be defined as the process of extracting high-quality information from text. Text mining involves the process of structuring the input data, deriving patterns within the structured data and lastly interpretation and revelation of the output. This paper provides outline on text analytics and social media analytics. At the end, this paper presents our proposed work based on ontology framework to cope up with excessive social media textual data.",
"title": ""
},
{
"docid": "03d5eadaefc71b1da1b26f4e2923a082",
"text": "Sleep is characterized by a structured combination of neuronal oscillations. In the hippocampus, slow-wave sleep (SWS) is marked by high-frequency network oscillations (approximately 200 Hz \"ripples\"), whereas neocortical SWS activity is organized into low-frequency delta (1-4 Hz) and spindle (7-14 Hz) oscillations. While these types of hippocampal and cortical oscillations have been studied extensively in isolation, the relationships between them remain unknown. Here, we demonstrate the existence of temporal correlations between hippocampal ripples and cortical spindles that are also reflected in the correlated activity of single neurons within these brain structures. Spindle-ripple episodes may thus constitute an important mechanism of cortico-hippocampal communication during sleep. This coactivation of hippocampal and neocortical pathways may be important for the process of memory consolidation, during which memories are gradually translated from short-term hippocampal to longer-term neocortical stores.",
"title": ""
},
{
"docid": "c8e34c208f11c367e1f131edaa549c20",
"text": "Recently one dimensional (1-D) nanostructured metal-oxides have attracted much attention because of their potential applications in gas sensors. 1-D nanostructured metal-oxides provide high surface to volume ratio, while maintaining good chemical and thermal stabilities with minimal power consumption and low weight. In recent years, various processing routes have been developed for the synthesis of 1-D nanostructured metal-oxides such as hydrothermal, ultrasonic irradiation, electrospinning, anodization, sol-gel, molten-salt, carbothermal reduction, solid-state chemical reaction, thermal evaporation, vapor-phase transport, aerosol, RF sputtering, molecular beam epitaxy, chemical vapor deposition, gas-phase assisted nanocarving, UV lithography and dry plasma etching. A variety of sensor fabrication processing routes have also been developed. Depending on the materials, morphology and fabrication process the performance of the sensor towards a specific gas shows a varying degree of success. This article reviews and evaluates the performance of 1-D nanostructured metal-oxide gas sensors based on ZnO, SnO(2), TiO(2), In(2)O(3), WO(x), AgVO(3), CdO, MoO(3), CuO, TeO(2) and Fe(2)O(3). Advantages and disadvantages of each sensor are summarized, along with the associated sensing mechanism. Finally, the article concludes with some future directions of research.",
"title": ""
},
{
"docid": "dbdc0a429784aa085c571b7c01e3399f",
"text": "A large number of deaths are caused by Traffic accidents worldwide. The global crisis of road safety can be seen by observing the significant number of deaths and injuries that are caused by road traffic accidents. In many situations the family members or emergency services are not informed in time. This results in delayed emergency service response time, which can lead to an individual’s death or cause severe injury. The purpose of this work is to reduce the response time of emergency services in situations like traffic accidents or other emergencies such as fire, theft/robberies and medical emergencies. By utilizing onboard sensors of a smartphone to detect vehicular accidents and report it to the nearest emergency responder available and provide real time location tracking for responders and emergency victims, will drastically increase the chances of survival for emergency victims, and also help save emergency services time and resources. Keywords—Traffic accidents; accident detection; on-board sensor; accelerometer; android smartphones; real-time tracking; emergency services; emergency responder; emergency victim; SOSafe; SOSafe Go; firebase",
"title": ""
},
{
"docid": "532980d1216f9f10332cc13b6a093fb4",
"text": "Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words. Current DSMs, however, represent context words as separate features, which causes the loss of important information for word expectations, such as word order and interrelations. In this paper, we present a DSM which addresses the issue by defining verb contexts as joint dependencies. We test our representation in a verb similarity task on two datasets, showing that joint contexts are more efficient than single dependencies, even with a relatively small amount of training data.",
"title": ""
},
{
"docid": "4b8a46065520d2b7489bf0475321c726",
"text": "With computing increasingly becoming more dispersed, relying on mobile devices, distributed computing, cloud computing, etc. there is an increasing threat from adversaries obtaining physical access to some of the computer systems through theft or security breaches. With such an untrusted computing node, a key challenge is how to provide secure computing environment where we provide privacy and integrity for data and code of the application. We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multiprogrammed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. SecureME also incurs a negligible additional storage overhead over the secure processor substrate.",
"title": ""
},
{
"docid": "c0b000176bba658ef702872f0174b602",
"text": "Distributed Denial of Service (DDoS) attacks represent a major threat to uninterrupted and efficient Internet service. In this paper, we empirically evaluate several major information metrics, namely, Hartley entropy, Shannon entropy, Renyi’s entropy, generalized entropy, Kullback–Leibler divergence and generalized information distance measure in their ability to detect both low-rate and high-rate DDoS attacks. These metrics can be used to describe characteristics of network traffic data and an appropriate metric facilitates building an effective model to detect both low-rate and high-rate DDoS attacks. We use MIT Lincoln Laboratory, CAIDA and TUIDS DDoS datasets to illustrate the efficiency and effectiveness of each metric for DDoS detection. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
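The abstract above compares several information metrics; the small NumPy helpers below show how Shannon entropy, generalized (Renyi) entropy, and Kullback–Leibler divergence can be computed from an empirical distribution of a traffic feature (for example, packets per source IP). The feature choice and the toy counts are assumptions, not details from the paper.

```python
import numpy as np

def to_distribution(counts):
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi_entropy(p, alpha):
    """Generalized (Renyi) entropy of order alpha != 1."""
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# toy example: packet counts per source IP in two observation windows
normal = to_distribution([120, 110, 95, 130, 105])
attack = to_distribution([950, 20, 10, 15, 5])   # one source dominates
print(shannon_entropy(normal), shannon_entropy(attack))
print(renyi_entropy(attack, alpha=2.0))
print(kl_divergence(attack, normal))
```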
{
"docid": "37a108b2d30a08cb78321f96c1e9eca4",
"text": "The TRAM flap, DIEP flap, and gluteal free flaps are routinely used for breast reconstruction. However, these have seldom been described for reconstruction of buttock deformities. We present three cases of free flaps used to restore significant buttock contour deformities. They introduce vascularised bulky tissue and provide adequate cushioning for future sitting, as well as correction of the aesthetic defect.",
"title": ""
},
{
"docid": "e5bf5516cdd531b85f02ac258420f5ef",
"text": "Management literature is almost unanimous in suggesting to manufacturers that they should integrate services into their core product offering. The literature, however, is surprisingly sparse in describing to what extent services should be integrated, how this integration should be carried out, or in detailing the challenges inherent in the transition to services. Reports on a study of 11 capital equipment manufacturers developing service offerings for their products. Focuses on identifying the dimensions considered when creating a service organization in the context of a manufacturing ®rm, and successful strategies to navigate the transition. Analysis of qualitative data suggests that the transition involves a deliberate developmental process to build capabilities as ®rms shift the nature of the relationship with the product end-users and the focus of the service offering. The report concludes identifying implications of our ®ndings for further research and practitioners.",
"title": ""
},
{
"docid": "edfc15795f1f69d31c36f73c213d2b7d",
"text": "Three studies tested whether adopting strong (relative to weak) approach goals in relationships (i.e., goals focused on the pursuit of positive experiences in one's relationship such as fun, growth, and development) predict greater sexual desire. Study 1 was a 6-month longitudinal study with biweekly assessments of sexual desire. Studies 2 and 3 were 2-week daily experience studies with daily assessments of sexual desire. Results showed that approach relationship goals buffered against declines in sexual desire over time and predicted elevated sexual desire during daily sexual interactions. Approach sexual goals mediated the association between approach relationship goals and daily sexual desire. Individuals with strong approach goals experienced even greater desire on days with positive relationship events and experienced less of a decrease in desire on days with negative relationships events than individuals who were low in approach goals. In two of the three studies, the association between approach relationship goals and sexual desire was stronger for women than for men. Implications of these findings for maintaining sexual desire in long-term relationships are discussed.",
"title": ""
},
{
"docid": "b55d5967005d3b59063ffc4fd7eeb59a",
"text": "In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.",
"title": ""
},
{
"docid": "d3c5a15b14ab5f4a44223e7e571e412e",
"text": "− Instead of minimizing the observed training error, Support Vector Regression (SVR) attempts to minimize the generalization error bound so as to achieve generalized performance. The idea of SVR is based on the computation of a linear regression function in a high dimensional feature space where the input data are mapped via a nonlinear function. SVR has been applied in various fields – time series and financial (noisy and risky) prediction, approximation of complex engineering analyses, convex quadratic programming and choices of loss functions, etc. In this paper, an attempt has been made to review the existing theory, methods, recent developments and scopes of SVR.",
"title": ""
},
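Since the abstract above reviews support vector regression at a conceptual level, a brief scikit-learn usage sketch may help ground it; the kernel and hyperparameters below are illustrative defaults, not values recommended by the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy noisy regression problem.
rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0, 10, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# RBF-kernel SVR with the epsilon-insensitive loss; inputs are scaled first.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, y)

y_pred = model.predict(X)
print("training RMSE:", np.sqrt(np.mean((y - y_pred) ** 2)))
```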
{
"docid": "3604763dd721f4bb3f46b65556a50563",
"text": "-Information extraction encountered a new challenge while the spatial resolution is increasing quickly. People suppose that the higher the spatial resolution is, the better the result of classification is. To prove this guess we use two approaches: pixel-based classification and object-oriented classification. The former site test shows one class has different accuracy from various resolution images. Object-oriented approach is an advanced solution for image analysis. The accuracy of objectoriented approach is much higher than those of based-pixel approach. The site result shows that each class has its optimal image segmentation scale. Keywords--Feature, Scale; Resolution; Image analysis.",
"title": ""
},
{
"docid": "584645a035454682222a26870377703c",
"text": "Conventionally, the sum and difference signals of a tracking system are fixed up by sum and difference network and the network is often composed of four or more magic tees whose arms direct at four different directions, which give inconveniences to assemble. In this paper, a waveguide side-wall slot directional coupler and a double dielectric slab filled waveguide phase shifter is used to form a planar magic tee with four arms in the same H-plane. Four planar magic tees can be used to construct the W-band planar monopulse comparator. The planar magic tee is analyzed exactly with Ansoft HFSS software, and is optimized by genetic algorithm. Simulation results are presented, which show good performance.",
"title": ""
},
{
"docid": "4655dcd241aa9e543111c5c95026b365",
"text": "Received: 15 May 2002 Revised: 31 January 2003 Accepted: 18 July 2003 Abstract In this study, we developed a conceptual model for studying the adoption of electronic business (e-business or EB) at the firm level, incorporating six adoption facilitators and inhibitors, based on the technology–organization– environment theoretical framework. Survey data from 3100 businesses and 7500 consumers in eight European countries were used to test the proposed adoption model. We conducted confirmatory factor analysis to assess the reliability and validity of constructs. To examine whether adoption patterns differ across different e-business environments, we divided the full sample into high EB-intensity and low EB-intensity countries. After controlling for variations of industry and country effects, the fitted logit models demonstrated four findings: (1) Technology competence, firm scope and size, consumer readiness, and competitive pressure are significant adoption drivers, while lack of trading partner readiness is a significant adoption inhibitor. (2) As EB-intensity increases, two environmental factors – consumer readiness and lack of trading partner readiness – become less important, while competitive pressure remains significant. (3) In high EB-intensity countries, e-business is no longer a phenomenon dominated by large firms; as more and more firms engage in e-business, network effect works to the advantage of small firms. (4) Firms are more cautious in adopting e-business in high EB-intensity countries – it seems to suggest that the more informed firms are less aggressive in adopting e-business, a somehow surprising result. Explanations and implications are offered. European Journal of Information Systems (2003) 12, 251–268. doi:10.1057/ palgrave.ejis.3000475",
"title": ""
},
{
"docid": "7100b0adb93419a50bbaeb1b7e32edf5",
"text": "Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility - are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns.",
"title": ""
},
{
"docid": "ccebee3ad589322a9187c2e4539f7da7",
"text": "The present study examined lying behaviour in children between 3 and 7 years of age with two experiments. A temptation resistance paradigm was used in which children were left alone in a room with a music-playing toy placed behind their back. The children were told not to peek at the toy. Most children could not resist the temptation and peeked at the toy. When the experimenter asked them whether they had peeked, about half of the 3-year-olds confessed to their transgression, whereas most older children lied. Naṏ ve adult evaluators (undergraduate students and parents) who watched video clips of the children’s responses could not discriminate lie-tellers from nonliars on the basis of their nonverbal expressive behaviours. However, the children were poor at semantic leakage control and adults could correctly identify most of the lie-tellers based on their verbal statements made in the same context as the lie. The combined results regarding children’s verbal and nonverbal leakage control suggest that children under 8 years of age are not fully skilled lie-tellers.",
"title": ""
},
{
"docid": "7c4cb5f52509ad5a3795e9ce59980fec",
"text": "Line-of-sight stabilization against various disturbances is an essential property of gimbaled imaging systems mounted on mobile platforms. In recent years, the importance of target detection from higher distances has increased. This has raised the need for better stabilization performance. For that reason, stabilization loops are designed such that they have higher gains and larger bandwidths. As these are required for good disturbance attenuation, sufficient loop stability is also needed. However, model uncertainties around structural resonances impose strict restrictions on sufficient loop stability. Therefore, to satisfy high stabilization performance in the presence of model uncertainties, robust control methods are required. In this paper, a robust controller design in LQG/LTR, H∞ , and μ -synthesis framework is described for a two-axis gimbal. First, the performance criteria and weights are determined to minimize the stabilization error with moderate control effort under known platform disturbance profile. Second, model uncertainties are determined by considering locally linearized models at different operating points. Next, robust LQG/LTR, H∞ , and μ controllers are designed. Robust stability and performance of the three designs are investigated and compared. The paper finishes with the experimental performances to validate the designed robust controllers.",
"title": ""
}
] |
scidocsrr
|
09a069e948536bb87bf95f0eeae16142
|
Modeling Strategy for Back-to-Back Three-Level Converters Applied to High-Power Wind Turbines
|
[
{
"docid": "714641a148e9a5f02bb13d5485203d70",
"text": "The aim of this paper is to present a review of recently used current control techniques for three-phase voltagesource pulsewidth modulated converters. Various techniques, different in concept, have been described in two main groups: linear and nonlinear. The first includes proportional integral stationary and synchronous) and state feedback controllers, and predictive techniques with constant switching frequency. The second comprises bang-bang (hysteresis, delta modulation) controllers and predictive controllers with on-line optimization. New trends in the current control—neural networks and fuzzy-logicbased controllers—are discussed, as well. Selected oscillograms accompany the presentation in order to illustrate properties of the described controller groups.",
"title": ""
}
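To make the bang-bang (hysteresis) controller class mentioned in the abstract above concrete, here is a minimal single-phase simulation of hysteresis current control for an R-L load fed from a two-level bridge; all circuit parameters, the DC-link voltage, and the hysteresis band are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions).
R, L = 0.5, 2e-3          # load resistance [ohm] and inductance [H]
Vdc = 200.0               # DC-link voltage [V]
band = 0.5                # hysteresis band [A]
dt, T = 1e-6, 0.04        # simulation step and duration [s]

t = np.arange(0.0, T, dt)
i_ref = 10.0 * np.sin(2 * np.pi * 50 * t)   # 50 Hz sinusoidal current reference

i = 0.0
state = 1                 # +1 applies +Vdc, -1 applies -Vdc
i_log = np.empty_like(t)

for k, iref in enumerate(i_ref):
    err = iref - i
    # Bang-bang rule: switch only when the error leaves the hysteresis band.
    if err > band:
        state = 1
    elif err < -band:
        state = -1
    v = state * Vdc
    # Euler integration of L di/dt = v - R i
    i += dt * (v - R * i) / L
    i_log[k] = i

print("max tracking error [A]:", np.max(np.abs(i_ref - i_log)))
```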
] |
[
{
"docid": "a9768bced10c55345f116d7d07d2bc5a",
"text": "In this paper, we propose a variety of distance measures for hesitant fuzzy sets, based on which the corresponding similarity measures can be obtained. We investigate the connections of the aforementioned distance measures and further develop a number of hesitant ordered weighted distance measures and hesitant ordered weighted similarity measures. They can alleviate the influence of unduly large (or small) deviations on the aggregation results by assigning them low (or high) weights. Several numerical examples are provided to illustrate these distance and similarity measures. 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "24b5da1d25ee1afed88cbb152f35a7e8",
"text": "Whole-brain neuroimaging studies have demonstrated regional variations in function within human cingulate cortex. At the same time, regional variations in cingulate anatomical connections have been found in animal models. It has, however, been difficult to estimate the relationship between connectivity and function throughout the whole cingulate cortex within the human brain. In this study, magnetic resonance diffusion tractography was used to investigate cingulate probabilistic connectivity in the human brain with two approaches. First, an algorithm was used to search for regional variations in the probabilistic connectivity profiles of all cingulate cortex voxels with the whole of the rest of the brain. Nine subregions with distinctive connectivity profiles were identified. It was possible to characterize several distinct areas in the dorsal cingulate sulcal region. Several distinct regions were also found in subgenual and perigenual cortex. Second, the probabilities of connection between cingulate cortex and 11 predefined target regions of interest were calculated. Cingulate voxels with a high probability of connection with the different targets formed separate clusters within cingulate cortex. Distinct connectivity fingerprints characterized the likelihood of connections between the extracingulate target regions and the nine cingulate subregions. Last, a meta-analysis of 171 functional studies reporting cingulate activation was performed. Seven different cognitive conditions were selected and peak activation coordinates were plotted to create maps of functional localization within the cingulate cortex. Regional functional specialization was found to be related to regional differences in probabilistic anatomical connectivity.",
"title": ""
},
{
"docid": "c10adaa38fd3f832767daf5e0baf07f5",
"text": "Cellular senescence entails essentially irreversible replicative arrest, apoptosis resistance, and frequently acquisition of a pro-inflammatory, tissue-destructive senescence-associated secretory phenotype (SASP). Senescent cells accumulate in various tissues with aging and at sites of pathogenesis in many chronic diseases and conditions. The SASP can contribute to senescence-related inflammation, metabolic dysregulation, stem cell dysfunction, aging phenotypes, chronic diseases, geriatric syndromes, and loss of resilience. Delaying senescent cell accumulation or reducing senescent cell burden is associated with delay, prevention, or alleviation of multiple senescence-associated conditions. We used a hypothesis-driven approach to discover pro-survival Senescent Cell Anti-apoptotic Pathways (SCAPs) and, based on these SCAPs, the first senolytic agents, drugs that cause senescent cells to become susceptible to their own pro-apoptotic microenvironment. Several senolytic agents, which appear to alleviate multiple senescence-related phenotypes in pre-clinical models, are beginning the process of being translated into clinical interventions that could be transformative.",
"title": ""
},
{
"docid": "bbeb6f28ae02876dcce8a4cf205b6194",
"text": "We propose the design of a programming language for quantum computing. Traditionally, quantum algorithms are frequently expressed at the hardware level, for instance in terms of the quantum circuit model or quantum Turing machines. These approaches do not encourage structured programming or abstractions such as data types. In this paper, we describe the syntax and semantics of a simple quantum programming language with high-level features such as loops, recursive procedures, and structured data types. The language is functional in nature, statically typed, free of run-time errors, and it has an interesting denotational semantics in terms of complete partial orders of superoperators.",
"title": ""
},
{
"docid": "e13fc2c9f5aafc6c8eb1909592c07a70",
"text": "We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to subsampling a neural network by dropping units. Training with DropOut, a randomly selected subset of activations are dropped, when training with DropConnect we drop a randomly subsets of weights. With DropAll we can perform both methods. We show the validity of our proposal by improving the classification error of networks trained with DropOut and DropConnect, on a common image classification dataset. To improve the classification, we also used a new method for combining networks, which was proposed in [3].",
"title": ""
},
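Since DropAll combines DropOut (dropping activations) with DropConnect (dropping weights), a toy NumPy forward pass showing both masks on a single fully-connected layer is sketched below; the layer sizes and drop probabilities are arbitrary assumptions, and the test-time scaling is the usual expectation approximation rather than anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropall_forward(x, W, b, p_unit=0.5, p_weight=0.5, training=True):
    """Fully-connected layer with DropOut on inputs and DropConnect on weights."""
    if not training:
        # At test time, scale by the keep probabilities instead of sampling masks.
        return (x @ (W * (1 - p_weight))) * (1 - p_unit) + b
    unit_mask = rng.random(x.shape) >= p_unit        # DropOut: drop activations
    weight_mask = rng.random(W.shape) >= p_weight    # DropConnect: drop weights
    return (x * unit_mask) @ (W * weight_mask) + b

# toy usage
x = rng.standard_normal((4, 128))        # batch of 4 feature vectors
W = rng.standard_normal((128, 10)) * 0.01
b = np.zeros(10)
out = dropall_forward(x, W, b)
print(out.shape)   # (4, 10)
```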
{
"docid": "2e5800ac4d65ac6556dd5c1be22fd6bf",
"text": "The issues of cyberbullying and online harassment have gained considerable coverage in the last number of years. Social media providers need to be able to detect abusive content both accurately and efficiently in order to protect their users. Our aim is to investigate the application of core text mining techniques for the automatic detection of abusive content across a range of social media sources include blogs, forums, media-sharing, Q&A and chat using datasets from Twitter, YouTube, MySpace, Kongregate, Formspring and Slashdot. Using supervised machine learning, we compare alternative text representations and dimension reduction approaches, including feature selection and feature enhancement, demonstrating the impact of these techniques on detection accuracies. In addition, we investigate the need for sampling on imbalanced datasets. Our conclusions are: (1) Dataset balancing boosts accuracies significantly for social media abusive content detection; (2) Feature reduction, important for large feature sets that are typical of social media datasets, improves efficiency whilst maintaining detection accuracies; (3) The use of generic structural features common across all our datasets proved to be of limited use in the automatic detection of abusive content. Our findings can support practitioners in selecting appropriate text mining strategies in this area.",
"title": ""
},
{
"docid": "76715b342c0b0a475ba6db06a0345c7b",
"text": "Generalized linear mixed models are a widely used tool for modeling longitudinal data. However , their use is typically restricted to few covariates, because the presence of many predictors yields unstable estimates. The presented approach to the fitting of generalized linear mixed models includes an L 1-penalty term that enforces variable selection and shrinkage simultaneously. A gradient ascent algorithm is proposed that allows to maximize the penalized log-likelihood yielding models with reduced complexity. In contrast to common procedures it can be used in high-dimensional settings where a large number of potentially influential explanatory variables is available. The method is investigated in simulation studies and illustrated by use of real data sets.",
"title": ""
},
{
"docid": "cd95374d85611eb5bff9de37fdc763b3",
"text": "In this paper, we focus on multiple-choice reading comprehension which aims to answer a question given a passage and multiple candidate options. We present the hierarchical attention flow to adequately leverage candidate options to model the interactions among passages, questions and candidate options. We observe that leveraging candidate options to boost evidence gathering from the passages play a vital role in this task, which is ignored in previous works. In addition, we explicitly model the option correlations with attention mechanism to obtain better option representations, which are further fed into a bilinear layer to obtain the ranking score for each option. On a large-scale multiple-choice reading comprehension dataset (i.e. the RACE dataset), the proposed model outperforms two previous neural network baselines on both RACE-M and RACE-H subsets and yields the state-of-the-art overall results.",
"title": ""
},
{
"docid": "078cdfda16742c6a2cad8867ddaf8419",
"text": "With the development of mobile Internet, various mobile applications have become increasingly popular. Many people are being benefited from the mobile healthcare services. Compared with the traditional healthcare services, patients’ medical behavior trajectories can be recorded by mobile healthcare services meticulously. They monitor the entire healthcare services process and help to improve the quality and standardization of healthcare services. By tracking and analyzing the patients’ medical records, they provide real-time protection for the patients’ healthcare activities. Therefore, medical fraud can be avoided and the loss of public health funds can be reduced. Although mobile healthcare services can provide a large amount of timely data, an effective real-time online algorithm is needed due to the timeliness of detecting the medical insurance fraud claims. However, because of the complex granularity of medical data, existing fraud detection approaches tend to be less effective in terms of monitoring the healthcare services process. In this paper, we propose an approach to deal with these problems. By means of the proposed SSIsomap activity clustering method, SimLOF outlier detection method, and the Dempster–Shafer theory-based evidence aggregation method, our approach is able to detect unusual categories and frequencies of behaviors simultaneously. Our approach is applied to a real-world data set containing more than 40 million medical insurance claim activities from over 40 000 users. Compared with two state-of-the-art approaches, the extensive experimental results show that our approach is significantly more effective and efficient. Our approach agent which provides decision support for the approval sender during the medical insurance claim approval process is undergoing trial in mobile healthcare services.",
"title": ""
},
{
"docid": "113373d6a9936e192e5c3ad016146777",
"text": "This paper examines published data to develop a model for detecting factors associated with false financia l statements (FFS). Most false financial statements in Greece can be identified on the basis of the quantity and content of the qualification s in the reports filed by the auditors on the accounts. A sample of a total of 76 firms includes 38 with FFS and 38 non-FFS. Ten financial variables are selected for examination as potential predictors of FFS. Univariate and multivariate statistica l techniques such as logistic regression are used to develop a model to identify factors associated with FFS. The model is accurate in classifying the total sample correctly with accuracy rates exceeding 84 per cent. The results therefore demonstrate that the models function effectively in detecting FFS and could be of assistance to auditors, both internal and external, to taxation and other state authorities and to the banking system. the empirical results and discussion obtained using univariate tests and multivariate logistic regression analysis. Finally, in the fifth section come the concluding remarks.",
"title": ""
},
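Because the study above builds a logistic-regression classifier over financial ratios, a condensed scikit-learn sketch is included here; the ratio names and the synthetic data are placeholders standing in for the paper's ten financial variables and its real sample.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: each row is a firm, columns are financial ratios.
rng = np.random.default_rng(7)
n_firms = 76
X = pd.DataFrame({
    "debt_to_equity":        rng.normal(1.0, 0.5, n_firms),
    "inventories_to_sales":  rng.normal(0.3, 0.1, n_firms),
    "net_profit_to_assets":  rng.normal(0.05, 0.04, n_firms),
    "working_capital_ratio": rng.normal(1.5, 0.6, n_firms),
})
y = np.array([1] * 38 + [0] * 38)   # 1 = false financial statement, 0 = non-FFS

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.mean())

clf.fit(X, y)
print(dict(zip(X.columns, clf.coef_[0])))   # sign/size of each ratio's effect
```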
{
"docid": "f5cb684cfff16812bafd83286a51b71f",
"text": "OBJECTIVES\nTo assess the factors, motivations, and nonacademic influences that affected the choice of major among pharmacy and nonpharmacy undergraduate students.\n\n\nMETHODS\nA survey was administered to 618 pharmacy and nonpharmacy majors to assess background and motivational factors that may have influenced their choice of major. The sample consisted of freshman and sophomore students enrolled in a required speech course.\n\n\nRESULTS\nAfrican-American and Hispanic students were less likely to choose pharmacy as a major than Caucasians, whereas Asian-Americans were more likely to choose pharmacy as a major. Pharmacy students were more likely to be interested in science and math than nonpharmacy students.\n\n\nCONCLUSION\nStudents' self-reported racial/ethnic backgrounds influence their decision of whether to choose pharmacy as their academic major. Results of this survey provide further insight into developing effective recruiting strategies and enhancing the marketing efforts of academic institutions.",
"title": ""
},
{
"docid": "4cc9083bd050969933367166c2245b05",
"text": "Emotion regulation involves the pursuit of desired emotional states (i.e., emotion goals) in the service of superordinate motives. The nature and consequences of emotion regulation, therefore, are likely to depend on the motives it is intended to serve. Nonetheless, limited attention has been devoted to studying what motivates emotion regulation. By mapping the potential benefits of emotion to key human motives, this review identifies key classes of motives in emotion regulation. The proposed taxonomy distinguishes between hedonic motives that target the immediate phenomenology of emotions, and instrumental motives that target other potential benefits of emotions. Instrumental motives include behavioral, epistemic, social, and eudaimonic motives. The proposed taxonomy offers important implications for understanding the mechanism of emotion regulation, variation across individuals and contexts, and psychological function and dysfunction, and points to novel research directions.",
"title": ""
},
{
"docid": "04d286949838098a480e532001117013",
"text": "We propose Stegobot, a new generation botnet that communicates over probabilistically unobservable communication channels. It is designed to spread via social malware attacks and steal information from its victims. Unlike conventional botnets, Stegobot traffic does not introduce new communication endpoints between bots. Instead, it is based on a model of covert communication over a social-network overlay – bot to botmaster communication takes place along the edges of a social network. Further, bots use image steganography to hide the presence of communication within image sharing behavior of user interaction. We show that it is possible to design such a botnet even with a less than optimal routing mechanism such as restricted flooding. We analyzed a real-world dataset of image sharing between members of an online social network. Analysis of Stegobot’s network throughput indicates that stealthy as it is, it is also functionally powerful – capable of channeling fair quantities of sensitive data from its victims to the botmaster at tens of megabytes every month",
"title": ""
},
{
"docid": "e6811f54a04a47a56b2ee77cc4895258",
"text": "This study draws on four waves of the 1997 National Longitudinal Survey of Youth and external data to examine the relationship between adolescent body mass index (BMI) and fast food prices and fast food restaurant availability using panel data estimation methods to account for individual-level unobserved heterogeneity. Analyses also control for contextual factors including general food prices and the availability of full-service restaurants, supermarkets, grocery stores, convenience stores and commercial physical activity-related facilities. The longitudinal individual-level fixed effects results confirm cross-sectional findings that the price of fast food but not the availability of fast food restaurants has a statistically significant effect on teen BMI with an estimated price elasticity of -0.08. The results suggest that the cross-sectional model over-estimates the price of fast food BMI effect by about 25%. There is evidence that the weight of teens in low- to middle-socioeconomic status families is most sensitive to fast food prices.",
"title": ""
},
{
"docid": "27fb2d589c7296a8b7f11c81fd93e8bf",
"text": "Coarse-to-Fine Natural Language Processing",
"title": ""
},
{
"docid": "5fe5cfd499144d07bff394d41d9ef713",
"text": "Securing the sensitive data stored and accessed from mobile devices makes user authentication a problem of paramount importance. The tension between security and usability renders however the task of user authentication on mobile devices a challenging task. This paper introduces FAST (Fingergestures Authentication System using Touchscreen), a novel touchscreen based authentication approach on mobile devices. Besides extracting touch data from touchscreen equipped smartphones, FAST complements and validates this data using a digital sensor glove that we have built using off-the-shelf components. FAST leverages state-of-the-art classification algorithms to provide transparent and continuous mobile system protection. A notable feature is FAST 's continuous, user transparent post-login authentication. We use touch data collected from 40 users to show that FAST achieves a False Accept Rate (FAR) of 4.66% and False Reject Rate of 0.13% for the continuous post-login user authentication. The low FAR and FRR values indicate that FAST provides excellent post-login access security, without disturbing the honest mobile users.",
"title": ""
},
{
"docid": "0c6c5fe1e81451ee5a7b4c7c4a37d423",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.03.028 ⇑ Corresponding author. Tel./fax: +98 2182883637. E-mail addresses: ar_hassanzadeh@modares.ac.ir com (A. Hassanzadeh), ftmh_kanani@yahoo.com (F. K (S. Elahi). 1 Measuring e-learning systems success. In the era of internet, universities and higher education institutions are increasingly tend to provide e-learning. For suitable planning and more enjoying the benefits of this educational approach, a model for measuring success of e-learning systems is essential. So in this paper, we try to survey and present a model for measuring success of e-learning systems in universities. For this purpose, at first, according to literature review, a conceptual model was designed. Then, based on opinions of 33 experts, and assessing their suggestions, research indicators were finalized. After that, to examine the relationships between components and finalize the proposed model, a case study was done in 5 universities: Amir Kabir University, Tehran University, Shahid Beheshti University, Iran University of Science & Technology and Khaje Nasir Toosi University of Technology. Finally, by analyzing questionnaires completed by 369 instructors, students and alumni, which were e-learning systems user, the final model (MELSS Model). 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.",
"title": ""
},
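For the feature-extraction step described above, a short sketch using the mahotas implementation of Zernike moments combined with a simple nearest-neighbour classifier follows. The radius, degree, thresholding rule and neighbour count are assumptions, and the distance-measure/PNN classifiers from the paper are replaced here by k-NN for brevity.

```python
import numpy as np
import mahotas
from sklearn.neighbors import KNeighborsClassifier

def zernike_features(gray_image, radius=64, degree=8):
    """Zernike moments of a crudely binarized leaf image."""
    binary = (gray_image > gray_image.mean()).astype(np.float64)
    return mahotas.features.zernike_moments(binary, radius, degree=degree)

def build_dataset(images, labels):
    """Stack per-image feature vectors; images/labels are assumed preloaded."""
    X = np.array([zernike_features(im) for im in images])
    y = np.array(labels)
    return X, y

# Usage sketch (train_images, train_labels, test_images assumed available):
# X_train, y_train = build_dataset(train_images, train_labels)
# clf = KNeighborsClassifier(n_neighbors=1)   # distance-based identification
# clf.fit(X_train, y_train)
# print(clf.predict(np.array([zernike_features(im) for im in test_images])))
```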
{
"docid": "50f7b9b21f6006b9e0976b8bf56f0fc3",
"text": "Based on the characteristics of wheeled, tracked and legged movements, a variable parallelogram tracked mobile robot(VPTMR) is proposed and developed to enhance its adaptability and stability in the complex environment. This VPTMR robot consists of two variable parallelogram structures, which are composed of one main tracked arm, two lower tracked arms and a chasis. The variable parallelogram structure is actuated by a DC motor. And another DC motor actuates the track rotation, which enables VPTMR robot to move in wheeled, tracked and legged mode that makes the robot to adapt to all rugged environments. The prototype(VPTMR) is developed to verify its performance on environmental adaptability, obstacle crossing ability and stability.",
"title": ""
},
{
"docid": "e9f28e9bfb0a14a0401ee90dbb2f6894",
"text": "Article history: Received 25 May 2017 Received in revised form 25 July 2017 Accepted 26 July 2017 Available online 1 August 2017",
"title": ""
}
] |
scidocsrr
|
07a79f9f049c1dcd19e12da64793d2c5
|
Occupancy Networks: Learning 3D Reconstruction in Function Space
|
[
{
"docid": "0b50c7a9aba87d9d265fa92f6033701e",
"text": "We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.",
"title": ""
},
{
"docid": "f7a1ecc5bb377961737c37de02953cf1",
"text": "Surface reconstruction from a point cloud is a standard subproblem in many algorithms for dense 3D reconstruction from RGB images or depth maps. Methods, performing only local operations in the vicinity of individual points, are very fast, but reconstructed models typically contain lots of holes. On the other hand, regularized volumetric approaches, formulated as a global optimization, are typically too slow for real-time interactive applications. We propose to use a regression forest based method, which predicts the projection of a grid point to the surface, depending on the spatial configuration of point density in the grid point neighborhood. We designed a suitable feature vector and efficient oct-tree based GPU evaluation, capable of predicting surface of high resolution 3D models in milliseconds. Our method learns and predicts surfaces from an observed point cloud sparser than the evaluation grid, and therefore effectively acts as a regularizer.",
"title": ""
},
{
"docid": "b70716877c23701d0897ab4a42a5beba",
"text": "We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.",
"title": ""
},
{
"docid": "60586a519a51cef22cbafe512f1025dd",
"text": "A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision. A Gaussian process (GP), on the other hand, is a probabilistic model that defines a distribution over possible functions, and is updated in light of data via the rules of probabilistic inference. GPs are probabilistic, data-efficient and flexible, however they are also computationally intensive and thus limited in their applicability. We introduce a class of neural latent variable models which we call Neural Processes (NPs), combining the best of both worlds. Like GPs, NPs define distributions over functions, are capable of rapid adaptation to new observations, and can estimate the uncertainty in their predictions. Like NNs, NPs are computationally efficient during training and evaluation but also learn to adapt their priors to data. We demonstrate the performance of NPs on a range of learning tasks, including regression and optimisation, and compare and contrast with related models in the literature.",
"title": ""
}
] |
[
{
"docid": "8e520ad94c7555b9bb1546786b532adb",
"text": "We propose Machines Talking To Machines (M2M), a framework combining automation and crowdsourcing to rapidly bootstrap endto-end dialogue agents for goal-oriented dialogues in arbitrary domains. M2M scales to new tasks with just a task schema and an API client from the dialogue system developer, but it is also customizable to cater to task-specific interactions. Compared to the Wizard-of-Oz approach for data collection, M2M achieves greater diversity and coverage of salient dialogue flows while maintaining the naturalness of individual utterances. In the first phase, a simulated user bot and a domain-agnostic system bot converse to exhaustively generate dialogue “outlines”, i.e. sequences of template utterances and their semantic parses. In the second phase, crowd workers provide contextual rewrites of the dialogues to make the utterances more natural while preserving their meaning. The entire process can finish within a few hours. We propose a new corpus of 3,000 dialogues spanning 2 domains collected with M2M, and present comparisons with popular dialogue datasets on the quality and diversity of the surface forms and dialogue flows.",
"title": ""
},
{
"docid": "4e7106a78dcf6995090669b9a25c9551",
"text": "In this paper partial discharges (PD) in disc-shaped cavities in polycarbonate are measured at variable frequency (0.01-100 Hz) of the applied voltage. The advantage of PD measurements at variable frequency is that more information about the insulation system may be extracted than from traditional PD measurements at a single frequency (usually 50/60 Hz). The PD activity in the cavity is seen to depend on the applied frequency. Moreover, the PD frequency dependence changes with the applied voltage amplitude, the cavity diameter, and the cavity location (insulated or electrode bounded). It is suggested that the PD frequency dependence is governed by the statistical time lag of PD and the surface charge decay in the cavity. This is the first of two papers addressing the frequency dependence of PD in a cavity. In the second paper a physical model of PD in a cavity at variable applied frequency is presented.",
"title": ""
},
{
"docid": "982253c9f0c05e50a070a0b2e762abd7",
"text": "In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real-world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (ex. serifs and ears) as well as the textual stylization (ex. color gradients and effects.) We base our experiments on our collected data set including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.",
"title": ""
},
{
"docid": "15ab826ce8bfbd6eb975a6a922fef477",
"text": "Although industrial automation and robots develop rapidly in the era of “Industry 4.0”, the increasing integration of manufacturing processes and the strengthening of the autonomous capabilities of manufacturing systems make investigating the role of humans a primary research objective. This study is among the first to examine the impact of industrial wearable system in industry 4.0. Industrial Wearable System (IWS) is defined as human empowering technology that fit the operators' cognitive and physical needs, while improving human physical-, sensing- and cognitive-capabilities using new generation of industrial infromatics. Three research objectives are as examined, including (1) to illustrate the specific manufacturing and logistics application scenarios enabled by IWS; (2) to construct human-centric IWS technical framework for industry 4.0; and (3) to verify the presented framework and technology via a real-life case study.",
"title": ""
},
{
"docid": "62fea7d8dcdb999ec87c607c47e2d015",
"text": "The role of workplace supervisors in the clinical education of medical students is currently under debate. However, few studies have addressed how supervisors conceptualize workplace learning and how conceptions relate to current sociocultural workplace learning theory. We explored physician conceptions of: (a) medical student learning in the clinical workplace and (b) how they contribute to student learning. The methodology included a combination of a qualitative, inductive (conventional) and deductive (directed) content analysis approach. The study triangulated two types of interview data from 4 focus group interviews and 34 individual interviews. A total of 55 physicians participated. Three overarching themes emerged from the data: learning as membership, learning as partnership and learning as ownership. The themes described how physician conceptions of learning and supervision were guided by the notions of learning-as-participation and learning-as-acquisition. The clinical workplace was either conceptualized as a context in which student learning is based on a learning curriculum, continuity of participation and partnerships with supervisors, or as a temporary source of knowledge within a teaching curriculum. The process of learning was shaped through the reciprocity between different factors in the workplace context and the agency of students and supervising physicians. A systems-thinking approach merged with the \"co-participation\" conceptual framework advocated by Billet proved to be useful for analyzing variations in conceptions. The findings suggest that mapping workplace supervisor conceptions of learning can be a valuable starting point for medical schools and educational developers working with changes in clinical educational and faculty development practices.",
"title": ""
},
{
"docid": "78d1a0f7a66d3533b1a00d865eeb6abd",
"text": "Motivated by a real-life problem of sharing social network data that contain sensitive personal information, we propose a novel approach to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network while maintaining the validity of statistical results. A case study using a version of the Enron e-mail corpus dataset demonstrates the application and usefulness of the proposed techniques in solving the challenging problem of maintaining privacy and supporting open access to network data to ensure reproducibility of existing studies and discovering new scientific insights that can be obtained by analyzing such data. We use a simple yet effective randomized response mechanism to generate synthetic networks under -edge differential privacy, and then use likelihood based inference for missing data and Markov chain Monte Carlo techniques to fit exponential-family random graph models to the generated synthetic networks.",
"title": ""
},
{
"docid": "2cb1c713b8e75e7f2e38be90c1b5a9e6",
"text": "Frequent action video game players often outperform non-gamers on measures of perception and cognition, and some studies find that video game practice enhances those abilities. The possibility that video game training transfers broadly to other aspects of cognition is exciting because training on one task rarely improves performance on others. At first glance, the cumulative evidence suggests a strong relationship between gaming experience and other cognitive abilities, but methodological shortcomings call that conclusion into question. We discuss these pitfalls, identify how existing studies succeed or fail in overcoming them, and provide guidelines for more definitive tests of the effects of gaming on cognition.",
"title": ""
},
{
"docid": "f93b9c9bc2fbaf05c12d47440dfd9f06",
"text": "A patent-pending, energy-based method is presented for controlling a haptic interface system to ensure stable contact under a wide variety of operating conditions. System stability is analyzed in terms of the time-domain definition of passivity. We define a “Passivity Observer” (PO) which measures energy flow in and out of one or more subsystems in real-time software. Active behavior is indicated by a negative value of the PO at any time. We also define the “Passivity Controller” (PC), an adaptive dissipative element which, at each time sample, absorbs exactly the net energy output (if any) measured by the PO. The method is tested with simulation and implementation in the Excalibur haptic interface system. Totally stable operation was achieved under conditions such as stiffness 100 N/mm or time delays of 15 ms. The PO/PC method requires very little additional computation and does not require a dynamical model to be identified.",
"title": ""
},
{
"docid": "6514ddb39c465a8ca207e24e60071e7f",
"text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.",
"title": ""
},
{
"docid": "56205e79e706e05957cb5081d6a8348a",
"text": "Corpus-based set expansion (i.e., finding the “complete” set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search. To discover new entities in an expanded set, previous approaches either make one-time entity ranking based on distributional similarity, or resort to iterative pattern-based bootstrapping. The core challenge for these methods is how to deal with noisy context features derived from free-text corpora, which may lead to entity intrusion and semantic drifting. In this study, we propose a novel framework, SetExpan, which tackles this problem, with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding entity set based on denoised context features. Experiments on three datasets show that SetExpan is robust and outperforms previous state-of-the-art methods in terms of mean average precision.",
"title": ""
},
{
"docid": "5f606838b7158075a4b13871c5b6ec89",
"text": "The sentence is a standard textual unit in natural language processing applications. In many languages the punctuation mark that indicates the end-of-sentence boundary is ambiguous; thus the tokenizers of most NLP systems must be equipped with special sentence boundary recognition rules for every new text collection. As an alternative, this article presents an efficient, trainable system for sentence boundary disambiguation. The system, called Satz, makes simple estimates of the parts of speech of the tokens immediately preceding and following each punctuation mark, and uses these estimates as input to a machine learning algorithm that then classifies the punctuation mark. Satz is very fast both in training and sentence analysis, and its combined robustness and accuracy surpass existing techniques. The system needs only a small lexicon and training corpus, and has been shown to transfer quickly and easily from English to other languages, as demonstrated on French and German.",
"title": ""
},
{
"docid": "3886cc26572b2d82c23790ad52342222",
"text": "This paper presents a quantitative human performance model of making single-stroke pen gestures within certain error constraints in terms of production time. Computed from the properties of Curves, Line segments, and Corners (CLC) in a gesture stroke, the model may serve as a foundation for the design and evaluation of existing and future gesture-based user interfaces at the basic motor control efficiency level, similar to the role of previous \"laws of action\" played to pointing, crossing or steering-based user interfaces. We report and discuss our experimental results on establishing and validating the CLC model, together with other basic empirical findings in stroke gesture production.",
"title": ""
},
{
"docid": "ebeed0f16727adff1d6611ba4f48dde1",
"text": "The research reported here integrates computational, visual and cartographic methods to develop a geovisual analytic approach for exploring and understanding spatio-temporal and multivariate patterns. The developed methodology and tools can help analysts investigate complex patterns across multivariate, spatial and temporal dimensions via clustering, sorting and visualization. Specifically, the approach involves a self-organizing map, a parallel coordinate plot, several forms of reorderable matrices (including several ordering methods), a geographic small multiple display and a 2-dimensional cartographic color design method. The coupling among these methods leverages their independent strengths and facilitates a visual exploration of patterns that are difficult to discover otherwise. The visualization system we developed supports overview of complex patterns and through a variety of interactions, enables users to focus on specific patterns and examine detailed views. We demonstrate the system with an application to the IEEE InfoVis 2005 contest data set, which contains time-varying, geographically referenced and multivariate data for technology companies in the US",
"title": ""
},
{
"docid": "aa27594e463206033e75c94dc28d4524",
"text": "It is widely assumed among psychologists that people spontaneously form trustworthiness impressions of newly encountered people from their facial appearance. However, most existing studies directly or indirectly induced an impression formation goal, which means that the existing empirical support for spontaneous facial trustworthiness impressions remains insufficient. In particular, it remains an open question whether trustworthiness from facial appearance is encoded in memory. Using the 'who said what' paradigm, we indirectly measured to what extent people encoded the trustworthiness of observed faces. The results of 4 studies demonstrated a reliable tendency toward trustworthiness encoding. This was shown under conditions of varying context-relevance, and salience of trustworthiness. Moreover, evidence for this tendency was obtained using both (experimentally controlled) artificial and (naturalistic varying) real faces. Taken together, these results suggest that there is a spontaneous tendency to form relatively stable trustworthiness impressions from facial appearance, which is relatively independent of the context. As such, our results further underline how widespread influences of facial trustworthiness may be in our everyday life. (PsycINFO Database Record",
"title": ""
},
{
"docid": "4a5ee2e22999f2353e055550f9b4f0c5",
"text": "As the popularity of software-defined networks (SDN) and OpenFlow increases, policy-driven network management has received more attention. Manual configuration of multiple devices is being replaced by an automated approach where a software-based, network-aware controller handles the configuration of all network devices. Software applications running on top of the network controller provide an abstraction of the topology and facilitate the task of operating the network. We propose OpenSec, an OpenFlow-based security framework that allows a network security operator to create and implement security policies written in human-readable language. Using OpenSec, the user can describe a flow in terms of OpenFlow matching fields, define which security services must be applied to that flow (deep packet inspection, intrusion detection, spam detection, etc.) and specify security levels that define how OpenSec reacts if malicious traffic is detected. In this paper, we first provide a more detailed explanation of how OpenSec converts security policies into a series of OpenFlow messages needed to implement such a policy. Second, we describe how the framework automatically reacts to security alerts as specified by the policies. Third, we perform additional experiments on the GENI testbed to evaluate the scalability of the proposed framework using existing datasets of campus networks. Our results show that up to 95% of attacks in an existing data set can be detected and 99% of malicious source nodes can be blocked automatically. Furthermore, we show that our policy specification language is simpler while offering fast translation times compared to existing solutions.",
"title": ""
},
{
"docid": "d7c76b27ca090ad9a7dcbd808a30910e",
"text": "Character recognition provides a solution for processing large volume of data automatically. The purpose of the present work is to recognize different forms of printed Arabic characters written in three different fonts (Times new roman, Arial and Tahoma) using back-propagation neural network. This work was tested on a sample of printed character and the correct average recognition rate was 97%.",
"title": ""
},
{
"docid": "eae289c213d5b67d91bb0f461edae7af",
"text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impacts has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China",
"title": ""
},
{
"docid": "4547c9240418ebd0d8c6c0018b806bc0",
"text": "Although the mammillary bodies were among the first brain regions to be implicated in amnesia, the functional importance of this structure for memory has been questioned over the intervening years. Recent patient studies have, however, re-established the mammillary bodies, and their projections to the anterior thalamus via the mammillothalamic tract, as being crucial for recollective memory. Complementary animal research has also made substantial advances in recent years by determining the electrophysiological, neurochemical, anatomical and functional properties of the mammillary bodies. Mammillary body and mammillothalamic tract lesions in rats impair performance on a number of spatial memory tasks and these deficits are consistent with impoverished spatial encoding. The mammillary bodies have traditionally been considered a hippocampal relay which is consistent with the equivalent deficits seen following lesions of the mammillary bodies or their major efferents, the mammillothalamic tract. However, recent findings suggest that the mammillary bodies may have a role in memory that is independent of their hippocampal formation afferents; instead, the ventral tegmental nucleus of Gudden could be providing critical mammillary body inputs needed to support mnemonic processes. Finally, it is now apparent that the medial and lateral mammillary nuclei should be considered separately and initial research indicates that the medial mammillary nucleus is predominantly responsible for the spatial memory deficits following mammillary body lesions in rats.",
"title": ""
},
{
"docid": "97cc1bbb077bb11613299b0c829eee39",
"text": "Field Programmable Gate Array (FPGA) implementations of sorting algorithms have proven to be efficient, but existing implementations lack portability and maintainability because they are written in low-level hardware description languages that require substantial domain expertise to develop and maintain. To address this problem, we develop a framework that generates sorting architectures for different requirements (speed, area, power, etc.). Our framework provides ten highly optimized basic sorting architectures, easily composes basic architectures to generate hybrid sorting architectures, enables non-hardware experts to quickly design efficient hardware sorters, and facilitates the development of customized heterogeneous FPGA/CPU sorting systems. Experimental results show that our framework generates architectures that perform at least as well as existing RTL implementations for arrays smaller than 16K elements, and are comparable to RTL implementations for sorting larger arrays. We demonstrate a prototype of an end-to-end system using our sorting architectures for large arrays (16K-130K) on a heterogeneous FPGA/CPU system.",
"title": ""
},
{
"docid": "8e03f4410676fb4285596960880263e9",
"text": "Fuzzy computing (FC) has made a great impact in capturing human domain knowledge and modeling non-linear mapping of input-output space. In this paper, we describe the design and implementation of FC systems for detection of money laundering behaviors in financial transactions and monitoring of distributed storage system load. Our objective is to demonstrate the power of FC for real-world applications which are characterized by imprecise, uncertain data, and incomplete domain knowledge. For both applications, we designed fuzzy rules based on experts’ domain knowledge, depending on money laundering scenarios in transactions or the “health” of a distributed storage system. In addition, we developped a generic fuzzy inference engine and contributed to the open source community.",
"title": ""
}
] |
scidocsrr
|
01e6aa6925ad6ae330577b7731256ad8
|
Recurrent and Contextual Models for Visual Question Answering
|
[
{
"docid": "8b998b9f8ea6cfe5f80a5b3a1b87f807",
"text": "We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo1, and open-source code2.",
"title": ""
},
{
"docid": "0a625d5f0164f7ed987a96510c1b6092",
"text": "We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method maps textual queries and visual features from various regions into a shared space where they are compared for relevance with an inner product. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. Our model is tested on the recently released VQA [1] dataset, which features free-form human-annotated questions and answers.",
"title": ""
}
] |
[
{
"docid": "eb761eb499b2dc82f7f2a8a8a5ff64a7",
"text": "We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (dmc) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second dmc). Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R, and the equivocation d of the data as seen by the wire-tapper. In this paper, we find the trade-off curve between R and d, assuming essentially perfect (“error-free”) transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider that the transmission is accomplished in perfect secrecy. Our results imply that there exists a Cs > 0, such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.",
"title": ""
},
{
"docid": "4f5c37ec7c2e926126a100a10cccf40e",
"text": "Prior work shows that setting limits on young children's screen time is conducive to healthy development but can be a challenge for families. We investigate children's (age 1 - 5) transitions to and from screen-based activities to understand the boundaries families have set and their experiences living within them. We report on interviews with 27 parents and a diary study with a separate 28 families examining these transitions. These families turn on screens primarily to facilitate parents' independent activities. Parents feel this is appropriate but self-audit and express hesitation, as they feel they are benefiting from an activity that can be detrimental to their child's well-being. We found that families turn off screens when parents are ready to give their child their full attention and technology presents a natural stopping point. Transitioning away from screens is often painful, and predictive factors determine the pain of a transition. Technology-mediated transitions are significantly more successful than parent-mediated transitions, suggesting that the design community has the power to make this experience better for parents and children by creating technologies that facilitate boundary-setting and respect families' self-defined limits.",
"title": ""
},
{
"docid": "946cc2c21b8744e7d59d071cd475d416",
"text": "This paper features a broad discussion on the application of enhanced heat transfer surfaces to compact heat exchangers. The motivation for heat transfer enhancement is discussed, and the principles behind compact heat exchangers are summarized. Next, various methods for evaluating and comparing different types of heat transfer enhancement devices using ftrst and/or second law analysis are presented. Finally, the following plate-fm enhancement geometries are discussed: rectangular and triangular plain ftns, offset strip ftns, louvered fms, and vortex generators. MOTIVATION FOR HEAT TRANSFER ENHANCEMENT For well over a century, efforts have been made to produce more efficient heat exchangers by employing various methods of heat transfer enhancement. The study of enhanced heat transfer has gained serious momentum during recent years, however, due to increased demands by industry for heat exchange equipment that is less expensive to build and operate than standard heat exchange devices. Savings in materials and energy use also provide strong motivation for the development of improved methods of enhancement. When designing cooling systems for automobiles and spacecraft, it is imperative that the heat exchangers are especially compact and lightweight. Also, enhancement devices are necessary for the high heat duty exchangers found in power plants (i. e. air-cooled condensers, nuclear fuel rods). These applications, as well as numerous others, have led to the development of various enhanced heat transfer surfaces. In general, enhanced heat transfer surfaces can be used for three purposes: (1) to make heat exchangers more compact in order to reduce their overall volume, and possibly their cost, (2) to reduce the pumping power required for a given heat transfer process, or (3) to increase the overall UA value of the heat exchanger. A higher UA value can be exploited in either of two ways: (1) to obtain an increased heat exchange rate for ftxed fluid inlet temperatures, or (2) to reduce the mean temperature difference for the heat exchange; this increases the thermodynamic process efficiency, which can result in a saving of operating costs. Enhancement techniques can be separated into two categories: passive and active. Passive methods require no direct application of external power. Instead, passive techniques employ special surface geometries or fluid additives which cause heat transfer enhancement. On the other hand, active schemes such as electromagnetic ftelds and surface vibration do require external power for operation [1]. The majority of commercially interesting enhancement techniques are passive ones. Active techniques have attracted little commercial interest because of the costs involved, and the problems that are associated with vibration or acoustic noise [2]. This paper deals only with gas-side heat transfer enhancement using special surface geometries.",
"title": ""
},
{
"docid": "5208762a8142de095c21824b0a395b52",
"text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and the portability of these units pose promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. Dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Though, there have been many mathematical models presented in the literature and proved to be successful, dynamic simulation of these systems are still very exhaustive and time consuming as they do not behave according to specific mathematical models or functions. The charging and discharging of battery functions are a combination of exponential and non-linear nature. The aim of this research paper is to present a suitable convenient, dynamic battery model that can be used to model a general BS system. Proposed model is a new modified dynamic Lead-Acid battery model considering the effect of temperature and cyclic charging and discharging effects. Simulink has been used to study the characteristics of the system and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.",
"title": ""
},
{
"docid": "417ce84b9a4359ac3fb59b6c6497b7db",
"text": "OBJECTIVE\nWe describe a novel human-machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user's upper-body.\n\n\nAPPROACH\nA calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor.\n\n\nMAIN RESULTS\nOur results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body-machine interface systems as an alternative or complement to brain-machine interfaces for accomplishing cursor control in 2D space.\n\n\nSIGNIFICANCE\nThe present study may serve as a platform for people with high-tetraplegia to control assistive devices such as powered wheelchairs using a joystick.",
"title": ""
},
{
"docid": "7e33af6ec0924681d7d51373ca70b957",
"text": "Total order broadcast is a fundamental communication primitive that plays a central role in bringing cheap software-based high availability to a wide range of services. This article studies the practical performance of such a primitive on a cluster of homogeneous machines.\n We present LCR, the first throughput optimal uniform total order broadcast protocol. LCR is based on a ring topology. It only relies on point-to-point inter-process communication and has a linear latency with respect to the number of processes. LCR is also fair in the sense that each process has an equal opportunity of having its messages delivered by all processes.\n We benchmark a C implementation of LCR against Spread and JGroups, two of the most widely used group communication packages. LCR provides higher throughput than the alternatives, over a large number of scenarios.",
"title": ""
},
{
"docid": "5b430df9e3a1514798e549b1f4f9dce2",
"text": "Nowadays, it is common for one natural person to join multiple social networks to enjoy different services. Linking identical users across different social networks, also known as the User Identity Linkage (UIL), is an important problem of great research challenges and practical value. Most existing UIL models are supervised or semi-supervised and a considerable number of manually matched user identity pairs are required, which is costly in terms of labor and time. In addition, existing methods generally rely heavily on some discriminative common user attributes, and thus are hard to be generalized. Motivated by the isomorphism across social networks, in this paper we consider all the users in a social network as a whole and perform UIL from the user space distribution level. The insight is that we convert the unsupervised UIL problem to the learning of a projection function to minimize the distance between the distributions of user identities in two social networks. We propose to use the earth mover's distance (EMD) as the measure of distribution closeness, and propose two models UUIL$_gan $ and UUIL$_omt $ to efficiently learn the distribution projection function. Empirically, we evaluate the proposed models over multiple social network datasets, and the results demonstrate that our proposal significantly outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "4d91850baa5995bc7d5e3d5e9e11fa58",
"text": "Drug risk management has many tools for minimizing risk and black-boxed warnings (BBWs) are one of those tools. Some serious adverse drug reactions (ADRs) emerge only after a drug is marketed and used in a larger population. In Thailand, additional legal warnings after drug approval, in the form of black-boxed warnings, may be applied. Review of their characteristics can assist in the development of effective risk mitigation. This study was a cross sectional review of all legal warnings imposed in Thailand after drug approval (2003-2012). Any boxed warnings for biological products and revised warnings which were not related to safety were excluded. Nine legal warnings were evaluated. Seven related to drugs classes and two to individual drugs. The warnings involved four main types of predictable ADRs: drug-disease interactions, side effects, overdose and drug-drug interactions. The average time from first ADRs reported to legal warnings implementation was 12 years. The triggers were from both safety signals in Thailand and regulatory measures in other countries outside Thailand.",
"title": ""
},
{
"docid": "e89124e33d7d208fcdd30c5cccc409d6",
"text": "In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and taking the model compared with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.",
"title": ""
},
{
"docid": "363cdcc34c855e712707b5b920fbd113",
"text": "This paper presents the design and experimental validation of an anthropomorphic underactuated robotic hand with 15 degrees of freedom and a single actuator. First, the force transmission design of underactuated fingers is revisited. An optimal geometry of the tendon-driven fingers is then obtained. Then, underactuation between the fingers is addressed using differential mechanisms. Tendon routings are proposed and verified experimentally. Finally, a prototype of a 15-degree-of-freedom hand is built and tested. The results demonstrate the feasibility of a humanoid hand with many degrees of freedom and one single degree of actuation.",
"title": ""
},
{
"docid": "59c16bb2ec81dfb0e27ff47ccae0a169",
"text": "A geometric dissection is a set of pieces which can be assembled in different ways to form distinct shapes. Dissections are used as recreational puzzles because it is striking when a single set of pieces can construct highly different forms. Existing techniques for creating dissections find pieces that reconstruct two input shapes exactly. Unfortunately, these methods only support simple, abstract shapes because an excessive number of pieces may be needed to reconstruct more complex, naturalistic shapes. We introduce a dissection design technique that supports such shapes by requiring that the pieces reconstruct the shapes only approximately. We find that, in most cases, a small number of pieces suffices to tightly approximate the input shapes. We frame the search for a viable dissection as a combinatorial optimization problem, where the goal is to search for the best approximation to the input shapes using a given number of pieces. We find a lower bound on the tightness of the approximation for a partial dissection solution, which allows us to prune the search space and makes the problem tractable. We demonstrate our approach on several challenging examples, showing that it can create dissections between shapes of significantly greater complexity than those supported by previous techniques.",
"title": ""
},
{
"docid": "5462d51955d2eaaa25fd6ff4d71b3f40",
"text": "2 \"Generations of scientists may yet have to come and go before the question of the origin of life is finally solved. That it will be solved eventually is as certain as anything can ever be amid the uncertainties that surround us.\" 1. Introduction How, where and when did life appear on Earth? Although Charles Darwin was reluctant to address these issues in his books, in a letter sent on February 1st, 1871 to his friend Joseph Dalton Hooker he wrote in a now famous paragraph that \"it is often said that all the conditions for the first production of a living being are now present, which could ever have been present. But if (and oh what a big if) we could conceive in some warm little pond with all sort of ammonia and phosphoric salts,-light, heat, electricity present, that a protein compound was chemically formed, ready to undergo still more complex changes, at the present such matter would be instantly devoured, or absorbed, which would not have been the case before living creatures were formed...\" (Darwin, 1871). Darwin's letter summarizes in a nutshell not only his ideas on the emergence of life, but also provides considerable insights on the views on the chemical nature of the basic biological processes that were prevalent at the time in many scientific circles. Although Friedrich Miescher had discovered nucleic acids (he called them nuclein) in 1869 (Dahm, 2005), the deciphering of their central role in genetic processes would remain unknown for almost another a century. In contrast, the roles played by proteins in manifold biological processes had been established. Equally significant, by the time Darwin wrote his letter major advances had been made in the understanding of the material basis of life, which for a long time had been considered to be fundamentally different from inorganic compounds. The experiments of Friedrich Wöhler, Adolph Strecker and Aleksandr Butlerov, who had demonstrated independently the feasibility of the laboratory synthesis of urea, alanine, and sugars, respectively, from simple 3 starting materials were recognized as a demonstration that the chemical gap separating organisms from the non-living was not insurmountable. But how had this gap first been bridged? The idea that life was an emergent feature of nature has been widespread since the nineteenth century. The major breakthrough that transformed the origin of life from pure speculation into workable and testable research models were proposals, suggested independently, in …",
"title": ""
},
{
"docid": "ec85dafd4c0f04d3e573941b397b3f10",
"text": "The future of communication resides in Internet of Things, which is certainly the most sought after technology today. The applications of IoT are diverse, and range from ordinary voice recognition to critical space programmes. Recently, a lot of efforts have been made to design operating systems for IoT devices because neither traditional Windows/Unix, nor the existing Real Time Operating Systems are able to meet the demands of heterogeneous IoT applications. This paper presents a survey of operating systems that have been designed so far for IoT devices and also outlines a generic framework that brings out the essential features desired in an OS tailored for IoT devices.",
"title": ""
},
{
"docid": "e917b6af07821cb834555fa7a19fca0c",
"text": "Conversational interfaces recently gained a lot of attention. One of the reasons for the current hype is the fact that chatbots (one particularly popular form of conversational interfaces) nowadays can be created without any programming knowledge, thanks to different toolkits and socalled Natural Language Understanding (NLU) services. While these NLU services are already widely used in both, industry and science, so far, they have not been analysed systematically. In this paper, we present a method to evaluate the classification performance of NLU services. Moreover, we present two new corpora, one consisting of annotated questions and one consisting of annotated questions with the corresponding answers. Based on these corpora, we conduct an evaluation of some of the most popular NLU services. Thereby we want to enable both, researchers and companies to make more educated decisions about which service they should use.",
"title": ""
},
{
"docid": "8917629470087a3b7a03b99d461cb63c",
"text": "In this paper, the crucial ingredients for our submission to SemEval-2014 Task 4 “Aspect Level Sentiment Analysis” are discussed. We present a simple aspect detection algorithm, a co-occurrence based method for category detection and a dictionary based sentiment classification algorithm. The dictionary for the latter is based on co-occurrences as well. The failure analysis and related work section focus mainly on the category detection method as it is most distinctive for our work.",
"title": ""
},
{
"docid": "477ab18817f247b9f17fb78b5ac08dbf",
"text": "Ray marching, also known as sphere tracing, is an efficient empirical method for rendering implicit surfaces using distance fields. The method marches along the ray with step lengths, provided by the distance field, that are guaranteed not to penetrate the scene. As a result, it provides an efficient method of rendering implicit surfaces, such as constructive solid geometry, recursive shapes, and fractals, as well as producing cheap empirical visual effects, such as ambient occlusion, subsurface scattering, and soft shadows. The goal of this project is to bring interactive ray marching to the web platform. The project will focus on the robustness of the render itself. It should run with reasonable performance in real-time and provide an interface where the user can interactively change the viewing angle and modify rendering options. It is also expected to run on the latest WebGL supported browser, on any machine. CR Categories: I.3.3 [Computer Graphics]: Three-Dimensional Graphics and Realism—Display Algorithms",
"title": ""
},
{
"docid": "63f2caff9f598cf493d6c8a044000aa3",
"text": "There are both public health and food industry initiatives aimed at increasing breakfast consumption among children, particularly the consumption of ready-to-eat cereals. The purpose of this study was to determine whether there were identifiable differences in nutritional quality between cereals that are primarily marketed to children and cereals that are not marketed to children. Of the 161 cereals identified between January and February 2006, 46% were classified as being marketed to children (eg, packaging contained a licensed character or contained an activity directed at children). Multivariate analyses of variance were used to compare children's cereals and nonchildren's cereals with respect to their nutritional content, focusing on nutrients required to be reported on the Nutrition Facts panel (including energy). Compared to nonchildren's cereals, children's cereals were denser in energy, sugar, and sodium, but were less dense in fiber and protein. The proportion of children's and nonchildren's cereals that did and did not meet national nutritional guidelines for foods served in schools were compared using chi2analysis. The majority of children's cereals (66%) failed to meet national nutrition standards, particularly with respect to sugar content. t tests were used to compare the nutritional quality of children's cereals with nutrient-content claims and health claims to those without such claims. Although the specific claims were generally justified by the nutritional content of the product, there were few differences with respect to the overall nutrition profile. Overall, there were important differences in nutritional quality between children's cereals and nonchildren's cereals. Dietary advice for children to increase consumption of ready-to-eat breakfast cereals should identify and recommend those cereals with the best nutrient profiles.",
"title": ""
},
{
"docid": "bf654fbfb6a7c6b8697c93595c8f772a",
"text": "Media attention and the literature on lesbian, gay, and bisexual youth overwhelmingly focus on violence involving hate crimes and bullying, while ignoring the fact that vulnerable youth also may be at increased risk of violence in their dating relationships. In this study, we examine physical, psychological, sexual, and cyber dating violence experiences among lesbian, gay, and bisexual youth--as compared to those of heterosexual youth, and we explore variations in the likelihood of help-seeking behavior and the presence of particular risk factors among both types of dating violence victims. A total of 5,647 youth (51 % female, 74 % White) from 10 schools participated in a cross-sectional anonymous survey, of which 3,745 reported currently being in a dating relationship or having been in one during the prior year. Results indicated that lesbian, gay, and bisexual youth are at higher risk for all types of dating violence victimization (and nearly all types of dating violence perpetration), compared to heterosexual youth. Further, when looking at gender identity, transgender and female youth are at highest risk of most types of victimization, and are the most likely perpetrators of all forms of dating violence but sexual coercion, which begs further exploration. The findings support the development of dating violence prevention programs that specifically target the needs and vulnerabilities of lesbian, gay, and bisexual youth, in addition to those of female and transgender youth.",
"title": ""
},
{
"docid": "2ef6e4f1aca010a75d3e078491e40cbe",
"text": "In the last several years hundreds of thousands of SSDs have been deployed in the data centers of Baidu, China's largest Internet search company. Currently only 40\\% or less of the raw bandwidth of the flash memory in the SSDs is delivered by the storage system to the applications. Moreover, because of space over-provisioning in the SSD to accommodate non-sequential or random writes, and additionally, parity coding across flash channels, typically only 50-70\\% of the raw capacity of a commodity SSD can be used for user data. Given the large scale of Baidu's data center, making the most effective use of its SSDs is of great importance. Specifically, we seek to maximize both bandwidth and usable capacity.\n To achieve this goal we propose {\\em software-defined flash} (SDF), a hardware/software co-designed storage system to maximally exploit the performance characteristics of flash memory in the context of our workloads. SDF exposes individual flash channels to the host software and eliminates space over-provisioning. The host software, given direct access to the raw flash channels of the SSD, can effectively organize its data and schedule its data access to better realize the SSD's raw performance potential.\n Currently more than 3000 SDFs have been deployed in Baidu's storage system that supports its web page and image repository services. Our measurements show that SDF can deliver approximately 95% of the raw flash bandwidth and provide 99% of the flash capacity for user data. SDF increases I/O bandwidth by 300\\% and reduces per-GB hardware cost by 50% on average compared with the commodity-SSD-based system used at Baidu.",
"title": ""
},
{
"docid": "58156df07590448d89c2b8d4a46696ad",
"text": "Gene PmAF7DS confers resistance to wheat powdery mildew (isolate Bgt#211 ); it was mapped to a 14.6-cM interval ( Xgwm350 a– Xbarc184 ) on chromosome 7DS. The flanking markers could be applied in MAS breeding. Wheat powdery mildew (Pm) is caused by the biotrophic pathogen Blumeria graminis tritici (DC.) (Bgt). An ongoing threat of breakdown of race-specific resistance to Pm requires a continuous effort to discover new alleles in the wheat gene pool. Developing new cultivars with improved disease resistance is an economically and environmentally safe approach to reduce yield losses. To identify and characterize genes for resistance against Pm in bread wheat we used the (Arina × Forno) RILs population. Initially, the two parental lines were screened with a collection of 61 isolates of Bgt from Israel. Three Pm isolates Bgt#210 , Bgt#211 and Bgt#213 showed differential reactions in the parents: Arina was resistant (IT = 0), whereas Forno was moderately susceptible (IT = −3). Isolate Bgt#211 was then used to inoculate the RIL population. The segregation pattern of plant reactions among the RILs indicates that a single dominant gene controls the conferred resistance. A genetic map of the region containing this gene was assembled with DNA markers and assigned to the 7D physical bin map. The gene, temporarily designated PmAF7DS, was located in the distal region of chromosome arm 7DS. The RILs were also inoculated with Bgt#210 and Bgt#213. The plant reactions to these isolates showed high identity with the reaction to Bgt#211, indicating the involvement of the same gene or closely linked, but distinct single genes. The genomic location of PmAF7DS, in light of other Pm genes on 7DS is discussed.",
"title": ""
}
] |
scidocsrr
|
53e77a1977bf2b92bc5ac4c70698eb05
|
Vectorization of Line Drawings via Polyvector Fields
|
[
{
"docid": "78fc46165449f94e75e70a2654abf518",
"text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.",
"title": ""
},
{
"docid": "85f2968965abdb336793958d193f4eb8",
"text": "Vector drawing is a popular representation in graphic design because of the precision, compactness and editability offered by parametric curves. However, prior work on line drawing vectorization focused solely on faithfully capturing input bitmaps, and largely overlooked the problem of producing a compact and editable curve network. As a result, existing algorithms tend to produce overly-complex drawings composed of many short curves and control points, especially in the presence of thick or sketchy lines that yield spurious curves at junctions. We propose the first vectorization algorithm that explicitly balances fidelity to the input bitmap with simplicity of the output, as measured by the number of curves and their degree. By casting this trade-off as a global optimization, our algorithm generates few yet accurate curves, and also disambiguates curve topology at junctions by favoring the simplest interpretations overall. We demonstrate the robustness of our algorithm on a variety of drawings, sketchy cartoons and rough design sketches.",
"title": ""
},
{
"docid": "a62c1426e09ab304075e70b61773914f",
"text": "Converting a scanned or shot line drawing image into a vector graph can facilitate further editand reuse, making it a hot research topic in computer animation and image processing. Besides avoiding noiseinfluence, its main challenge is to preserve the topological structures of the original line drawings, such as linejunctions, in the procedure of obtaining a smooth vector graph from a rough line drawing. In this paper, wepropose a vectorization method of line drawings based on junction analysis, which retains the original structureunlike done by existing methods. We first combine central line tracking and contour tracking, which allowsus to detect the encounter of line junctions when tracing a single path. Then, a junction analysis approachbased on intensity polar mapping is proposed to compute the number and orientations of junction branches.Finally, we make use of bending degrees of contour paths to compute the smoothness between adjacent branches,which allows us to obtain the topological structures corresponding to the respective ones in the input image.We also introduce a correction mechanism for line tracking based on a quadratic surface fitting, which avoidsaccumulating errors of traditional line tracking and improves the robustness for vectorizing rough line drawings.We demonstrate the validity of our method through comparisons with existing methods, and a large amount ofexperiments on both professional and amateurish line drawing images. 本文提出一种基于交叉点分析的线条矢量化方法, 克服了现有方法难以保持拓扑结构的不足。通过中心路径跟踪和轮廓路径跟踪相结合的方式, 准确检测交叉点的出现提出一种基于极坐标亮度映射的交叉点分析方法, 计算交叉点的分支数量和朝向; 利用轮廓路径的弯曲角度判断交叉点相邻分支间的光顺度, 从而获得与原图一致的拓扑结构。",
"title": ""
}
] |
[
{
"docid": "5b07f0ec2af3bec3f53f3cff17177490",
"text": "In multi-database mining, there can be many local patterns (frequent itemsets or association rules) in each database. At the end of multi-database mining, it is necessary to analyze these local patterns to gain global patterns, when putting all the data from the databases into a single dataset can destroy important information that reflect the distribution of global patterns. This paper develops an algorithm for synthesizing local patterns in multi-database is proposed. This approach is particularly fit to find potentially useful exceptions. The proposed method has been evaluated experimentally. The experimental results have shown that this method is efficient and appropriate to identifying exceptional patterns.",
"title": ""
},
{
"docid": "df1e281417844a0641c3b89659e18102",
"text": "In this paper we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, highresolution estimate from a noisy, low-resolution input depth map. Additionally, a highresolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state-of-the-art on multiple benchmarks.",
"title": ""
},
{
"docid": "bb9fd3e54d8d5ce32147b437ed5f52d4",
"text": "OBJECTIVE\nTo assess the association between bullying (both directly and indirectly) and indicators of psychosocial health for boys and girls separately.\n\n\nSTUDY DESIGN\nA school-based questionnaire survey of bullying, depression, suicidal ideation, and delinquent behavior.\n\n\nSETTING\nPrimary schools in Amsterdam, The Netherlands.\n\n\nPARTICIPANTS\nA total of 4811 children aged 9 to 13.\n\n\nRESULTS\nDepression and suicidal ideation are common outcomes of being bullied in both boys and girls. These associations are stronger for indirect than direct bullying. After correction, direct bullying had a significant effect on depression and suicidal ideation in girls, but not in boys. Boy and girl offenders of bullying far more often reported delinquent behavior. Bullying others directly is a much greater risk factor for delinquent behavior than bullying others indirectly. This was true for both boys and girls. Boy and girl offenders of bullying also more often reported depressive symptoms and suicidal ideation. However, after correction for both sexes only a significant association still existed between bullying others directly and suicidal ideation.\n\n\nCONCLUSIONS\nThe association between bullying and psychosocial health differs notably between girls and boys as well as between direct and indirect forms of bullying. Interventions to stop bullying must pay attention to these differences to enhance effectiveness.",
"title": ""
},
{
"docid": "67e6ec33b2afb4cf0c363d99869496bf",
"text": "This and the following two papers describe event-related potentials (ERPs) evoked by visual stimuli in 98 patients in whom electrodes were placed directly upon the cortical surface to monitor medically intractable seizures. Patients viewed pictures of faces, scrambled faces, letter-strings, number-strings, and animate and inanimate objects. This paper describes ERPs generated in striate and peristriate cortex, evoked by faces, and evoked by sinusoidal gratings, objects and letter-strings. Short-latency ERPs generated in striate and peristriate cortex were sensitive to elementary stimulus features such as luminance. Three types of face-specific ERPs were found: (i) a surface-negative potential with a peak latency of approximately 200 ms (N200) recorded from ventral occipitotemporal cortex, (ii) a lateral surface N200 recorded primarily from the middle temporal gyrus, and (iii) a late positive potential (P350) recorded from posterior ventral occipitotemporal, posterior lateral temporal and anterior ventral temporal cortex. Face-specific N200s were preceded by P150 and followed by P290 and N700 ERPs. N200 reflects initial face-specific processing, while P290, N700 and P350 reflect later face processing at or near N200 sites and in anterior ventral temporal cortex. Face-specific N200 amplitude was not significantly different in males and females, in the normal and abnormal hemisphere, or in the right and left hemisphere. However, cortical patches generating ventral face-specific N200s were larger in the right hemisphere. Other cortical patches in the same region of extrastriate cortex generated grating-sensitive N180s and object-specific or letter-string-specific N200s, suggesting that the human ventral object recognition system is segregated into functionally discrete regions.",
"title": ""
},
{
"docid": "7afe5c6affbaf30b4af03f87a018a5b3",
"text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in details, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment taking care of negated words. Experimentations prove that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.",
"title": ""
},
{
"docid": "dd6b50a56b740d07f3d02139d16eeec4",
"text": "Mitochondria play a central role in the aging process. Studies in model organisms have started to integrate mitochondrial effects on aging with the maintenance of protein homeostasis. These findings center on the mitochondrial unfolded protein response (UPR(mt)), which has been implicated in lifespan extension in worms, flies, and mice, suggesting a conserved role in the long-term maintenance of cellular homeostasis. Here, we review current knowledge of the UPR(mt) and discuss its integration with cellular pathways known to regulate lifespan. We highlight how insight into the UPR(mt) is revolutionizing our understanding of mitochondrial lifespan extension and of the aging process.",
"title": ""
},
{
"docid": "10b9516ef7302db13dcf46e038b3f744",
"text": "A new fake iris detection method based on 3D feature of iris pattern is proposed. In pervious researches, they did not consider 3D structure of iris pattern, but only used 2D features of iris image. However, in our method, by using four near infra-red (NIR) illuminators attached on the left and right sides of iris camera, we could obtain the iris image in which the 3D structure of iris pattern could be shown distinctively. Based on that, we could determine the live or fake iris by wavelet analysis of the 3D feature of iris pattern. Experimental result showed that the Equal Error Rate (EER) of determining the live or fake iris was 0.33%. VC 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 162–166, 2010; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20227",
"title": ""
},
{
"docid": "496bdd85a0aebb64d2f2b36c2050eb3a",
"text": "This research derives, implements, tunes and compares selected path tracking methods for controlling a car-like robot along a predetermined path. The scope includes commonly used m ethods found in practice as well as some theoretical methods found in various literature from other areas of rese arch. This work reviews literature and identifies important path tracking models and control algorithms from the vast back ground and resources. This paper augments the literature with a comprehensive collection of important path tracking idea s, a guide to their implementations and, most importantly, an independent and realistic comparison of the perfor mance of these various approaches. This document does not catalog all of the work in vehicle modeling and control; only a selection that is perceived to be important ideas when considering practical system identification, ease of implementation/tuning and computational efficiency. There are several other methods that meet this criteria, ho wever they are deemed similar to one or more of the approaches presented and are not included. The performance r esults, analysis and comparison of tracking methods ultimately reveal that none of the approaches work well in all applications a nd that they have some complementary characteristics. These complementary characteristics lead to an idea that a combination of methods may be useful for more general applications. Additionally, applications for which the methods in this paper do not provide adequate solutions are identified.",
"title": ""
},
{
"docid": "af23545d003a71d49f9665a7a3a5f3a1",
"text": "A parametric study of a wide-band Vivaldi antenna is presented. Four models were simulated using a finite element method design and analysis package Ansoft HFSS v 10.1. The simulated return loss and realized gain of each model for a frequency range of 12 to 20GHz is studied. The location of the phase centre, represented as the distance d (in cm) from the bottom of the antenna, with respect to which the phase of the respective far field copolar patterns (for a scan angle θ of 0 to 60°) in the E and H-planes, constrains to a specified maximum tolerable phase difference Δφ is calculated.",
"title": ""
},
{
"docid": "cd81144613e8cf081dbf1cab40e48268",
"text": "Availability of accurate land cover information over large areas is essential to the global environment sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, low accuracy of existing per-pixel based classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top of atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed pixel-based neural network, pixel-based CNN and patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas.",
"title": ""
},
{
"docid": "e16fbf0917103601a3cda01fab6dbc1b",
"text": "In recent years L-functions and their analytic properties have assumed a central role in number theory and automorphic forms. In this expository article, we describe the two major methods for proving the analytic continuation and functional equations of L-functions: the method of integral representations, and the method of Fourier expansions of Eisenstein series. Special attention is paid to technical properties, such as boundedness in vertical strips; these are essential in applying the converse theorem, a powerful tool that uses analytic properties of L-functions to establish cases of Langlands functoriality conjectures. We conclude by describing striking recent results which rest upon the analytic properties of L-functions.",
"title": ""
},
{
"docid": "a8605e3af3cf308c7a741824cb661822",
"text": "Recently, bidirectional recurrent neural network (BRNN) has been widely used for question answering (QA) tasks with promising performance. However, most existing BRNN models extract the information of questions and answers by directly using a pooling operation to generate the representation for loss or similarity calculation. Hence, these existing models don’t put supervision (loss or similarity calculation) at every time step, which will lose some useful information. In this paper, we propose a novel BRNN model called full-time supervision based BRNN (FTS-BRNN), which can put supervision at every time step. Experiments on the factoid QA task show that our FTS-BRNN can outperform other baselines to achieve the state-of-the-art accuracy.",
"title": ""
},
{
"docid": "4da99c6895dcde2889c6d5b41c673f41",
"text": "Social media have attracted considerable attention because their open-ended nature allows users to create lightweight semantic scaffolding to organize and share content. To date, the interplay of the social and topical components of social media has been only partially explored. Here, we study the presence of homophily in three systems that combine tagging social media with online social networks. We find a substantial level of topical similarity among users who are close to each other in the social network. We introduce a null model that preserves user activity while removing local correlations, allowing us to disentangle the actual local similarity between users from statistical effects due to the assortative mixing of user activity and centrality in the social network. This analysis suggests that users with similar interests are more likely to be friends, and therefore topical similarity measures among users based solely on their annotation metadata should be predictive of social links. We test this hypothesis on several datasets, confirming that social networks constructed from topical similarity capture actual friendship accurately. When combined with topological features, topical similarity achieves a link prediction accuracy of about 92%.",
"title": ""
},
{
"docid": "e4944af5f589107d1b42a661458fcab5",
"text": "This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014. Mobile Augmented Reality has largely evolved over the last decade, as well as the interpretation itself of what is Mobile Augmented Reality. The first instance of Mobile AR can certainly be associated with the development of wearable AR, in a sense of experiencing AR during locomotion (mobile as a motion). With the transformation and miniaturization of physical devices and displays, the concept of mobile AR evolved towards the notion of ”mobile device”, aka AR on a mobile device. In this history of mobile AR we considered both definitions and the evolution of the term over time. Major parts of the list were initially compiled by the member of the Christian Doppler Laboratory for Handheld Augmented Reality in 2009 (author list in alphabetical order) for the ISMAR society. More recent work was added in 2013 and during preparation of this report. Permission is granted to copy and modify. Please email the first author if you find any errors.",
"title": ""
},
{
"docid": "a48309ea49caa504cdc14bf77ec57472",
"text": "We propose a new algorithm for the classical assignment problem. The algorithm resembles in some ways the Hungarian method but differs substantially in other respects. The average computational complexity of an efficient implementation of the algorithm seems to be considerably better than the one of the Hungarian method. In a large number of randomly generated problems the algorithm has consistently outperformed an efficiently coded version of the Hungarian method by a broad margin. The factor of improvement increases with the problem dimension N and reaches an order of magnitude for N equal to several hundreds.",
"title": ""
},
{
"docid": "0988297cfd3aaeb077e2be71f4106c81",
"text": "HadoopDB is a hybrid of MapReduce and DBMS technologies, designed to meet the growing demand of analyzing massive datasets on very large clusters of machines. Our previous work has shown that HadoopDB approaches parallel databases in performance and still yields the scalability and fault tolerance of MapReduce-based systems. In this demonstration, we focus on HadoopDB's flexible architecture and versatility with two real world application scenarios: a semantic web data application for protein sequence analysis and a business data warehousing application based on TPC-H. The demonstration offers a thorough walk-through of how to easily build applications on top of HadoopDB.",
"title": ""
},
{
"docid": "2be9c1580e78d4c3f9c1e2fe115a89bc",
"text": "Robotic devices have been shown to be efficacious in the delivery of therapy to treat upper limb motor impairment following stroke. However, the application of this technology to other types of neurological injury has been limited to case studies. In this paper, we present a multi degree of freedom robotic exoskeleton, the MAHI Exo II, intended for rehabilitation of the upper limb following incomplete spinal cord injury (SCI). We present details about the MAHI Exo II and initial findings from a clinical evaluation of the device with eight subjects with incomplete SCI who completed a multi-session training protocol. Clinical assessments show significant gains when comparing pre- and post-training performance in functional tasks. This paper explores a range of robotic measures capturing movement quality and smoothness that may be useful in tracking performance, providing as feedback to the subject, or incorporating into an adaptive training protocol. Advantages and disadvantages of the various investigated measures are discussed with regard to the type of movement segmentation that can be applied to the data collected during unassisted movements where the robot is backdriven and encoder data is recorded for post-processing.",
"title": ""
},
{
"docid": "2d37b2ed7e5805692a5aa9a910f61df5",
"text": "In order to enable the widespread use of robots in home and office environments, systems with natural interaction capabilities have to be developed. A prerequisite for natural interaction is the robot's ability to automatically recognize when and how long a person's attention is directed towards it for communication. As in open environments several persons can be present simultaneously, the detection of the communication partner is of particular importance. In this paper we present an attention system for a mobile robot which enables the robot to shift its attention to the person of interest and to maintain attention during interaction. Our approach is based on a method for multi-modal person tracking which uses a pan-tilt camera for face recognition, two microphones for sound source localization, and a laser range finder for leg detection. Shifting of attention is realized by turning the camera into the direction of the person which is currently speaking. From the orientation of the head it is decided whether the speaker addresses the robot. The performance of the proposed approach is demonstrated with an evaluation. In addition, qualitative results from the performance of the robot at the exhibition part of the ICVS'03 are provided.",
"title": ""
},
{
"docid": "2b8ca8be8d5e468d4cd285ecc726eceb",
"text": "These days, large-scale graph processing becomes more and more important. Pregel, inspired by Bulk Synchronous Parallel, is one of the highly used systems to process large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. Superstep is a very time-consuming operation, used by Pregel, to synchronize distributed computations in a cluster of computers. However, it may become a bottleneck when the number of communications increases in a graph with million vertices. Superstep works like a barrier in Pregel that increases the side effect of skew problem in distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices resided on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhancing parallelism in our new computational model is another important feature that manifolds the speed of graph analysis programs. More interestingly, ExPregel uses the same model of programming as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as number of supersteps from 45% to 96%. Runtime speed up in the proposed model varies from 1.2× to 30×. Copyright © 2015 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "b956a2c11e6e818529df72157365a1df",
"text": "BACKGROUND\nLateral epicondylitis (LE) is a painful condition that affects the tendinous tissue of the lateral epicondyle of the humerus and leads to loss of function of the affected limb. Therefore it can have a major impact on the patient's social and personal life. Many treatments are recommended for lateral epicondylitis; unfortunately the evidence is limited.\n\n\nOBJECTIVES\nThe aim of study was to investigate the effect of kinesio taping (KT) on pain, grip strength and function in patients LE.\n\n\nMETHODS\nThirty-one (23 females, 8 males) patients with LE were included. KT was applied twice a week for 2 weeks. Pain at rest, activity of daily living (ADL), night and palpation on lateral epicondyle was evaluated with the visual analog scale (VAS 0-10 cm), and the grip strength was measured with a hand held dynamometer. The stage of the disease was evaluated by the Nirschl score and the functional status was assessed with Patient-Rated Forearm Evaluation Questionnaire (PRTEQ) score. These parameters were evaluated before, at 2 weeks and 6 weeks after treatment. Patients' satisfaction was also recorded on a Likert scale after treatment at 2 weeks and 6 weeks.\n\n\nRESULTS\nThe average age of the patients was 43.58 ± 9.02. The dominant limb was affected in 64.5% (20) of the patients. After the application of KT on lateral epicondyle, there was a significant improvement in all parameters in terms of pain, Nirschl score, hand grip strength, patient satisfaction, and PRTEQ scores at 2 and 6 weeks.\n\n\nCONCLUSIONS\nKinesio taping can be an effective treatment method in LE. This application improves pain, grip strength and functional status of the patients with LE.",
"title": ""
}
] |
scidocsrr
|
5e084fbdb5a5adbd2137db770be841d1
|
Cost-Efficient Data Redundancy in the Cloud
|
[
{
"docid": "f5f6036fa3f8c16ad36b3c65794fc86b",
"text": "Cloud computing has become the buzzword in the industry today. Though, it is not an entirely new concept but in today’s digital age, it has become ubiquitous due to the proliferation of Internet, broadband, mobile devices, better bandwidth and mobility requirements for end-users (be it consumers, SMEs or enterprises). In this paper, the focus is on the perceived inclination of micro and small businesses (SMEs or SMBs) toward cloud computing and the benefits reaped by them. This paper presents five factors nfrastructure-as-a-Service (IaaS) mall and medium enterprises (SMEs’) mall and medium businesses (SMBs’) influencing the cloud usage by this business community, whose needs and business requirements are very different from large enterprises. Firstly, ease of use and convenience is the biggest favorable factor followed by security and privacy and then comes the cost reduction. The fourth factor reliability is ignored as SMEs do not consider cloud as reliable. Lastly but not the least, SMEs do not want to use cloud for sharing and collaboration and prefer their old conventional methods for sharing and collaborating with their stakeholders.",
"title": ""
}
] |
[
{
"docid": "72d0590c4eacb6b4d6e3ca543bc53fd0",
"text": "Modal logic has a good claim to being the logic of choice for describing the reactive behaviour of systems modelled as coalgebras. Logics with modal operators obtained from so-called predicate liftings have been shown to be invariant under behavioural equivalence. Expressivity results stating that, conversely, logically indistinguishable states are behaviourally equivalent depend on the existence of separating sets of predicate liftings for the signature functor at hand. Here, we provide a classification result for predicate liftings which leads to an easy criterion for the existence of such separating sets, and we give simple examples of functors that fail to admit expressive normal or monotone modal logics, respectively, or in fact an expressive (unary) modal logic at all. We then move on to polyadic modal logic, where modal operators may take more than one argument formula. We show that every accessible functor admits an expressive polyadic modal logic. Moreover, expressive polyadic modal logics are, unlike unary modal logics, compositional. c © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "52da82decb732b3782ad1e3877fe6568",
"text": "Deep learning algorithms require large amounts of labeled data which is difficult to attain for medical imaging. Even if a particular dataset is accessible, a learned classifier struggles to maintain the same level of performance on a different medical imaging dataset from a new or never-seen data source domain. Utilizing generative adversarial networks in a semi-supervised learning architecture, we address both problems of labeled data scarcity and data domain overfitting. For cardiac abnormality classification in chest X-rays, we demonstrate that an order of magnitude less data is required with semi-supervised learning generative adversarial networks than with conventional supervised learning convolutional neural networks. In addition, we demonstrate its robustness across different datasets for similar classification tasks.",
"title": ""
},
{
"docid": "d09e4f8c58f9ff0760addfe1e313d5f6",
"text": "Currently, color image encryption is important to ensure its confidentiality during its transmission on insecure networks or its storage. The fact that chaotic properties are related with cryptography properties in confusion, diffusion, pseudorandom, etc., researchers around the world have presented several image (gray and color) encryption algorithms based on chaos, but almost all them with serious security problems have been broken with the powerful chosen/known plain image attack. In this work, we present a color image encryption algorithm based on total plain image characteristics (to resist a chosen/known plain image attack), and 1D logistic map with optimized distribution (for fast encryption process) based on Murillo-Escobar's algorithm (Murillo-Escobar et al. (2014) [38]). The security analysis confirms that the RGB image encryption is fast and secure against several known attacks; therefore, it can be implemented in real-time applications where a high security is required. & 2014 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "7de84d62d8fdc0dc466417ed36c6ec66",
"text": "Sensing current is a fundamental function in power supply circuits, especially as it generally applies to protection and feedback control. Emerging state-of-the-art switching supplies, in fact, are now exploring ways to use this sensed-current information to improve transient response, power efficiency, and compensation performance by appropriately self-adjusting, on the fly, frequency, inductor ripple current, switching configuration (e.g., synchronous to/from asynchronous), and other operating parameters. The discontinuous, non-integrated, and inaccurate nature of existing lossless current-sensing schemes, however, impedes their widespread adoption, and lossy solutions are not acceptable. Lossless, filter-based techniques are continuous, but inaccurate when integrated on-chip because of the inherent mismatches between the filter and the power inductor. The proposed GM-C filter-based, fully integrated current-sensing CMOS scheme circumvents this accuracy limitation by introducing a self-learning sequence to start-up and power-on-reset. During these seldom-occurring events, the gain and bandwidth of the internal filter are matched to the response of the power inductor and its equivalent series resistance (ESR), effectively measuring their values. A 0.5 mum CMOS realization of the proposed scheme was fabricated and applied to a current-mode buck switching supply, achieving overall DC and AC current-gain errors of 8% and 9%, respectively, at 0.8 A DC load and 0.2 A ripple currents for 3.5 muH-14 muH inductors with ESRs ranging from 48 mOmega to 384 mOmega (other lossless, state-of-the-art solutions achieve 20%-40% error, and only when the nominal specifications of the power MOSFET and/or inductor are known). Since the self-learning sequence is non-recurring, the power losses associated with the foregoing solution are minimal, translating to a 2.6% power efficiency savings when compared to the more traditional but accurate series-sense resistor (e.g., 50 mOmega) technique.",
"title": ""
},
{
"docid": "872d06c4d3702d79cb1c7bcbc140881a",
"text": "Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.\nExisting noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n-ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.",
"title": ""
},
{
"docid": "b629ae23b7351c59c55ee9e9f1a33117",
"text": "75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 Tthe treatment of chronic hepatitis C virus (HCV) infection has been nothing short of remarkable with the prospect of elimination never more within reach. Attention has shifted to the safety and efficacy of DAAs in special populations, such as hepatitis B virus (HBV)/HCV coinfected individuals. Although the true prevalence of coinfection is unknown, studies from the United States report that 1.4% to 5.8% of HCV-infected individuals are hepatitis B surface antigen (HBsAg) positive compared with 1.4% to 4.1% in China. Coinfection is associated with higher rates of cirrhosis, decompensation, and hepatocellular carcinoma compared with monoinfected individuals. Because HBsAgpositive individuals were excluded from clinical trials of DAAs, HBV reactivation after HCV clearance was only reported after DAAs entered clinical use. Reports of severe and even fatal cases led the US Food and Drug Administration (FDA) to issue a strong directive regarding the risk of HBV reactivation with DAA treatment. The FDA boxed warning was based on 29 cases of HBV reactivation, including 2 fatal events and one that led to liver transplantation. However, owing to the nature of postapproval reporting, critical data were often missing, including baseline HBV serology, making it difficult to truly assess the risk. To err on the safe side, the FDA recommended screening all individuals scheduled to receive DAAs for evidence of current or past HBV infection with follow-up HBV DNA testing for any positive serology. Differing recommendations from international guidelines left clinicians unsure of how to proceed. The study by Liu et al in this issue of Gastroenterology provides much-needed data regarding the risk of HBV reactivation in coinfected individuals treated with DAAs. This prospective study enrolled 111 patients with HBV/ HCV coinfection who received sofosbuvir/ledipasvir for 12 weeks. Notably, although 61% were infected with HCV genotype 1, 39% had genotype 2 infection, a group for whom sofosbuvir/ledipasvir is not currently recommended. All patients achieved sustained virologic response (SVR). More important, the authors carefully evaluated what happened to HBV during and after HCV therapy. Patients were divided into 2 groups: those with undetectable HBV DNA and those with an HBV DNA of >20 IU/mL at baseline. Increases in HBV DNA levels were common in both groups. DNA increased to quantifiable levels in 31 of 37 initially",
"title": ""
},
{
"docid": "38aba50fc1512bc48773df729c8305cf",
"text": "In this study, we explore various natural language processing (NLP) methods to perform sentiment analysis. We look at two different datasets, one with binary labels, and one with multi-class labels. For the binary classification we applied the bag of words, and skip-gram word2vec models followed by various classifiers, including random forest, SVM, and logistic regression. For the multi-class case, we implemented the recursive neural tensor networks (RNTN). To overcome the high computational cost of training the standard RNTN we introduce the lowrank RNTN, in which the matrices involved in the quadratic term of RNTN are substituted by symmetric low-rank matrices. We show that the low-rank RNTN leads to significant saving in computational cost, while having similar a accuracy as that of RNTN.",
"title": ""
},
{
"docid": "62c6050db8e42b1de54f8d1d54fd861f",
"text": "In this paper we present our approach of solving the PAN 2016 Author Profiling Task. It involves classifying users’ gender and age using social media posts. We used SVM classifiers and neural networks on TF-IDF and verbosity features. Results showed that SVM classifiers are better for English datasets and neural networks perform better for Dutch and Spanish datasets.",
"title": ""
},
{
"docid": "e7c848d4661bab87e39243834be80046",
"text": "2048 is an engaging single-player nondeterministic video puzzle game, which, thanks to the simple rules and hard-to-master gameplay, has gained massive popularity in recent years. As 2048 can be conveniently embedded into the discrete-state Markov decision processes framework, we treat it as a testbed for evaluating existing and new methods in reinforcement learning. With the aim to develop a strong 2048 playing program, we employ temporal difference learning with systematic n-tuple networks. We show that this basic method can be significantly improved with temporal coherence learning, multi-stage function approximator with weight promotion, carousel shaping, and redundant encoding. In addition, we demonstrate how to take advantage of the characteristics of the n-tuple network, to improve the algorithmic effectiveness of the learning process by delaying the (decayed) update and applying lock-free optimistic parallelism to effortlessly make advantage of multiple CPU cores. This way, we were able to develop the best known 2048 playing program to date, which confirms the effectiveness of the introduced methods for discrete-state Markov decision problems.",
"title": ""
},
{
"docid": "9c0985d157970a1eb0ee82311cdb8b93",
"text": "Many search engine users attempt to satisfy an information need by issuing multiple queries, with the expectation that each result will contribute some portion of the required information. Previous research has shown that structured or semi-structured descriptive knowledge bases (such as Wikipedia) can be used to improve search quality and experience for general or entity-centric queries. However, such resources do not have sufficient coverage of procedural knowledge, i.e. what actions should be performed and what factors should be considered to achieve some goal; such procedural knowledge is crucial when responding to task-oriented search queries. This paper provides a first attempt to bridge the gap between two evolving research areas: development of procedural knowledge bases (such as wikiHow) and task-oriented search. We investigate whether task-oriented search can benefit from existing procedural knowledge (search task suggestion) and whether automatic procedural knowledge construction can benefit from users' search activities (automatic procedural knowledge base construction). We propose to create a three-way parallel corpus of queries, query contexts, and task descriptions, and reduce both problems to sequence labeling tasks. We propose a set of textual features and structural features to identify key search phrases from task descriptions, and then adapt similar features to extract wikiHow-style procedural knowledge descriptions from search queries and relevant text snippets. We compare our proposed solution with baseline algorithms, commercial search engines, and the (manually-curated) wikiHow procedural knowledge; experimental results show an improvement of +0.28 to +0.41 in terms of Precision@8 and mean average precision (MAP).",
"title": ""
},
{
"docid": "9a5b1bca71308fb66c4e982b9ac0df6c",
"text": "The resource-constrained project scheduling problem (RCPSP) consists of activities that must be scheduled subject to precedence and resource constraints such that the makespan is minimized. It has become a well-known standard problem in the context of project scheduling which has attracted numerous researchers who developed both exact and heuristic scheduling procedures. However, it is a rather basic model with assumptions that are too restrictive for many practical applications. Consequently, various extensions of the basic RCPSP have been developed. This paper gives an overview over these extensions. The extensions are classified according to the structure of the RCPSP. We summarize generalizations of the activity concept, of the precedence relations and of the resource constraints. Alternative objectives and approaches for scheduling multiple projects are discussed as well. In addition to popular variants and extensions such as multiple modes, minimal and maximal time lags, and net present value-based objectives, the paper also provides a survey of many less known concepts.",
"title": ""
},
{
"docid": "289b1eaf4535374f339b683983a655f9",
"text": "© The Author(s) 2017. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/ publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. P156 Multiscale modeling of ischemic stroke with the NEURON reaction‐diffusion module Adam J. H. Newton, Alexandra H. Seidenstein, Robert A. McDougal, William W. Lytton Department of Neuroscience, Yale University, New Haven, CT 06520, USA; Department Physiology & Pharmacology, SUNY Downstate, Brooklyn, NY 11203, USA; NYU School of Engineering, 6 MetroTech Center, Brooklyn, NY 11201, USA; Kings County Hospital Center, Brooklyn, NY 11203, USA Correspondence: Adam J. H. Newton (adam.newton@yale.edu) BMC Neuroscience 2017, 18 (Suppl 1):P156",
"title": ""
},
{
"docid": "12a34678fa46825e11944f317fdd4977",
"text": "The purpose of a distributed file system (DFS) is to allow users of physically distributed computers to share data and storage resources by using a common file system. A typical configuration for a DFS is a collection of workstations and mainframes connected by a local area network (LAN). A DFS is implemented as part of the operating system of each of the connected computers. This paper establishes a viewpoint that emphasizes the dispersed structure and decentralization of both data and control in the design of such systems. It defines the concepts of transparency, fault tolerance, and scalability and discusses them in the context of DFSs. The paper claims that the principle of distributed operation is fundamental for a fault tolerant and scalable DFS design. It also presents alternatives for the semantics of sharing and methods for providing access to remote files. A survey of contemporary UNIX-based systems, namely, UNIX United, Locus, Sprite, Sun's Network File System, and ITC's Andrew, illustrates the concepts and demonstrates various implementations and design alternatives. Based on the assessment of these systems, the paper makes the point that a departure from the extending centralized file systems over a communication network is necessary to accomplish sound distributed file system design.",
"title": ""
},
{
"docid": "a161b0fe0b38381a96f02694fd84c3bf",
"text": "We have been developing human mimetic musculoskeletal humanoids from the view point of human-inspired design approach. Kengoro is our latest version of musculoskeletal humanoid designed to achieve physically interactive actions in real world. This study presents the design concept, body characteristics, and motion achievements of Kengoro. In the design process of Kengoro, we adopted the novel idea of multifunctional skeletal structures to achieve both humanoid performance and humanlike proportions. We adopted the sensor-driver integrated muscle modules for improved muscle control. In order to demonstrate the effectiveness of these body structures, we conducted several preliminary movements using Kengoro.",
"title": ""
},
{
"docid": "655f855531360c035f0dc59f70299302",
"text": "Introduction 1Motivation, diagnosis of features inside CNNs: In recent years, real applications usually propose new demands for deep learning beyond the accuracy. The CNN needs to earn trust from people for safety issues, because a high accuracy on testing images cannot always ensure that the CNN encodes correct features. Instead, the CNN sometimes uses unreliable reasons for prediction. Therefore, this study aim to provide a generic tool to examine middle-layer features of a CNN to ensure the safety in critical applications. Unlike previous visualization (Zeiler and Fergus 2014) and diagnosis (Bau et al. 2017; Ribeiro, Singh, and Guestrin 2016) of CNN representations, we focus on the following two new issues, which are of special values in feature diagnosis. • Disentanglement of interpretable and uninterpretable feature information is necessary for a rigorous and trustworthy examination of CNN features. Each filter of a conv-layer usually encodes a mixture of various semantics and noises (see Fig. 1). As discussed in (Bau et al. 2017), filters in high conv-layers mainly represent “object parts”2, and “material” and “color” information in high layers is not salient enough for trustworthy analysis. In particular, part features are usually more localized and thus is more helpful in feature diagnosis. Therefore, in this paper, we propose to disentangle part features from another signals and noises. For example, we may quantitatively disentangle 90% information of CNN features as object parts and interpret the rest 10% as textures and noises. • Semantic explanations: Given an input image, we aim to use clear visual concepts (here, object parts) to interpret chaotic CNN features. In comparisons, network visualization and diagnosis mainly illustrate the appearance corresponding to a network output/filter, without physically",
"title": ""
},
{
"docid": "e7d36dc01a3e20c3fb6d2b5245e46705",
"text": "A gender gap in mathematics achievement persists in some nations but not in others. In light of the underrepresentation of women in careers in science, technology, mathematics, and engineering, increasing research attention is being devoted to understanding gender differences in mathematics achievement, attitudes, and affect. The gender stratification hypothesis maintains that such gender differences are closely related to cultural variations in opportunity structures for girls and women. We meta-analyzed 2 major international data sets, the 2003 Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students 14-16 years of age, to estimate the magnitude of gender differences in mathematics achievement, attitudes, and affect across 69 nations throughout the world. Consistent with the gender similarities hypothesis, all of the mean effect sizes in mathematics achievement were very small (d < 0.15); however, national effect sizes showed considerable variability (ds = -0.42 to 0.40). Despite gender similarities in achievement, boys reported more positive math attitudes and affect (ds = 0.10 to 0.33); national effect sizes ranged from d = -0.61 to 0.89. In contrast to those of previous tests of the gender stratification hypothesis, our results point to specific domains of gender equity responsible for gender gaps in math. Gender equity in school enrollment, women's share of research jobs, and women's parliamentary representation were the most powerful predictors of cross-national variability in gender gaps in math. Results are situated within the context of existing research demonstrating apparently paradoxical effects of societal gender equity and highlight the significance of increasing girls' and women's agency cross-nationally.",
"title": ""
},
{
"docid": "7c23d90cd8e7e5223a13882833fa7c66",
"text": "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.",
"title": ""
},
{
"docid": "155938bc107c7e7cfca22758937f4d32",
"text": "A general theory of addictions is proposed, using the compulsive gambler as the prototype. Addiction is defined as a dependent state acquired over time to relieve stress. Two interrelated sets of factors predispose persons to addictions: an abnormal physiological resting state, and childhood experiences producing a deep sense of inadequacy. All addictions are hypothesized to follow a similar three-stage course. A matrix strategy is outlined to collect similar information from different kinds of addicts and normals. The ultimate objective is to identify high risk youth and prevent the development of addictions.",
"title": ""
},
{
"docid": "1258939378850f7d89f6fa860be27c39",
"text": "Sparse methods and the use of Winograd convolutions are two orthogonal approaches, each of which significantly accelerates convolution computations in modern CNNs. Sparse Winograd merges these two and thus has the potential to offer a combined performance benefit. Nevertheless, training convolution layers so that the resulting Winograd kernels are sparse has not hitherto been very successful. By introducing a Winograd layer in place of a standard convolution layer, we can learn and prune Winograd coefficients “natively” and obtain sparsity level beyond 90% with only 0.1% accuracy loss with AlexNet on ImageNet dataset. Furthermore, we present a sparse Winograd convolution algorithm and implementation that exploits the sparsity, achieving up to 31.7 effective TFLOP/s in 32-bit precision on a latest Intel Xeon CPU, which corresponds to a 5.4× speedup over a state-of-the-art dense convolution implementation.",
"title": ""
},
{
"docid": "4690fbbaa412557e3b1c516e9355c9f8",
"text": "JCO/APRIL 2004 M distalization in Class II cases has been accomplished with various functional appliances, including fixed interarch appliances, such as the Herbst* and Jasper Jumper,** and fixed intra-arch appliances. The Twin Force Bite Corrector (TFBC)*** is a new fixed intermaxillary appliance with a built-in constant force for Class II correction. This article presents two patients who were part of a long-term prospective study currently in progress at the University of Connecticut Department of Orthodontics. Each patient was treated with the TFBC to correct a skeletal Class II malocclusion due to a retrognathic mandible.",
"title": ""
}
] |
scidocsrr
|
5241cb61b04f1e378cecf841b481071a
|
Model-based ego-motion and vehicle parameter estimation using visual odometry
|
[
{
"docid": "772b3f74b6eecf82099b2e5b3709e507",
"text": "A common prerequisite for many vision-based driver assistance systems is the knowledge of the vehicle's own movement. In this paper we propose a novel approach for estimating the egomotion of the vehicle from a sequence of stereo images. Our method is directly based on the trifocal geometry between image triples, thus no time expensive recovery of the 3-dimensional scene structure is needed. The only assumption we make is a known camera geometry, where the calibration may also vary over time. We employ an Iterated Sigma Point Kalman Filter in combination with a RANSAC-based outlier rejection scheme which yields robust frame-to-frame motion estimation even in dynamic environments. A high-accuracy inertial navigation system is used to evaluate our results on challenging real-world video sequences. Experiments show that our approach is clearly superior compared to other filtering techniques in terms of both, accuracy and run-time.",
"title": ""
}
] |
[
{
"docid": "0cbf28bb902e857a7819417147bc8be4",
"text": "We describe an improved algorithm for signal reconstruction based on the Orthogonal Matching Pursuit (OMP) algorithm. In contrast with the traditional implementation of OMP in compressive sensing (CS) we introduce a preprocessing step that converts the signal into a distribution that can be more easily reconstructed. This preprocessing introduces negligible additional complexity, but enables a significant performance improvement in the reconstruction accuracy.",
"title": ""
},
{
"docid": "c0e34e98d8f6044ea5ae1914647dff93",
"text": "Edible plant-derived exosome-like nanoparticles (EPDELNs) are novel naturally occurring plant ultrastructures that are structurally similar to exosomes. Many EPDELNs have anti-inflammatory properties. MicroRNAs (miRNAs) play a critical role in mediating physiological and pathological processes in animals and plants. Although miRNAs can be selectively encapsulated in extracellular vesicles, little is known about their expression and function in EPDELNs. In this study, we isolated nanovesicles from 11 edible fruits and vegetables and subjected the corresponding EPDELN small RNA libraries to Illumina sequencing. We identified a total of 418 miRNAs-32 to 127 per species-from the 11 EPDELN samples. Target prediction and functional analyses revealed that highly expressed miRNAs were closely associated with the inflammatory response and cancer-related pathways. The 418 miRNAs could be divided into three classes according to their EPDELN distributions: 26 \"frequent\" miRNAs (FMs), 39 \"moderately present\" miRNAs (MPMs), and 353 \"rare\" miRNAs (RMs). FMs were represented by fewer miRNA species than RMs but had a significantly higher cumulative expression level. Taken together, our in vitro results indicate that miRNAs in EPDELNs have the potential to regulate human mRNA.",
"title": ""
},
{
"docid": "4a0421f9d82a06891ee2816f94cc550e",
"text": "Sexism toward women in online video game environments has become a pervasive and divisive issue in the gaming community. In this study, we sought to determine what personality traits, demographic variables, and levels of game play predicted sexist attitudes towards women who play video games. Male and female participants (N = 301) who were players of networked video games were invited to participate in an anonymous online survey. Social dominance orientation and conformity to some types of masculine norms (desire for power over women and the need for heterosexual self-presentation) predicted higher scores on the Video Game Sexism Scale (i.e., greater sexist beliefs about women and gaming). Implications for the social gaming environment and female gamers are discussed. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "948257544ca485b689d8663aaba63c5d",
"text": "This paper presents a new single-pass shadow mapping technique that achieves better quality than the approaches based on perspective warping, such as perspective, light-space, and trapezoidal shadow maps. The proposed technique is appropriate for real-time rendering of large virtual environments that include dynamic objects. By performing operations in camera space, this solution successfully handles the general and the dueling frustum cases and produces high-quality shadows even for extremely large scenes. This paper also presents a fast nonlinear projection technique for shadow map stretching that enables complete utilization of the shadow map by eliminating wastage. The application of stretching results in a significant reduction in unwanted perspective aliasing, commonly found in all shadow mapping techniques. Technique is compared with other shadow mapping techniques, and the benefits of the proposed method are presented. The proposed shadow mapping technique is simple and flexible enough to handle most of the special scenarios. An API for a generic shadow mapping solution is presented. This API simplifies the generation of fast and high-quality shadows.",
"title": ""
},
{
"docid": "476aa14f6b71af480e8ab4747849d7e3",
"text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.",
"title": ""
},
{
"docid": "00575265d0a6338e3eeb23d234107206",
"text": "We introduce the concept of mode-k generalized eigenvalues and eigenvectors of a tensor and prove some properties of such eigenpairs. In particular, we derive an upper bound for the number of equivalence classes of generalized tensor eigenpairs using mixed volume. Based on this bound and the structures of tensor eigenvalue problems, we propose two homotopy continuation type algorithms to solve tensor eigenproblems. With proper implementation, these methods can find all equivalence classes of isolated generalized eigenpairs and some generalized eigenpairs contained in the positive dimensional components (if there are any). We also introduce an algorithm that combines a heuristic approach and a Newton homotopy method to extract real generalized eigenpairs from the found complex generalized eigenpairs. A MATLAB software package TenEig has been developed to implement these methods. Numerical results are presented to illustrate the effectiveness and efficiency of TenEig for computing complex or real generalized eigenpairs.",
"title": ""
},
{
"docid": "947665b0950b0bb24cc246758474266f",
"text": "Social bookmark tools are rapidly emerging on the Web. In such systems users are setting up lightweight conceptual structures called folksonomies. The reason for their immediate success is the fact that no specific skills are needed for participating. At the moment, however, the information retrieval support is limited. We present a formal model and a new search algorithm for folksonomies, calledFolkRank, that exploits the structure of the folksonomy. The proposed algorithm is also applied to find communities within the folksonomy and is used to structure search results. All findings are demonstrated on a large scale dataset.",
"title": ""
},
{
"docid": "00e60176eca7d86261c614196849a946",
"text": "This paper proposes a novel low-profile dual polarized antenna for 2.4 GHz application. The proposed antenna consists of a circular patch with four curved T-stubs and a differential feeding network. Due to the parasitic loading of the curved T-stubs, the bandwidth has been improved. Good impedance matching and dual-polarization with low cross polarization have been achieved within 2.4–2.5 GHz, which is sufficient for WLAN application. The total thickness of the antenna is only 0.031A,o, which is low-profile when compared with its counterparts.",
"title": ""
},
{
"docid": "8f9c8119c55e2ac905528e21388b71ab",
"text": "Over the past 20 years Web browsers have changed considerably from being a simple text display to now supporting complex multimedia applications. The client can now enjoy chatting, playing games and Internet banking. All these applications have something in common, they can be run on multiple platforms and in some cases they will run offline. With the introduction of HTML5 this evolution will continue, with browsers offering greater levels of functionality. This paper outlines the background study and the importance of new technologies, such as HTML5's new browser based storage called IndexedDB. We will show how the technology of storing data on the client side has changed over the time and how the technologies for storing data on the client will be used in future when considering known security issues. Further, we propose a solution to IndexedDB's known security issues in form of a security model, which will extend the current model.",
"title": ""
},
{
"docid": "5d673f5297919e6307dc2861d10ddfe6",
"text": "Given the increased testing of school-aged children in the United States there is a need for a current and valid scale to measure the effects of test anxiety in children. The domain of children’s test anxiety was theorized to be comprised of three dimensions: thoughts, autonomic reactions, and off-task behaviors. Four stages are described in the evolution of the Children’s Test Anxiety Scale (CTAS): planning, construction, quantitative evaluation, and validation. A 50-item scale was administered to a development sample (N /230) of children in grades 3 /6 to obtain item analysis and reliability estimates which resulted in a refined 30-item scale. The reduced scale was administered to a validation sample (N /261) to obtain construct validity evidence. A three-factor structure fit the data reasonably well. Recommendations for future research with the scale are described.",
"title": ""
},
{
"docid": "347278d002cdea4fe830b5d1a6b7bc62",
"text": "The question of what function is served by the cortical column has occupied neuroscientists since its original description some 60years ago. The answer seems tractable in the somatosensory cortex when considering the inputs to the cortical column and the early stages of information processing, but quickly breaks down once the multiplicity of output streams and their sub-circuits are brought into consideration. This article describes the early stages of information processing in the barrel cortex, through generation of the center and surround receptive field components of neurons that subserve integration of multi whisker information, before going on to consider the diversity of properties exhibited by the layer 5 output neurons. The layer 5 regular spiking (RS) neurons differ from intrinsic bursting (IB) neurons in having different input connections, plasticity mechanisms and corticofugal projections. In particular, layer 5 RS cells employ noise reduction and homeostatic plasticity mechanism to preserve and even increase information transfer, while IB cells use more conventional Hebbian mechanisms to achieve a similar outcome. It is proposed that the rodent analog of the dorsal and ventral streams, a division reasonably well established in primate cortex, might provide a further level of organization for RS cell function and hence sub-circuit specialization.",
"title": ""
},
{
"docid": "7ca43cfa9af9e40a5b53c60a2b2fb67f",
"text": "In this paper we have proposed a control technique for the automatic generation control of multi generating power unit of the interconnected power system. This technique established the relationship between the economic load dispatch and load forecasting mechanism to the classical concepts of the load frequency control (LFC). The LFC system monitors to keep the power system frequency at nominal value, generator output according to the load demand and net interchange scheduled tie line power flows within prescribed limit among the different control area of the power system. Due to relatively fast area load demand fluctuations and accordingly slow response of instantaneous estimate of area control error (ACE), we need some load forecasting technique for better dynamic system response as well as improved & effective load frequency control to the power system. Load prediction technique has been accomplished using the klaman filter prediction recursive algorithms and a bank of hourly predicted load data is obtained and then the concepts of 5 minute look ahead forecasting technique is applied and finally total load is shared among the different generating units according to the calculation of economic load dispatch via participation factor’s. Results and Discussion section of this paper of simulated interconnected system’s graphs support this new technique wisely.",
"title": ""
},
{
"docid": "b81a28179d547f9f7b26a94da74166ea",
"text": "Contextual information plays an important role in solving vision problems such as image segmentation. However, extracting contextual information and using it in an effective way remains a difficult problem. To address this challenge, we propose a multi-resolution contextual framework, called cascaded hierarchical model (CHM), which learns contextual information in a hierarchical framework for image segmentation. At each level of the hierarchy, a classifier is trained based on down sampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at original resolution. We repeat this procedure by cascading the hierarchical framework to improve the segmentation accuracy. Multiple classifiers are learned in the CHM, therefore, a fast and accurate classifier is required to make the training tractable. The classifier also needs to be robust against over fitting due to the large number of parameters learned during training. We introduce a novel classification scheme, called logistic disjunctive normal networks (LDNN), which consists of one adaptive layer of feature detectors implemented by logistic sigmoid functions followed by two fixed layers of logical units that compute conjunctions and disjunctions, respectively. We demonstrate that LDNN outperforms state-of-the-art classifiers and can be used in the CHM to improve object segmentation performance.",
"title": ""
},
{
"docid": "824920b0b2a3deebf1a6692cdc72b019",
"text": "Neuropathology involving TAR DNA binding protein-43 (TDP-43) has been identified in a wide spectrum of neurodegenerative diseases collectively named as TDP-43 proteinopathy, including amyotrophic lateral sclerosis (ALS) and frontotemporal lobar dementia (FTLD). To test whether increased expression of wide-type human TDP-43 (hTDP-43) may cause neurotoxicity in vivo, we generated transgenic flies expressing hTDP-43 in various neuronal subpopulations. Expression in the fly eyes of the full-length hTDP-43, but not a mutant lacking its amino-terminal domain, led to progressive loss of ommatidia with remarkable signs of neurodegeneration. Expressing hTDP-43 in mushroom bodies (MBs) resulted in dramatic axon losses and neuronal death. Furthermore, hTDP-43 expression in motor neurons led to axon swelling, reduction in axon branches and bouton numbers, and motor neuron loss together with functional deficits. Thus, our transgenic flies expressing hTDP-43 recapitulate important neuropathological and clinical features of human TDP-43 proteinopathy, providing a powerful animal model for this group of devastating diseases. Our study indicates that simply increasing hTDP-43 expression is sufficient to cause neurotoxicity in vivo, suggesting that aberrant regulation of TDP-43 expression or decreased clearance of hTDP-43 may contribute to the pathogenesis of TDP-43 proteinopathy.",
"title": ""
},
{
"docid": "2578607ec2e7ae0d2e34936ec352ff6e",
"text": "AI Innovation in Industry is a new department for IEEE Intelligent Systems, and this paper examines some of the basic concerns and uses of AI for big data (AI has been used in several different ways to facilitate capturing and structuring big data, and it has been used to analyze big data for key insights).",
"title": ""
},
{
"docid": "76e62af2971de3d11d684f1dd7100475",
"text": "Recent advances in memory research suggest methods that can be applied to enhance educational practices. We outline four principles of memory improvement that have emerged from research: 1) process material actively, 2) practice retrieval, 3) use distributed practice, and 4) use metamemory. Our discussion of each principle describes current experimental research underlying the principle and explains how people can take advantage of the principle to improve their learning. The techniques that we suggest are designed to increase efficiency—that is, to allow a person to learn more, in the same unit of study time, than someone using less efficient memory strategies. A common thread uniting all four principles is that people learn best when they are active participants in their own learning.",
"title": ""
},
{
"docid": "36c73f8dd9940b2071ad55ae1dd83c27",
"text": "Current music recommender systems rely on techniques like collaborative filtering on user-provided information in order to generate relevant recommendations based upon users’ music collections or listening habits. In this paper, we examine whether better recommendations can be obtained by taking into account the music preferences of the user’s social contacts. We assume that music is naturally diffused through the social network of its listeners, and that we can propagate automatic recommendations in the same way through the network. In order to test this statement, we developed a music recommender application called Starnet on a Social Networking Service. It generated recommendations based either on positive ratings of friends (social recommendations), positive ratings of others in the network (nonsocial recommendations), or not based on ratings (random recommendations). The user responses to each type of recommendation indicate that social recommendations are better than non-social recommendations, which are in turn better than random recommendations. Likewise, the discovery of novel and relevant music is more likely via social recommendations than non-social. Social shuffle recommendations enable people to discover music through a serendipitous process powered by human relationships and tastes, exploiting the user’s social network to share cultural experiences.",
"title": ""
},
{
"docid": "e67b9b48507dcabae92debdb9df9cb08",
"text": "This paper presents an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events) and for the attitude of the writer toward their agents and objects. Work on opinion and sentiment tends to focus on explicit expressions of opinions. However, many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. We describe an annotation scheme and give the results of an inter-annotator agreement study. The annotated corpus is available online.",
"title": ""
},
{
"docid": "95a3cc864c5f63b87df9c216856dbdb8",
"text": "Web Content Management Systems (WCMS) play an increasingly important role in the Internet’s evolution. They are software platforms that facilitate the implementation of a web site or an e-commerce and are gaining popularity due to its flexibility and ease of use. In this work, we explain from a tutorial perspective how to manage WCMS and what can be achieved by using them. With this aim, we select the most popular open-source WCMS; namely, Joomla!, WordPress, and Drupal. Then, we implement three websites that are equal in terms of requirements, visual aspect, and functionality, one for each WCMS. Through a qualitative comparative analysis, we show the advantages and drawbacks of each solution, and the complexity associated. On the other hand, security concerns can arise if WCMS are not appropriately used. Due to the key position that they occupy in today’s Internet, we perform a basic security analysis of the three implement websites in the second part of this work. Specifically, we explain vulnerabilities, security enhancements, which errors should not be done, and which WCMS is initially safer.",
"title": ""
},
{
"docid": "c5ca7be10aec26359f27350494821cd7",
"text": "When moving through a tracked immersive virtual environment, it is sometimes useful to deviate from the normal one-to-one mapping of real to virtual motion. One option is the application of rotation gain, where the virtual rotation of a user around the vertical axis is amplified or reduced by a factor. Previous research in head-mounted display environments has shown that rotation gain can go unnoticed to a certain extent, which is exploited in redirected walking techniques. Furthermore, it can be used to increase the effective field of regard in projection systems. However, rotation gain has never been studied in CAVE systems, yet. In this work, we present an experiment with 87 participants examining the effects of rotation gain in a CAVE-like virtual environment. The results show no significant effects of rotation gain on simulator sickness, presence, or user performance in a cognitive task, but indicate that there is a negative influence on spatial knowledge especially for inexperienced users. In secondary results, we could confirm results of previous work and demonstrate that they also hold for CAVE environments, showing a negative correlation between simulator sickness and presence, cognitive performance and spatial knowledge, a positive correlation between presence and spatial knowledge, a mitigating influence of experience with 3D applications and previous CAVE exposure on simulator sickness, and a higher incidence of simulator sickness in women.",
"title": ""
}
] |
scidocsrr
|
75efc265cc6cf400edf09c3b305b0939
|
Supply Chain Object Discovery with Semantic-enhanced Blockchain
|
[
{
"docid": "ce871576011a3dfc99bc613e86fddc80",
"text": "Digital supply chain integration is becoming increasingly dynamic. Access to customer demand needs to be shared effectively, and product and service deliveries must be tracked to provide visibility in the supply chain. Business process integration is based on standards and reference architectures, which should offer end-to-end integration of product data. Companies operating in supply chains establish process and data integration through the specialized intermediate companies, whose role is to establish interoperability by mapping and integrating companyspecific data for various organizations and systems. This has typically caused high integration costs, and diffusion is slow. This paper investigates the requirements and functionalities of supply chain integration. Cloud integration can be expected to offer a cost-effective business model for interoperable digital supply chains. We explain how supply chain integration through the blockchain technology can achieve disruptive transformation in digital supply chains and networks.",
"title": ""
},
{
"docid": "4a811a48f913e1529f70937c771d01da",
"text": "An interesting research problem in our age of Big Data is that of determining provenance. Granular evaluation of provenance of physical goods--e.g. tracking ingredients of a pharmaceutical or demonstrating authenticity of luxury goods--has often not been possible with today's items that are produced and transported in complex, inter-organizational, often internationally-spanning supply chains. Recent adoption of Internet of Things and Blockchain technologies give promise at better supply chain provenance. We are particularly interested in the blockchain as many favoured use cases of blockchain are for provenance tracking. We are also interested in applying ontologies as there has been some work done on knowledge provenance, traceability, and food provenance using ontologies. In this paper, we make a case for why ontologies can contribute to blockchain design. To support this case, we analyze a traceability ontology and translate some of its representations to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform.",
"title": ""
}
] |
[
{
"docid": "ca906d18fca3f4ee83224b7728cbd379",
"text": "AIM\nTo investigate the effect of some psychosocial variables on nurses' job satisfaction.\n\n\nBACKGROUND\nNurses' job satisfaction is one of the most important factors in determining individuals' intention to stay or leave a health-care organisation. Literature shows a predictive role of work climate, professional commitment and work values on job satisfaction, but their conjoint effect has rarely been considered.\n\n\nMETHODS\nA cross-sectional questionnaire survey was adopted. Participants were hospital nurses and data were collected in 2011.\n\n\nRESULTS\nProfessional commitment and work climate positively predicted nurses' job satisfaction. The effect of intrinsic vs. extrinsic work value orientation on job satisfaction was completely mediated by professional commitment.\n\n\nCONCLUSIONS\nNurses' job satisfaction is influenced by both contextual and personal variables, in particular work climate and professional commitment. According to a more recent theoretical framework, work climate, work values and professional commitment interact with each other in determining nurses' job satisfaction.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nNursing management must be careful to keep the context of work tuned to individuals' attitude and vice versa. Improving the work climate can have a positive effect on job satisfaction, but its effect may be enhanced by favouring strong professional commitment and by promoting intrinsic more than extrinsic work values.",
"title": ""
},
{
"docid": "1c4fc20b2cfda58d9c3e22ecf97af506",
"text": "Cognitive function requires the coordination of neural activity across many scales, from neurons and circuits to large-scale networks. As such, it is unlikely that an explanatory framework focused upon any single scale will yield a comprehensive theory of brain activity and cognitive function. Modelling and analysis methods for neuroscience should aim to accommodate multiscale phenomena. Emerging research now suggests that multi-scale processes in the brain arise from so-called critical phenomena that occur very broadly in the natural world. Criticality arises in complex systems perched between order and disorder, and is marked by fluctuations that do not have any privileged spatial or temporal scale. We review the core nature of criticality, the evidence supporting its role in neural systems and its explanatory potential in brain health and disease.",
"title": ""
},
{
"docid": "9e5eead043459905bd9c4af981c5d587",
"text": "The chapter gives general information about graphene, namely its structure, properties and methods of preparation, and highlights the methods for the preparation of graphene-based polymer nanocomposites.",
"title": ""
},
{
"docid": "223252b8bf99671eedd622c99bc99aaf",
"text": "We present a novel dataset for natural language generation (NLG) in spoken dialogue systems which includes preceding context (user utterance) along with each system response to be generated, i.e., each pair of source meaning representation and target natural language paraphrase. We expect this to allow an NLG system to adapt (entrain) to the user’s way of speaking, thus creating more natural and potentially more successful responses. The dataset has been collected using crowdsourcing, with several stages to obtain natural user utterances and corresponding relevant, natural, and contextually bound system responses. The dataset is available for download under the Creative Commons 4.0 BY-SA license.",
"title": ""
},
{
"docid": "99982ebadc1913bfb0ee99270dedfae7",
"text": "As a consequence of optimal investment choices, a firm’s assets and growth options change in predictable ways. Using a dynamic model, we show that this imparts predictability to changes in a firm’s systematic risk, and its expected return. Simulations show that the model simultaneously reproduces: ~i! the time-series relation between the book-to-market ratio and asset returns; ~ii! the cross-sectional relation between book-to-market, market value, and return; ~iii! contrarian effects at short horizons; ~iv! momentum effects at longer horizons; and ~v! the inverse relation between interest rates and the market risk premium. RECENT EMPIRICAL RESEARCH IN FINANCE has focused on regularities in the cross section of expected returns that appear anomalous relative to traditional models. Stock returns are related to book-to-market, and market value.1 Past returns have also been shown to predict relative performance, through the documented success of contrarian and momentum strategies.2 Existing explanations for these results are that they are due to behavioral biases or risk premia for omitted state variables.3 These competing explanations are difficult to evaluate without models that explicitly tie the characteristics of interest to risks and risk premia. For example, with respect to book-to-market, Lakonishok et al. ~1994! argue: “The point here is simple: although the returns to the B0M strategy are impressive, B0M is not a ‘clean’ variable uniquely associated with eco* Berk is at the University of California, Berkeley, and NBER; Green is at Carnegie Mellon University; and Naik is with the University of British Columbia. We acknowledge the research assistance of Robert Mitchell and Dave Peterson. We have benefited from and are grateful for comments by seminar participants at Berkeley, British Columbia, Carnegie Mellon, Dartmouth, Duke, Michigan, Minnesota, North Carolina, Northwestern, Rochester, Utah, Washington at St. Louis, Washington, Wharton, Wisconsin, Yale, the 1996 meetings of the Western Finance Association, and the 1997 Utah Winter Finance Conference and the suggestions from an anonymous referee and from the editor, René Stulz. We also acknowledge financial support for this research from the Social Sciences and Humanities Research Council of Canada and the Bureau of Asset Management at University of British Columbia. The computer programs used in this paper are available on this journal’s web page: http:00www.afajof.org 1 See Fama and French ~1992! for summary evidence. 2 See Conrad and Kaul ~1998! for a recent summary of evidence on this subject. 3 See Lakonishok, Shleifer, and Vishny ~1994! for arguments in favor of behavioral biases and Fama and French ~1993! for an interpretation in terms of state variable risks. THE JOURNAL OF FINANCE • VOL. LIV, NO. 5 • OCTOBER 1999",
"title": ""
},
{
"docid": "1e5202850748b0f613807b0452eb89a2",
"text": "This paper introduces a hierarchical image merging scheme based on a multiresolution contrast decomposition (the ratio of low-pass pyramid). The composite images produced by this scheme preserve those details from the input images that are most relevant to visual perception. Some applications of the method are indicated.",
"title": ""
},
{
"docid": "dd1f7671025d79dead0a87fef6cec409",
"text": "PURPOSE This article summarizes prior work in the learning sciences and discusses one perspective—situative learning—in depth. Situativity refers to the central role of context, including the physical and social aspects of the environment, on learning. Furthermore, it emphasizes the socially and culturally negotiated nature of thought and action of persons in interaction. The aim of the article is to provide a foundation for future work on engineering learning and to suggest ways in which the learning sciences and engineering education research communities might work to their mutual benefit.",
"title": ""
},
{
"docid": "6e60d6b878c35051ab939a03bdd09574",
"text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.",
"title": ""
},
{
"docid": "bc5c008b5e443b83b2a66775c849fffb",
"text": "Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time almost continuously for several days and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, CGM sensors' lack of accuracy and reliability limited their usability in the clinical practice, calling upon the academic community for the development of suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify the possible future ones.",
"title": ""
},
{
"docid": "ddc3241c09a33bde1346623cf74e6866",
"text": "This paper presents a new technique for predicting wind speed and direction. This technique is based on using a linear time-series-based model relating the predicted interval to its corresponding one- and two-year old data. The accuracy of the model for predicting wind speeds and directions up to 24 h ahead have been investigated using two sets of data recorded during winter and summer season at Madison weather station. Generated results are compared with their corresponding values when using the persistent model. The presented results validate the effectiveness and accuracy of the proposed prediction model for wind speed and direction.",
"title": ""
},
{
"docid": "12b115e3b759fcb87956680d6e89d7aa",
"text": "The calibration system presented in this article enables to calculate optical parameters i.e. intrinsic and extrinsic of both thermal and visual cameras used for 3D reconstruction of thermal images. Visual cameras are in stereoscopic set and provide a pair of stereo images of the same object which are used to perform 3D reconstruction of the examined object [8]. The thermal camera provides information about temperature distribution on the surface of an examined object. In this case the term of 3D reconstruction refers to assigning to each pixel of one of the stereo images (called later reference image) a 3D coordinate in the respective camera reference frame [8]. The computed 3D coordinate is then re-projected on to the thermograph and thus to the known 3D position specific temperature is assigned. In order to remap the 3D coordinates on to thermal image it is necessary to know the position of thermal camera against visual camera and therefore a calibration of the set of the three cameras must be performed. The presented calibration system includes special calibration board (fig.1) whose characteristic points of well known position are recognizable both by thermal and visual cameras. In order to detect calibration board characteristic points’ image coordinates, especially in thermal camera, a new procedure was designed.",
"title": ""
},
{
"docid": "e83873daee4f8dae40c210987d9158e8",
"text": "Domain ontologies are important information sources for knowledge-based systems. Yet, building domain ontologies from scratch is known to be a very labor-intensive process. In this study, we present our semi-automatic approach to building an ontology for the domain of wind energy which is an important type of renewable energy with a growing share in electricity generation all over the world. Related Wikipedia articles are first processed in an automated manner to determine the basic concepts of the domain together with their properties and next the concepts, properties, and relationships are organized to arrive at the ultimate ontology. We also provide pointers to other engineering ontologies which could be utilized together with the proposed wind energy ontology in addition to its prospective application areas. The current study is significant as, to the best of our knowledge, it proposes the first considerably wide-coverage ontology for the wind energy domain and the ontology is built through a semi-automatic process which makes use of the related Web resources, thereby reducing the overall cost of the ontology building process.",
"title": ""
},
{
"docid": "d6ffefe59311865aab98dede1cc2c602",
"text": "We develop a 3D object detection algorithm that uses latent support surfaces to capture contextual relationships in indoor scenes. Existing 3D representations for RGB-D images capture the local shape and appearance of object categories, but have limited power to represent objects with different visual styles. The detection of small objects is also challenging because the search space is very large in 3D scenes. However, we observe that much of the shape variation within 3D object categories can be explained by the location of a latent support surface, and smaller objects are often supported by larger objects. Therefore, we explicitly use latent support surfaces to better represent the 3D appearance of large objects, and provide contextual cues to improve the detection of small objects. We evaluate our model with 19 object categories from the SUN RGB-D database, and demonstrate state-of-the-art performance.",
"title": ""
},
{
"docid": "efd87c8a9570944a0cd2bff16d75ffc5",
"text": "Deep neural networks show very good performance in phoneme and speech recognition applications when compared to previously used GMM (Gaussian Mixture Model)-based ones. However, efficient implementation of deep neural networks is difficult because the network size needs to be very large when high recognition accuracy is demanded. In this work, we develop a digital VLSI for phoneme recognition using deep neural networks and assess the design in terms of throughput, chip size, and power consumption. The developed VLSI employs a fixed-point optimization method that only uses +Δ, 0, and -Δ for representing each of the weight. The design employs 1,024 simple processing units in each layer, which however can be scaled easily according to the needed throughput, and the throughput of the architecture varies from 62.5 to 1,000 times of the real-time processing speed.",
"title": ""
},
{
"docid": "1b34ce669b77895322ee677605b9880a",
"text": "This paper presents a series of new augmented reality user interaction techniques to support the capture and creation of 3D geometry of large outdoor structures, part of an overall concept we have named construction at a distance. We use information about the user's physical presence, along with hand and head gestures, to allow the user to capture and create the geometry of objects that are orders of magnitude larger than themselves, with no prior information or assistance. Using augmented reality and these new techniques, users can enter geometry and verify its accuracy in real time. This paper includes a number of examples showing objects that have been modelled in the physical world, demonstrating the usefulness of the techniques.",
"title": ""
},
{
"docid": "66b088871549d5ec924dbe500522d6f8",
"text": "Being able to effectively measure similarity between patents in a complex patent citation network is a crucial task in understanding patent relatedness. In the past, techniques such as text mining and keyword analysis have been applied for patent similarity calculation. The drawback of these approaches is that they depend on word choice and writing style of authors. Most existing graph-based approaches use common neighbor-based measures, which only consider direct adjacency. In this work we propose new similarity measures for patents in a patent citation network using only the patent citation network structure. The proposed similarity measures leverage direct and indirect co-citation links between patents. A challenge is when some patents receive a large number of citations, thus are considered more similar to many other patents in the patent citation network. To overcome this challenge, we propose a normalization technique to account for the case where some pairs are ranked very similar to each other because they both are cited by many other patents. We validate our proposed similarity measures using US class codes for US patents and the well-known Jaccard similarity index. Experiments show that the proposed methods perform well when compared to the Jaccard similarity index.",
"title": ""
},
{
"docid": "abbb210122d470215c5a1d0420d9db06",
"text": "Ensemble clustering, also known as consensus clustering, is emerging as a promising solution for multi-source and/or heterogeneous data clustering. The co-association matrix based method, which redefines the ensemble clustering problem as a classical graph partition problem, is a landmark method in this area. Nevertheless, the relatively high time and space complexity preclude it from real-life large-scale data clustering. We therefore propose SEC, an efficient Spectral Ensemble Clustering method based on co-association matrix. We show that SEC has theoretical equivalence to weighted K-means clustering and results in vastly reduced algorithmic complexity. We then derive the latent consensus function of SEC, which to our best knowledge is among the first to bridge co-association matrix based method to the methods with explicit object functions. The robustness and generalizability of SEC are then investigated to prove the superiority of SEC in theory. We finally extend SEC to meet the challenge rising from incomplete basic partitions, based on which a scheme for big data clustering can be formed. Experimental results on various real-world data sets demonstrate that SEC is an effective and efficient competitor to some state-of-the-art ensemble clustering methods and is also suitable for big data clustering.",
"title": ""
},
{
"docid": "03bf4029ef68b58162abc15d0a0d702c",
"text": "In searching for a general \"zero-current-Switching\" technique for DC-DC converters, the concept of resonant switches is developed. As a combination of switching device and LC network, the resonant switch offers advantages of quasi-sinusoidal current waveforms, zero switching stresses, zero switching losses, self-commutation, and reduced EMI. Furthermore, application of the resonant switch concept to conventional converters leads to the discovery of a host of new converter circuits.",
"title": ""
},
{
"docid": "314722d112f5520f601ed6917f519466",
"text": "In this work we propose an online multi person pose tracking approach which works on two consecutive frames It−1 and It . The general formulation of our temporal network allows to rely on any multi person pose estimation approach as spatial network. From the spatial network we extract image features and pose features for both frames. These features serve as input for our temporal model that predicts Temporal Flow Fields (TFF). These TFF are vector fields which indicate the direction in which each body joint is going to move from frame It−1 to frame It . This novel representation allows to formulate a similarity measure of detected joints. These similarities are used as binary potentials in a bipartite graph optimization problem in order to perform tracking of multiple poses. We show that these TFF can be learned by a relative small CNN network whilst achieving state-of-the-art multi person pose tracking results.",
"title": ""
},
{
"docid": "6e7a43826490fe80692da334ef38f5a4",
"text": "We present a modular system for detection and correction of errors made by nonnative (English as a Second Language = ESL) writers. We focus on two error types: the incorrect use of determiners and the choice of prepositions. We use a decisiontree approach inspired by contextual spelling systems for detection and correction suggestions, and a large language model trained on the Gigaword corpus to provide additional information to filter out spurious suggestions. We show how this system performs on a corpus of non-native English text and discuss strategies for future enhancements.",
"title": ""
}
] |
scidocsrr
|
72a0fe013c31e64df0df7cad67c2941a
|
Understanding user's query intent with wikipedia
|
[
{
"docid": "422564b9cd5b6766213baaca1ff110ef",
"text": "We take the category system in Wikipedia as a conceptual network. We label the semantic relations between categories using methods based on connectivity in the network and lexicosyntactic matching. As a result we are able to derive a large scale taxonomy containing a large amount of subsumption, i.e. isa, relations. We evaluate the quality of the created resource by comparing it with ResearchCyc, one of the largest manually annotated ontologies, as well as computing semantic similarity between words in benchmarking datasets.",
"title": ""
}
] |
[
{
"docid": "4aa7f553c8a36978c8c036b2b729ee0b",
"text": "The purpose of this paper is to propose an unsupervised approach for measuring the similarity of texts that can compete with supervised approaches. Finding the inherent properties of similarity between texts using a corpus in the form of a word n-gram data set is competitive with other text similarity techniques in terms of performance and practicality. Experimental results on a standard data set show that the proposed unsupervised method outperforms the state-of-the-art supervised method and the improvement achieved is statistically significant at 0.05 level. The approach is language-independent; it can be applied to other languages as long as n-grams are available.",
"title": ""
},
{
"docid": "23cc8b190e9de5177cccf2f918c1ad45",
"text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.",
"title": ""
},
{
"docid": "f35cc20c079df040de008ce1ca7ece83",
"text": "The lack of training data is a common challenge in many machine learning problems, which is often tackled by semi-supervised learning methods or transfer learning methods. The former requires unlabeled images from the same distribution as the labeled ones and the latter leverages labeled images from related homogenous tasks. However, these restrictions often cannot be satisfied. To address this, we propose a novel robust and discriminative self-taught learning approach to utilize any unlabeled data without the above restrictions. Our new approach employs a robust loss function to learn the dictionary, and enforces the structured sparse regularization to automatically select the optimal dictionary basis vectors and incorporate the supervision information contained in the labeled data. We derive an efficient iterative algorithm to solve the optimization problem and rigorously prove its convergence. Promising results in extensive experiments have validated the proposed approach.",
"title": ""
},
{
"docid": "26fb308cdcb530751ec04654f5527ebd",
"text": "Deep learning is a form of machine learning for nonlinear high dimensional pattern matching and prediction. By taking a Bayesian probabilistic perspective, we provide a number of insights into more efficient algorithms for optimisation and hyper-parameter tuning. Traditional high-dimensional data reduction techniques, such as principal component analysis (PCA), partial least squares (PLS), reduced rank regression (RRR), projection pursuit regression (PPR) are all shown to be shallow learners. Their deep learning counterparts exploit multiple deep layers of data reduction which provide predictive performance gains. Stochastic gradient descent (SGD) training optimisation and Dropout (DO) regularization provide estimation and variable selection. Bayesian regularization is central to finding weights and connections in networks to optimize the predictive bias-variance trade-off. To illustrate our methodology, we provide an analysis of international bookings on Airbnb. Finally, we conclude with directions for future research.",
"title": ""
},
{
"docid": "14111f3c7d802e346a823c85b64478d4",
"text": "This study examines the effect of an increase in product quality information to consumers on rms’ choices of product quality. In 1998 Los Angeles County introduced hygiene quality grade cards to be displayed in restaurant windows. We show that the grade cards cause (i) restaurant health inspection scores to increase, (ii) consumer demand to become sensitive to changes in restaurants’ hygiene quality, and (iii) the number of foodborne illness hospitalizations to decrease. We also provide evidence that this improvement in health outcomes is not fully explained by consumers substituting from poor hygiene restaurants to good hygiene restaurants. These results imply that the grade cards cause restaurants to make hygiene quality improvements.",
"title": ""
},
{
"docid": "c2a7e20e9e0ce2e4c4bad58461c85c7d",
"text": "This paper develops an estimation technique for analyzing the impact of technological change on the dynamics of consumer demand in a differentiated durable products industry. The paper presents a dynamic model of consumer demand for differentiated durable products that explicitly accounts for consumers’ expectations of future product quality and consumers’ outflow from the market, arising endogenously from their purchase decisions. The timing of consumers’ purchases is formalized as an optimal stopping problem. A solution to that problem defines the hazard rate of product adoptions, while the nested discrete choice model determines the alternativespecific purchase probabilities. Integrating individual decisions over the population distribution generates rich dynamics of aggregate and product level sales. The empirical part of the paper takes the model to data on the U.S. computer printer market. The estimates support the hypothesis of consumers’ forward-looking behavior, allowing for better demand forecasts and improved measures of welfare gains from introducing new products. ∗I would like to thank Patrick Bayer, John Rust, Christopher Timmins, and especially my advisors, Steven Berry, Ariel Pakes and Martin Pesendorfer for valuable advice and general encouragement. I have greatly benefitted from discussions with Eugene Choo, Philip Haile, Jerry Hausman, Günter Hitsch, Nickolay Moshkin, Katja Seim and Nadia Soboleva. Seminar participants at Harvard and Yale provided many helpful suggestions. I am indebted to Mark Bates of PC Data, Inc. for providing me with the data without which this research would not be possible. I am grateful to Susan Olmsted for her help with administrative issues. All errors are my own. †Contact information: e-mail oleg.melnikov@yale.edu, homepage http://www.econ.yale.edu/ ̃melnikov, phone (203) 432-3563, fax (203) 432-5779.",
"title": ""
},
{
"docid": "2dee5823e4faf7f1cc99460d87439012",
"text": "This letter presents a novel metamaterial-inspired planar monopole antenna. The proposed structure consists of a monopole loaded with a composite right/left-handed (CRLH) unit cell. It operates at two narrow bands, 0.925 and 1.227 GHz, and one wide band, 1.56-2.7 GHz, i.e., it covers several communication standards. The CRLH-loaded monopole occupies the same Chu's sphere as a conventional monopole that operates at 2.4 GHz. The radiation patterns at the different operating frequencies are still quasi-omnidirectional. Measurements and EM simulations are in a good agreement with the theoretical predictions.",
"title": ""
},
{
"docid": "e72872277a33dcf6d5c1f7e31f68a632",
"text": "Tilt rotor unmanned aerial vehicle (TRUAV) with ability of hovering and high-speed cruise has attached much attention, but its transition control is still a difficult point because of varying dynamics. This paper proposes a multi-model adaptive control (MMAC) method for a quad-TRUAV, and the stability in the transition procedure could be ensured by considering corresponding dynamics. For safe transition, tilt corridor is considered firstly, and actual flight status should locate within it. Then, the MMAC controller is constructed according to mode probabilities, which are calculated by solving a quadratic programming problem based on a set of input- output plant models. Compared with typical gain scheduling control, this method could ensure transition stability more effectively.",
"title": ""
},
{
"docid": "67beb9dbd03ae20d4e45a928fdb61f47",
"text": "representation of the game. It was programmed in LI SP. Further use of abstraction was also studied by Friedenbach (1980). The combination of s earch, heuristics, and expert systems led to the best programs in the eighties. At the end of the eighties a new type of Go programs emerged. Th ese programs made an intensive use of pattern recognition. This approach was dis cussed in detail by Boon (1990). In the following years, different AI techniques, such as Rei nforcement Learning (Schraudolph, Dayan, and Sejnowski, 1993), Monte Carlo (Br ügmann, 1993), and Neural Networks (Richards, Moriarty, and Miikkulainen, 1998), were tested in Go. However, programs applying these techniques were not able to surpass the level of the best programs. The combination of search, heuristics, expert systems, and pattern r ecognition remained the winning methodology. Brügmann (1993) proposed to use Monte-Carlo evaluations as an lter ative technique for Computer Go. His idea did not got many followers in the 199 0s. In the following decade, Bouzy and Helmstetter (2003) and Bouzy (2006) combined Mont e-Carlo evaluations and search in Indigo. The program won three bronze medals at the O lympiads of 2004, 2005, and 2006. Their pioneering research inspired the developme nt of Monte-Carlo Tree Search (MCTS) (Coulom, 2006; Kocsis and Szepesv ári, 2006; Chaslot et al., 2006a). Since 2007, MCTS programs are dominating the Computer Go field. MCTS will be explained in the next chapter. 2.6 Go Programs MANGO and MOGO In this subsection, we briefly describe the Go programs M ANGO and MOGO that we use for the experiments in the thesis. Their performance in vari ous tournaments is discussed as well.4",
"title": ""
},
{
"docid": "098a094546bf7c9918e47077dfbce2da",
"text": "From the Department of Pediatric Endocrinology and Diabetology, INSERM Unité 690, and Centre de Référence des Maladies Endocriniennes de la Croissance, Robert Debré Hospital and University of Paris 7 — Denis Diderot, Paris (J.-C.C., J.L.). Address reprint requests to Dr. Carel at Endocrinologie Diabétologie Pédiatrique and INSERM U690, Hôpital Robert Debré, 48, Blvd. Sérurier, 75935 Paris CEDEX 19, France, or at jean-claude. carel@inserm.fr.",
"title": ""
},
{
"docid": "6c1c3bc94314ce1efae62ac3ec605d4a",
"text": "Solar energy is an abundant renewable energy source (RES) which is available without any price from the Sun to the earth. It can be a good alternative of energy source in place of non-renewable sources (NRES) of energy like as fossil fuels and petroleum articles. Sun light can be utilized through solar cells which fulfills the need of energy of the utilizer instead of energy generation by NRES. The development of solar cells has crossed by a number of modifications from one age to another. The cost and efficiency of solar cells are the obstacles in the advancement. In order to select suitable solar photovoltaic (PV) cells for a particular area, operators are needed to sense the basic mechanisms and topologies of diverse solar PV with maximum power point tracking (MPPT) methodologies that are checked to a great degree. In this article, authors reviewed and analyzed a successive growth in the solar PV cell research from one decade to other, and explained about their coming fashions and behaviors. This article also attempts to emphasize on many experiments and technologies to contribute the perks of solar energy.",
"title": ""
},
{
"docid": "3194a0dd979b668bb25afb10260c30d2",
"text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times6$ </tex-math></inline-formula> mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times5.8$ </tex-math></inline-formula> mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.",
"title": ""
},
{
"docid": "887a80309231e055fd46b9341a4ab83b",
"text": "This paper presents radar cross section (RCS) measurement for pedestrian detection in 79GHz-band radar system. For a human standing at 6.2 meters, the RCS distribution's median value is -11.1 dBsm and the 90 % of RCS fluctuation is between -20.7 dBsm and -4.8 dBsm. Other measurement results (human body poses beside front) are shown. And we calculated the coefficient values of the Weibull distribution fitting to the human body RCS distribution.",
"title": ""
},
{
"docid": "256e0fbf94b373f137253a27eac75860",
"text": "Ambient intelligence opens up a world of unprecedented experiences. The interaction of people with electronic devices will change as context awareness, natural interfaces, and ubiquitous availability of information come to fruition. Ambient intelligence is going to impose major challenges on multimedia research. Distributed multimedia applications and their processing on embedded static and mobile platforms will play a major role in the development of ambient-intelligent environments. The requirements that ambient-intelligent multimedia applications impose on the mechanisms users apply to interact with media call for paradigms substantially different from contemporary interaction concepts. The complexity of media will continually increase in terms of volume and functionality, thus introducing a need for simplicity and ease of use. Therefore, the massively distributed, integrated use of media will require replacing well-known interaction vehicles, such as remote control and menu-driven search and control, with novel more intuitive, and natural concepts. This article reviews the concept of ambient intelligence and elaborates on its relation with multimedia. (The \"Advances in media processing\" sidebar gives insight into the developments that have set the stage for this new step forward.) The emphasis is on qualitative aspects, highlighting those elements that play a role in realizing ambient intelligence. Multimedia processing techniques and applications are key to realizing ambient intelligence, and they introduce major challenges to the design and implementation of both media-processing platforms and multimedia applications. Technology will not be the limiting factor in realizing ambient intelligence. The ingredients to let the computers disappear are already available, but the true success of the paradigm will depend on the ability to develop concepts that allow natural interaction with digital environments. We must build these digital environments with the invisible technology of the forthcoming century. The role of intelligent algorithms in this respect is apparent because it is the key enabling factor for realizing natural interaction.",
"title": ""
},
{
"docid": "34a85e0b0ad75794eed8fba081af98ab",
"text": "Melanomas are the most aggressive form of skin cancer. Due to observer bias, computerized analysis of dermoscopy images has become an important research area. One of the most important steps in dermoscopy image analysis is the automated detection of lesion areas in the dermoscopy images. In this paper, we present a deep learning method for automatic skin lesion segmentation. We use a subset of the International Skin Imaging Collaboration (ISIC) Archive dataset, which contains dermoscopic images paired with their corresponding lesion binary masks, provided by IEEE International Symposium on Biomedical Imaging (ISBI) 2017 challenge for Skin Lesion Analysis Towards Melanoma Detection, and compare against the benchmark results submitted by other participants. The experimental results show that our proposed method can outperform the submissions in terms of segmentation accuracy.",
"title": ""
},
{
"docid": "241fd5f03bbe92c9ce9006333fac4f3e",
"text": "This article presents a comprehensive survey of research concerning interactions between associative learning and attention in humans. Four main findings are described. First, attention is biased toward stimuli that predict their consequences reliably (learned predictiveness). This finding is consistent with the approach taken by Mackintosh (1975) in his attentional model of associative learning in nonhuman animals. Second, the strength of this attentional bias is modulated by the value of the outcome (learned value). That is, predictors of high-value outcomes receive especially high levels of attention. Third, the related but opposing idea that uncertainty may result in increased attention to stimuli (Pearce & Hall, 1980), receives less support. This suggests that hybrid models of associative learning, incorporating the mechanisms of both the Mackintosh and Pearce-Hall theories, may not be required to explain data from human participants. Rather, a simpler model, in which attention to stimuli is determined by how strongly they are associated with significant outcomes, goes a long way to account for the data on human attentional learning. The last main finding, and an exciting area for future research and theorizing, is that learned predictiveness and learned value modulate both deliberate attentional focus, and more automatic attentional capture. The automatic influence of learning on attention does not appear to fit the traditional view of attention as being either goal-directed or stimulus-driven. Rather, it suggests a new kind of “derived” attention.",
"title": ""
},
{
"docid": "dc297b1e32fdc4597d1ec9f1d56aa743",
"text": "Although joint inference is an effective approach to avoid cascading of errors when inferring multiple natural language tasks, its application to information extraction has been limited to modeling only two tasks at a time, leading to modest improvements. In this paper, we focus on the three crucial tasks of automated extraction pipelines: entity tagging, relation extraction, and coreference. We propose a single, joint graphical model that represents the various dependencies between the tasks, allowing flow of uncertainty across task boundaries. Since the resulting model has a high tree-width and contains a large number of variables, we present a novel extension to belief propagation that sparsifies the domains of variables during inference. Experimental results show that our joint model consistently improves results on all three tasks as we represent more dependencies. In particular, our joint model obtains 12% error reduction on tagging over the isolated models.",
"title": ""
},
{
"docid": "b48d9053c70f51aa766a3f4706912654",
"text": "Social tags are free text labels that are applied to items such as artists, albums and songs. Captured in these tags is a great deal of information that is highly relevant to Music Information Retrieval (MIR) researchers including information about genre, mood, instrumentation, and quality. Unfortunately there is also a great deal of irrelevant information and noise in the tags. Imperfect as they may be, social tags are a source of human-generated contextual knowledge about music that may become an essential part of the solution to many MIR problems. In this article, we describe the state of the art in commercial and research social tagging systems for music. We describe how tags are collected and used in current systems. We explore some of the issues that are encountered when using tags, and we suggest possible areas of exploration for future research.",
"title": ""
},
{
"docid": "9f84ec96cdb45bcf333db9f9459a3d86",
"text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 × 2 and 2 × 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.",
"title": ""
}
] |
scidocsrr
|
ea207bd2af06dbc266a9f1b47db75449
|
Learning and Enjoyment in Serious Gaming - Contradiction or Complement?
|
[
{
"docid": "a78970aa2ef32f9a898c5eb612433166",
"text": "The concept of intrinsic motivation has been considered to lie at the heart of the user engagement created by digital games. Yet despite this, educational software has traditionally attem pted to harness games as extrinsic motivation by using them as a sugar-coating for learning content. This paper tests the concept of intrinsic integration as a way of creating a more productive relationship between educational gam es and their learning content. Two studies assessed this approach by designing and evaluating an ed ucational game for teaching mathematics to seven to eleven year olds called Zombie Division. Study 1 examined learning gains of 58 children who played either the intrinsic, extrinsic or control variants of Zombie Divisio n for two hours, supported by their classroom teacher. Study 2 compared timeon-task for intrinsic and extrinsic variants of the game when 16 children had free choice between them. The results of these studies showed that childr en learned more from the intrinsic version of the game under fixed time limits and spent seven times lon ger playing it in free time situations. Together they offer evidence for the genuine value of an intrinsic approach f or creating effective educational games. The theoretical and commercial implications of these findings are discussed. 1) School of Psychology and Learning Sciences Research Institute University of Nottingham, University Park, Nottingham, NG7 2RD, UK Email: Shaaron.Ainsworth@nottingham.ac.uk Phone +44 115 9515314 2) Department of Computing Sheffield Hallam University, Furnival Building, Sheffield, S1 1WB Email: J.Habgood@shu.ac.uk Phone +44 114 225 6709 Evaluating Intrinsic Integration in Educational Games 2 Evaluating Intrinsic Integration in Educational Games 3 The use of computer games and simulations in education dates back to the 1950‟s (Cullingford, Mawdesley, & Davies, 1979) when computing was still in its infancy and the commercial videogame industry had yet to emerge . Nonetheless it was the raw engagement power of 80‟s videogames like PacMan that inspired a new generation of educationalists to consider the learning potential of this exciting new medium (Bowman, 1982). These early protagonists were quick to identify the motivational power of videogames as their key asset (e.g. Lepper & Malone, 1987; Loftus & Loftus, 1983) and were able to apply a range of existing motivational (e.g. Csikszentmihalyi, 1975; Deci, 1975; Lepper & Greene, 1975) and behavioral (Ferster & Skinner, 1957) theories to their rationales. However, despite this romising start, the resulting gener ations of „edutainment‟ products have been widely recognized as failing to effectively harness the engagement power of digital games (e.g. Hogle, 1996; Kerawalla & Crook, 2005; Papert, 1998; Trushell, Burrell, & Maitland, 2001). So while the mainstream games industry boomed throughout the 1990s, the educational sector was left behind in terms of technology, revenues and commercial interest. However, the turn of the millennium has seen a rejuvenation of interest gamebased learning with a number of texts extolling the potential of games (e.g. Aldrich, 2004; Gee, 2003; Shaffer, Squire, Halverson & Gee, 2006), paralleled by commercial success of „self-improvement‟ titles such as „Brain Training‟ and „Big-Brain Academy‟ (Nintendo). This paper offers empirical evidence for the value of a design approach which may help to explain the failure of edutainment to fulfill its educational promise. 
This approac h hinges upon the ability of learning games to effectively harness the intrinsic motivation (Deci, 1975) of a game for educational goals by creating an i trinsic integration (Kafai, 1996) between a game and its learning content. Furthermore, we suggest that such an integration is created through an intrinsic link between a game‟s core mechanics (Lundgren & Björk, 2003) and its learning content. Zombie Division is a computer game specifically created to empirically examine the concept of intrinsic integration. The game integrates mathematics into the core-mechanic of a 3D advent ure through a combat system in which opponents are mathematically divided in order to defeat them. Three variat ions of this game were created for evaluation: an intrinsic version which integrated mathematics into combat, an extrinsic version which had non-mathematical combat and placed identical mathematical multiple choice question s between levels instead, and a control version which contained no mathematics at all. The first study compared learning gains between all three versions as a measure of the relative e ducational effectiveness of the intrinsic approach. The second study compared time-on-task between t he intri sic and extrinsic versions of the game as a measure of the relative motivational appeal of the intrinsi c approach. Evaluating Intrinsic Integration in Educational Games 4 Defining Intrinsic Integration The concept of intrinsic integration in educational games is rooted in the more familiar concept of „intrinsic motivation‟. It is commonly surmised that a person is intrinsically motivated to perform an activity when he receives no apparent rewards except the activity itself (Deci, 1975). Alt hough modern videogames can provide external rewards (such as those produced by farming virtual game resources: see Steinkuehler, 2006) they are largely autonomous pursuits which create their own internal motivations for continuing the activity. Game designers can create these internal motivations through the in clusion of aspects such as challenge, control, fantasy and curiosity, while inter-personal motivations can be added through factors such as competition, co-operation and recognition (Malone & Lepper, 1987). The inclusion of challenge in this taxonomy is derived from the work of Csikszentmihalyi (1988) into flow theory and optimal experience. This proposes that clear goals, achievable challenges and accurate feedback are all required to achieve a state of flow in an activity which requires “a balance between the challenges perceived in a given situation and the s kills a person brings to it ”, suggesting that “no activity can sustain it for long unless both the challenges and the skills become more complex ” (p.30). There are clear parallels between this and the way that game designers carefully structure the diffi culty curves of their games to provide the optimal level of challenge as a player‟s skills develop (Habgood & Overmars, 2006, p.158). It is perhaps unsurprising then that feelings of total concentration, distorted sense of time, and extension of self are as common experiences to game players as they are o Csikszentmihalyi‟s (1988) rock climbers and surgeons. There is also emerging evidence that, when measured correctly, flow is predictive of learning (e.g. 
Engeser & Rheinberg, 2008) The gaming literature provides an overwhelming number of different approaches to defining the essence of a game (Caillois, 1961; Crawford, 1982; Huizinga, 1950; Juul, 2005; Koster, 2005; Salen & Zimmerman, 2004). Yet these differences only serve to highlight Wittgenstein‟s (1953) observation on games that “you will not see something that is common to all, b ut similarities, relationships, and a whole series of them at that” (aphorism 66). Therefore we in the interests of practicality we use a definition of a game which seeks to highlight the main differences between games and other forms of entertainment, rather than all the similarities between things we might refer to as a g me. This pragmatic definition defines a game as simply an “interactive challenge”, suggesting that games contain an interactive element that distinguishes them from films, and prescribed challenges that distinguish them from t oys (Habgood & Overmars, 2006, p87 ). We therefore see games as something which encompass es wide spectrum of digital and non-digital applications – including many simulations – and we hope this research could potentially have relevance to all of them. It should also be noted that our definition deliberately avoids assigning a motivational aspect to the definition of a game as the experience of intrinsic mot ivation is Evaluating Intrinsic Integration in Educational Games 5 subjective (does a game stop being a game if it stops being fun?). Nonetheless, the ability of games r simulations to create intrinsic motivation is clearly central to this argument and uninspi ri g games are not a good model for creating motivating learning games either. Although digital games may be capable of providing activities which are intrinsically motivating in their own right, it is critical to consider the effect of adding learning content to a intrinsically motivating game. Game designers have come to recognize the role of learning in good game design (e.g. Crawford, 1982; Gee, 2003; Habgood & Overmars, 2006; Koster, 2005). This is not about commercial games containing educational content, but how the enjoyment of games derives from the process of learning itself: i.e. “the fundamental motivation for all game-playing is to learn” (Crawford, 1982, p.17). Unfortunately edutainment products have traditionally taken a “chocolate-covered broccoli” (Bruckman, 1999) approach when combining learning content with gameplay. This is where the gaming element of the product is used as a separate reward or sugar-coating for completing the educational content. It was Malone‟s (1980) and Malone and Lepper‟s (1987) seminal work on videogames which first considered the problem of creating a more integrated approach to designing educational games. This originally proposed the concept of an intrinsic fantasy as providing “an integral and continuing relationship between the fantasy context and the in structional content being presented ” (1987, p.240). This was contrasted with an extrinsic fantasy where “the fantasy depends on the skill being learned, but not vice versa” and it was suggested that the learning content of extrinsic fantasies could ",
"title": ""
}
] |
[
{
"docid": "0a73371e0912425a3a7ca9ba70a22309",
"text": "Deep convolutional neural networks take GPU-days of computation to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3 3 filters. We introduce a new class of fast algorithms for convolutional neural networks using Winograd's minimal filtering algorithms. The algorithms compute minimal complexity convolution over small tiles, which makes them fast with small filters and small batch sizes. We benchmark a GPU implementation of our algorithm with the VGG network and show state of the art throughput at batch sizes from 1 to 64.",
"title": ""
},
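As a reading aid for the Winograd passage above, here is a minimal sketch of the 1-D F(2,3) minimal filtering rule the paper builds on. This is an illustrative reconstruction, not code from the paper; the 2-D tile-based variant used for 3 × 3 convolution layers nests this transform along both axes.

```python
# Winograd F(2,3): two outputs of a 3-tap correlation from 4 inputs,
# using 4 multiplications instead of the 6 a direct computation needs.
import numpy as np

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 outputs of the valid correlation."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2   # the filter-side sums can be precomputed per filter
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

# Sanity check against a direct sliding-window correlation.
d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0:3] @ g, d[1:4] @ g])
assert np.allclose(winograd_f23(d, g), direct)
```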
{
"docid": "58c0456c8ae9045898aca67de9954659",
"text": "Channel sensing and spectrum allocation has long been of interest as a prospective addition to cognitive radios for wireless communications systems occupying license-free bands. Conventional approaches to cyclic spectral analysis have been proposed as a method for classifying signals for applications where the carrier frequency and bandwidths are unknown, but is, however, computationally complex and requires a significant amount of observation time for adequate performance. Neural networks have been used for signal classification, but only for situations where the baseband signal is present. By combining these techniques a more efficient and reliable classifier can be developed where a significant amount of processing is performed offline, thus reducing online computation. In this paper we take a renewed look at signal classification using spectral coherence and neural networks, the performance of which is characterized by Monte Carlo simulations",
"title": ""
},
{
"docid": "b917ec2f16939a819625b6750597c40c",
"text": "In an increasing number of scientific disciplines, large data collections are emerging as important community resources. In domains as diverse as global climate change, high energy physics, and computational genomics, the volume of interesting data is already measured in terabytes and will soon total petabytes. The communities of researchers that need to access and analyze this data (often using sophisticated and computationally expensive techniques) are often large and are almost always geographically distributed, as are the computing and storage resources that these communities rely upon to store and analyze their data [17]. This combination of large dataset size, geographic distribution of users and resources, and computationally intensive analysis results in complex and stringent performance demands that are not satisfied by any existing data management infrastructure. A large scientific collaboration may generate many queries, each involving access to—or supercomputer-class computations on—gigabytes or terabytes of data. Efficient and reliable execution of these queries may require careful management of terabyte caches, gigabit/s data transfer over wide area networks, coscheduling of data transfers and supercomputer computation, accurate performance estimations to guide the selection of dataset replicas, and other advanced techniques that collectively maximize use of scarce storage, networking, and computing resources. The literature offers numerous point solutions that address these issues (e.g., see [17, 14, 19, 3]). But no integrating architecture exists that allows us to identify requirements and components common to different systems and hence apply different technologies in a coordinated fashion to a range of dataintensive petabyte-scale application domains. Motivated by these considerations, we have launched a collaborative effort to design and produce such an integrating architecture. We call this architecture the data grid, to emphasize its role as a specialization and extension of the “Grid” that has emerged recently as an integrating infrastructure for distributed computation [10, 20, 15]. Our goal in this effort is to define the requirements that a data grid must satisfy and the components and APIs that will be required in its implementation. We hope that the definition of such an architecture will accelerate progress on petascale data-intensive computing by enabling the integration of currently disjoint approaches, encouraging the deployment of basic enabling technologies, and revealing technology gaps that require further research and development. In addition, we plan to construct a reference implementation for this architecture so as to enable large-scale experimentation.",
"title": ""
},
{
"docid": "98f704cf1ea1247c8c4087af23b6ebe5",
"text": "We introduce BAG, the Berkeley Analog Generator, an integrated framework for the development of generators of Analog and Mixed Signal (AMS) circuits. Such generators are parameterized design procedures that produce sized schematics and correct layouts optimized to meet a set of input specifications. BAG extends previous work by implementing interfaces to integrate all steps of the design flow into a single environment and by providing helper classes -- both at the schematic and layout level -- to aid the designer in developing truly parameterized and technology-independent circuit generators. This simplifies the codification of common tasks including technology characterization, schematic and testbench translation, simulator interfacing, physical verification and extraction, and parameterized layout creation for common styles of layout. We believe that this approach will foster design reuse, ease technology migration, and shorten time-to-market, while remaining close to the classical design flow to ease adoption. We have used BAG to design generators for several circuits, including a Voltage Controlled Oscillator (VCO) and a Switched-Capacitor (SC) voltage regulator in a CMOS 65nm process. We also present results from automatic migration of our designs to a 40nm process.",
"title": ""
},
{
"docid": "af952f9368761c201c5dfe4832686e87",
"text": "The field of service design is expanding rapidly in practice, and a body of formal research is beginning to appear to which the present article makes an important contribution. As innovations in services develop, there is an increasing need not only for research into emerging practices and developments but also into the methods that enable, support and promote such unfolding changes. This article tackles this need directly by referring to a large design research project, and performing a related practicebased inquiry into the co-design and development of methods for fostering service design in organizations wishing to improve their service offerings to customers. In particular, with reference to a funded four-year research project, one aspect is elaborated on that uses cards as a method to focus on the importance and potential of touch-points in service innovation. Touch-points are one of five aspects in the project that comprise a wider, integrated model and means for implementing innovations in service design. Touch-points are the points of contact between a service provider and customers. A customer might utilise many different touch-points as part of a use scenario (often called a customer journey). For example, a bank’s touch points include its physical buildings, web-site, physical print-outs, self-service machines, bank-cards, customer assistants, call-centres, telephone assistance etc. Each time a person relates to, or interacts with, a touch-point, they have a service-encounter. This gives an experience and adds something to the person’s relationship with the service and the service provider. The sum of all experiences from touch-point interactions colours their opinion of the service (and the service provider). Touch-points are one of the central aspects of service design. A commonly used definition of service design is “Design for experiences that happen over time and across different touchpoints” (ServiceDesign.org). As this definition shows, touchpoints are often cited as one of the major elements of service",
"title": ""
},
{
"docid": "fb162c94248297f35825ff1022ad2c59",
"text": "This article traces the evolution of ambulance location and relocation models proposed over the past 30 years. The models are classified in two main categories. Deterministic models are used at the planning stage and ignore stochastic considerations regarding the availability of ambulances. Probabilistic models reflect the fact that ambulances operate as servers in a queueing system and cannot always answer a call. In addition, dynamic models have been developed to repeatedly relocate ambulances throughout the day. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "2c3ab7e0f49dc4575c77a712e8184ce0",
"text": "The cubature Kalman filter (CKF), which is based on the third degree spherical–radial cubature rule, is numericallymore stable than the unscented Kalman filter (UKF) but less accurate than theGauss–Hermite quadrature filter (GHQF). To improve the performance of the CKF, a new class of CKFs with arbitrary degrees of accuracy in computing the spherical and radial integrals is proposed. The third-degree CKF is a special case of the class. The high-degree CKFs of the class can achieve the accuracy and stability performances close to those of the GHQF but at lower computational cost. A numerical integration problem and a target tracking problem are utilized to demonstrate the necessity of using the high-degree cubature rules to improve the performance. The target tracking simulation shows that the fifth-degree CKF can achieve higher accuracy than the extended Kalman filter, the UKF, the third-degree CKF, and the particle filter, and is computationally much more efficient than the GHQF. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
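The passage above contrasts the third-degree spherical-radial rule with higher-degree rules. The sketch below shows only the third-degree cubature-point propagation as commonly described; the function names and the toy nonlinearity are assumptions of this illustration, not the authors' code, and a full CKF wraps this step in predict/update stages with process and measurement noise.

```python
# Third-degree spherical-radial cubature rule: 2n equally weighted points
# placed at +/- sqrt(n) along the columns of a square root of the covariance.
import numpy as np

def cubature_points(mean, cov):
    n = mean.size
    L = np.linalg.cholesky(cov)                 # any matrix square root of cov works
    offsets = np.sqrt(n) * np.hstack([L, -L])   # shape (n, 2n)
    return mean[:, None] + offsets, np.full(2 * n, 1.0 / (2 * n))

def propagate(f, mean, cov):
    """Approximate mean/covariance of f(x) for x ~ N(mean, cov)."""
    pts, w = cubature_points(mean, cov)
    ys = np.array([f(pts[:, i]) for i in range(pts.shape[1])])   # (2n, m)
    y_mean = w @ ys
    y_cov = (ys - y_mean).T @ np.diag(w) @ (ys - y_mean)
    return y_mean, y_cov

# Toy example: polar-to-Cartesian conversion of an uncertain range/bearing.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
print(propagate(f, np.array([5.0, 0.1]), np.diag([0.2, 0.01])))
```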
{
"docid": "57ccd593f1be27463f9e609d700452dd",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Sustainable supply chain network design: An optimization-oriented review Majid Eskandarpour, Pierre Dejax, Joe Miemczyk, Olivier Péton",
"title": ""
},
{
"docid": "568317c1f18c476de5029d0a1e91438e",
"text": "Plant volatiles (PVs) are lipophilic molecules with high vapor pressure that serve various ecological roles. The synthesis of PVs involves the removal of hydrophilic moieties and oxidation/hydroxylation, reduction, methylation, and acylation reactions. Some PV biosynthetic enzymes produce multiple products from a single substrate or act on multiple substrates. Genes for PV biosynthesis evolve by duplication of genes that direct other aspects of plant metabolism; these duplicated genes then diverge from each other over time. Changes in the preferred substrate or resultant product of PV enzymes may occur through minimal changes of critical residues. Convergent evolution is often responsible for the ability of distally related species to synthesize the same volatile.",
"title": ""
},
{
"docid": "67f70a5df5f2b1f7854753af54c62621",
"text": "Promoting computational thinking is a priority in CS education and other STEM and non-STEM disciplines. Our innovative, NSF-funded IC2Think project blends computational and creative thinking. In Spring 2013, we deployed Computational Creativity Exercises (CCE) designed to engage creative competencies (Surrounding, Capturing, Challenging and Broadening) in an introductory CSI course for engineering students. We compared this CCE implementation semester (80 students, 95% completing 3 or 4 CCEs) to the Fall 2013 semester of the same course (55 students) without CCEs. CCE implementation students had significantly higher scores on a CS concepts and skills knowledge test (F(1, 132) = 7.72, p <; 01, partial Eta2 = .055; M=7.47 to M=6.13; 13 items) and significantly higher self-efficacy for applying CS knowledge in their field (F(1, 153) = 12.22, p <; .01, partial Eta2 = .074; M=70.64 to M=61.47; 100-point scale). CCE implementation students had significantly higher study time (t(1, 136) = 2.08, p = .04; M=3.88 to M=3.29; 7-point scale) and significantly lower lack of regulation, which measures difficulties with studying (t(1, 136) = 2.82, p = .006; M=2.80 to M=3.21; 5-point scale). The addition of computational creativity exercises to CS courses may improve computational thinking and learning of CS knowledge and skills.",
"title": ""
},
{
"docid": "7a3b5ab64e9ef5cd0f0b89391bb8bee2",
"text": "Quality enhancement of humanitarian assistance is far from a technical task. It is interwoven with debates on politics of principles and people are intensely committed to the various outcomes these debates might have. It is a field of strongly competing truths, each with their own rationale and appeal. The last few years have seen a rapid increase in discussions, policy paper and organisational initiatives regarding the quality of humanitarian assistance. This paper takes stock of the present initiatives and of the questions raised with regard to the quality of humanitarian assistance.",
"title": ""
},
{
"docid": "372f137098bd5817896d82ed0cb0c771",
"text": "Under today's bursty web traffic, the fine-grained per-container control promises more efficient resource provisioning for web services and better resource utilization in cloud datacenters. In this paper, we present Two-stage Stochastic Programming Resource A llocator (2SPRA). It optimizes resource provisioning for containerized n-tier web services in accordance with fluctuations of incoming workload to accommodate predefined SLOs on response latency. In particular, 2SPRA is capable of minimizing resource over-provisioning by addressing dynamics of web traffic as workload uncertainty in a native stochastic optimization model. Using special-purpose OpenOpt optimization framework, we fully implement 2SPRA in Python and evaluate it against three other existing allocation schemes, in a Docker-based CoreOS Linux VMs on Amazon EC2. We generate workloads based on four real-world web traces of various traffic variations: AOL, WorldCup98, ClarkNet, and NASA. Our experimental results demonstrate that 2SPRA achieves the minimum resource over-provisioning outperforming other schemes. In particular, 2SPRA allocates only 6.16 percent more than application's actual demand on average and at most 7.75 percent in the worst case. It achieves 3x further reduction in total resources provisioned compared to other schemes delivering overall cost-savings of 53.6 percent on average and up to 66.8 percent. Furthermore, 2SPRA demonstrates consistency in its provisioning decisions and robust responsiveness against workload fluctuations.",
"title": ""
},
{
"docid": "2a86c4904ef8059295f1f0a2efa546d8",
"text": "3D shape is a crucial but heavily underutilized cue in today’s computer vision system, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape model in the loop. Apart from object recognition on 2.5D depth maps, recovering these incomplete 3D shapes to full 3D is critical for analyzing shape variations. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses. It naturally supports joint object recognition and shape reconstruction from 2.5D depth maps, and further, as an additional application it allows active object recognition through view planning. We construct a largescale 3D CAD model dataset to train our model, and conduct extensive experiments to study our new representation.",
"title": ""
},
{
"docid": "d7fd9c273c0b26a309b84e0d99143557",
"text": "Remote sensing is one of the most common ways to extract relevant information about Earth and our environment. Remote sensing acquisitions can be done by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. According to the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multispectral and hyperspectral) of the objects in the image. Once considered together their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), damage detection (e.g., in natural disasters such as floods, hurricanes, earthquakes, oil spills in seas), and give insights to potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allows one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropological effects (urban sprawl, deforestation), climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the data fusion contests, organized by the IEEE Geoscience and Remote Sensing Society since 2006. We will report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements/new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?",
"title": ""
},
{
"docid": "408e7d0a33bf2ab5f570543f4a8d7aba",
"text": "Instrumental behavior can be controlled by goal-directed action-outcome and habitual stimulus-response processes that are supported by anatomically distinct brain systems. Based on previous findings showing that stress modulates the interaction of \"cognitive\" and \"habit\" memory systems, we asked in the presented study whether stress may coordinate goal-directed and habit processes in instrumental learning. For this purpose, participants were exposed to stress (socially evaluated cold pressor test) or a control condition before they were trained to perform two instrumental actions that were associated with two distinct food outcomes. After training, one of these food outcomes was selectively devalued as subjects were saturated with that food. Next, subjects were presented the two instrumental actions in extinction. Stress before training in the instrumental task rendered participants' behavior insensitive to the change in the value of the food outcomes, that is stress led to habit performance. Moreover, stress reduced subjects' explicit knowledge of the action-outcome contingencies. These results demonstrate for the first time that stress promotes habits at the expense of goal-directed performance in humans.",
"title": ""
},
{
"docid": "28574c82a49b096b11f1b78b5d62e425",
"text": "A major reason for the current reproducibility crisis in the life sciences is the poor implementation of quality control measures and reporting standards. Improvement is needed, especially regarding increasingly complex in vitro methods. Good Cell Culture Practice (GCCP) was an effort from 1996 to 2005 to develop such minimum quality standards also applicable in academia. This paper summarizes recent key developments in in vitro cell culture and addresses the issues resulting for GCCP, e.g. the development of induced pluripotent stem cells (iPSCs) and gene-edited cells. It further deals with human stem-cell-derived models and bioengineering of organo-typic cell cultures, including organoids, organ-on-chip and human-on-chip approaches. Commercial vendors and cell banks have made human primary cells more widely available over the last decade, increasing their use, but also requiring specific guidance as to GCCP. The characterization of cell culture systems including high-content imaging and high-throughput measurement technologies increasingly combined with more complex cell and tissue cultures represent a further challenge for GCCP. The increasing use of gene editing techniques to generate and modify in vitro culture models also requires discussion of its impact on GCCP. International (often varying) legislations and market forces originating from the commercialization of cell and tissue products and technologies are further impacting on the need for the use of GCCP. This report summarizes the recommendations of the second of two workshops, held in Germany in December 2015, aiming map the challenge and organize the process or developing a revised GCCP 2.0.",
"title": ""
},
{
"docid": "e0d685e05dd705169029b8ea387f007b",
"text": "In the last fifteen years, functional neuroimaging techniques have been used to investigate the neuroanatomical correlates of sexual arousal in healthy human subjects. In most studies, subjects have been requested to watch visual sexual stimuli and control stimuli. Our review and meta-analysis found that in heterosexual men, sites of cortical activation consistently reported across studies are the lateral occipitotemporal, inferotemporal, parietal, orbitofrontal, medial prefrontal, insular, anterior cingulate, and frontal premotor cortices as well as, for subcortical regions, the amygdalas, claustrum, hypothalamus, caudate nucleus, thalami, cerebellum, and substantia nigra. Heterosexual and gay men show a similar pattern of activation. Visual sexual stimuli activate the amygdalas and thalami more in men than in women. Ejaculation is associated with decreased activation throughout the prefrontal cortex. We present a neurophenomenological model to understand how these multiple regional brain responses could account for the varied facets of the subjective experience of sexual arousal. Further research should shift from passive to active paradigms, focus on functional connectivity and use subliminal presentation of stimuli.",
"title": ""
},
{
"docid": "d3f7c2514d17631962276edbfc6a63a8",
"text": "This paper describes Marvin, a planner that competed in the F ourth International Planning Competition (IPC 4). Marvin uses action-sequence-memoisa ti n techniques to generate macroactions, which are then used during search for a solution pla . We provide an overview of its architecture and search behaviour, detailing the algorith ms used. We also empirically demonstrate the effectiveness of its features in various planning domai ns; in particular, the effects on performance due to the use of macro-actions, the novel features of i t search behaviour, and the native support of ADL and Derived Predicates.",
"title": ""
},
{
"docid": "799904b20f1174f01c0d2dd87c57e097",
"text": "ix",
"title": ""
},
{
"docid": "c894deedbdbd6aee3cf3955d1c463577",
"text": "Vast collections of documents available in image format need to be indexed for information retrieval purposes. In this framework, word spotting is an alternative solution to optical character recognition (OCR), which is rather inefficient for recognizing text of degraded quality and unknown fonts usually appearing in printed text, or writing style variations in handwritten documents. Over the past decade there has been a growing interest in addressing document indexing using word spotting which is reflected by the continuously increasing number of approaches. However, there exist very few comprehensive studies which analyze the various aspects of a word spotting system. This work aims to review the recent approaches as well as fill the gaps in several topics with respect to the related works. The nature of texts and inherent challenges addressed by word spotting methods are thoroughly examined. After presenting the core steps which compose a word spotting system, we investigate the use of retrieval enhancement techniques based on relevance feedback which improve the retrieved results. Finally, we present the datasets which are widely used for word spotting, we describe the evaluation standards and measures applied for performance assessment and discuss the results achieved by the state of the art. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
40255e6454dede1d0524f8f91ee254ff
|
A Randomized, Controlled Trial of Virtual Reality-Graded Exposure Therapy for Post-Traumatic Stress Disorder in Active Duty Service Members with Combat-Related Post-Traumatic Stress Disorder
|
[
{
"docid": "48dfee242d5daf501c72e14e6b05c3ba",
"text": "One possible alternative to standard in vivo exposure may be virtual reality exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment. Virtual reality exposure (VRE) is potentially an efficient and cost-effective treatment of anxiety disorders. VRE therapy has been successful in reducing the fear of heights in the first known controlled study of virtual reality in the treatment of a psychological disorder. Outcome was assessed on measures of anxiety, avoidance, attitudes, and distress. Significant group differences were found on all measures such that the VRE group was significantly improved at posttreatment but the control group was unchanged. The efficacy of virtual reality exposure therapy was also supported for the fear of flying in a case study. The potential for virtual reality exposure treatment for these and other disorders is explored.",
"title": ""
}
] |
[
{
"docid": "62e900f89427e4b97f64919a3cb0d537",
"text": "This paper introduces the SpamBayes classification engine and outlines the most important features and techniques which contribute to its success. The importance of using the indeterminate ‘unsure’ classification produced by the chi-squared combining technique is explained. It outlines a Robinson/Woodhead/Peters technique of ‘tiling’ unigrams and bigrams to produce better results than relying solely on either or other methods of using both unigrams and bigrams. It discusses methods of training the classifier, and evaluates the success of different methods. The paper focuses on highlighting techniques that might aid other classification systems rather than attempting to demonstrate the effectiveness of the SpamBayes classification engine.",
"title": ""
},
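For readers unfamiliar with the chi-squared combining mentioned above, here is a small Robinson-style sketch of how per-word spam probabilities can be merged into a score whose middle band maps to the 'unsure' classification. It is a hedged reconstruction of the commonly described formulation, not code lifted from SpamBayes.

```python
# Chi-squared combining of per-word spam probabilities (values in (0, 1)).
# Scores near 1 read as spam, near 0 as ham, and near 0.5 as 'unsure'.
import math
from scipy.stats import chi2

def chi_squared_combine(word_probs):
    n = len(word_probs)
    s = chi2.sf(-2.0 * sum(math.log(p) for p in word_probs), 2 * n)        # spam evidence
    h = chi2.sf(-2.0 * sum(math.log(1.0 - p) for p in word_probs), 2 * n)  # ham evidence
    return (1.0 + s - h) / 2.0

print(chi_squared_combine([0.99, 0.97, 0.95]))   # clearly spammy  -> near 1
print(chi_squared_combine([0.01, 0.05, 0.02]))   # clearly hammy   -> near 0
print(chi_squared_combine([0.95, 0.05, 0.60]))   # mixed evidence  -> around the unsure band
```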
{
"docid": "f3d2d9248a91d75c10038717b12629e5",
"text": "LIMITED DISTRIBUTION NOTICE This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for e a rly dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and speciic requests. After outside publication, requests should be lled only by reprints or legally obtained copies of the article e.g., payment of royalties. Abstract NetDispatcher is a software router of TCP connections that supports load sharing across multiple TCP servers. It consists of the Executor, an operating system kernel extension that supports fast IP packet forwarding, and a user level Manager process that controls it. The Manager implements a novel dynamic load-sharing algorithm for allocation of TCP connections among servers according to their real-time load and responsiveness. This algorithm produces weights that are used by the Executor to quickly select a server for each new connection request. This allocation method was shown to be highly eecient in real tests, for large Internet sites serving millions of TCP connections per day. The Executor forwards client TCP packets to the servers without performing any TCPPIP header translations. Outgoing server-to-client packets are not handled by NetDispatcher and can follow a separate network route to the clients. Depending on the workload traac, the performance beneet of this half-connection method can be signiicant. Prototypes of NetDispatcher were used to scale up several large and high-load Internet sites.",
"title": ""
},
{
"docid": "14049dd7ee7a07107702c531fec4ff61",
"text": "Reducing errors and improving quality are an integral part of Pathology and Laboratory Medicine. The rate of errors is reviewed for the pre-analytical, analytical, and post-analytical phases for a specimen. The quality systems in place in pathology today are identified and compared with benchmarks for quality. The types and frequency of errors and quality systems are reviewed for surgical pathology, cytopathology, clinical chemistry, hematology, microbiology, molecular biology, and transfusion medicine. Seven recommendations are made to reduce errors in future for Pathology and Laboratory Medicine.",
"title": ""
},
{
"docid": "cb086fa252f4db172b9c7ac7e1081955",
"text": "Drivable free space information is vital for autonomous vehicles that have to plan evasive maneu vers in realtime. In this paper, we present a new efficient met hod for environmental free space detection with laser scann er based on 2D occupancy grid maps (OGM) to be used for Advance d Driving Assistance Systems (ADAS) and Collision Avo idance Systems (CAS). Firstly, we introduce an enhanced in verse sensor model tailored for high-resolution laser scanners f or building OGM. It compensates the unreflected beams and deals with the ray casting to grid cells accuracy and computationa l effort problems. Secondly, we introduce the ‘vehicle on a circle for grid maps’ map alignment algorithm that allows building more accurate local maps by avoiding the computationally expensive inaccurate operations of image sub-pixel shifting a nd rotation. The resulted grid map is more convenient for ADAS f eatures than existing methods, as it allows using less memo ry sizes, and hence, results into a better real-time performance. Thirdly, we present an algorithm to detect what we call the ‘in-sight edges’. These edges guarantee modeling the free space area with a single polygon of a fixed number of vertices regardless th e driving situation and map complexity. The results from real world experiments show the effectiveness of our approach. Keywords— Occupancy Grid Map; Static Free Space Detection; Advanced Driving Assistance Systems; las er canner; autonomous driving",
"title": ""
},
{
"docid": "916e10c8bd9f5aa443fa4d8316511c94",
"text": "A full-bridge LLC resonant converter with series-parallel connected transformers for an onboard battery charger of electric vehicles is proposed, which can realize zero voltage switching turn-on of power switches and zero current switching turn-off of rectifier diodes. In this converter, two same small transformers are employed instead of the single transformer in the traditional LLC resonant converter. The primary windings of these two transformers are series-connected to obtain equal primary current, while the secondary windings are parallel-connected to be provided with the same secondary voltage, so the power can be automatically balanced. Series-connection can reduce the turns of primary windings. Parallel-connection can reduce the current stress of the secondary windings and the conduction loss of rectifier diodes. Compared with the traditional LLC resonant converter with single transformer under same power level, the smaller low-profile cores can be used to reduce the transformers loss and improve heat dissipation. In this paper, the operating principle, steady state analysis, and design of the proposed converter are described, simulation and experimental prototype of the proposed LLC converter is established to verify the effectiveness of the proposed converter.",
"title": ""
},
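For orientation on the LLC tank discussed above, the two characteristic frequencies follow from f_r = 1/(2π·sqrt(L_r·C_r)) and f_m = 1/(2π·sqrt((L_r+L_m)·C_r)). The component values below are hypothetical placeholders for illustration, not taken from the paper.

```python
import math

Lr = 25e-6   # series resonant inductance [H]  (assumed value)
Lm = 125e-6  # magnetizing inductance [H]      (assumed value)
Cr = 47e-9   # resonant capacitance [F]        (assumed value)

f_r = 1.0 / (2.0 * math.pi * math.sqrt(Lr * Cr))          # series resonant frequency
f_m = 1.0 / (2.0 * math.pi * math.sqrt((Lr + Lm) * Cr))   # lower resonance including Lm
print(f"f_r = {f_r / 1e3:.1f} kHz, f_m = {f_m / 1e3:.1f} kHz")
```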
{
"docid": "c81ad743ab41e4601cc4f33631ee3f93",
"text": "We present a technique to enhance control-flow analysis of business process models. The technique considerably speeds up the analysis an d improves the diagnostic information that is given to the user to fix control-flow errors . The technique consists of two parts: Firstly, the process model is decomp osed into single-entry-single-exit (SESE) fragments, which are usually subs tantially smaller than the original process. This decomposition is done in linear time. S econdly, each fragment is analyzed in isolation using a fast heuristic that ca n an lyze many of the fragments occurring in practice. Any remaining fragme nts that are not covered by the heuristic can then be analyzed using any known c omplete analysis technique. We used our technique in a case study with more than 340 real business pr ocesses modeled with the IBM WebSphere Business Modeler. The results s uggest that control-flow analysis of many real process models is feasible withou t significant delay (less than a second). Therefore, control-flow analysis co uld be used frequently during editing time, which allows errors to be caught at earliest possible time.",
"title": ""
},
{
"docid": "3ef1f1c71ff244f0f6d5f1a649366528",
"text": "For the task of generating complex outputs such as source code, editing existing outputs can be easier than generating complex outputs from scratch. With this motivation, we propose an approach that first retrieves a training example based on the input (e.g., natural language description) and then edits it to the desired output (e.g., code). Our contribution is a computationally efficient method for learning a retrieval model that embeds the input in a task-dependent way without relying on a hand-crafted metric or incurring the expense of jointly training the retriever with the editor. Our retrieve-and-edit framework can be applied on top of any base model. We show that on a new autocomplete task for GitHub Python code and the Hearthstone cards benchmark, retrieve-and-edit significantly boosts the performance of a vanilla sequence-to-sequence model on both tasks.",
"title": ""
},
{
"docid": "da03427eb4874bd90903674b6ffe9897",
"text": "The network provides a method of communication to distribute information to the masses. With the growth of data communication over computer network, the security of information has become a major issue. Steganography and cryptography are two different data hiding techniques. Steganography hides messages inside some other digital media. Cryptography, on the other hand obscures the content of the message. We propose a high capacity data embedding approach by the combination of Steganography and cryptography. In the process a message is first encrypted using transposition cipher method and then the encrypted message is embedded inside an image using LSB insertion method. The combination of these two methods will enhance the security of the data embedded. This combinational methodology will satisfy the requirements such as capacity, security and robustness for secure data transmission over an open channel. A comparative analysis is made to demonstrate the effectiveness of the proposed method by computing Mean square error (MSE) and Peak Signal to Noise Ratio (PSNR). We analyzed the data hiding technique using the image performance parameters like Entropy, Mean and Standard Deviation. The stego images are tested by transmitting them and the embedded data are successfully extracted by the receiver. The main objective in this paper is to provide resistance against visual and statistical attacks as well as high capacity.",
"title": ""
},
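The passage above combines a transposition cipher with LSB insertion. The sketch below is a minimal illustration of that two-stage idea under stated assumptions (a columnar transposition and a grayscale cover image); key management, payload-length headers, and the robustness/PSNR analysis the authors report are omitted.

```python
import numpy as np

def transpose_encrypt(message, key):
    cols = len(key)
    message = message.ljust(-(-len(message) // cols) * cols)       # pad to full rows
    rows = [message[i:i + cols] for i in range(0, len(message), cols)]
    order = sorted(range(cols), key=lambda c: key[c])              # read columns in key order
    return "".join("".join(row[c] for row in rows) for c in order)

def lsb_embed(cover, text):
    bits = np.array([int(b) for ch in text for b in format(ord(ch), "08b")], dtype=np.uint8)
    flat = cover.flatten()
    assert bits.size <= flat.size, "cover image too small for payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits            # overwrite least significant bits
    return flat.reshape(cover.shape)

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
stego = lsb_embed(cover, transpose_encrypt("meet at dawn", key="3142"))
print(np.count_nonzero(stego != cover), "pixels changed")
```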
{
"docid": "bf8addd95940f9c7617720fbcae97fe0",
"text": "Data-parallel accelerators have emerged as highperformance alternatives to general-purpose processors for many applications. The Cell BE, GPUs from NVIDIA and ATI, and the like can outperform conventional superscalar architectures, but only for applications that can take advantage of these accelerators’ SIMD architectures, large number of cores, and local memories. Coupled with the SIMD extensions on general-purpose processors, these heterogeneous computing architectures provide a powerful platform to accelerate data-parallel programs. Unfortunately, each accelerator provides its own programming model, and programmers are often forced to confront issues of distributed memory, multithreading, load-balancing and computation scheduling. This necessitates a framework which can exploit different types of parallelism across heterogeneous functional units and supports multiple types of high-level programming languages including stream programming or traditional shared or distributed memory programming framework or prototyping languages such as MATLAB. Towards this goal, in this paper, we present PLASMA, a programming framework that enables the writing of portable SIMD programs. The main component of PLASMA is an intermediate representation (IR), which provides succinct and clean abstractions to enable programs to be compiled to different accelerators. With the assistance of a runtime, these programs can then be automatically multithreaded, run on multiple heterogeneous accelerators transparently and are oblivious of distributed memory. We demonstrate a prototype compiler and runtime that targets PLASMA programs to scalar processors, processors with SIMD extensions and GPUs.",
"title": ""
},
{
"docid": "601318db5ca75c76cd44da78db9f4147",
"text": "Many accidents were happened because of fast driving, habitual working overtime or tired spirit. This paper presents a solution of remote warning for vehicles collision avoidance using vehicular communication. The development system integrates dedicated short range communication (DSRC) and global position system (GPS) with embedded system into a powerful remote warning system. To transmit the vehicular information and broadcast vehicle position; DSRC communication technology is adopt as the bridge. The proposed system is divided into two parts of the positioning and vehicular units in a vehicle. The positioning unit is used to provide the position and heading information from GPS module, and furthermore the vehicular unit is used to receive the break, throttle, and other signals via controller area network (CAN) interface connected to each mechanism. The mobile hardware are built with an embedded system using X86 processor in Linux system. A vehicle is communicated with other vehicles via DSRC in non-addressed protocol with wireless access in vehicular environments (WAVE) short message protocol. From the position data and vehicular information, this paper provided a conflict detection algorithm to do time separation and remote warning with error bubble consideration. And the warning information is on-line displayed in the screen. This system is able to enhance driver assistance service and realize critical safety by using vehicular information from the neighbor vehicles. Keywords—Dedicated short range communication, GPS, Control area network, Collision avoidance warning system.",
"title": ""
},
{
"docid": "151b007871b8e3c763d1a7feedaf0060",
"text": "3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes. This introduces challenges for learning-based approaches, as 3D object annotations are scarce in real images. Previous work chose to train on synthetic data with ground truth 3D information, but suffered from domain adaptation when tested on real data. In this work, we propose MarrNet, an end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape. Our disentangled, two-step formulation has three advantages. First, compared to full 3D shape, 2.5D sketches are much easier to be recovered from a 2D image; models that recover 2.5D sketches are also more likely to transfer from synthetic to real data. Second, for 3D reconstruction from 2.5D sketches, systems can learn purely from synthetic data. This is because we can easily render realistic 2.5D sketches without modeling object appearance variations in real images, including lighting, texture, etc. This further relieves the domain adaptation problem. Third, we derive differentiable projective functions from 3D shape to 2.5D sketches; the framework is therefore end-to-end trainable on real images, requiring no human annotations. Our model achieves state-of-the-art performance on 3D shape reconstruction.",
"title": ""
},
{
"docid": "0397e2a4109e154d83a8b7e02264a4c5",
"text": "The certificateless-based signature system allows people to verify the signature without the certificate. For this reason, we do not need the certificate authority (CA) to store and manage users’ certificates and public keys. Certificateless-based signature can also overcome the certificate management problem and the key escrow problem of the traditional signature system. In 2012, Zhang and Mao first designed the certificateless-based signature scheme based on RSA operations; however, their scheme still has latent vulnerabilities. To overcome these shortcomings, we propose an improved version to make the RSA-based certificateless scheme stronger and more secure. Besides, we reduce the computational cost to make our scheme more efficient.",
"title": ""
},
{
"docid": "0bc66d6ad8bfbbc27b74f2a580f13a23",
"text": "During throwing motion the athlete puts enormous stress on both the dynamic and the static stabilisers of the shoulder. Repetitive forces cause adaptive soft tissue and bone changes that initially improve performance but ultimately may lead to shoulder pathologies. Although a broad range of theories have been suggested for the pathophysiology of internal impingement, the reasons are obviously multifactorial. This review aims to critically analyse the current literature and to summarise clinically important information. The cardinal lesions of internal impingement, articular-sided rotator cuff tears and posterosuperior labral lesions, have been shown to occur in association with a number of other findings, most importantly glenohumeral internal rotation deficit and SICK scapula syndrome, but also with posterior humeral head lesions, posterior glenoid bony injury and, rarely, with Bankart and inferior glenohumeral ligament lesions. Extensive biomechanical and clinical research is necessary before a complete understanding and reconciliation of the varying theories of the pathomechanisms of injury can be developed.",
"title": ""
},
{
"docid": "59d5f800aa8d89c36ac941ae0e6913cc",
"text": "The high variance issue in unbiased policy-gradient methods such as VPG and REINFORCE is typically mitigated by adding a baseline. However, the baseline fitting itself suffers from the underfitting or the overfitting problem. In this paper, we develop a K-fold method for baseline estimation in policy gradient algorithms. The parameter K is the baseline estimation hyperparameter that can adjust the bias-variance trade-off in the baseline estimates. We demonstrate the usefulness of our approach via two state-of-the-art policy gradient algorithms on three MuJoCo locomotive control tasks.",
"title": ""
},
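The abstract above does not spell out the algorithm, so the following is only a plausible sketch of K-fold cross-fitting for a value baseline: each sample's baseline is predicted by a regressor fitted on the other folds, and advantages are the returns minus that out-of-fold baseline. The function names and the linear baseline model are assumptions of this illustration, not the paper's exact method.

```python
import numpy as np

def kfold_advantages(features, returns, K=5, seed=0):
    """features: (N, d) state features; returns: (N,) Monte Carlo returns."""
    N = features.shape[0]
    X = np.hstack([features, np.ones((N, 1))])        # linear baseline with a bias term
    baseline = np.empty(N)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(N), K)
    for fold in folds:
        train = np.setdiff1d(np.arange(N), fold)
        w, *_ = np.linalg.lstsq(X[train], returns[train], rcond=None)
        baseline[fold] = X[fold] @ w                   # out-of-fold prediction only
    return returns - baseline                          # advantage estimates

# Toy usage with random features and returns; K tunes the bias-variance trade-off.
adv = kfold_advantages(np.random.randn(200, 4), np.random.randn(200), K=5)
print(adv.mean(), adv.std())
```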
{
"docid": "8db6d5115156ebd347577dd81cf916f1",
"text": "Measurement of chlorophyll concentration is gaining more-and-more importance in evaluating the status of the marine ecosystem. For wide areas monitoring a reliable architecture of wireless sensors network is required. In this paper, we present a network of smart sensors, based on ISO/IEC/IEEE 21451 suite of standards, for in situ and in continuous space-time monitoring of surface water bodies, in particular for seawater. The system is meant to be an important tool for evaluating water quality and a valid support to strategic decisions concerning critical environment issues. The aim of the proposed system is to capture possible extreme events and collect long-term periods of data.",
"title": ""
},
{
"docid": "e6640dc272e4142a2ddad8291cfaead7",
"text": "We give a summary of R. Borcherds’ solution (with some modifications) to the following part of the Conway-Norton conjectures: Given the Monster M and Frenkel-Lepowsky-Meurman’s moonshine module V ♮, prove the equality between the graded characters of the elements of M acting on V ♮ (i.e., the McKay-Thompson series for V ♮) and the modular functions provided by Conway and Norton. The equality is established using the homology of a certain subalgebra of the monster Lie algebra, and the Euler-Poincaré identity.",
"title": ""
},
{
"docid": "0b22d7708437c47d5e83ea9fc5f24406",
"text": "The American Association for Respiratory Care has declared a benchmark for competency in mechanical ventilation that includes the ability to \"apply to practice all ventilation modes currently available on all invasive and noninvasive mechanical ventilators.\" This level of competency presupposes the ability to identify, classify, compare, and contrast all modes of ventilation. Unfortunately, current educational paradigms do not supply the tools to achieve such goals. To fill this gap, we expand and refine a previously described taxonomy for classifying modes of ventilation and explain how it can be understood in terms of 10 fundamental constructs of ventilator technology: (1) defining a breath, (2) defining an assisted breath, (3) specifying the means of assisting breaths based on control variables specified by the equation of motion, (4) classifying breaths in terms of how inspiration is started and stopped, (5) identifying ventilator-initiated versus patient-initiated start and stop events, (6) defining spontaneous and mandatory breaths, (7) defining breath sequences (8), combining control variables and breath sequences into ventilatory patterns, (9) describing targeting schemes, and (10) constructing a formal taxonomy for modes of ventilation composed of control variable, breath sequence, and targeting schemes. Having established the theoretical basis of the taxonomy, we demonstrate a step-by-step procedure to classify any mode on any mechanical ventilator.",
"title": ""
},
{
"docid": "0c4f31c562307ec555d1080357a81167",
"text": "Social media activity in different geographic regions can expose a varied set of temporal patterns. We study and characterize diurnal patterns in social media data for different urban areas, with the goal of providing context and framing for reasoning about such patterns at different scales. Using one of the largest datasets to date of Twitter content associated with different locations, we examine within-day variability and across-day variability of diurnal keyword patterns for different locations. We show that only a few cities currently provide the magnitude of content needed to support such acrossday variability analysis for more than a few keywords. Nevertheless, within-day diurnal variability can help in comparing activities and finding similarities between cities. Introduction Social media activity in different geographic regions expose a varied set of temporal patterns. In particular, Social Awareness Streams (SAS) (Naaman, Boase, and Lai 2010), available from social media services such as Facebook, Twitter, FourSquare, Flickr, and others, allow users to post streams of lightweight content artifacts, from short status messages to links, pictures, and videos, in a highly connected social environment. The vast amounts of SAS data reflect, in new ways, people’s attitudes, attention, and interests, offering unique opportunities to understand and draw insights about social trends and habits. In this paper, we focus on characterizing social media patterns in different urban areas (US cities), with the goal of providing a framework for reasoning about activities and diurnal patterns in different cities. Using Twitter as a typical SAS, previous research studied specific temporal patterns that are similar across geographies, in particular in respect to expression of mood (Golder and Macy 2011; Dodds et al. 2011). We aim to provide insights for reasoning about diurnal patterns in different geographic (urban) areas that can be used in studying activity patterns in these areas, going beyond previous work that had mostly examined topical differences between posts in different geographic areas (Eisenstein et al. 2010; Hecht et al. 2011) or briefly examined broad diurnal differences (Cheng et al. 2011) in vol∗Amy and Sam were at Rutgers at the time of this work. Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ume between cities. Such study can contribute to urban studies, with implications for diverse social challenges such as public health, emergency response, community safety, transportation, and resource planning as well as Internet advertising, providing insights and information that cannot readily be extracted from other sources. Developing such a framework presents a number of challenges, both technical and practical. First, SAS data (and in particular Twitter) has been shown to be quite noisy. Users of SAS post different type of content, from information and link sharing, to personal updates, to social interactions, and many others (Naaman, Boase, and Lai 2010). Can stable patterns be reliably extracted given this noisy environment? Second, reliably extracting the location associated with Twitter content is still an open problem, as we discuss below. Finally, Twitter content volume shifts over time as more users join the service, and fluctuates widely in response to breaking events and other happenings, from Valentine’s Day to the news about Bin Laden’s capture and demise. 
Such temporal volume fluctuations might distort otherwise stable patterns and make them difficult to extract. In this paper, therefore, we report on a study that extracts and reasons about stable temporal patterns from Twitter data. In particular, we: 1) use large scale data with manual coding to get a wide sample of tweets for different cities; 2) study within-day and across-day variability of patterns in cities; and 3) reason about differences between cities with respect to overall patterns as well as individual ones. Related Work Broadly speaking, this work is informed by two key areas of related work: the use of new technologies and data sources for urban studies, and studies of social media to extract “real world” insights, or temporal dynamics. Here we broadly address these areas, before discussing other recent research that directly informed our work. The related research area sometimes dubbed “urban sensing” (Cuff, Hansen, and Kang 2008) analyzes various new datasets to understand the dynamics and patterns of urban activity. Most prominently, mobile phone data, mainly proprietary data from wireless carriers (e.g., calls made and positioning data) help expose travel patterns and broad spatio-temporal dynamics, e.g., in (Gonzalez, Hidalgo, and Barabasi 2008). Social media was also used to augment",
"title": ""
},
{
"docid": "917c065df63312d222053246fc5d14c2",
"text": "Current pervasive games are mostly location-aware applications, played on handheld computing devices. Considering pervasive games for children, it is argued that the interaction paradigm existing games support limits essential aspects of outdoor play like spontaneous social interaction, physical movement, and rich face-to-face communication. We present a new genre of pervasive games conceived to address this problem, that we call “Head Up Games” (HUGs) to underline that they liberate players from facing down to attend to screen-based interactions. The article discusses characteristics of HUG and relates them to existing genres of pervasive games. We present lessons learned during the design and evaluation of three HUG and chart future challenges.",
"title": ""
},
{
"docid": "12a89641dd93939be587b2bcf1b26939",
"text": "Drug-drug interaction (DDI) is a vital information when physicians and pharmacists prepare for the combined use of two or more drugs. Thus, several DDI databases are constructed to avoid mistakenly medicine administering. In recent years, automatically extracting DDIs from biomedical text has drawn researchers’ attention. However, the existing work need either complex feature engineering or NLP tools, both of which are insufficient for sentence comprehension. Inspired by the deep learning approaches in natural language processing, we propose a recurrent neural network model with multiple attention layers for DDI classification. We evaluate our model on 2013 SemEval DDIExtraction dataset. The experiments show that our model classifies most of the drug pairs into correct DDI categories, which outperforms the existing NLP or deep learning method.",
"title": ""
}
] |
scidocsrr
|
5d4ef60f5c176fe1a56604d85d864358
|
AI assisted ethics
|
[
{
"docid": "d502d0c14b332f9847902a2b7a087eba",
"text": "The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm. To illustrate our claim, we report three surveys showing that laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the death toll in case of unavoidable harm. We give special attention to whether an AV should save lives by sacrificing its owner, and provide insights into (i) the perceived morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing AVs.",
"title": ""
},
{
"docid": "a55a5785375031a7a967b0d65a2afd4e",
"text": "Successful negotiation of everyday life would seem to require people to possess insight about deficiencies in their intellectual and social skills. However, people tend to be blissfully unaware of their incompetence. This lack of awareness arises because poor performers are doubly cursed: Their lack of skill deprives them not only of the ability to produce correct responses, but also of the expertise necessary to surmise that they are not producing them. People base their perceptions of performance, in part, on their preconceived notions about their skills. Because these notions often do not correlate with objective performance, they can lead people to make judgments about their performance that have little to do with actual accomplishment.",
"title": ""
}
] |
[
{
"docid": "8a59e2b140eaf91a4a5fd8c109682543",
"text": "A search-based procedural content generation (SBPCG) algorithm for strategy game maps is proposed. Two representations for strategy game maps are devised, along with a number of objectives relating to predicted player experience. A multiobjective evolutionary algorithm is used for searching the space of maps for candidates that satisfy pairs of these objectives. As the objectives are inherently partially conflicting, the algorithm generates Pareto fronts showing how these objectives can be balanced. Such fronts are argued to be a valuable tool for designers looking to balance various design needs. Choosing appropriate points (manually or automatically) on the Pareto fronts, maps can be found that exhibit good map design according to specified criteria, and could either be used directly in e.g. an RTS game or form the basis for further human design.",
"title": ""
},
{
"docid": "a053cd00ba745fb5cc3b81ea2e79d319",
"text": "Business organization sheds the lights on the development in marketing to be able to accompaniment with the last even in marketing and to handle market management. The organizations create their own business decisions and operations through using business intelligence justification. Therefore organization can do that through knowledge, and convey the correct information. As a result, business intelligence becomes the main criterion and the strategic performance in the modern organization to achieve the dominant character. This study will show the impact of using business intelligence strategy on the decision making process by showing a study of the Jordanian customs department.",
"title": ""
},
{
"docid": "961a8cf2fb8f4faae44014449df0ee7e",
"text": "BACKGROUND\nLowering of LDL cholesterol reduces major vascular events, but whether more intensive therapy safely produces extra benefits is uncertain. We aimed to establish efficacy and safety of more intensive statin treatment in patients at high cardiovascular risk.\n\n\nMETHODS\nWe undertook a double-blind randomised trial in 12,064 men and women aged 18-80 years with a history of myocardial infarction. Participants were either currently on or had clear indication for statin therapy, and had a total cholesterol concentration of at least 3·5 mmol/L if already on a statin or 4·5 mmol/L if not. Randomisation to either 80 mg or 20 mg simvastatin daily was done centrally using a minimisation algorithm. Participants were assessed at 2, 4, 8, and 12 months after randomisation and then every 6 months until final follow-up. The primary endpoint was major vascular events, defined as coronary death, myocardial infarction, stroke, or arterial revascularisation. Analysis was by intention to treat. This study is registered, number ISRCTN74348595.\n\n\nFINDINGS\n6031 participants were allocated 80 mg simvastatin daily, and 6033 allocated 20 mg simvastatin daily. During a mean follow-up of 6·7 (SD 1·5) years, allocation to 80 mg simvastatin produced an average 0·35 (SE 0·01) mmol/L greater reduction in LDL cholesterol compared with allocation to 20 mg. Major vascular events occurred in 1477 (24·5%) participants allocated 80 mg simvastatin versus 1553 (25·7%) of those allocated 20 mg, corresponding to a 6% proportional reduction (risk ratio 0·94, 95% CI 0·88-1·01; p=0·10). There were no apparent differences in numbers of haemorrhagic strokes (24 [0·4%] vs 25 [0·4%]) or deaths attributed to vascular (565 [9·4%] vs 572 [9·5%]) or non-vascular (399 [6·6%] vs 398 [6·6%]) causes. Compared with two (0·03%) cases of myopathy in patients taking 20 mg simvastatin daily, there were 53 (0·9%) cases in the 80 mg group.\n\n\nINTERPRETATION\nThe 6% (SE 3·5%) reduction in major vascular events with a further 0·35 mmol/L reduction in LDL cholesterol in our trial is consistent with previous trials. Myopathy was increased with 80 mg simvastatin daily, but intensive lowering of LDL cholesterol can be achieved safely with other regimens.\n\n\nFUNDING\nMerck; The Clinical Trial Service Unit also receives funding from the UK Medical Research Council and the British Heart Foundation.",
"title": ""
},
{
"docid": "89e8c2f2722f7aaaad77c0a3099d629e",
"text": "In this paper we present a generative latent variable model for rating-based collaborative filtering called the User Rating Profile model (URP). The generative process which underlies URP is designed to produce complete user rating profiles, an assignment of one rating to each item for each user. Our model represents each user as a mixture of user attitudes, and the mixing proportions are distributed according to a Dirichlet random variable. The rating for each item is generated by selecting a user attitude for the item, and then selecting a rating according to the preference pattern associated with that attitude. URP is related to several models including a multinomial mixture model, the aspect model [7], and LDA [1], but has clear advantages over each.",
"title": ""
},
{
"docid": "7908e315d84cf916fb4a61a083be7fe6",
"text": "A base station antenna with dual-broadband and dual-polarization characteristics is presented in this letter. The proposed antenna contains four parts: a lower-band element, an upper-band element, arc-shaped baffle plates, and a box-shaped reflector. The lower-band element consists of two pairs of dipoles with additional branches for bandwidth enhancement. The upper-band element embraces two crossed hollow dipoles and is nested inside the lower-band element. Four arc-shaped baffle plates are symmetrically arranged on the reflector for isolating the lower- and upper-band elements and improving the radiation performance of upper-band element. As a result, the antenna can achieve a bandwidth of 50.6% for the lower band and 48.2% for the upper band when the return loss is larger than 15 dB, fully covering the frequency ranges 704–960 and 1710–2690 MHz for 2G/3G/4G applications. Measured port isolation larger than 27.5 dB in both the lower and upper bands is also obtained. At last, an array that consists of two lower-band elements and five upper-band elements is discussed for giving an insight into the future array design.",
"title": ""
},
{
"docid": "1d6e20debb1fc89079e0c5e4861e3ca4",
"text": "BACKGROUND\nThe aims of this study were to identify the independent factors associated with intermittent addiction and addiction to the Internet and to examine the psychiatric symptoms in Korean adolescents when the demographic and Internet-related factors were controlled.\n\n\nMETHODS\nMale and female students (N = 912) in the 7th-12th grades were recruited from 2 junior high schools and 2 academic senior high schools located in Seoul, South Korea. Data were collected from November to December 2004 using the Internet-Related Addiction Scale and the Symptom Checklist-90-Revision. A total of 851 subjects were analyzed after excluding the subjects who provided incomplete data.\n\n\nRESULTS\nApproximately 30% (n = 258) and 4.3% (n = 37) of subjects showed intermittent Internet addiction and Internet addiction, respectively. Multivariate logistic regression analysis showed that junior high school students and students having a longer period of Internet use were significantly associated with intermittent addiction. In addition, male gender, chatting, and longer Internet use per day were significantly associated with Internet addiction. When the demographic and Internet-related factors were controlled, obsessive-compulsive and depressive symptoms were found to be independently associated factors for intermittent addiction and addiction to the Internet, respectively.\n\n\nCONCLUSIONS\nStaff working in junior or senior high schools should pay closer attention to those students who have the risk factors for intermittent addiction and addiction to the Internet. Early preventive intervention programs are needed that consider the individual severity level of Internet addiction.",
"title": ""
},
{
"docid": "633c906446a11252c3ab9e0aad20189c",
"text": "The term \" gamification \" is generally used to denote the application of game mechanisms in non‐gaming environments with the aim of enhancing the processes enacted and the experience of those involved. In recent years, gamification has become a catchword throughout the fields of education and training, thanks to its perceived potential to make learning more motivating and engaging. This paper is an attempt to shed light on the emergence and consolidation of gamification in education/training. It reports the results of a literature review that collected and analysed around 120 papers on the topic published between 2011 and 2014. These originate from different countries and deal with gamification both in training contexts and in formal educational, from primary school to higher education. The collected papers were analysed and classified according to various criteria, including target population, type of research (theoretical vs experimental), kind of educational contents delivered, and the tools deployed. The results that emerge from this study point to the increasing popularity of gamification techniques applied in a wide range of educational settings. At the same time, it appears that over the last few years the concept of gamification has become more clearly defined in the minds of researchers and practitioners. Indeed, until fairly recently the term was used by many to denote the adoption of game artefacts (especially digital ones) as educational tools for learning a specific subject such as algebra. In other words, it was used as a synonym of Game Based Learning (GBL) rather than to identify an educational strategy informing the overall learning process, which is treated globally as a game or competition. However, this terminological confusion appears only in a few isolated cases in this literature review, suggesting that a certain level of taxonomic and epistemological convergence is underway.",
"title": ""
},
{
"docid": "616d20b1359cc1cf4fcfb1a0318d721e",
"text": "The Burj Khalifa Project is the tallest structure ever built by man; the tower is 828 meters tall and compromise of 162 floors above grade and 3 basement levels. Early integration of aerodynamic shaping and wind engineering played a major role in the architectural massing and design of this multi-use tower, where mitigating and taming the dynamic wind effects was one of the most important design criteria set forth at the onset of the project design. This paper provides brief description of the tower structural systems, focuses on the key issues considered in construction planning of the key structural components, and briefly outlines the execution of one of the most comprehensive structural health monitoring program in tall buildings.",
"title": ""
},
{
"docid": "485cda7203863d2ff0b2070ca61b1126",
"text": "Interestingly, understanding natural language that you really wait for now is coming. It's significant to wait for the representative and beneficial books to read. Every book that is provided in better way and utterance will be expected by many peoples. Even you are a good reader or not, feeling to read this book will always appear when you find it. But, when you feel hard to find it as yours, what to do? Borrow to your friends and don't know when to give back it to her or him.",
"title": ""
},
{
"docid": "9a70c1dbd61029482dbfa8d39238c407",
"text": "Background: Advertisers optimization is one of the most fundamental tasks in paid search, which is a multi-billion industry as a major part of the growing online advertising market. As paid search is a three-player game (advertisers, search users and publishers), how to optimize large-scale advertisers to achieve their expected performance becomes a new challenge, for which adaptive models have been widely used.",
"title": ""
},
{
"docid": "f3f2184b1fd6a62540f8547df3014b44",
"text": "Social Media Analytics is an emerging interdisciplinary research field that aims on combining, extending, and adapting methods for analysis of social media data. On the one hand it can support IS and other research disciplines to answer their research questions and on the other hand it helps to provide architectural designs as well as solution frameworks for new social media-based applications and information systems. The authors suggest that IS should contribute to this field and help to develop and process an interdisciplinary research agenda.",
"title": ""
},
{
"docid": "62c49155e92350a0420fb215f0a92f78",
"text": "Coordination, the process by which an agent reasons about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner, is perhaps the key problem of the discipline of Distributed Artificial Intelligence (DAI). In order to make advances it is important that the theories and principles which guide this central activity are uncovered and analysed in a systematic and rigourous manner. To this end, this paper models agent communities using a distributed goal search formalism, and argues that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in all DAI systems. 1. The Coordination Problem Participation in any social situation should be both simultaneously constraining, in that agents must make a contribution to it, and yet enriching, in that participation provides resources and opportunities which would otherwise be unavailable (Gerson, 1976). Coordination, the process by which an agent reasons about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner, is the key to achieving this objective. Without coordination the benefits of decentralised problem solving vanish and the community may quickly degenerate into a collection of chaotic, incohesive individuals. In more detail, the objectives of the coordination process are to ensure: that all necessary portions of the overall problem are included in the activities of at least one agent, that agents interact in a manner which permits their activities to be developed and integrated into an overall solution, that team members act in a purposeful and consistent manner, and that all of these objectives are achievable within the available computational and resource limitations (Lesser and Corkill, 1987). Specific examples of coordination activities include supplying timely information to needy agents, ensuring the actions of multiple actors are synchronised and avoiding redundant problem solving. There are three main reasons why the actions of multiple agents need to be coordinated: • because there are dependencies between agents’ actions Interdependence occurs when goals undertaken by individual agents are related either because local decisions made by one agent have an impact on the decisions of other community members (eg when building a house, decisions about the size and location of rooms impacts upon the wiring and plumbing) or because of the possibility of harmful interactions amongst agents (eg two mobile robots may attempt to pass through a narrow exit simultaneously, resulting in a collision, damage to the robots and blockage of the exit). Contribution to Foundations of DAI 2 • because there is a need to meet global constraints Global constraints exist when the solution being developed by a group of agents must satisfy certain conditions if it is to be deemed successful. For instance, a house building team may have a budget of £250,000, a distributed monitoring system may have to react to critical events within 30 seconds and a distributed air traffic control system may have to control the planes with a fixed communication bandwidth. If individual agents acted in isolation and merely tried to optimise their local performance, then such overarching constraints are unlikely to be satisfied. Only through coordinated action will acceptable solutions be developed. 
• because no one individual has sufficient competence, resources or information to solve the entire problem Many problems cannot be solved by individuals working in isolation because they do not possess the necessary expertise, resources or information. Relevant examples include the tasks of lifting a heavy object, driving in a convoy and playing a symphony. It may be impractical or undesirable to permanently synthesize the necessary components into a single entity because of historical, political, physical or social constraints, therefore temporary alliances through cooperative problem solving may be the only way to proceed. Differing expertise may need to be combined to produce a result outside of the scope of any of the individual constituents (eg in medical diagnosis, knowledge about heart disease, blood disorders and respiratory problems may need to be combined to diagnose a patient’s illness). Different agents may have different resources (eg processing power, memory and communications) which all need to be harnessed to solve a complex problem. Finally, different agents may have different information or viewpoints of a problem (eg in concurrent engineering systems, the same product may be viewed from a design, manufacturing and marketing perspective). Even when individuals can work independently, meaning coordination is not essential, information discovered by one agent can be of sufficient use to another that the two agents can solve the problem more than twice as fast. For example, when searching for a lost object in a large area it is often better, though not essential, to do so as a team. Analysis of this “combinatorial implosion” phenomena (Kornfield and Hewitt, 1981) has resulted in the postulation that cooperative search, when sufficiently large, can display universal characteristics which are independent of the nature of either the individual processes or the particular domain being tackled (Clearwater et al., 1991). If all the agents in the system could have complete knowledge of the goals, actions and interactions of their fellow community members and could also have infinite processing power, it would be possible to know exactly what each agent was doing at present and what it is intending to do in the future. In such instances, it would be possible to avoid conflicting and redundant efforts and systems could be perfectly coordinated (Malone, 1987). However such complete knowledge is infeasible, in any community of reasonable complexity, because bandwidth limitations make it impossible for agents to be constantly informed of all developments. Even in modestly sized communities, a complete analysis to determine the detailed activities of each agent is impractical the computation and communication costs of determining the optimal set and allocation of activities far outweighs the improvement in problem solving performance (Corkill and Lesser, 1986). Contribution to Foundations of DAI 3 As all community members cannot have a complete and accurate perspective of the overall system, the next easiest way of ensuring coherent behaviour is to have one agent with a wider picture. This global controller could then direct the activities of the others, assign agents to tasks and focus problem solving to ensure coherent behaviour. However such an approach is often impractical in realistic applications because even keeping one agent informed of all the actions in the community would swamp the available bandwidth. 
Also the controller would become a severe communication bottleneck and would render the remaining components unusable if it failed. To produce systems without bottlenecks and which exhibit graceful degradation of performance, most DAI research has concentrated on developing communities in which both control and data are distributed. Distributed control means that individuals have a degree of autonomy in generating new actions and in deciding which tasks to do next. When designing such systems it is important to ensure that agents spend the bulk of their time engaged on solving the domain level problems for which they were built, rather than in communication and coordination activities. To this end, the community should be decomposed into the most modular units possible. However the designer should ensure that these units are of sufficient granularity to warrant the overhead inherent in goal distribution distributing small tasks can prove more expensive than performing them in one place (Durfee et al., 1987). The disadvantage of distributing control and data is that knowledge of the system’s overall state is dispersed throughout the community and each individual has only a partial and imprecise perspective. Thus there is an increased degree of uncertainty about each agent’s actions, meaning that it more difficult to attain coherent global behaviour for example, agents may spread misleading and distracting information, multiple agents may compete for unshareable resources simultaneously, agents may unwittingly undo the results of each others activities and the same actions may be carried out redundantly. Also the dynamics of such systems can become extremely complex, giving rise to nonlinear oscillations and chaos (Huberman and Hogg, 1988). In such cases the coordination process becomes correspondingly more difficult as well as more important1. To develop better and more integrated models of coordination, and hence improve the efficiency and utility of DAI systems, it is necessary to obtain a deeper understanding of the fundamental concepts which underpin agent interactions. The first step in this analysis is to determine the perspective from which coordination should be described. When viewing agents from a purely behaviouristic (external) perspective, it is, in general, impossible to determine whether they have coordinated their actions. Firstly, actions may be incoherent even if the agents tried to coordinate their behaviour. This may occur, for instance, because their models of each other or of the environment are incorrect. For example, robot1 may see robot2 heading for exit2 and, based on this observation and the subsequent deduction that it will use this exit, decide to use exit1. However if robot2 is heading towards exit2 to pick up a particular item and actually intends to use exit1 then there may be incoherent behaviour (both agents attempting to use the same exit) although there was coordination. Secondly, even if there is coherent action, it may not",
"title": ""
},
{
"docid": "08e6dfcc9122a8116a82b292a75757f0",
"text": "Shape structure is about the arrangement and relations between shape parts. Structure-aware shape processing goes beyond local geometry and low level processing, and analyzes and processes shapes at a high level. It focuses more on the global inter and intra semantic relations among the parts of shape rather than on their local geometry.\n With recent developments in easy shape acquisition, access to vast repositories of 3D models, and simple-to-use desktop fabrication possibilities, the study of structure in shapes has become a central research topic in shape analysis, editing, and modeling. A whole new line of structure-aware shape processing algorithms has emerged that base their operation on an attempt to understand such structure in shapes. The algorithms broadly consist of two key phases: an analysis phase, which extracts structural information from input data; and a (smart) processing phase, which utilizes the extracted information for exploration, editing, and synthesis of novel shapes.\n In this course, we will organize, summarize, and present the key concepts and methodological approaches towards efficient structure-aware shape processing. We discuss common models of structure, their implementation in terms of mathematical formalism and algorithms, and explain the key principles in the context of a number of state-of-the-art approaches. Further, we attempt to list the key open problems and challenges, both at the technical and at the conceptual level, to make it easier for new researchers to better explore and contribute to this topic.\n Our goal is to both give the practitioner an overview of available structure-aware shape processing techniques, as well as identify future research questions in this important, emerging, and fascinating research area.",
"title": ""
},
{
"docid": "50d56aa5eef9be1adc0514047a5777ef",
"text": "Photo-sequencing is the problem of recovering the temporal order of a set of still images of a dynamic event, taken asynchronously by a set of uncalibrated cameras. Solving this problem is a first, crucial step for analyzing (or visualizing) the dynamic content of the scene captured by a large number of freely moving spectators. We propose a geometric based solution, followed by rank aggregation to the photo-sequencing problem. Our algorithm trades spatial certainty for temporal certainty. Whereas the previous solution proposed by [4] relies on two images taken from the same static camera to eliminate uncertainty in space, we drop the static-camera assumption and replace it with temporal information available from images taken from the same (moving) camera. Our method thus overcomes the limitation of the static-camera assumption, and scales much better with the duration of the event and the spread of cameras in space. We present successful results on challenging real data sets and large scale synthetic data (250 images).",
"title": ""
},
{
"docid": "d71c8d9f5fed873937d6a645f17c9b47",
"text": "Yang, C.-C., Prasher, S.O., Landry, J.-A., Perret, J. and Ramaswamy, H.S. 2000. Recognition of weeds with image processing and their use with fuzzy logic for precision farming. Can. Agric. Eng. 42:195200. Herbicide use can be reduced if the spatial distribution of weeds in the field is taken into account. This paper reports the initial stages of development of an image capture/processing system to detect weeds, as well as a fuzzy logic decision-making system to determine where and how much herbicide to apply in an agricultural field. The system used a commercially available digital camera and a personal computer. In the image processing stage, green objects in each image were identified using a greenness method that compared the red, green, and blue (RGB) intensities. The RGB matrix was reduced to a binary form by applying the following criterion: if the green intensity of a pixel was greater than the red and the blue intensities, then the pixel was assigned a value of one; otherwise the pixel was given a value of zero. The resulting binary matrix was used to compute greenness area for weed coverage, and greenness distribution of weeds (weed patch). The values of weed coverage and weed patch were inputs to the fuzzy logic decision-making system, which used the membership functions to control the herbicide application rate at each location. Simulations showed that a graduated fuzzy strategy could potentially reduce herbicide application by 5 to 24%, and that an on/off strategy resulted in an even greater reduction of 15 to 64%.",
"title": ""
},
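The greenness rule described in the preceding passage (a pixel counts as vegetation when its green intensity exceeds both its red and blue intensities) is simple enough to sketch directly. The snippet below is an illustrative reconstruction, not the authors' code; the array shapes and the coverage measure are assumptions used only for demonstration.

```python
import numpy as np


def greenness_mask(rgb):
    """Binary vegetation mask: 1 where G > R and G > B, else 0 (the paper's rule)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((g > r) & (g > b)).astype(np.uint8)


def weed_coverage(mask):
    """Fraction of pixels flagged as green -- a stand-in for the 'weed coverage' input."""
    return float(mask.mean())


# Hypothetical usage on a random image standing in for a field photograph.
image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
mask = greenness_mask(image)
print("weed coverage:", weed_coverage(mask))
```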
{
"docid": "258edc5760f2d87c8e3018e8d4c3bb2a",
"text": "Preamble This guideline is intended to indicate preferable approaches to the management of patients with colorectal polyps. It does not deal with either patients with known colon cancer or familial polyposis. When only data that will not withstand objective scrutiny are available, an American College of Gastroenterology (ACG) recommendation is identified as a consensus of experts. The guideline is applicable to all physicians who address this subject without regard to specialty training or interests and is intended to indicate the preferable but not necessarily the only acceptable approach to the patient with colorectal polyps. The guideline is intended to be flexible and must be distinguished from standards of care that are inflexible and rarely violated. Given the wide range of specifics in this common health care problem, the physician must always choose the course best suited to the individual patient and the variables in existence at the moment of decision. This guideline was developed under the auspices of the American College of Gastroenterology and its Practice Parameters Committee and approved by the Board of Trustees. It has been intensely reviewed and revised by the committee, other experts in the field, physicians who will use it, and specialists in the science of decision analysis (see Methods). The ACG recommendations are therefore considered valid at the time of their publication based on available data. Methods The human subject English-language literature was searched using MEDLINE and the following MeSH terms: polyp-, adenoma-, and polypectomy-colorectal. The titles and abstracts of the articles were reviewed by the primary author. All randomized, controlled trials were read in depth, as were all large casecontrol and cohort studies. In the resulting review, evidence was evaluated along a hierarchy with randomized, controlled trials receiving the greatest weight. Abstracts presented at national meetings were used only in special circumstances in which unique data from ongoing randomized trials were presented. When scientific data were lacking, recommendations were based on expert consensus. During its preparation, the guideline was submitted for review by the Practice Committees and the Governing Boards of the American Gastroenterological Association and the American Society for Gastrointestinal Endoscopy, and by selected authorities in colorectal neoplasia including gastroenterologists, pathologists, and radiologists. All recommendations resulting from this review were carefully considered by the Committee and incorporated in the final revision. In addition, the guideline was circulated for review and comment to primary internal medicine and family practice societies and to the membership of the American College of Gastroenterology. Clinical Considerations A colorectal polyp is a circumscribed mass of tissue that projects above the surface of the bowel mucosa. Grossly, a polyp is classified as pedunculated or sessile depending on whether it contains a discrete stalk. Polyps may ulcerate and cause intestinal bleeding. Rarely, large polyps may cause symptoms of partial bowel obstruction. Most polyps, however, are asymptomatic lesions detected only by screening or diagnostic studies. This guideline addresses the management of patients known to have one or more polyps; it does not address primary screening for colorectal neoplasia. 
Colorectal polyps are extremely common in Western countries; they are found in more than 30% of autopsies performed in people older than 60 years [1, 2]. The main importance of polyps is their well-recognized relationship to colorectal cancer [3]. After years of debate, it is generally accepted that most colorectal cancers arise from benign, neoplastic polyps (adenomas). Although this adenoma-cancer sequence can probably never be proved directly, persuasive data exist indicating that colorectal neoplasia changes through a continuous process from normal mucosa, to benign adenoma, to carcinoma [4]. Evidence of this sequence includes the following: 1. There is a parallel prevalence of adenomas and carcinomas, with the average age of patients with adenomas being 5 to 7 years less than that of patients with carcinomas [5, 6]. 2. Cancer is often contiguous with benign adenomatous tissue, whereas small carcinomas without adenomatous tissue are rare [7, 8]. 3. The adenomas of the familial polyposis syndrome, a well-recognized premalignant state, are histologically similar to sporadic adenomas [9]. 4. As adenomas grow, they exhibit increasing cellular atypia and abnormal chromosomal patterns [5, 10]. 5. The anatomic distribution is similar for adenomas and carcinomas [11]. 6. Adenomas are found in more than one third of surgical specimens containing a colorectal cancer [12, 13]. Histologically, polyps are classified as neoplastic (adenomas) or non-neoplastic [14, 15]. Non-neoplastic polyps have no malignant potential and include hyperplastic polyps, hamartomas, lymphoid aggregates, and inflammatory polyps. Neoplastic polyps or adenomas have malignant potential and are classified according to the World Health Organization as tubular, tubulovillous, or villous adenomas, depending on the presence and volume of villous tissue [16]. Tubular adenomas are composed of straight or branched tubules of dysplastic tissue; villous adenomas contain fingerlike projections of dysplastic epithelium. Approximately 70% of polyps removed at colonoscopy are adenomas [17]. Seventy percent to 85% of these are classified as tubular (0% to 25%, villous tissue), 10% to 25% are tubulovillous (25% to 75%, villous tissue), and fewer than 5% are villous adenomas (75% to 100%, villous tissue). Some degree of dysplasia exists in all adenomas. Most authorities recommend that dysplasia be classified as mild, moderate, or severe [18]. Others prefer only two gradations, low- and high-grade dysplasia, because this classification reduces the problem of interobserver variation [19]. Severe, or high-grade, dysplasia includes the histologic changes previously called carcinoma in situ, intramucosal carcinoma, or focal carcinoma. Abandonment of these terms is recommended because of concern for misinterpretation of the clinical significance that might lead to overtreatment, and thus they will not be used in this guideline. Approximately 5% to 7% of patients with adenomas have severe dysplasia and 3% to 5% have invasive carcinoma at the time of diagnosis. Increasing dysplasia and, presumably, malignant potential correlate with increasing adenoma size, villous component, and patient age [19]. The likelihood of invasive carcinoma also increases with increasing polyp size [15]. The development of colorectal adenomas and carcinomas probably involves both environmental and genetic factors [10, 20-22]. 
Environmental carcinogens appear to act on a genetically susceptible mucosa causing cellular proliferation followed by oncogene activation and chromosomal deletions leading to adenoma formation, growth, increasing dysplasia, and then invasive carcinoma. Diagnosis and Treatment Colonic polyps are diagnosed by endoscopy or barium radiography. Because most polyps are asymptomatic, they are usually found incidentally. The single-contrast barium enema examination is an inaccurate method for detecting polyps in most patients. In one large screening study, single-contrast barium enemas found only 40% of neoplastic polyps detected on subsequent colonoscopy [23]. Double-contrast techniques greatly improve the accuracy of radiologic methods for detecting polyps [24]. A study comparing the accuracy of both radiographic methods in 425 patients reported a sensitivity for detecting polyps of 90% and 40% for double- and single-contrast methods, respectively [25]. Several studies indicate that the double-contrast barium enema can accurately detect most cancers and most polyps that are larger than 1 cm in diameter [26, 27]. The main limitation of barium enema is that it does not allow biopsy or polypectomy. The most common use of flexible sigmoidoscopy is for screening asymptomatic average-risk persons for colonic neoplasms. Flexible sigmoidoscopy done with the standard 60-cm instrument detects two to three times as many polyps and is more comfortable than is rigid sigmoidoscopy [28, 29]. Sensitivity and specificity are very high because few polyps within reach of the examination instrument are missed and the false-positive rate is negligible. The combination of a double-contrast barium enema and flexible sigmoidoscopy has been promoted as an acceptable alternative to colonoscopy for patients requiring a complete examination of the large bowel. When a barium enema is used for surveillance, rigid or flexible proctosigmoidoscopy should always be done to ensure an adequate examination of the rectum. Flexible sigmoidoscopy also provides a more accurate examination of the sigmoid colon, which is often a difficult area for the radiologist to examine. Double-contrast barium enema appears to be more accurate in the proximal colon than in the distal colon [30]. Although flexible sigmoidoscopy allows biopsy of lesions, it should not be used for electrosurgical polypectomy unless the entire colon is prepared to eliminate the risk for electrocautery-induced explosion [31]. Furthermore, detection of a neoplastic polyp by screening flexible sigmoidoscopy is usually an indication for colonoscopy, at which time the polyp can be resected and a search made for synchronous neoplasia. Colonoscopy is the best method for detecting polyps accurately, especially those measuring less than 1 cm in diameter, and it allows biopsy of lesions and resection of most polyps [32, 33]. A controlled, single-blinded comparison study of double-contrast barium enema and colonoscopy performed by expert examiners reported an accuracy of 94% and 67% for diagnosing polyps for colonoscopy and radiographic studies, respectively [34]. In a recent, similarly controlled investigation, tandem",
"title": ""
},
{
"docid": "52e75a2e3d34c1cef5e61c69e074caf2",
"text": "In this paper, we propose an efficient method for license plate localization in the images with various situations and complex background. At the first, in order to reduce problems such as low quality and low contrast in the vehicle images, image contrast is enhanced by the two different methods and the best for following is selected. At the second part, vertical edges of the enhanced image are extracted by sobel mask. Then the most of the noise and background edges are removed by an effective algorithm. The output of this stage is given to a morphological filtering to extract the candidate regions and finally we use several geometrical features such as area of the regions, aspect ratio and edge density to eliminate the non-plate regions and segment the plate from the input car image. This method is performed on some real images that have been captured at the different imaging conditions. The appropriate experimental results show that our proposed method is nearly independent to environmental conditions such as lightening, camera angles and camera distance from the automobile, and license plate rotation.",
"title": ""
},
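As a rough illustration of the pipeline described in the preceding passage (contrast enhancement, vertical Sobel edges, then morphological filtering to obtain candidate plate regions), the sketch below uses OpenCV. It is not the authors' implementation; the kernel sizes, thresholds and geometric limits are assumptions chosen only for demonstration.

```python
import cv2


def candidate_plate_regions(bgr):
    """Return bounding boxes of plate-like regions from a BGR image."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                              # simple contrast enhancement
    edges = cv2.Sobel(gray, cv2.CV_8U, dx=1, dy=0, ksize=3)    # vertical edges
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # A wide closing kernel merges dense vertical edges into plate-shaped blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        if 2.0 < aspect < 6.0 and w * h > 1000:                # illustrative geometric filters
            candidates.append((x, y, w, h))
    return candidates
```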
{
"docid": "489f3a1bc2527f683258ddf17d53807b",
"text": "This paper presents the development of a dual channel, air coupled Ultra-Wideband (UWB) Ground Penetrating Radar (GPR) targeting highway pavements and bridge deck inspections. Compared to most existing GPRs with a single channel and low survey speeds, our GPR possesses competitive features, including wide area coverage, high spatial resolution and operating capability at normal highway driving speed (up to 60 mph). The system has a two-channel microwave front end, a high speed (8 Gsps) real time data acquisition unit, a high throughput multithread data transmission and storage module, and a customized low-cost control element developed in a field-programmable gate array (FPGA). Experiments with different steel reinforcing bars establish GPR system performance.",
"title": ""
},
{
"docid": "df97dff1e2539f192478f2aa91f69cc4",
"text": "Computer systems are increasingly employed in circumstances where their failure (or even their correct operation, if they are built to flawed requirements) can have serious consequences. There is a surprising diversity of opinion concerning the properties that such “critical systems” should possess, and the best methods to develop them. The dependability approach grew out of the tradition of ultra-reliable and fault-tolerant systems, while the safety approach grew out of the tradition of hazard analysis and system safety engineering. Yet another tradition is found in the security community, and there are further specialized approaches in the tradition of real-time systems. In this report, I examine the critical properties considered in each approach, and the techniques that have been developed to specify them and to ensure their satisfaction. Since systems are now being constructed that must satisfy several of these critical system properties simultaneously, there is particular interest in the extent to which techniques from one tradition support or conflict with those of another, and in whether certain critical system properties are fundamentally compatible or incompatible with each other. As a step toward improved understanding of these issues, I suggest a taxonomy, based on Perrow’s analysis, that considers the complexity of component interactions and tightness of coupling as primary factors. C. Perrow. Normal Accidents: Living with High Risk Technologies. Basic Books, New York, NY, 1984.",
"title": ""
}
] |
scidocsrr
|
42d4b11d1799b7f42fbd220812e2f7a6
|
Using ICT with people with special education needs: what the literature tells us
|
[
{
"docid": "8c5a76124b7d37929cef1a7a67eae3ba",
"text": "This paper describes the ongoing development of a highly configurable word processing environment developed using a pragmatic, obstacle-by-obstacle approach to alleviating some of the visual problems encountered by dyslexic computer users. The paper describes the current version of the software and the development methodology as well as the results of a pilot study which indicated that visual environment individually configured using the SeeWord software improved reading accuracy as well as subjectively rated reading comfort.",
"title": ""
}
] |
[
{
"docid": "1b4710f97723339e2d20edcad67d31ab",
"text": "Three-way merging is a technique that may be employed for reintegrating changes to a document in cases where multiple independently modified copies have been made. While tools for three-way merge of ASCII text files exist in the form of the ubiquitous diff and patch tools these are of limited applicability to XML documents.\n We present a method for three-way merging of XML which is targeted at merging XML formats that model human-authored documents as ordered trees (e.g. rich text formats structured text drawings etc.). To this end we investigate a number of use cases on XML merging (collaborative editing propagating changes across document variants) from which we derive a set of high-level merge rules. Our merge is based on these rules.\n We propose that our merge is easy to both understand and implement yet sufficiently expressive to handle several important cases of merging on document structure that are beyond the capabilities of traditional text-based tools. In order to justify these claims we applied our merging method to the merging tasks contained in the use cases. The overall performance of the merge was found to be satisfactory.\n The key contributions of this work are: a set of merge rules derived from use cases on XML merging a compact and versatile XML merge in accordance with these rules and a classification of conflicts in the context of that merge.",
"title": ""
},
{
"docid": "938afbc53340a3aa6e454d17789bf021",
"text": "BACKGROUND\nAll cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a \"shortcut\" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a \"shortcut\" for trustworthiness judgments.",
"title": ""
},
{
"docid": "20c3bfb61bae83494d7451b083bc2202",
"text": "Peripheral nerve hyperexcitability (PNH) syndromes can be subclassified as primary and secondary. The main primary PNH syndromes are neuromyotonia, cramp-fasciculation syndrome (CFS), and Morvan's syndrome, which cause widespread symptoms and signs without the association of an evident peripheral nerve disease. Their major symptoms are muscle twitching and stiffness, which differ only in severity between neuromyotonia and CFS. Cramps, pseudomyotonia, hyperhidrosis, and some other autonomic abnormalities, as well as mild positive sensory phenomena, can be seen in several patients. Symptoms reflecting the involvement of the central nervous system occur in Morvan's syndrome. Secondary PNH syndromes are generally seen in patients with focal or diffuse diseases affecting the peripheral nervous system. The PNH-related symptoms and signs are generally found incidentally during clinical or electrodiagnostic examinations. The electrophysiological findings that are very useful in the diagnosis of PNH are myokymic and neuromyotonic discharges in needle electromyography along with some additional indicators of increased nerve fiber excitability. Based on clinicopathological and etiological associations, PNH syndromes can also be classified as immune mediated, genetic, and those caused by other miscellaneous factors. There has been an increasing awareness on the role of voltage-gated potassium channel complex autoimmunity in primary PNH pathogenesis. Then again, a long list of toxic compounds and genetic factors has also been implicated in development of PNH. The management of primary PNH syndromes comprises symptomatic treatment with anticonvulsant drugs, immune modulation if necessary, and treatment of possible associated dysimmune and/or malignant conditions.",
"title": ""
},
{
"docid": "e976a452f6e8a04036608c7354fed8f3",
"text": "This paper discusses control and protection of power electronics interfaced distributed generation (DG) systems in a customer-driven microgrid (CDM). Particularly, the following topics will be addressed: microgrid system configurations and features, DG interfacing converter topologies and control, power flow control in grid-connected operation, islanding detection, autonomous islanding operation with load shedding and load demand sharing among DG units, and system/DG protection. Most of the above mentioned control and protection issues should be embedded into the DG interfacing converter control scheme. Some case study results are also shown in this paper to further illustrate the above mentioned issues.",
"title": ""
},
{
"docid": "6c82a481bc5613091b49213baf23185a",
"text": "! Abstract The world’s population is concentrated in urban areas. This change in demography has brought landscape transformations that have a number of documented effects on stream ecosystems. The most consistent and pervasive effect is an increase in impervious surface cover within urban catchments, which alters the hydrology and geomorphology of streams. This results in predictable changes in stream habitat. In addition to imperviousness, runoff from urbanized surfaces aswell asmunicipal and industrial discharges result in increased loading of nutrients, metals, pesticides, and other contaminants to streams. These changes result in consistent declines in the richness of algal, invertebrate, and fish communities in urban streams. Although understudied in urban streams, ecosystem processes are also affected by urbanization. Urban streams represent opportunities for ecologists interested in studying disturbance and contributing to more effective landscape management.",
"title": ""
},
{
"docid": "056f9496de2911ac3d41f7e03a2e6f76",
"text": "This paper presents a survey on the role of negationin sentiment analysis. Negation is a very common linguistic construction that affects polarity and, therefore, needs to be taken into consideration in sentiment analysis. We will present various computational approaches modeling negation in sentiment analysis. We will, in particular, focus on aspects, such as level of representation used for sentiment analysis, negation word detection and scope of negation. We will also discuss limits and challenges of negation modeling on that task.",
"title": ""
},
{
"docid": "49002be42dfa6e6998e6975203357e3b",
"text": "In this paper, we present a new tone mapping algorithm for the display of high dynamic range images, inspired by adaptive process of the human visual system. The proposed algorithm is based on the center-surround Retinex processing. In our method, the local details are enhanced according to a non-linear adaptive spatial filter (Gaussian filter), whose shape (filter variance) is adapted to high-contrast edges of the image. Thus our method does not generate halo artifacts meanwhile preserves visibility and contrast impression of high dynamic range scenes in the common display devices. The proposed method is tested on a variety of HDR images and the results show the good performance of our method in terms of visual quality.",
"title": ""
},
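The center-surround Retinex idea in the preceding passage — comparing each pixel with a Gaussian-blurred surround estimate, typically in the log domain — can be sketched in a few lines. This is a minimal single-scale version with a fixed surround scale; the paper's edge-adaptive filter variance is not reproduced here, and the parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def single_scale_retinex(luminance, sigma=30.0, eps=1e-6):
    """Log-domain center-surround Retinex on an HDR luminance channel.

    luminance: 2-D array of positive HDR values.
    sigma: standard deviation of the Gaussian surround (fixed here; the paper
           adapts it near high-contrast edges to avoid halos).
    """
    log_center = np.log(luminance + eps)
    log_surround = np.log(gaussian_filter(luminance, sigma) + eps)
    retinex = log_center - log_surround
    # Rescale to [0, 1] for display on a low dynamic range device.
    retinex -= retinex.min()
    return retinex / (retinex.max() + eps)


# Hypothetical HDR luminance map for demonstration.
hdr = np.exp(np.random.randn(256, 256)) * 100.0
ldr = single_scale_retinex(hdr)
```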
{
"docid": "6c8d0b2b0da9d39d7d34e64700382767",
"text": "We consider two graph models of semantic change. The first is a time-series model that relates embedding vectors from one time period to embedding vectors of previous time periods. In the second, we construct one graph for each word: nodes in this graph correspond to time points and edge weights to the similarity of the word’s meaning across two time points. We apply our two models to corpora across three different languages. We find that semantic change is linear in two senses. Firstly, today’s embedding vectors (= meaning) of words can be derived as linear combinations of embedding vectors of their neighbors in previous time periods. Secondly, self-similarity of words decays linearly in time. We consider both findings as new laws/hypotheses of semantic change.",
"title": ""
},
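The second claim in the preceding passage — that a word's self-similarity decays linearly in time — can be checked with an ordinary least-squares fit of similarity against elapsed time. The snippet below is a toy illustration with synthetic similarity values; the time points, decay rate and noise level are assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical self-similarity of one word to its earliest embedding, one value per decade.
years = np.arange(1900, 2010, 10)
elapsed = years - years[0]
similarity = 1.0 - 0.004 * elapsed + np.random.normal(0, 0.01, size=elapsed.shape)

# Fit similarity = slope * elapsed + intercept; a well-fitting negative slope
# is what a "linear decay" law of semantic change would predict.
slope, intercept = np.polyfit(elapsed, similarity, deg=1)
predicted = slope * elapsed + intercept
r2 = 1 - np.sum((similarity - predicted) ** 2) / np.sum((similarity - similarity.mean()) ** 2)
print(f"slope per year: {slope:.4f}, R^2: {r2:.3f}")
```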
{
"docid": "b74818aca22974927fdcdcbf60ce239b",
"text": "We are currently observing a significant increase in the popularity of Unmanned Aerial Vehicles (UAVs), popularly also known by their generic term drones. This is not only the case for recreational UAVs, that one can acquire for a few hundred dollars, but also for more sophisticated ones, namely professional UAVs, whereby the cost can reach several thousands of dollars. These professional UAVs are known to be largely employed in sensitive missions such as monitoring of critical infrastructures and operations by the police force. Given these applications, and in contrast to what we have been seeing for the case of recreational UAVs, one might assume that professional UAVs are strongly resilient to security threats. In this demo we prove such an assumption wrong by presenting the security gaps of a professional UAV, which is used for critical operations by police forces around the world. We demonstrate how one can exploit the identified security vulnerabilities, perform a Man-in-the-Middle attack, and inject control commands to interact with the compromised UAV. In addition, we discuss appropriate countermeasures to help improving the security and resilience of professional UAVs.",
"title": ""
},
{
"docid": "eb6636299df817817aa49f1f8dad04f5",
"text": "This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training a RNNs model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling nonlinear dynamics and long term temporal dependencies present in human motions. Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data using a discriminative network. We embed contact information into the generative deep learning model to further improve the performance of our generative model. The resulting model is appealing to motion synthesis and control because it is compact, contact-aware, and can generate an infinite number of naturally looking motions with infinite lengths. Our experiments show that motions generated by our deep learning model are always highly realistic and comparable to high-quality motion capture data. We demonstrate the power and effectiveness of our models by exploring a variety of applications, ranging from random motion synthesis, online/offline motion control, and motion filtering. We show the superiority of our generative model by comparison against baseline models.",
"title": ""
},
{
"docid": "c8cd0f14edee76888e4f1fd0ccc72dfa",
"text": "BACKGROUND\nTotal hip and total knee arthroplasties are well accepted as reliable and suitable surgical procedures to return patients to function. Health-related quality-of-life instruments have been used to document outcomes in order to optimize the allocation of resources. The objective of this study was to review the literature regarding the outcomes of total hip and knee arthroplasties as evaluated by health-related quality-of-life instruments.\n\n\nMETHODS\nThe Medline and EMBASE medical literature databases were searched, from January 1980 to June 2003, to identify relevant studies. Studies were eligible for review if they met the following criteria: (1). the language was English or French, (2). at least one well-validated and self-reported health-related quality of life instrument was used, and (3). a prospective cohort study design was used.\n\n\nRESULTS\nOf the seventy-four studies selected for the review, thirty-two investigated both total hip and total knee arthroplasties, twenty-six focused on total hip arthroplasty, and sixteen focused on total knee arthroplasty exclusively. The most common diagnosis was osteoarthritis. The duration of follow-up ranged from seven days to seven years, with the majority of studies describing results at six to twelve months. The Short Form-36 and the Western Ontario and McMaster University Osteoarthritis Index, the most frequently used instruments, were employed in forty and twenty-eight studies, respectively. Seventeen studies used a utility index. Overall, total hip and total knee arthroplasties were found to be quite effective in terms of improvement in health-related quality-of-life dimensions, with the occasional exception of the social dimension. Age was not found to be an obstacle to effective surgery, and men seemed to benefit more from the intervention than did women. When improvement was found to be modest, the role of comorbidities was highlighted. Total hip arthroplasty appears to return patients to function to a greater extent than do knee procedures, and primary surgery offers greater improvement than does revision. Patients who had poorer preoperative health-related quality of life were more likely to experience greater improvement.\n\n\nCONCLUSIONS\nHealth-related quality-of-life data are valuable, can provide relevant health-status information to health professionals, and should be used as a rationale for the implementation of the most adequate standard of care. Additional knowledge and scientific dissemination of surgery outcomes should help to ensure better management of patients undergoing total hip or total knee arthroplasty and to optimize the use of these procedures.",
"title": ""
},
{
"docid": "0618e88e1319a66cd7f69db491f78aca",
"text": "The rich dependency structure found in the columns of real-world relational databases can be exploited to great advantage, but can also cause query optimizers---which usually assume that columns are statistically independent---to underestimate the selectivities of conjunctive predicates by orders of magnitude. We introduce CORDS, an efficient and scalable tool for automatic discovery of correlations and soft functional dependencies between columns. CORDS searches for column pairs that might have interesting and useful dependency relations by systematically enumerating candidate pairs and simultaneously pruning unpromising candidates using a flexible set of heuristics. A robust chi-squared analysis is applied to a sample of column values in order to identify correlations, and the number of distinct values in the sampled columns is analyzed to detect soft functional dependencies. CORDS can be used as a data mining tool, producing dependency graphs that are of intrinsic interest. We focus primarily on the use of CORDS in query optimization. Specifically, CORDS recommends groups of columns on which to maintain certain simple joint statistics. These \"column-group\" statistics are then used by the optimizer to avoid naive selectivity estimates based on inappropriate independence assumptions. This approach, because of its simplicity and judicious use of sampling, is relatively easy to implement in existing commercial systems, has very low overhead, and scales well to the large numbers of columns and large table sizes found in real-world databases. Experiments with a prototype implementation show that the use of CORDS in query optimization can speed up query execution times by an order of magnitude. CORDS can be used in tandem with query feedback systems such as the LEO learning optimizer, leveraging the infrastructure of such systems to correct bad selectivity estimates and ameliorating the poor performance of feedback systems during slow learning phases.",
"title": ""
},
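A small sketch of the statistical core described in the CORDS abstract above: sample a pair of columns, form a contingency table, and flag the pair as correlated when a chi-squared test rejects independence. The table, column names, sample size, and threshold are assumed for illustration; candidate enumeration, pruning heuristics, and soft functional-dependency detection are omitted.

```python
# Sketch: chi-squared test for correlation between two table columns, CORDS-style.
# Table, column names, sample size, and threshold are illustrative assumptions.
import pandas as pd
from scipy.stats import chi2_contingency

def columns_look_correlated(df, col_a, col_b, sample_rows=2000, alpha=0.01):
    sample = df[[col_a, col_b]].dropna()
    if len(sample) > sample_rows:
        sample = sample.sample(sample_rows, random_state=0)   # work on a sample
    table = pd.crosstab(sample[col_a], sample[col_b])          # contingency table
    chi2, p_value, dof, _ = chi2_contingency(table)
    return p_value < alpha                                      # reject independence?

# Hypothetical usage: recommend a joint statistic on ("make", "model") if correlated.
cars = pd.DataFrame({"make": ["vw", "vw", "bmw", "bmw"] * 50,
                     "model": ["golf", "polo", "x3", "x5"] * 50})
print(columns_look_correlated(cars, "make", "model"))
```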
{
"docid": "c101290e355e76df7581a4500c111c86",
"text": "The Internet of Things (IoT) is a network of physical things, objects, or devices, such as radio-frequency identification tags, sensors, actuators, mobile phones, and laptops. The IoT enables objects to be sensed and controlled remotely across existing network infrastructure, including the Internet, thereby creating opportunities for more direct integration of the physical world into the cyber world. The IoT becomes an instance of cyberphysical systems (CPSs) with the incorporation of sensors and actuators in IoT devices. Objects in the IoT have the potential to be grouped into geographical or logical clusters. Various IoT clusters generate huge amounts of data from diverse locations, which creates the need to process these data more efficiently. Efficient processing of these data can involve a combination of different computation models, such as in situ processing and offloading to surrogate devices and cloud-data centers.",
"title": ""
},
{
"docid": "7cb61609adf6e3c56c762d6fe322903c",
"text": "In this paper, we give an overview of the BitBlaze project, a new approach to computer security via binary analysis. In particular, BitBlaze focuses on building a unified binary analysis platform and using it to provide novel solutions to a broad spectrum of different security problems. The binary analysis platform is designed to enable accurate analysis, provide an extensible architecture, and combines static and dynamic analysis as well as program verification techniques to satisfy the common needs of security applications. By extracting security-related properties from binary programs directly, BitBlaze enables a principled, root-cause based approach to computer security, offering novel and effective solutions, as demonstrated with over a dozen different security applications.",
"title": ""
},
{
"docid": "0739c95aca9678b3c001c4d2eb92ec57",
"text": "The Image segmentation is referred to as one of the most important processes of image processing. Image segmentation is the technique of dividing or partitioning an image into parts, called segments. It is mostly useful for applications like image compression or object recognition, because for these types of applications, it is inefficient to process the whole image. So, image segmentation is used to segment the parts from image for further processing. There exist several image segmentation techniques, which partition the image into several parts based on certain image features like pixel intensity value, color, texture, etc. These all techniques are categorized based on the segmentation method used. In this paper the various image segmentation techniques are reviewed, discussed and finally a comparison of their advantages and disadvantages is listed.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "298d67edd4095672c69f14598ba12ab6",
"text": "Cryptocurrencies have emerged as important financial software systems. They rely on a secure distributed ledger data structure; mining is an integral part of such systems. Mining adds records of past transactions to the distributed ledger known as Blockchain, allowing users to reach secure, robust consensus for each transaction. Mining also introduces wealth in the form of new units of currency. Cryptocurrencies lack a central authority to mediate transactions because they were designed as peer-to-peer systems. They rely on miners to validate transactions. Cryptocurrencies require strong, secure mining algorithms. In this paper we survey and compare and contrast current mining techniques as used by major Cryptocurrencies. We evaluate the strengths, weaknesses, and possible threats to each mining strategy. Overall, a perspective on how Cryptocurrencies mine, where they have comparable performance and assurance, and where they have unique threats and strengths are outlined.",
"title": ""
},
{
"docid": "db11208267e18717bba0643bd4c9fa80",
"text": "Nasal tip deficiency can be congenital or secondary to previous nasal surgeries. Underdeveloped medial crura usually present with underprojected tip and lack of tip definition. Weakness or malposition of lateral crura causes alar rim retraction and lateral nasal wall weakness. Structural grafting of alar cartilages strengthens the tip framework, reinforces the disrupted support mechanisms, and controls the position of the nasal tip. In secondary cases, anatomic reconstruction of the weakened or interrupted alar cartilages and reconstitution of a stable nasal tip tripod must be the goal for a predictable outcome.",
"title": ""
},
{
"docid": "809aed520d0023535fec644e81ddbb53",
"text": "This paper presents an efficient image denoising scheme by using principal component analysis (PCA) with local pixel grouping (LPG). For a better preservation of image local structures, a pixel and its nearest neighbors are modeled as a vector variable, whose training samples are selected from the local window by using block matching based LPG. Such an LPG procedure guarantees that only the sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that the image local features can be well preserved after coefficient shrinkage in the PCA domain to remove the noise. The LPG-PCA denoising procedure is iterated one more time to further improve the denoising performance, and the noise level is adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in image fine structure preservation, compared with state-of-the-art denoising algorithms. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
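A rough numpy sketch of one step in the spirit of the LPG-PCA abstract above: group patches similar to a reference patch by block matching (the LPG step), estimate a PCA basis from the group, shrink the transform coefficients using the noise variance, and transform back. Patch and window sizes and the linear shrinkage rule are simplified assumptions; the paper's exact shrinkage, second iteration, and noise re-estimation are not reproduced.

```python
# Sketch of one LPG-PCA denoising step for a single reference patch.
# Patch/window sizes and the Wiener-style shrinkage are simplified assumptions.
import numpy as np

def denoise_patch(img, cy, cx, noise_var, patch=5, window=15, n_similar=40):
    r = patch // 2
    ref = img[cy - r:cy + r + 1, cx - r:cx + r + 1].ravel()
    cands, dists = [], []
    for y in range(cy - window, cy + window + 1):           # local pixel grouping
        for x in range(cx - window, cx + window + 1):
            blk = img[y - r:y + r + 1, x - r:x + r + 1].ravel()
            if blk.size == ref.size:                         # skip clipped border blocks
                cands.append(blk)
                dists.append(np.sum((blk - ref) ** 2))
    order = np.argsort(dists)[:n_similar]                    # most similar blocks
    X = np.array(cands)[order]                               # (n_similar, patch*patch)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / len(X)
    eigval, eigvec = np.linalg.eigh(cov)                     # PCA basis of the group
    coeff = (ref - mean) @ eigvec
    signal_var = np.maximum(eigval - noise_var, 0)
    shrink = signal_var / (signal_var + noise_var + 1e-12)   # coefficient shrinkage
    return (coeff * shrink) @ eigvec.T + mean                # denoised reference patch

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
clean_patch = denoise_patch(noisy, 32, 32, noise_var=0.01)
```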
{
"docid": "e52a2c807612cb383076f2fae508c6cc",
"text": "We present a new corpus for computational stylometry, more specifically authorship attribution and the prediction of author personality from text. Because of the large number of authors (145), the corpus will allow previously impossible studies of variation in features considered predictive for writing style. The innovative meta-information (personality profiles of the authors) associated with these texts allows the study of personality prediction, a not yet very well researched aspect of style. In this paper, we describe the contents of the corpus and show its use in both authorship attribution and personality prediction. We focus on features that have been proven useful in the field of author recognition. Syntactic features like part-of-speech n-grams are generally accepted as not being under the author’s conscious control and therefore providing good clues for predicting gender or authorship. We want to test whether these features are helpful for personality prediction and authorship attribution on a large set of authors. Both tasks are approached as text categorization tasks. First a document representation is constructed based on feature selection from the linguistically analyzed corpus (using the Memory-Based Shallow Parser (MBSP)). These are associated with each of the 145 authors or each of the four components of the Myers-Briggs Type Indicator (Introverted-Extraverted, Sensing-iNtuitive, Thinking-Feeling, JudgingPerceiving). Authorship attribution on 145 authors achieves results around 50% accuracy. Preliminary results indicate that the first two personality dimensions can be predicted fairly accurately.",
"title": ""
}
] |
scidocsrr
|
f74f74b59c814e83f9732f4d9ac01148
|
A Freely Available Automatically Generated Thesaurus of Related Words
|
[
{
"docid": "805fe4eea0e9415f8683f1135b135059",
"text": "In machine translation, information on word ambiguities is usually provided by the lexicographers who construct the lexicon. In this paper we propose an automatic method for word sense induction, i.e. for the discovery of a set of sense descriptors to a given ambiguous word. The approach is based on the statistics of the distributional similarity between the words in a corpus. Our algorithm works as follows: The 20 strongest first-order associations to the ambiguous word are considered as sense descriptor candidates. All pairs of these candidates are ranked according to the following two criteria: First, the two words in a pair should be as dissimilar as possible. Second, although being dissimilar their co-occurrence vectors should add up to the co-occurrence vector of the ambiguous word scaled by two. Both conditions together have the effect that preference is given to pairs whose co-occurring words are complementary. For best results, our implementation uses singular value decomposition, entropy-based weights, and second-order similarity metrics.",
"title": ""
}
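A compact numpy sketch of the ranking criterion in the word-sense-induction abstract above: given co-occurrence vectors for the strongest associations of an ambiguous word, prefer descriptor pairs that are mutually dissimilar yet whose summed vectors resemble the ambiguous word's vector scaled by two. The toy vectors and the way the two criteria are combined into one score are assumptions; SVD, entropy weights, and second-order similarities from the paper are left out.

```python
# Sketch: score candidate sense-descriptor pairs for an ambiguous word.
# Toy vectors and the additive score combination are illustrative assumptions.
import numpy as np
from itertools import combinations

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_descriptor_pairs(target_vec, candidates):
    """candidates: dict word -> co-occurrence vector (already the top associations)."""
    scored = []
    for w1, w2 in combinations(candidates, 2):
        v1, v2 = candidates[w1], candidates[w2]
        dissimilarity = 1.0 - cosine(v1, v2)                    # criterion 1: dissimilar pair
        complementarity = cosine(v1 + v2, 2.0 * target_vec)     # criterion 2: sum ~ 2 * target
        scored.append((dissimilarity + complementarity, w1, w2))
    return sorted(scored, reverse=True)                         # best pair first

rng = np.random.default_rng(0)
cands = {w: rng.random(50) for w in ["money", "river", "water", "account"]}
target = (cands["money"] + cands["river"]) / 2                  # toy "bank"-like vector
print(rank_descriptor_pairs(target, cands)[0])
```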
] |
[
{
"docid": "e9940668ce12749d7b6ee82ea1e1e2e4",
"text": "Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into “task-specific” and “robot-specific” modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel approach to train modular neural networks, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.",
"title": ""
},
{
"docid": "ba118d5a155e1c74d748ae6db557838d",
"text": "Born 1963; diploma in architecture and in civil engineering; Ph.D. in structural engineering RWTH Aachen; founder of Bureau d’études Weinand, Liège; professor at EPFL and director of the IBOIS/EPFL Lausanne; co-founder of SHEL Architecture Engineering and Production Design, Geneva. Olivier BAVEREL Associate Prof. Dr. Navier Research center, ENPC, Champs-sur-Marne ENSAG, France baverel@lami.enpc.fr",
"title": ""
},
{
"docid": "26295dded01b06c8b11349723fea81dd",
"text": "The increasing popularity of parametric design tools goes hand in hand with the use of building performance simulation (BPS) tools from the early design phase. However, current methods require a significant computational time and a high number of parameters as input, as they are based on traditional BPS tools conceived for detailed building design phase. Their application to the urban scale is hence difficult. As an alternative to the existing approaches, we developed an interface to CitySim, a validated building simulation tool adapted to urban scale assessments, bundled as a plug-in for Grasshopper, a popular parametric design platform. On the one hand, CitySim allows faster simulations and requires fewer parameters than traditional BPS tools, as it is based on algorithms providing a good trade-off between the simulations requirements and their accuracy at the urban scale; on the other hand, Grasshopper allows the easy manipulation of building masses and energy simulation parameters through semi-automated parametric",
"title": ""
},
{
"docid": "eb29f0094237da86af1df56735e310ab",
"text": "INTRODUCTION\nTemporary skeletal anchorage devices now offer the possibility of closing anterior open bites and decreasing anterior face height by intruding maxillary posterior teeth, but data for treatment outcomes are lacking. This article presents outcomes and posttreatment changes for consecutive patients treated with a standardized technique.\n\n\nMETHODS\nThe sample included 33 consecutive patients who had intrusion of maxillary posterior teeth with a maxillary occlusal splint and nickel-titanium coil springs to temporary anchorage devices in the zygomatic buttress area, buccal and apical to the maxillary molars. Of this group, 30 had adequate cephalograms available for the period of treatment, 27 had cephalograms including 1-year posttreatment, and 25 had cephalograms from 2 years or longer.\n\n\nRESULTS\nDuring splint therapy, the mean molar intrusion was 2.3 mm. The mean decrease in anterior face height was 1.6 mm, less than expected because of a 0.6-mm mean eruption of the mandibular molars. During the postintrusion orthodontics, the mean change in maxillary molar position was a 0.2-mm extrusion, and there was a mean 0.5-mm increase in face height. Positive overbite was maintained in all patients, with a slight elongation (<2 mm) of the incisors contributing to this. During the 1 year of posttreatment retention, the mean changes were a further eruption of 0.5 mm of the maxillary molars, whereas the mandibular molars intruded by 0.6 mm, and there was a small decrease in anterior face height. Changes beyond 1 year posttreatment were small and attributable to growth rather than relapse in tooth positions.\n\n\nCONCLUSIONS\nIntrusion of the maxillary posterior teeth can give satisfactory correction of moderately severe anterior open bites, but 0.5 to 1.5 mm of reeruption of these teeth is likely to occur. Controlling the vertical position of the mandibular molars so that they do not erupt as the maxillary teeth are intruded is important in obtaining a decrease in face height.",
"title": ""
},
{
"docid": "2d905398cfb131e0ea674c564552b090",
"text": "In this article, I review the diverse ways in which perceived self-efficacy contributes to cognitive development and functioning. Perceived self-efficacy exerts its influence through four major processes. They include cognitive, motivational, affective, and selection processes. There are three different levels at which perceived self-efficacy operates as an important contributor to academic devellopment. Students' beliefs in their efficacy to regulate their own learning and to master academic activities determine their aspirations, level of motivation, and academic accomplishments. Teachers' beliefs in their personal efficacy to motivate and promote learning affect the types of learning environments tlhey create and the level of academic progress their students achieve. Faculti~es' beliefs in their collective instructional efficacy contribute significantly to their schools' level of academic achievement. Student body characteristics influence school-level achievement more strongly by altering faculties' beliefs in their collective efficacy than through direct affects on school achievement.",
"title": ""
},
{
"docid": "68e714e5a3e92924c63167781149e628",
"text": "This paper presents a millimeter wave wideband differential line to waveguide transition using a short ended slot line. The slot line connected in parallel to the rectangular waveguide can effectively compensate the frequency dependence of the susceptance in the waveguide. Thus it is suitable to achieve a wideband characteristic together with a simpler structure. It is experimentally demonstrated that the proposed transitions have the relative bandwidth of 20.2 % with respect to -10 dB reflection, which is a significant wideband characteristic compared with the conventional transition's bandwidth of 11%.",
"title": ""
},
{
"docid": "b69f6ed1ba20025801ce090ef5f2e4a3",
"text": "At the heart of a well-disciplined, systematic methodology that explicitly supports the use of COTS components is a clearly defined process for effectively using components that meet the needs of the system under development. In this paper, we present the CARE/SA approach which supports the iterative matching, ranking, and selection of COTS components, using a representation of COTS components as an aggregate of their functional and non-functional requirements and architecture. The approach is illustrated using a Digital Library System example. 1 This is an extended and improved version of [8]; this extension considers both functional and non-functional requirements as candidates for the matching, ranking, and selection process.",
"title": ""
},
{
"docid": "920c977ce3ed5f310c97b6fcd0f5bef4",
"text": "In this paper, different automatic registration schemes base d on different optimization techniques in conjunction with different similarity measures are compared in term s of accuracy and efficiency. Results from every optimizat ion procedure are quantitatively evaluated with respect to t he manual registration, which is the standard registration method used in clinical practice. The comparison has shown automatic regi st ation schemes based on CD consist of an accurate and reliable method that can be used in clinical ophthalmology, as a satisfactory alternative to the manual method. Key-Words: multimodal image registration, optimization algorithms, sim ilarity metrics, retinal images",
"title": ""
},
{
"docid": "da1109932b3ab9ca5420ac93b44c48f9",
"text": "The deployment of rescue robots in real operations is becoming increasingly common thanks to recent advances in AI technologies and high performance hardware. Rescue robots can now operate for extended period of time, cover wider areas and process larger amounts of sensory information making them considerably more useful during real life threatening situations, including both natural or man-made disasters. In this thesis we present results of our research which focuses on investigating ways of enhancing visual perception for Unmanned Ground Vehicles (UGVs) through environmental interactions using different sensory systems, such as tactile sensors and wireless receivers. We argue that a geometric representation of the robot surroundings built upon vision data only, may not suffice in overcoming challenging scenarios, and show that robot interactions with the environment can provide a rich layer of new information that needs to be suitably represented and merged into the cognitive world model. Visual perception for mobile ground vehicles is one of the fundamental problems in rescue robotics. Phenomena such as rain, fog, darkness, dust, smoke and fire heavily influence the performance of visual sensors, and often result in highly noisy data, leading to unreliable or incomplete maps. We address this problem through a collection of studies and structure the thesis as follow: Firstly, we give an overview of the Search & Rescue (SAR) robotics field, and discuss scenarios, hardware and related scientific questions. Secondly, we focus on the problems of control and communication. Mobile robots require stable communication with the base station to exchange valuable information. Communication loss often presents a significant mission risk and disconnected robots are either abandoned, or autonomously try to back-trace their way to the base station. We show how non-visual environmental properties (e.g. the WiFi signal distribution) can be efficiently modeled using probabilistic active perception frameworks based on Gaussian Processes, and merged into geometric maps so to facilitate the SAR mission. We then show how to use tactile perception to enhance mapping. Implicit environmental properties such as the terrain deformability, are analyzed through strategic glances and touches and then mapped into probabilistic models. Lastly, we address the problem of reconstructing objects in the environment. We present a technique for simultaneous 3D reconstruction of static regions and rigidly moving objects in a scene that enables on-the-fly model generation. Although this thesis focuses mostly on rescue UGVs, the concepts presented can be applied to other mobile platforms that operates under similar circumstances. To make sure that the suggested methods work, we have put efforts into design of user interfaces and the evaluation of those in user studies.",
"title": ""
},
{
"docid": "576c215649f09f2f6fb75369344ce17f",
"text": "The emergence of two new technologies, namely, software defined networking (SDN) and network function virtualization (NFV), have radically changed the development of network functions and the evolution of network architectures. These two technologies bring to mobile operators the promises of reducing costs, enhancing network flexibility and scalability, and shortening the time-to-market of new applications and services. With the advent of SDN and NFV and their offered benefits, the mobile operators are gradually changing the way how they architect their mobile networks to cope with ever-increasing growth of data traffic, massive number of new devices and network accesses, and to pave the way toward the upcoming fifth generation networking. This survey aims at providing a comprehensive survey of state-of-the-art research work, which leverages SDN and NFV into the most recent mobile packet core network architecture, evolved packet core. The research work is categorized into smaller groups according to a proposed four-dimensional taxonomy reflecting the: 1) architectural approach, 2) technology adoption, 3) functional implementation, and 4) deployment strategy. Thereafter, the research work is exhaustively compared based on the proposed taxonomy and some added attributes and criteria. Finally, this survey identifies and discusses some major challenges and open issues, such as scalability and reliability, optimal resource scheduling and allocation, management and orchestration, and network sharing and slicing that raise from the taxonomy and comparison tables that need to be further investigated and explored.",
"title": ""
},
{
"docid": "6ed5198b9b0364f41675b938ec86456f",
"text": "Artificial intelligence (AI) will have many profound societal effects It promises potential benefits (and may also pose risks) in education, defense, business, law, and science In this article we explore how AI is likely to affect employment and the distribution of income. We argue that AI will indeed reduce drastically the need fol human toil We also note that some people fear the automation of work hy machines and the resulting unemployment Yet, since the majority of us probably would rather use our time for activities other than our present jobs, we ought thus to greet the work-eliminating consequences of AI enthusiastically The paper discusses two reasons, one economic and one psychological, for this paradoxical apprehension We conclude with a discussion of problems of moving toward the kind of economy that will he enahled by developments in AI ARTIFICIAL INTELLIGENCE [Al] and other developments in computer science are giving birth to a dramatically different class of machinesPmachines that can perform tasks requiring reasoning, judgment, and perception that previously could be done only by humans. Will these I am grateful for the helpful comments provided by many people Specifically I would like to acknowledge the advice teceived from Sandra Cook and Victor Walling of SRI, Wassily Leontief and Faye Duchin of the New York University Institute for Economic Analysis, Margaret Boden of The University of Sussex, Henry Levin and Charles Holloway of Stanford University, James Albus of the National Bureau of Standards, and Peter Hart of Syntelligence Herbert Simon, of CarnegieMellon Univetsity, wrote me extensive criticisms and rebuttals of my arguments Robert Solow of MIT was quite skeptical of my premises, but conceded nevertheless that my conclusions could possibly follow from them if certain other economic conditions were satisfied. Save1 Kliachko of SRI improved my composition and also referred me to a prescient article by Keynes (Keynes, 1933) who, a half-century ago, predicted an end to toil within one hundred years machines reduce the need for human toil and thus cause unemployment? There are two opposing views in response to this question Some claim that AI is not really very different from other technologies that have supported automation and increased productivitytechnologies such as mechanical engineering, ele&onics, control engineering, and operations rcsearch. Like them, AI may also lead ultimately to an expanding economy with a concomitant expansion of employment opportunities. At worst, according to this view, thcrc will be some, perhaps even substantial shifts in the types of jobs, but certainly no overall reduction in the total number of jobs. In my opinion, however, such an out,come is based on an overly conservative appraisal of the real potential of artificial intelligence. Others accept a rather strong hypothesis with regard to AI-one that sets AI far apart from previous labor-saving technologies. Quite simply, this hypothesis affirms that anything people can do, AI can do as well. Cert,ainly AI has not yet achieved human-level performance in many important functions, but many AI scientists believe that artificial intelligence inevitably will equal and surpass human mental abilities-if not in twenty years, then surely in fifty. 
The main conclusion of this view of AI is that, even if AI does create more work, this work can also be performed by AI devices without necessarily implying more jobs for humans Of course, the mcrc fact that some work can be performed automatically does not make it inevitable that it, will be. Automation depends on many factorsPeconomic, political, and social. The major economic parameter would seem to be the relative cost of having either people or machines execute a given task (at a specified rate and level of quality) In THE AI MAGAZINE Summer 1984 5 AI Magazine Volume 5 Number 2 (1984) (© AAAI)",
"title": ""
},
{
"docid": "d8e309d93cb48e0d4717c1acdf7a64c5",
"text": "Enterprise resource planning ~ERP! was originated in the manufacturing industry. It provides a general working environ for an enterprise to integrate its major business management functions with one single common database so that information can and efficient communications can be achieved between management functions. This paper first briefs the ERP technology, its o its current development in general. Based on the needs of running a construction enterprise, ERP shows its potential for the co industry. However, the unique nature of the industry prevents a direct implementation of existing ERP systems, which are developed for the manufacturing industry. This paper underlines the importance of the establishment of the basic theory for d construction enterprise resource planning systems ~CERP!. A CERP must address the nature of the general industry practice. Fundam features are identified and discussed in the paper. A three-tiered client/server architecture is proposed, with discussions on th and major components of each tier. Needed research issues are discussed, including CERP architectures, project manageme advanced planning techniques, standardization of management functions, and modeling human intelligence. Construction ma examples are incorporated into the discussions. DOI: 10.1061/ ~ASCE!0733-9364~2003!129:2~214! CE Database subject headings: Construction management; Construction industry; Planning.",
"title": ""
},
{
"docid": "8e794530be184686a49e5ced6ac6521d",
"text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.",
"title": ""
},
{
"docid": "1adacc7dc452e27024756c36eecb8cae",
"text": "The techniques of using neural networks to learn distributed word representations (i.e., word embeddings) have been used to solve a variety of natural language processing tasks. The recently proposed methods, such as CBOW and Skip-gram, have demonstrated their effectiveness in learning word embeddings based on context information such that the obtained word embeddings can capture both semantic and syntactic relationships between words. However, it is quite challenging to produce high-quality word representations for rare or unknown words due to their insufficient context information. In this paper, we propose to leverage morphological knowledge to address this problem. Particularly, we introduce the morphological knowledge as both additional input representation and auxiliary supervision to the neural network framework. As a result, beyond word representations, the proposed neural network model will produce morpheme representations, which can be further employed to infer the representations of rare or unknown words based on their morphological structure. Experiments on an analogical reasoning task and several word similarity tasks have demonstrated the effectiveness of our method in producing high-quality words embeddings compared with the state-of-the-art methods.",
"title": ""
},
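A toy numpy illustration of the back-off idea mentioned at the end of the abstract above, inferring a rare or unknown word's vector from its morphemes. The segmentation, the vectors, and the simple averaging rule are assumptions for illustration; the paper learns morpheme representations jointly inside the neural model rather than averaging fixed vectors.

```python
# Sketch: back off to morpheme vectors for rare/unknown words.
# Segmentation, vectors, and the averaging rule are illustrative assumptions.
import numpy as np

morpheme_vecs = {"un": np.array([0.1, -0.2, 0.3]),
                 "break": np.array([0.5, 0.4, -0.1]),
                 "able": np.array([-0.2, 0.3, 0.2])}
word_vecs = {}                                   # trained word embeddings (empty here)

def embed(word, segmentation):
    if word in word_vecs:                        # known word: use its own vector
        return word_vecs[word]
    # Unknown word: average its known morpheme vectors (assumes at least one is known).
    parts = [morpheme_vecs[m] for m in segmentation if m in morpheme_vecs]
    return np.mean(parts, axis=0)

print(embed("unbreakable", ["un", "break", "able"]))
```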
{
"docid": "461ec14463eb20962ef168de781ac2a2",
"text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.",
"title": ""
},
{
"docid": "ad1dde10286d4c43f4783fc727e5e820",
"text": "A fast method of handwritten word recognition suitable for real time applications is presented in this paper. Preprocessing, segmentation and feature extraction are implemented using a chain code representation of the word contour. Dynamic matching between characters of a lexicon entry and segment(s) of the input word image is used to rank the lexicon entries in order of best match. Variable duration for each character is defined and used during the matching. Experimental results prove that our approach using the variable duration outperforms the method using fixed duration in terms of both accuracy and speed. Speed of the entire recognition process is about 200 msec on a single SPARC-10 platform and the recognition accuracy is 96.8 percent are achieved for lexicon size of 10, on a database of postal words captured at 212 dpi.",
"title": ""
},
{
"docid": "d3087ea8bea3516606b8fc5e61888658",
"text": "This paper presents a novel topology for the generation of adjustable frequency and magnitude pulsewidth-modulated (PWM) three-phase ac from a balanced three-phase ac source with a high-frequency ac link. The proposed single-stage power electronic transformer (PET) with bidirectional power flow capability may find application in compact isolated PWM ac drives. This topology along with the proposed control has the following advantages: 1) input power factor correction; 2) common-mode voltage suppression at the load end; 3) high-quality output voltage waveform (comparable with conventional space vector PWM); and 4) minimization of output voltage loss, common-mode voltage switching, and distortion of the load current waveform due to leakage inductance commutation. A source-based commutation of currents associated with energy in leakage inductance (termed as leakage energy) has been proposed. This results in soft-switching of the output-side converter and recovery of the leakage energy. The entire topology along with the proposed control scheme has been analyzed. The simulation and experimental results verify the analysis and advantages of the proposed PET.",
"title": ""
},
{
"docid": "3d2a072f265b259169fce33ccd6dd11a",
"text": "gem5-gpu is a new simulator that models tightly integrated CPU-GPU systems. It builds on gem5, a modular full-system CPU simulator, and GPGPUSim, a detailed GPGPU simulator. gem5-gpu routes most memory accesses through Ruby, which is a highly configurable memory system in gem5. By doing this, it is able to simulate many system configurations, ranging from a system with coherent caches and a single virtual address space across the CPU and GPU to a system that maintains separate GPU and CPU physical address spaces. gem5gpu can run most unmodified CUDA 3.2 source code. Applications can launch non-blocking kernels, allowing the CPU and GPU to execute simultaneously. We present gem5-gpu's software architecture and a brief performance validation. We also discuss possible extensions to the simulator. gem5-gpu is open source and available at gem5-gpu.cs.wisc.edu.",
"title": ""
},
{
"docid": "0845902210ac0d4dfcb41902623845ad",
"text": "Advances in data storage and image acquisition technologies have enabled the creation of large image datasets. In this scenario, it is necessary to develop appropriate information systems to efficiently manage these collections. The commonest approaches use the so-called Content-Based Image Retrieval (CBIR) systems. Basically, these systems try to retrieve images similar to a user-defined specification or pattern (e.g., shape sketch, image example). Their goal is to support image retrieval based on content properties (e.g., shape, color, texture), usually encoded into feature vectors. One of the main advantages of the CBIR approach is the possibility of an automatic retrieval process, instead of the traditional keyword-based approach, which usually requires very laborious and time-consuming previous annotation of database images. The CBIR technology has been used in several applications such as fingerprint identification, biodiversity information systems, digital libraries, crime prevention, medicine, historical research, among others. This paper aims to introduce the problems and challenges concerned with the creation of CBIR systems, to describe the existing solutions and applications, and to present the state of the art of the existing research in this area.",
"title": ""
},
{
"docid": "5329edd5259cf65d62922b17765fce0d",
"text": "T emergence of software-based platforms is shifting competition toward platform-centric ecosystems, although this phenomenon has not received much attention in information systems research. Our premise is that the coevolution of the design, governance, and environmental dynamics of such ecosystems influences how they evolve. We present a framework for understanding platform-based ecosystems and discuss five broad research questions that present significant research opportunities for contributing homegrown theory about their evolutionary dynamics to the information systems discipline and distinctive information technology-artifactcentric contributions to the strategy, economics, and software engineering reference disciplines.",
"title": ""
}
] |
scidocsrr
|
f050a2633d6969f6d7d49caae537e703
|
Independent learning of internal models for kinematic and dynamic control of reaching
|
[
{
"docid": "8051535c66ecd4a8553a7d33051b1ad4",
"text": "There are several invariant features of pointto-point human arm movements: trajectories tend to be straight, smooth, and have bell-shaped velocity profiles. One approach to accounting for these data is via optimization theory; a movement is specified implicitly as the optimum of a cost function, e.g., integrated jerk or torque change. Optimization models of trajectory planning, as well as models not phrased in the optimization framework, generally fall into two main groups-those specified in kinematic coordinates and those specified in dynamic coordinates. To distinguish between these two possibilities we have studied the effects of artificial visual feedback on planar two-joint arm movements. During self-paced point-to-point arm movements the visual feedback of hand position was altered so as to increase the perceived curvature of the movement. The perturbation was zero at both ends of the movement and reached a maximum at the midpoint of the movement. Cost functions specified by hand coordinate kinematics predict adaptation to increased curvature so as to reduce the visual curvature, while dynamically specified cost functions predict no adaptation in the underlying trajectory planner, provided the final goal of the movement can still be achieved. We also studied the effects of reducing the perceived curvature in transverse movements, which are normally slightly curved. Adaptation should be seen in this condition only if the desired trajectory is both specified in kinematic coordinates and actually curved. Increasing the perceived curvature of normally straight sagittal movements led to significant (P<0.001) corrective adaptation in the curvature of the actual hand movement; the hand movement became curved, thereby reducing the visually perceived curvature. Increasing the curvature of the normally curved transverse movements produced a significant (P<0.01) corrective adaptation; the hand movement became straighter, thereby again reducing the visually perceived curvature. When the curvature of naturally curved transverse movements was reduced, there was no significant adaptation (P>0.05). The results of the curvature-increasing study suggest that trajectories are planned in visually based kinematic coordinates. The results of the curvature-reducing study suggest that the desired trajectory is straight in visual space. These results are incompatible with purely dynamicbased models such as the minimum torque change model. We suggest that spatial perception-as mediated by vision-plays a fundamental role in trajectory planning.",
"title": ""
}
] |
[
{
"docid": "b1e0fa6b41fb697db8dfe5520b79a8e6",
"text": "The problem of computing the minimum-angle bounding cone of a set of three-dimensional vectors has numero cations in computer graphics and geometric modeling. One such application is bounding the tangents of space cur vectors normal to a surface in the computation of the intersection of two surfaces. No optimal-time exact solution to this problem has been yet given. This paper presents a roadmap for a few strate provide optimal or near-optimal (time-wise) solutions to this problem, which are also simple to implement. Specifica worst-case running time is required, we provide an O ( logn)-time Voronoi-diagram-based algorithm, where n is the number of vectors whose optimum bounding cone is sought. Otherwise, i f one is willing to accept an, in average, efficient algorithm, we show that the main ingredient of the algorithm of Shirman and Abi-Ezzi [Comput. Graphics Forum 12 (1993) 261–272 implemented to run in optimal (n) expected time. Furthermore, if the vectors (as points on the sphere of directions) are to occupy no more than a hemisphere, we show how to simplify this ingredient (by reducing the dimension of the p without affecting the asymptotic expected running time. Both versions of this algorithm are based on computing (as an problem) the minimum spanning circle (respectively, ball) of a two-dimensional (respectively, three-dimensional) set o 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
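A small numpy sketch to make the bounding-cone problem above concrete: the naive construction that takes the normalized mean direction as the cone axis and the largest deviation from it as the half-angle. This is generally not the minimum-angle cone; the paper's optimal and expected-linear-time algorithms instead reduce the problem to minimum enclosing circles/balls.

```python
# Sketch: a simple (non-optimal) bounding cone of a set of 3D vectors.
# Axis = normalized mean direction, half-angle = max deviation from that axis.
import numpy as np

def approx_bounding_cone(vectors):
    dirs = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    axis = dirs.mean(axis=0)
    axis /= np.linalg.norm(axis)
    cos_angles = np.clip(dirs @ axis, -1.0, 1.0)
    half_angle = float(np.arccos(cos_angles.min()))   # widest vector sets the angle
    return axis, half_angle

v = np.array([[1.0, 0.1, 0.0], [1.0, -0.1, 0.1], [0.9, 0.0, -0.1]])
axis, half_angle = approx_bounding_cone(v)
print(axis, np.degrees(half_angle))
```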
{
"docid": "76dd43d62482ce1ea5837404a1fb4291",
"text": "This letter demonstrates GaN vertical Schottky and p-n diodes on Si substrates for the first time. With a total GaN drift layer of only 1.5-μm thick, a breakdown voltage (BV) of 205 V was achieved for GaN-on-Si Schottky diodes, and a soft BV higher than 300 V was achieved for GaN-on-Si p-n diodes with a peak electric field of 2.9 MV/cm in GaN. A trap-assisted space-charge-limited conduction mechanism determined the reverse leakage and breakdown mechanism for GaN-on-Si vertical p-n diodes. The ON-resistance was 6 and 10 mQ · cm2 for the vertical Schottky and p-n diode, respectively. These results show the promising performance of GaN-on-Si vertical devices for future power applications.",
"title": ""
},
{
"docid": "c10f142af3861fcc3bae2d739b83bb30",
"text": "The term ‘crowdsourcing’ was initially introduced in 2006 to describe an emerging distributed problem-solving model by online workers. Since then it has been widely studied and practiced to support software engineering. In this paper we provide a comprehensive survey of the use of crowdsourcing in software engineering, seeking to cover all literature on this topic. We first review the definitions of crowdsourcing and derive our definition of Crowdsourcing Software Engineering together with its taxonomy. Then we summarise industrial crowdsourcing practice in software engineering and corresponding case studies. We further analyse the software engineering domains, tasks and applications for crowdsourcing and the platforms and stakeholders involved in realising Crowdsourced Software Engineering solutions. We conclude by exposing trends, open issues and opportunities for future research on Crowdsourced Software Engineering.",
"title": ""
},
{
"docid": "bc2a32d116e79d0120da6ce81b97ce09",
"text": "Naeem Akhtar MS Scholar; Department of Management Sciences, COMSATS Institute of Information Technology, Sahiwal, Pakistan Saqib Ali and Muhammad Salman MS Scholar; Department of Management Sciences, Bahauddin Zakariya University Sub Campus Sahiwal, Pakistan Asad-Ur-Rehman MS Scholar; Department of Management Sciences, COMSATS Institute of Information Technology, Sahiwal, Pakistan Aqsa Ijaz BBA (Hons), Department of Management Sciences, University of Education Lahore (Okara Campus), Pakistan",
"title": ""
},
{
"docid": "320a299ad474f0f68ca30f8983a0becd",
"text": "This paper describes the design of a new type of Pneumatic Artificial Muscle (PAM), namely the Pleated Pneumatic Artificial Muscle (PPAM). It was developed as an improvement with regard to existing types of PAM, e.g. the McKibben muscle. Its principle characteristic is its pleated membrane. It can inflate without material stretching and friction and has practically no stress in the direction perpendicular to its axis of symmetry. Besides these it is extremely strong and yet very lightweight and it has a large stroke compared to other designs. A general introduction on PAMs is given together with a short discussion and motivation for this new design. The concept of the PPAM is explained and a mathematical model is derived. This model proves its principle of operation. From the model, several characteristics, such as developed force, maximum contraction, diameter, volume and membrane tensile stress, are obtained. Material choices and dimensions of a typical PPAM are next discussed and its measured values of static force and diameter are compared to the model predicted values. The agreement between both is found to be very good.",
"title": ""
},
{
"docid": "0b2f0b36bb458221b340b5e4a069fe2b",
"text": "The Dendritic Cell Algorithm (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, and abstract model of DC behaviour is developed and subsequently used to form an algorithm, the DCA. The abstraction process was facilitated through close collaboration with laboratorybased immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population based algorithm, with each agent in the system represented as an ‘artificial DC’. Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.",
"title": ""
},
{
"docid": "d83a90a3a080f4e3bce2a68d918d20ce",
"text": "We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications’ data structures. Frequently used data structures have “average-case” expected running time that’s far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dialup modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks.",
"title": ""
},
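A toy sketch of the defense the abstract above points to: drawing the hash function's coefficients at random per process (a universal-hashing-style construction) so an attacker cannot precompute keys that all collide. The prime, multiplier scheme, and bucket count are arbitrary choices for illustration, not the paper's construction or any production hash.

```python
# Sketch: randomized (universal-style) string hashing to resist collision attacks.
# The prime, mixing scheme, and bucket count are illustrative choices only.
import secrets

P = (1 << 61) - 1                      # a Mersenne prime
A = secrets.randbelow(P - 1) + 1       # per-process secret multiplier
B = secrets.randbelow(P)               # per-process secret offset

def randomized_hash(key: bytes, buckets: int = 1024) -> int:
    h = B
    for byte in key:
        h = (h * A + byte) % P          # polynomial hash with secret coefficients
    return h % buckets

# With fixed, public coefficients an attacker can precompute thousands of keys that
# land in one bucket, degrading the table to a linked list; with secret, per-process
# coefficients those precomputed collisions no longer apply.
print(randomized_hash(b"GET /index.html"))
```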
{
"docid": "3ce203d713a0060cc3c1466d62c9bd36",
"text": "This paper describes successful applications of discriminative lexicon models to the statistical machine translation (SMT) systems into morphologically complex languages. We extend the previous work on discriminatively trained lexicon models to include more contextual information in making lexical selection decisions by building a single global log-linear model of translation selection. In offline experiments, we show that the use of the expanded contextual information, including morphological and syntactic features, help better predict words in three target languages with complex morphology (Bulgarian, Czech and Korean). We also show that these improved lexical prediction models make a positive impact in the end-to-end SMT scenario from English to these languages.",
"title": ""
},
{
"docid": "e6457f5257e95d727e06e212bef2f488",
"text": "The emerging ability to comply with caregivers' dictates and to monitor one's own behavior accordingly signifies a major growth of early childhood. However, scant attention has been paid to the developmental course of self-initiated regulation of behavior. This article summarizes the literature devoted to early forms of control and highlights the different philosophical orientations in the literature. Then, focusing on the period from early infancy to the beginning of the preschool years, the author proposes an ontogenetic perspective tracing the kinds of modulation or control the child is capable of along the way. The developmental sequence of monitoring behaviors that is proposed calls attention to contributions made by the growth of cognitive skills. The role of mediators (e.g., caregivers) is also discussed.",
"title": ""
},
{
"docid": "3ebc26643334c88ccc44fb01f60d600f",
"text": "Skin whitening products are commercially available for cosmetic purposes in order to obtain a lighter skin appearance. They are also utilized for clinical treatment of pigmentary disorders such as melasma or postinflammatory hyperpigmentation. Whitening agents act at various levels of melanin production in the skin. Many of them are known as competitive inhibitors of tyrosinase, the key enzyme in melanogenesis. Others inhibit the maturation of this enzyme or the transport of pigment granules (melanosomes) from melanocytes to surrounding keratinocytes. In this review we present an overview of (natural) whitening products that may decrease skin pigmentation by their interference with the pigmentary processes.",
"title": ""
},
{
"docid": "4cfd7fab35e081f2d6f81ec23c4d0d18",
"text": "In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.",
"title": ""
},
{
"docid": "54546694b5b43b561237d50ce4a67dfc",
"text": "We describe a load balancing system for parallel intrusion detection on multi-core systems using a novel model allowing fine-grained selection of the network traffic to be analyzed. The system receives data from a network and distributes it to multiple IDSs running on individual CPU cores. In contrast to related approaches, we do not assume a static association of flows to IDS processes but adaptively determine the load of each IDS process to allocate network flows for a limited time window. We developed a priority model for the selection of network data and the assignment process. Special emphasis is given to environments with highly dynamic network traffic, where only a fraction of all data can be analyzed due to system constraints. We show that IDSs analyzing packet payload data disproportionately suffer from random packet drops due to overload. Our proposed system ensures loss-free analysis for selected data streams in a specified time interval. Our primary focus lies on the treatment of dynamic network behavior: neither data should be lost unintentionally, nor analysis processes should be needlessly idle. To evaluate the priority model and assignment systems, we implemented a prototype and evaluated it with real network traffic.",
"title": ""
},
{
"docid": "6c00c0939246209c5b75a1e16114c86e",
"text": "The Flyback topology results in significant cost and space savings for multiple output power supplies for power levels up to 100 W. Flyback topologies store and transfer energy using a transformer, which due to physical limitations can cause large voltage transient spikes during the switching cycle at the drain of the power switch and at the secondary rectifier. This paper presents an overview of dissipative voltage snubbers and discusses their design guidelines to suppress the voltage transients on both the primary and secondary sides of a flyback converter. In particular, Snubbers used to reduce the stress on the switch and improve efficiency in a flyback topology are discussed.",
"title": ""
},
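A back-of-the-envelope calculation in the spirit of the design guidelines the flyback-snubber abstract above mentions, for the common RCD clamp across the primary winding: size the clamp resistor to dissipate the leakage-inductance energy at the chosen clamp voltage, then size the capacitor for acceptable ripple. The sizing relation is the widely used first-order rule, not necessarily the paper's exact procedure, and every numeric value is an assumed example.

```python
# Sketch: first-order RCD clamp sizing for a flyback primary switch.
# All numeric values are assumed examples; the relation is the common
# first-order design rule, not necessarily the paper's procedure.
L_leak = 5e-6        # transformer leakage inductance [H] (assumed)
I_peak = 2.0         # peak primary current at turn-off [A] (assumed)
f_sw = 100e3         # switching frequency [Hz] (assumed)
V_reflected = 120.0  # reflected output voltage on the primary [V] (assumed)
V_clamp = 2.5 * V_reflected   # chosen clamp level (rule of thumb)

# Power the clamp resistor must dissipate: leakage energy per cycle, scaled by
# how close the clamp voltage sits to the reflected voltage.
P_clamp = 0.5 * L_leak * I_peak**2 * f_sw * V_clamp / (V_clamp - V_reflected)
R_snub = V_clamp**2 / P_clamp

ripple = 0.05 * V_clamp       # allow roughly 5 % clamp-voltage ripple
C_snub = V_clamp / (ripple * R_snub * f_sw)

print(f"R = {R_snub:.0f} ohm, C = {C_snub*1e9:.0f} nF, P = {P_clamp:.2f} W")
```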
{
"docid": "b7c21feb8cef521cf579a3cceceeb334",
"text": "Over the last decade, growing attention has been paid to the potential value of design theory and practice in improving public services. Experience-based Co-design (EBCD) is a participatory research approach that draws upon design tools and ways of thinking in order to bring healthcare staff and patients together to improve the quality of care. The co-design process that is integral to EBCD is powerful but also challenging, as it requires both staff and patients to renegotiate their roles and expectations as part of a reconfiguration of the relationships of power between citizens and public services. In this paper, we reflect upon the implementation and adaptation of EBCD in a variety of projects and on the challenges of codesign work within healthcare settings. Our discussion aims to contribute to the growing field of service design and to encourage further research into how co-design processes shape and are shaped by the power relations that characterize contemporary public services.",
"title": ""
},
{
"docid": "764e5c5201217be1aa9e24ce4fa3760a",
"text": "Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author. Please do not copy or distribute without explicit permission of the authors. Abstract Customer defection or churn is a widespread phenomenon that threatens firms across a variety of industries with dramatic financial consequences. To tackle this problem, companies are developing sophisticated churn management strategies. These strategies typically involve two steps – ranking customers based on their estimated propensity to churn, and then offering retention incentives to a subset of customers at the top of the churn ranking. The implicit assumption is that this process would maximize firm's profits by targeting customers who are most likely to churn. However, current marketing research and practice aims at maximizing the correct classification of churners and non-churners. Profit from targeting a customer depends on not only a customer's propensity to churn, but also on her spend or value, her probability of responding to retention offers, as well as the cost of these offers. Overall profit of the firm also depends on the number of customers the firm decides to target for its retention campaign. We propose a predictive model that accounts for all these elements. Our optimization algorithm uses stochastic gradient boosting, a state-of-the-art numerical algorithm based on stage-wise gradient descent. It also determines the optimal number of customers to target. The resulting optimal customer ranking and target size selection leads to, on average, a 115% improvement in profit compared to current methods. Remarkably, the improvement in profit comes along with more prediction errors in terms of which customers will churn. However, the new loss function leads to better predictions where it matters the most for the company's profits. For a company like Verizon Wireless, this translates into a profit increase of at least $28 million from a single retention campaign, without any additional implementation cost.",
"title": ""
},
{
"docid": "c75c4f2acf49dd4d52116eae7559f6a5",
"text": "In 2005, Kreidstein first proposed the term \"Cutis pleonasmus,\" a Greek term meaning \"redundancy,\" which refers to the excessive skin that remains after massive weight loss. Cutis pleonasmus is clearly distinguishable from other diseases showing increased laxity of the skin, such as pseudoxanthoma elasticum, congenital and acquired generalized cutis laxa. Although individuals who are severely overweight are few and bariatric surgeries are less common in Korea than in the West, the number of these patients is increasing due to changes to Western life styles. We report a case for a 24-year-old man who presented with generalized lax and loose skin after massive weight loss. He was diagnosed with cutis pleonasmus based on the history of great weight loss, characteristic clinical features and normal histological findings. To the best of our knowledge, this is the first report of cutis pleonasmus in Korea.",
"title": ""
},
{
"docid": "a5f926bc15c7b3dd75b3e67c8537c3fb",
"text": "Practical and theoretical issues are presented concerning the design, implementation, and use of a good, minimal standard random number generator that will port to virtually all systems.",
"title": ""
},
{
"docid": "2e87c4fbb42424f3beb07e685c856487",
"text": "Conventional wisdom ties the origin and early evolution of the genus Homo to environmental changes that occurred near the end of the Pliocene. The basic idea is that changing habitats led to new diets emphasizing savanna resources, such as herd mammals or underground storage organs. Fossil teeth provide the most direct evidence available for evaluating this theory. In this paper, we present a comprehensive study of dental microwear in Plio-Pleistocene Homo from Africa. We examined all available cheek teeth from Ethiopia, Kenya, Tanzania, Malawi, and South Africa and found 18 that preserved antemortem microwear. Microwear features were measured and compared for these specimens and a baseline series of five extant primate species (Cebus apella, Gorilla gorilla, Lophocebus albigena, Pan troglodytes, and Papio ursinus) and two protohistoric human foraging groups (Aleut and Arikara) with documented differences in diet and subsistence strategies. Results confirmed that dental microwear reflects diet, such that hard-object specialists tend to have more large microwear pits, whereas tough food eaters usually have more striations and smaller microwear features. Early Homo specimens clustered with baseline groups that do not prefer fracture resistant foods. Still, Homo erectus and individuals from Swartkrans Member 1 had more small pits than Homo habilis and specimens from Sterkfontein Member 5C. These results suggest that none of the early Homo groups specialized on very hard or tough foods, but that H. erectus and Swartkrans Member 1 individuals ate, at least occasionally, more brittle or tough items than other fossil hominins studied.",
"title": ""
},
{
"docid": "43233ce6805a50ed931ce319245e4f6b",
"text": "Currently the use of three-phase induction machines is widespread in industrial applications due to several methods available to control the speed and torque of the motor. Many applications require that the same torque be available at all revolutions up to the nominal value. In this paper two control methods are compared: scalar control and vector control. Scalar control is a relatively simple method. The purpose of the technique is to control the magnitude of the chosen control quantities. At the induction motor the technique is used as Volts/Hertz constant control. Vector control is a more complex control technique, the evolution of which was inevitable, too, since scalar control cannot be applied for controlling systems with dynamic behaviour. The vector control technique works with vector quantities, controlling the desired values by using space phasors which contain all the three phase quantities in one phasor. It is also known as field-oriented control because in the course of implementation the identification of the field flux of the motor is required. This paper reports on the changing possibilities of the revolution – torque characteristic curve, and demonstrates the results of the two control methods with simulations. The simulations and the applied equivalent circuit parameters are based on real measurements done with no load, with direct current and with locked-rotor.",
"title": ""
},
{
"docid": "87ecd8c0331b6277cddb6a9a11cec42f",
"text": "OBJECTIVE\nThis study aimed to determine the principal factors contributing to the cost of avoiding a birth with Down syndrome by using cell-free DNA (cfDNA) to replace conventional screening.\n\n\nMETHODS\nA range of unit costs were assigned to each item in the screening process. Detection rates were estimated by meta-analysis and modeling. The marginal cost associated with the detection of additional cases using cfDNA was estimated from the difference in average costs divided by the difference in detection.\n\n\nRESULTS\nThe main factor was the unit cost of cfDNA testing. For example, replacing a combined test costing $150 with 3% false-positive rate and invasive testing at $1000, by cfDNA tests at $2000, $1500, $1000, and $500, the marginal cost is $8.0, $5.8, $3.6, and $1.4m, respectively. Costs were lower when replacing a quadruple test and higher for a 5% false-positive rate, but the relative importance of cfDNA unit cost was unchanged. A contingent policy whereby 10% to 20% women were selected for cfDNA testing by conventional screening was considerably more cost-efficient. Costs were sensitive to cfDNA uptake.\n\n\nCONCLUSION\nUniversal cfDNA screening for Down syndrome will only become affordable by public health purchasers if costs fall substantially. Until this happens, the contingent use of cfDNA is recommended.",
"title": ""
}
] |
scidocsrr
|
d481ce53c15c2181ccc811f46919bdbb
|
Quantum computing and communication
|
[
{
"docid": "a5c84abd70f221ba6f0c601cf1a275b5",
"text": "Richard Feynman's observation that certain quantum mechanical effects cannot be simulated efficiently on a computer led to speculation that computation in general could be done more efficiently if it used these quantum effects. This speculation proved justified when Peter Shor described a polynomial time quantum algorithm for factoring intergers.\nIn quantum systems, the computational space increases exponentially with the size of the system, which enables exponential parallelism. This parallelism could lead to exponentially faster quantum algorithms than possible classically. The catch is that accessing the results, which requires measurement, proves tricky and requires new nontraditional programming techniques.\nThe aim of this paper is to guide computer scientists through the barriers that separate quantum computing from conventional computing. We introduce basic principles of quantum mechanics to explain where the power of quantum computers comes from and why it is difficult to harness. We describe quantum cryptography, teleportation, and dense coding. Various approaches to exploiting the power of quantum parallelism are explained. We conclude with a discussion of quantum error correction.",
"title": ""
}
] |
[
{
"docid": "45390290974f347d559cd7e28c33c993",
"text": "Text ambiguity is one of the most interesting phenomenon in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. There exist several types of ambiguity. In the present work we review and compare different approaches to ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high quality documents.",
"title": ""
},
{
"docid": "edeefde21bbe1ace9a34a0ebe7bc6864",
"text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"title": ""
},
{
"docid": "90d2bf357eea588bc1326c87a723ed86",
"text": "Traffic is the chief puzzle problem which every country faces because of the enhancement in number of vehicles throughout the world, especially in large urban towns. Hence the need arises for simulating and optimizing traffic control algorithms to better accommodate this increasing demand. Fuzzy optimization deals with finding the values of input parameters of a complex simulated system which result in desired output. This paper presents a MATLAB simulation of fuzzy logic traffic controller for controlling flow of traffic in isolated intersections. This controller is based on the waiting time and queue length of vehicles at present green phase and vehicles queue lengths at the other phases. The controller controls the traffic light timings and phase difference to ascertain sebaceous flow of traffic with least waiting time and queue length. In this paper, the isolated intersection model used consists of two alleyways in each approach. Every outlook has different value of queue length and waiting time, systematically, at the intersection. The maximum value of waiting time and vehicle queue length has to be selected by using proximity sensors as inputs to controller for the ameliorate control traffic flow at the intersection. An intelligent traffic model and fuzzy logic traffic controller are developed to evaluate the performance of traffic controller under different pre-defined conditions for oleaginous flow of traffic. Additionally, this fuzzy logic traffic controller has emergency vehicle siren sensors which detect emergency vehicle movement like ambulance, fire brigade, Police Van etc. and gives maximum priority to him and pass preferred signal to it. Keywords-Fuzzy Traffic Controller; Isolated Intersection; Vehicle Actuated Controller; Emergency Vehicle Selector.",
"title": ""
},
{
"docid": "b9f665d7fe28d6abce0f429ed5a319ab",
"text": "■ Abstract The enzyme lactase that is located in the villus enterocytes of the small intestine is responsible for digestion of lactose in milk. Lactase activity is high and vital during infancy, but in most mammals, including most humans, lactase activity declines after the weaning phase. In other healthy humans, lactase activity persists at a high level throughout adult life, enabling them to digest lactose as adults. This dominantly inherited genetic trait is known as lactase persistence. The distribution of these different lactase phenotypes in human populations is highly variable and is controlled by a polymorphic element cis-acting to the lactase gene. A putative causal nucleotide change has been identified and occurs on the background of a very extended haplotype that is frequent in Northern Europeans, where lactase persistence is frequent. This single nucleotide polymorphism is located 14 kb upstream from the start of transcription of lactase in an intron of the adjacent gene MCM6. This change does not, however, explain all the variation in lactase expression.",
"title": ""
},
{
"docid": "680306f2f5a4e54e1b024f5cd47f60f4",
"text": "Age is one of the important biometric traits for reinforcing the identity authentication. The challenge of facial age estimation mainly comes from two difficulties: (1) the wide diversity of visual appearance existing even within the same age group and (2) the limited number of labeled face images in real cases. Motivated by previous research on human cognition, human beings can confidently rank the relative ages of facial images, we postulate that the age rank plays a more important role in the age estimation than visual appearance attributes. In this paper, we assume that the age ranks can be characterized by a set of ranking features lying on a low-dimensional space. We propose a simple and flexible subspace learning method by solving a sequence of constrained optimization problems. With our formulation, both the aging manifold, which relies on exact age labels, and the implicit age ranks are jointly embedded in the proposed subspace. In addition to supervised age estimation, our method also extends to semi-supervised age estimation via automatically approximating the age ranks of unlabeled data. Therefore, we can successfully include more available data to improve the feature discriminability. In the experiments, we adopt the support vector regression on the proposed ranking features to learn our age estimators. The results on the age estimation demonstrate that our method outperforms classic subspace learning approaches, and the semi-supervised learning successfully incorporates the age ranks from unlabeled data under different scales and sources of data set.",
"title": ""
},
{
"docid": "c6bfdc5c039de4e25bb5a72ec2350223",
"text": "Free-energy-based reinforcement learning (FERL) can handle Markov decision processes (MDPs) with high-dimensional state spaces by approximating the state-action value function with the negative equilibrium free energy of a restricted Boltzmann machine (RBM). In this study, we extend the FERL framework to handle partially observable MDPs (POMDPs) by incorporating a recurrent neural network that learns a memory representation sufficient for predicting future observations and rewards. We demonstrate that the proposed method successfully solves POMDPs with high-dimensional observations without any prior knowledge of the environmental hidden states and dynamics. After learning, task structures are implicitly represented in the distributed activation patterns of hidden nodes of the RBM.",
"title": ""
},
{
"docid": "e9438241965b4cb6601624456b60f990",
"text": "This paper proposes a model for designing games around Artificial Intelligence (AI). AI-based games put AI in the foreground of the player experience rather than in a supporting role as is often the case in many commercial games. We analyze the use of AI in a number of existing games and identify design patterns for AI in games. We propose a generative ideation technique to combine a design pattern with an AI technique or capacity to make new AI-based games. Finally, we demonstrate this technique through two examples of AI-based game prototypes created using these patterns.",
"title": ""
},
{
"docid": "602afe27e9999f1bd3daefd0b0b93453",
"text": "The principle of Network Functions Virtualization (NFV) aims to transform network architectures by implementing Network Functions (NFs) in software that can run on commodity hardware. There are several challenges inherent to NFV, among which is the need for an orchestration and management framework. This paper presents the Cloud4NFV platform, which follows the major NFV standard guidelines. The platform is presented in detail and special attention is given to data modelling aspects. Further, insights on the current implementation of the platform are given, showing that part of its foundations lay on cloud infrastructure management and Software Defined Networking (SDN) platforms. Finally, it is presented a proof-of-concept (PoC) that illustrates how the platform can be used to deliver a novel service to end customers, focusing on Customer Premises Equipment (CPE) related functions.",
"title": ""
},
{
"docid": "a9dd71d336baa0ea78ceb0435be67f67",
"text": "In current credit ratings models, various accounting-based information are usually selected as prediction variables, based on historical information rather than the market’s assessment for future. In the study, we propose credit rating prediction model using market-based information as a predictive variable. In the proposed method, Moody’s KMV (KMV) is employed as a tool to evaluate the market-based information of each corporation. To verify the proposed method, using the hybrid model, which combine random forests (RF) and rough set theory (RST) to extract useful information for credit rating. The results show that market-based information does provide valuable information in credit rating predictions. Moreover, the proposed approach provides better classification results and generates meaningful rules for credit ratings. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b1e039673d60defd9b8699074235cf1b",
"text": "Sentiment classification has undergone significant development in recent years. However, most existing studies assume the balance between negative and positive samples, which may not be true in reality. In this paper, we investigate imbalanced sentiment classification instead. In particular, a novel clustering-based stratified under-sampling framework and a centroid-directed smoothing strategy are proposed to address the imbalanced class and feature distribution problems respectively. Evaluation across different datasets shows the effectiveness of both the under-sampling framework and the smoothing strategy in handling the imbalanced problems in real sentiment classification applications.",
"title": ""
},
{
"docid": "96516274e1eb8b9c53296a935f67ca2a",
"text": "Recurrent neural networks that are <italic>trained</italic> to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidel discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can <italic>construct</italic> second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, that is, the constructed network correctly classifies strings of <italic>arbitrary length</italic>. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with <italic>n</italic> state and <italic>m</italic>input alphabet symbols, the constructive algorithm generates a “programmed” neural network with <italic>O</italic>(<italic>n</italic>) neurons and <italic>O</italic>(<italic>mn</italic>) weights. We compare our algorithm to other methods proposed in the literature.",
"title": ""
},
{
"docid": "4c16117954f9782b3a22aff5eb50537a",
"text": "Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. However, most successful applications to date require the two domains to be closely related (e.g., image-to-image, video-video), utilizing similar or shared networks to transform domain-specific properties like texture, coloring, and line shapes. Here, we demonstrate that it is possible to transfer across modalities (e.g., image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (e.g., variational autoencoder and a generative adversarial network). We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. The proposed variation autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations. Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.",
"title": ""
},
{
"docid": "3953a1a05e064b8211fe006af4595e70",
"text": "Sentiment analysis is a common task in natural language processing that aims to detect polarity of a text document (typically a consumer review). In the simplest settings, we discriminate only between positive and negative sentiment, turning the task into a standard binary classification problem. We compare several machine learning approaches to this problem, and combine them to achieve a new state of the art. We show how to use for this task the standard generative language models, which are slightly complementary to the state of the art techniques. We achieve strong results on a well-known dataset of IMDB movie reviews. Our results are easily reproducible, as we publish also the code needed to repeat the experiments. This should simplify further advance of the state of the art, as other researchers can combine their techniques with ours with little effort.",
"title": ""
},
{
"docid": "256b56bf5eb3a99de4b889d8e1eb735b",
"text": "This paper presents the design of a single layer, compact, tapered balun with a >20:1 bandwidth and less than λ/17 in length at the lowest frequency of operation. The balun operates from 0.7GHz to over 15GHz. It can provide both impedance transformation as well as a balanced feed for tightly coupled arrays. Its performance is compared with that of a full-length balun operating over the same frequency band. There is a high degree of agreement between the two baluns.",
"title": ""
},
{
"docid": "1159d85ed21049f3fb70db58307eafff",
"text": "Cannabis sativa L. is an annual dioecious plant from Central Asia. Cannabinoids, flavonoids, stilbenoids, terpenoids, alkaloids and lignans are some of the secondary metabolites present in C. sativa. Earlier reviews were focused on isolation and identification of more than 480 chemical compounds; this review deals with the biosynthesis of the secondary metabolites present in this plant. Cannabinoid biosynthesis and some closely related pathways that involve the same precursors are disscused.",
"title": ""
},
{
"docid": "cb6354591bbcf130beea46701ae0e59f",
"text": "Requirements engineering process is a human endeavor. People who hold a stake in a project are involved in the requirements engineering process. They are from different backgrounds and with different organizational and individual goals, social positions, and personalities. They have different ways to understand and express the knowledge, and communicate with others. The requirements development processes, therefore, vary widely depending on the people involved. In order to acquire quality requirements from different people, a large number of methods exit. However, because of the inadequate understanding about methods and the variability of the situations in which requirements are developed, it is difficult for organizations to identify a set of appropriate methods to develop requirements in a structured and systematic way. The insufficient requirements engineering process forms one important factor that cause the failure of an IT project [29].",
"title": ""
},
{
"docid": "9b220cb4c3883cb959d1665abefa5406",
"text": "Time domain synchronous OFDM (TDS-OFDM) has a higher spectrum and energy efficiency than standard cyclic prefix OFDM (CP-OFDM) by replacing the unknown CP with a known pseudorandom noise (PN) sequence. However, due to mutual interference between the PN sequence and the OFDM data block, TDS-OFDM cannot support high-order modulation schemes such as 256QAM in realistic static channels with large delay spread or high-definition television (HDTV) delivery in fast fading channels. To solve these problems, we propose the idea of using multiple inter-block-interference (IBI)-free regions of small size to realize simultaneous multi-channel reconstruction under the framework of structured compressive sensing (SCS). This is enabled by jointly exploiting the sparsity of wireless channels as well as the characteristic that path delays vary much slower than path gains. In this way, the mutually conditional time-domain channel estimation and frequency-domain data demodulation in TDS-OFDM can be decoupled without the use of iterative interference removal. The Cramér-Rao lower bound (CRLB) of the proposed estimation scheme is also derived. Moreover, the guard interval amplitude in TDS-OFDM can be reduced to improve the energy efficiency, which is infeasible for CP-OFDM. Simulation results demonstrate that the proposed SCS-aided TDS-OFDM scheme has a higher spectrum and energy efficiency than CP-OFDM by more than 10% and 20% respectively in typical applications.",
"title": ""
},
{
"docid": "a6f11cf1bf479fe72dcb8dabb53176ee",
"text": "This paper focuses on WPA and IEEE 802.11i protocols that represent two important solutions in the wireless environment. Scenarios where it is possible to produce a DoS attack and DoS flooding attacks are outlined. The last phase of the authentication process, represented by the 4-way handshake procedure, is shown to be unsafe from DoS attack. This can produce the undesired effect of memory exhaustion if a flooding DoS attack is conducted. In order to avoid DoS attack without increasing the complexity of wireless mobile devices too much and without changing through some further control fields of the frame structure of wireless security protocols, a solution is found and an extension of WPA and IEEE 802.11 is proposed. A protocol extension with three “static” variants and with a resource-aware dynamic approach is considered. The three enhancements to the standard protocols are achieved through some simple changes on the client side and they are robust against DoS and DoS flooding attack. Advantages introduced by the proposal are validated by simulation campaigns and simulation parameters such as attempted attacks, successful attacks, and CPU load, while the algorithm execution time is evaluated. Simulation results show how the three static solutions avoid memory exhaustion and present a good performance in terms of CPU load and execution time in comparison with the standard WPA and IEEE 802.11i protocols. However, if the mobile device presents different resource availability in terms of CPU and memory or if resource availability significantly changes in time, a dynamic approach that is able to switch among three different modalities could be more suitable.",
"title": ""
},
{
"docid": "f3a8e58eec0f243ae9fdfae78f75657d",
"text": "This paper studies the decentralized coded caching for a Fog Radio Access Network (F-RAN), whereby two edge-nodes (ENs) connected to a cloud server via fronthaul links with limited capacity are serving the requests of K r users. We consider all ENs and users are equipped with caches. A decentralized content placement is proposed to independently store contents at each network node during the off-peak hours. After that, we design a coded delivery scheme in order to deliver the user demands during the peak-hours under the objective of minimizing the normalized delivery time (NDT), which refers to the worst case delivery latency. An information-theoretic lower bound on the minimum NDT is derived for arbitrary number of ENs and users. We evaluate numerically the performance of the decentralized scheme. Additionally, we prove the approximate optimality of the decentralized scheme for a special case when the caches are only available at the ENs.",
"title": ""
},
{
"docid": "a442a5fd2ec466cac18f4c148661dd96",
"text": "BACKGROUND\nLong waiting times for registration to see a doctor is problematic in China, especially in tertiary hospitals. To address this issue, a web-based appointment system was developed for the Xijing hospital. The aim of this study was to investigate the efficacy of the web-based appointment system in the registration service for outpatients.\n\n\nMETHODS\nData from the web-based appointment system in Xijing hospital from January to December 2010 were collected using a stratified random sampling method, from which participants were randomly selected for a telephone interview asking for detailed information on using the system. Patients who registered through registration windows were randomly selected as a comparison group, and completed a questionnaire on-site.\n\n\nRESULTS\nA total of 5641 patients using the online booking service were available for data analysis. Of them, 500 were randomly selected, and 369 (73.8%) completed a telephone interview. Of the 500 patients using the usual queuing method who were randomly selected for inclusion in the study, responses were obtained from 463, a response rate of 92.6%. Between the two registration methods, there were significant differences in age, degree of satisfaction, and total waiting time (P<0.001). However, gender, urban residence, and valid waiting time showed no significant differences (P>0.05). Being ignorant of online registration, not trusting the internet, and a lack of ability to use a computer were three main reasons given for not using the web-based appointment system. The overall proportion of non-attendance was 14.4% for those using the web-based appointment system, and the non-attendance rate was significantly different among different hospital departments, day of the week, and time of the day (P<0.001).\n\n\nCONCLUSION\nCompared to the usual queuing method, the web-based appointment system could significantly increase patient's satisfaction with registration and reduce total waiting time effectively. However, further improvements are needed for broad use of the system.",
"title": ""
}
] |
scidocsrr
|
23bb732015e409d8a847bd30057ad231
|
Dynamic scheduling on video transcoding for MPEG DASH in the cloud environment
|
[
{
"docid": "8869cab615e5182c7c03f074ead081f7",
"text": "This article introduces the principal concepts of multimedia cloud computing and presents a novel framework. We address multimedia cloud computing from multimedia-aware cloud (media cloud) and cloud-aware multimedia (cloud media) perspectives. First, we present a multimedia-aware cloud, which addresses how a cloud can perform distributed multimedia processing and storage and provide quality of service (QoS) provisioning for multimedia services. To achieve a high QoS for multimedia services, we propose a media-edge cloud (MEC) architecture, in which storage, central processing unit (CPU), and graphics processing unit (GPU) clusters are presented at the edge to provide distributed parallel processing and QoS adaptation for various types of devices.",
"title": ""
},
{
"docid": "a5c0ad9c841245e57bb71b19b4ad24b1",
"text": "HTTP video streaming, such as Flash video, is widely deployed to deliver stored media. Owing to TCP's reliable service, the picture and sound quality would not be degraded by network impairments, such as high delay and packet loss. However, the network impairments can cause rebuffering events which would result in jerky playback and deform the video's temporal structure. These quality degradations could adversely affect users' quality of experience (QoE). In this paper, we investigate the relationship among three levels of quality of service (QoS) of HTTP video streaming: network QoS, application QoS, and user QoS (i.e., QoE). Our ultimate goal is to understand how the network QoS affects the QoE of HTTP video streaming. Our approach is to first characterize the correlation between the application and network QoS using analytical models and empirical evaluation. The second step is to perform subjective experiments to evaluate the relationship between application QoS and QoE. Our analysis reveals that the frequency of rebuffering is the main factor responsible for the variations in the QoE.",
"title": ""
}
] |
[
{
"docid": "76049ed267e9327412d709014e8e9ed4",
"text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.",
"title": ""
},
{
"docid": "7fd33ebd4fec434dba53b15d741fdee4",
"text": "We present a data-efficient representation learning approach to learn video representation with small amount of labeled data. We propose a multitask learning model ActionFlowNet to train a single stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. Our model effectively learns video representation from motion information on unlabeled videos. Our model significantly improves action recognition accuracy by a large margin (23.6%) compared to state-of-the-art CNN-based unsupervised representation learning methods trained without external large scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.",
"title": ""
},
{
"docid": "0fafdd778991cf0fbdaf2487d09d9a0d",
"text": "This paper proposes a novel predictive cruise control method for a platoon of connected and autonomous vehicles. The main objective is to minimize idle time of vehicular platoon using information from the traffic lights. First, the reference velocity is determined for each vehicle in the platoon. Second, a data-driven learning strategy, named adaptive dynamic programming (ADP), is employed to develop an optimal state-feedback controller without any prior knowledge of the platooning system dynamics. This resultant controller regulates the headway, velocity and acceleration of each vehicle to accommodate both safety and trip time reduction goals. A numerical simulation is demonstrated to ascertain the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "f5a8d2d7ea71fa5444cc1594dc0cf5ab",
"text": "Radar sensors operating in the 76–81 GHz range are considered key for Advanced Driver Assistance Systems (ADAS) like adaptive cruise control (ACC), collision mitigation and avoidance systems (CMS) or lane change assist (LCA). These applications are the next wave in automotive safety systems and have thus generated increased interest in lower-cost solutions especially for the mm-wave front-end (FE) section. Today, most of the radar sensors in this frequency range use GaAs based FEs. These multi-chip GaAs FEs are a main cost driver in current radar sensors due to their low integration level. The step towards monolithic microwave integrated circuits (MMIC) based on a 200 GHz ft silicon-germanium (SiGe) technology integrating all needed RF building blocks (mixers, VCOs, dividers, buffers, PAs) on an single die does not only lead to cost reductions but also benefits the testability of these MMICs. This is especially important in the light of upcoming functional safety standards like ASIL-D and ISO26262.",
"title": ""
},
{
"docid": "7a50a8e670b2f33b666fc0625a0f7a9a",
"text": "Over the past few years, computer architecture research has moved towards execution-driven simulation, due to the inability of traces to capture timing-dependent thread execution interleaving. However, trace-driven simulation has many advantages over execution-driven that are being missed in multithreaded application simulations. We present a methodology to properly simulate multithreaded applications using trace-driven environments. We distinguish the intrinsic application behavior from the computation for managing parallelism. Application traces capture the intrinsic behavior in the sections of code that are independent from the dynamic multithreaded nature, and the points where parallelism-management computation occurs. The simulation framework is composed of a trace-driven simulation engine and a dynamic-behavior component that implements the parallelism-management operations for the application. Then, at simulation time, these operations are reproduced by invoking their implementation in the dynamic-behavior component. The decisions made by these operations are based on the simulated architecture, allowing to dynamically reschedule sections of code taken from the trace to the target simulated components. As the captured sections of code are independent from the parallel state of the application, they can be simulated on the trace-driven engine, while the parallelism-management operations, that require to be re-executed, are carried out by the execution-driven component, thus achieving the best of both trace- and execution-driven worlds. This simulation methodology creates several new research opportunities, including research on scheduling and other parallelism-management techniques for future architectures, and hardware support for programming models.",
"title": ""
},
{
"docid": "25cd669a4fcf62ff56669bff22974634",
"text": "In this paper, we introduce a novel framework for combining scientific knowledge within physicsbased models and recurrent neural networks to advance scientific discovery in many dynamical systems. We will first describe the use of outputs from physics-based models in learning a hybrid-physics-data model. Then, we further incorporate physical knowledge in real-world dynamical systems as additional constraints for training recurrent neural networks. We will apply this approach on modeling lake temperature and quality where we take into account the physical constraints along both the depth dimension and time dimension. By using scientific knowledge to guide the construction and learning the data-driven model, we demonstrate that this method can achieve better prediction accuracy as well as scientific consistency of results.",
"title": ""
},
{
"docid": "9e87798911431cc8f5231cd65ee64e0f",
"text": "Channel feature detectors are the most popular approaches for pedestrian detection recently. However, most of these approaches train the boosted decision trees by selecting a single feature at each node, which does not effectively exploit the multi-feature cues and spatial information. To address this issue, this paper proposes to construct the co-occurrence of multiple channel features in local image neighborhoods for pedestrian detection. In our approach, a binary pattern of feature co-occurrence is represented by combining the binary variables quantized from each channel feature, and the spatial information is incorporated by selecting the neighbors to jointly represent the feature co-occurrence in a local image block. However, feature co-occurrence selection leads to many possible feature combinations, which significantly increase the computational cost at the training stage. Therefore, in order to reduce the number of candidate features and obtain the most discriminative features effectively, a partial least squares-based feature selection approach called variable importance on projection is exploited. Comprehensive experiments are conducted on several challenging pedestrian data sets, and superior performances are achieved by the proposed approach in comparison with some state-of-the-art pedestrian detection approaches.",
"title": ""
},
{
"docid": "5cdb19d4e9bd167e45220870df09dc87",
"text": "Leveraging massive electronic health records (EHR) brings tremendous promises to advance clinical and precision medicine informatics research. However, it is very challenging to directly work with multifaceted patient information encoded in their EHR data. Deriving effective representations of patient EHRs is a crucial step to bridge raw EHR information and the endpoint analytical tasks, such as risk prediction or disease subtyping. In this paper, we propose Health-ATM, a novel and integrated deep architecture to uncover patients’ comprehensive health information from their noisy, longitudinal, heterogeneous and irregular EHR data. HealthATM extracts comprehensive multifaceted patient information patterns with attentive and time-aware modulars (ATM) and a hybrid network structure composed of both Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN). The learned features are finally fed into a prediction layer to conduct the risk prediction task. We evaluated the Health-ATM on both artificial and real world EHR corpus and demonstrated its promising utility and efficacy on representation learning and disease onset predictions.",
"title": ""
},
{
"docid": "78ef7cd54c9b5aa41096d7496e433f69",
"text": "To meet the requirements of some intelligent vehicle monitoring system, the software integrates Global Position System (GPS), Geographic Information System (GIS) and Global System for Mobile communications (GSM) in the whole. The structure, network topology, functions, main technical features and their implementation principles of the system are introduced. Then hardware design of the vehicle terminal is given in short. Communication process and data transmission between the server and the client (relay server) and client through TCP/IP and UDP protocol are discussed in detail in this paper. Testing result using LoadRunner software is also analyzed. Practice shows the robustness of the software and feasibility of object-oriented programming.",
"title": ""
},
{
"docid": "d5955aa10ee95527bd7a3d13479d4018",
"text": "As urbanisation increases globally and the natural environment becomes increasingly fragmented, the importance of urban green spaces for biodiversity conservation grows. In many countries, private gardens are a major component of urban green space and can provide considerable biodiversity benefits. Gardens and adjacent habitats form interconnected networks and a landscape ecology framework is necessary to understand the relationship between the spatial configuration of garden patches and their constituent biodiversity. A scale-dependent tension is apparent in garden management, whereby the individual garden is much smaller than the unit of management needed to retain viable populations. To overcome this, here we suggest mechanisms for encouraging 'wildlife-friendly' management of collections of gardens across scales from the neighbourhood to the city.",
"title": ""
},
{
"docid": "4aeeb556c3566bfa18cd0b125690d43a",
"text": "Peer-to-peer technologies have proved to be effective for various bandwidth intensive, large scale applications such as file-transfer. For many years, there has been tremendous interest in academic environments for live video streaming as another application of P2P. Recently, a number of new commercial scale video streaming systems have cropped up. These systems differ from others in the type of content that they provide and attract a large number of users from across the globe. These are proprietary systems and very little is known about their architecture and behavior. This study is one of the first of its kind to analyze the performance and characteristics of P2P live streaming applications. In particular, we analyze PPLive and SOPCast, two of the most popular systems in this class. In this paper, we (1) present a framework in which to analyze these P2P applications from a single observable point, (2) analyze control traffic to present a probable operation model and (3) present analysis of resource usage, locality and stability of data distribution. We conclude that P2P live streaming has an even greater impact on network bandwidth utilization and control than P2P file transfer applications.",
"title": ""
},
{
"docid": "0b687d55b6927b5bad1ff8b7bbeda9e3",
"text": "Workflow is an important way to mashup reusable software services to create value-added data analytics services. Workflow provenance is core to understand how services and workflows behaved in the past, which knowledge can be used to provide a better recommendation. Existing workflow provenance management systems handle various types of provenance separately. A typical data science exploration scenario, however, calls for an integrated view of provenance and seamless transition among different types of provenance. In this paper, a graph-based, uniform provenance model is proposed to link together design-time and run-time provenance, by combining retrospective provenance, prospective provenance, and evolution provenance. Such a unified provenance model will not only facilitate workflow mining and exploration, but also facilitate workflow interoperability. The model is formalized into colored Petri nets for verification and monitoring management. A SQL-like query language is developed, which supports basic queries, recursive queries, and cross-provenance queries. To verify the effectiveness of our model, A web-based, collaborative workflow prototyping system is developed as a proof-of-concept. Experiments have been conducted to evaluate the effectiveness of the proposed SQL-like graph query against SQL query.",
"title": ""
},
{
"docid": "06675c4b42683181cecce7558964c6b6",
"text": "We present in this work an economic analysis of ransomware, with relevant data from Cryptolocker, CryptoWall, TeslaCrypt and other major strands. We include a detailed study of the impact that different price discrimination strategies can have on the success of a ransomware family, examining uniform pricing, optimal price discrimination and bargaining strategies and analysing their advantages and limitations. In addition, we present results of a preliminary survey that can helps in estimating an optimal ransom value. We discuss at each stage whether the different schemes we analyse have been encountered already in existing malware, and the likelihood of them being implemented and becoming successful. We hope this work will help to gain some useful insights for predicting how ransomware may evolve in the future and be better prepared to counter its current and future threat.",
"title": ""
},
{
"docid": "3f40c24a8098fd0a06ef772f2d7d9e2f",
"text": "Knowing how hands move and what object is being manipulated are two key sub-tasks for analyzing first-person (egocentric) action. However, lack of fully annotated hand data as well as imprecise foreground segmentation make either sub-task challenging. This work aims to explicitly ad dress these two issues via introducing a cascaded interactional targeting (i.e., infer both hand and active object regions) deep neural network. Firstly, a novel EM-like learning framework is proposed to train the pixel-level deep convolutional neural network (DCNN) by seamlessly integrating weakly supervised data (i.e., massive bounding box annotations) with a small set of strongly supervised data (i.e., fully annotated hand segmentation maps) to achieve state-of-the-art hand segmentation performance. Secondly, the resulting high-quality hand segmentation maps are further paired with the corresponding motion maps and object feature maps, in order to explore the contextual information among object, motion and hand to generate interactional foreground regions (operated objects). The resulting interactional target maps (hand + active object) from our cascaded DCNN are further utilized to form discriminative action representation. Experiments show that our framework has achieved the state-of-the-art egocentric action recognition performance on the benchmark dataset Activities of Daily Living (ADL).",
"title": ""
},
{
"docid": "8f360c907e197beb5e6fc82b081c908f",
"text": "This paper describes a 3D object-space paint program. This program allows the user to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface. The pigment has all the properties normally associated with material shading models. This includes, but is not limited to, the diffuse color, the specular color, and the surface roughness. The pigment also can have thickness, which is modeled by simultaneously creating a bump map attached to the shape. The output of the paint program is a 3D model with associated texture maps. This information can be used with any rendering program with texture mapping capabilities. Almost all traditional techniques of 2D computer image painting have analogues in 3D object painting, but there are also many new techniques unique to 3D. One example is the use of solid textures to pattern the surface.",
"title": ""
},
{
"docid": "a4e9d39a3ab7339e40958ad6df97adac",
"text": "Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real life applications has even increased within the last years. Nowadays, it is possible to gather massive amounts of data at any time with comparatively little costs. While this availability of data could be used to develop complex models, its implementation is often narrowed because of limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or use parallelization techniques like the MapReduce framework. Anyhow, these options might be too cost intensive, not suitable, or even too time expensive to learn and realize. Following the premise that developers usually are not SQL experts we would like to discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we especially show how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for a (inter-operator and intra-operator) parallel execution on parallel DBMS as a second step of our research, not being part of this paper. TYPE OF PAPER AND",
"title": ""
},
{
"docid": "4284e72d9db73e4dfdbe9cbeeb9123bd",
"text": "BACKGROUND\nClinicians are confronted with difficult choices regarding whether a tooth with pulpal and/or periapical disease should be saved through endodontic treatment or be extracted and replaced with an implant.\n\n\nMETHODS\nThe authors examined publications (research, literature reviews and systematic reviews) related to the factors affecting decision making for patients who have oral diseases or traumatic injuries.\n\n\nRESULTS\nThe factors to be considered included patient-related issues (systemic and oral health, as well as comfort and treatment perceptions), tooth- and periodontium-related factors (pulpal and periodontal conditions, color characteristics of the teeth, quantity and quality of bone, and soft-tissue anatomy) and treatment-related factors (the potential for procedural complications, required adjunctive procedures and treatment outcomes).\n\n\nCONCLUSIONS\nOn the basis of survival rates, it appears that more than 95 percent of dental implants and teeth that have undergone endodontic treatment remain functional over time.\n\n\nCLINICAL IMPLICATIONS\nClinicians need to consider carefully several factors before choosing whether to perform endodontic therapy or extract a tooth and place an implant. The result should be high levels of comfort, function, longevity and esthetics for patients.",
"title": ""
},
{
"docid": "ba9030da218e0ba5d4369758d80be5b9",
"text": "Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs, in conjunction with stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching, or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable candidate samples with notable variability, and in particular provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.",
"title": ""
},
{
"docid": "09168164e47fd781e4abeca45fb76c35",
"text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].",
"title": ""
}
] |
scidocsrr
|
5196019e07c87169ac3e55c142bdbb06
|
Real time Google map and Arduino based vehicle tracking system
|
[
{
"docid": "f5519eff0c13e0ee42245fdf2627b8ae",
"text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.",
"title": ""
}
] |
[
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
},
{
"docid": "c4183c8b08da8d502d84a650d804cac8",
"text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>",
"title": ""
},
{
"docid": "c7a542f144318fe1f81e81c923345b41",
"text": "In fifth-generation (5G) mobile networks, a major challenge is to effectively improve system capacity and meet dynamic service demands. One promising technology to solve this problem is heterogeneous networks (HetNets), which involve a large number of densified low power nodes (LPNs). This article proposes a software defined network (SDN) based intelligent model that can efficiently manage the heterogeneous infrastructure and resources. In particular, we first review the latest SDN standards and discuss the possible extensions. We then discuss the advantages of SDN in meeting the dynamic nature of services and requirements in 5G HetNets. Finally, we develop a variety of schemes to improve traffic control, subscriber management, and resource allocation. Performance analysis shows that our proposed system is reliable, scalable, and implementable.",
"title": ""
},
{
"docid": "d08c24228e43089824357342e0fa0843",
"text": "This paper presents a new register assignment heuristic for procedures in SSA Form, whose interference graphs are chordal; the heuristic is called optimistic chordal coloring (OCC). Previous register assignment heuristics eliminate copy instructions via coalescing, in other words, merging nodes in the interference graph. Node merging, however, can not preserve the chordal graph property, making it unappealing for SSA-based register allocation. OCC is based on graph coloring, but does not employ coalescing, and, consequently, preserves graph chordality, and does not increase its chromatic number; in this sense, OCC is conservative as well as optimistic. OCC is observed to eliminate at least as many dynamically executed copy instructions as iterated register coalescing (IRC) for a set of chordal interference graphs generated from several Mediabench and MiBench applications. In many cases, OCC and IRC were able to find optimal or near-optimal solutions for these graphs. OCC ran 1.89x faster than IRC, on average.",
"title": ""
},
{
"docid": "33b1c3b2a999c62fe4f1da5d3cc7f534",
"text": "Individuals often appear with multiple names when considering large bibliographic datasets, giving rise to the synonym ambiguity problem. Although most related works focus on resolving name ambiguities, this work focus on classifying and characterizing multiple name usage patterns—the root cause for such ambiguity. By considering real examples bibliographic datasets, we identify and classify patterns of multiple name usage by individuals, which can be interpreted as name change, rare name usage, and name co-appearance. In particular, we propose a methodology to classify name usage patterns through a supervised classification task and show that different classes are robust (across datasets) and exhibit significantly different properties. We show that the collaboration network structure emerging around nodes corresponding to ambiguous names from different name usage patterns have strikingly different characteristics, such as their common neighborhood and degree evolution. We believe such differences in network structure and in name usage patterns can be leveraged to design more efficient name disambiguation algorithms that target the synonym problem.",
"title": ""
},
{
"docid": "0315f0355168a78bdead8d06d5f571b4",
"text": "Machine learning techniques are increasingly being applied to clinical text that is already captured in the Electronic Health Record for the sake of delivering quality care. Applications for example include predicting patient outcomes, assessing risks, or performing diagnosis. In the past, good results have been obtained using classical techniques, such as bag-of-words features, in combination with statistical models. Recently however Deep Learning techniques, such as Word Embeddings and Recurrent Neural Networks, have shown to possibly have even greater potential. In this work, we apply several Deep Learning and classical machine learning techniques to the task of predicting violence incidents during psychiatric admission using clinical text that is already registered at the start of admission. For this purpose, we use a novel and previously unexplored dataset from the Psychiatry Department of the University Medical Center Utrecht in The Netherlands. Results show that predicting violence incidents with state-of-the-art performance is possible, and that using Deep Learning techniques provides a relatively small but consistent improvement in performance. We finally discuss the potential implication of our findings for the psychiatric practice.",
"title": ""
},
{
"docid": "43f3c28db4732ef07d04c3bda628ab66",
"text": "This research proposes a conceptual framework for achieving a secure Internet of Things (IoT) routing that will enforce confidentiality and integrity during the routing process in IoT networks. With billions of IoT devices likely to be interconnected globally, the big issue is how to secure the routing of data in the underlying networks from various forms of attacks. Users will not feel secure if they know their private data could easily be accessed and compromised by unauthorized individuals or machines over the network. It is within this context that we present the design of SecTrust, a lightweight secure trust-based routing framework to identify and isolate common routing attacks in IoT networks. The proposed framework is based on the successful interactions between the IoT sensor nodes, which effectively is a reflection of their trustworthy behavior.",
"title": ""
},
{
"docid": "5609709136a45f355f988a7a4ec7857c",
"text": "Traditional information extraction systems have focused on satisfying precise, narrow, pre-specified requests from small, homogeneous corpora. In contrast, the TextRunner system demonstrates a new kind of information extraction, called Open Information Extraction (OIE), in which the system makes a single, data-driven pass over the entire corpus and extracts a large set of relational tuples, without requiring any human input. (Banko et al., 2007) TextRunner is a fullyimplemented, highly scalable example of OIE. TextRunner’s extractions are indexed, allowing a fast query mechanism. Our first public demonstration of the TextRunner system shows the results of performing OIE on a set of 117 million web pages. It demonstrates the power of TextRunner in terms of the raw number of facts it has extracted, as well as its precision using our novel assessment mechanism. And it shows the ability to automatically determine synonymous relations and objects using large sets of extractions. We have built a fast user interface for querying the results.",
"title": ""
},
{
"docid": "71757cd2f861f31759ead3310fbb8383",
"text": "The promise of cloud computing is to provide computing resources instantly whenever they are needed. The state-of-art virtual machine (VM) provisioning technology can provision a VM in tens of minutes. This latency is unacceptable for jobs that need to scale out during computation. To truly enable on-the-fly scaling, new VM needs to be ready in seconds upon request. In this paper, We present an online temporal data mining system called ASAP, to model and predict the cloud VM demands. ASAP aims to extract high level characteristics from VM provisioning request stream and notify the provisioning system to prepare VMs in advance. For quantification issue, we propose Cloud Prediction Cost to encodes the cost and constraints of the cloud and guide the training of prediction algorithms. Moreover, we utilize a two-level ensemble method to capture the characteristics of the high transient demands time series. Experimental results using historical data from an IBM cloud in operation demonstrate that ASAP significantly improves the cloud service quality and provides possibility for on-the-fly provisioning.",
"title": ""
},
{
"docid": "1c19d0b156673e70544fe93154f1ae33",
"text": "Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark-up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as, information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing.",
"title": ""
},
{
"docid": "7292ceb6718d0892a154d294f6434415",
"text": "This article illustrates the application of a nonlinear system identification technique to the problem of STLF. Five NARX models are estimated using fixed-size LS-SVM, and two of the models are later modified into AR-NARX structures following the exploration of the residuals. The forecasting performance, assessed for different load series, is satisfactory. The MSE levels on the test data are below 3% in most cases. The models estimated with fixed-size LS-SVM give better results than a linear model estimated with the same variables and also better than a standard LS-SVM in dual space estimated using only the last 1000 data points. Furthermore, the good performance of the fixed-size LS-SVM is obtained based on a subset of M = 1000 initial support vectors, representing a small fraction of the available sample. Further research on a more dedicated definition of the initial input variables (for example, incorporation of external variables to reflect industrial activity, use of explicit seasonal information) might lead to further improvements and the extension toward other types of load series.",
"title": ""
},
{
"docid": "b4c12965618d7d3a8049a91b513ca896",
"text": "There is a convergence in recent theories of creativity that go beyond characteristics and cognitive processes of individuals to recognize the importance of the social construction of creativity. In parallel, there has been a rise in social computing supporting the collaborative construction of knowledge. The panel will discuss the challenges and opportunities from the confluence of these two developments by bringing together the contrasting and controversial perspective of the individual panel members. It will synthesize from different perspectives an analytic framework to understand these new developments, and how to promote rigorous research methods and how to identify the unique challenges in developing evaluation and assessment methods for creativity research.",
"title": ""
},
{
"docid": "0fd6b5272b75a39ad8839a714d4df5a5",
"text": "In machine reading comprehension (MRC) tasks, sentence inference is an important but extremely difficult problem. Most of MRC models directly interact articles with questions from the word level, which ignores inter and intra information of sentences and cannot well focus on problems about sentence reasoning and inference, especially when the answer clues are far apart in the article. In this paper, we propose an option gate approach for reading comprehension. We consider applying a sentence-level option gate module to make the model incorporate sentence information. In our approach we (1) extract key sentences in the article to filter out noise unrelated to the question and the options, (2) encode each sentence in articles, questions and options with dot-product self-attention to obtain intra sentence representations, (3) model inter relationships between the article and the question with bilinear attention and (4) apply an option gate with sentence inference information to each option representation with the question-aware article representation. This module can help better reasoning instead of directly word matching or paraphrasing. And this module can easily supply sentence information for most of the existing reading comprehension models. Experimental results on the RACE dataset show that this easy and simple module helps outperform the baseline models by 2.5% at most (single model), and achieve state-of-the-art results on the RACE-H dataset.",
"title": ""
},
{
"docid": "4f73815cc6bbdfbacee732d8724a3f74",
"text": "Networks can be considered as approximation schemes. Multilayer networks of the perceptron type can approximate arbitrarily well continuous functions (Cybenko 1988, 1989; Funahashi 1989; Stinchcombe and White 1989). We prove that networks derived from regularization theory and including Radial Basis Functions (Poggio and Girosi 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property ofbest approximation. The main result of this paper is that multilayer perceptron networks, of the type used in backpropagation, do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of best approximation.",
"title": ""
},
{
"docid": "cfd0cadbdf58ee01095aea668f0da4fe",
"text": "A unique and miniaturized dual-band coplanar waveguide (CPW)-fed antenna is presented. The proposed antenna comprises a rectangular patch that is surrounded by upper and lower ground-plane sections that are interconnected by a high-impedance microstrip line. The proposed antenna structure generates two separate impedance bandwidths to cover frequency bands of GSM and Wi-Fi/WLAN. The antenna realized is relatively small in size $(17\\times 20\\ {\\hbox{mm}}^{2})$ and operates over frequency ranges 1.60–1.85 and 4.95–5.80 GHz, making it suitable for GSM and Wi-Fi/WLAN applications. In addition, the antenna is circularly polarized in the GSM band. Experimental results show the antenna exhibits monopole-like radiation characteristics and a good antenna gain over its operating bands. The measured and simulated results presented show good agreement.",
"title": ""
},
{
"docid": "5ceb415b17cc36e9171ddc72a860ccc8",
"text": "Word embeddings and convolutional neural networks (CNN) have attracted extensive attention in various classification tasks for Twitter, e.g. sentiment classification. However, the effect of the configuration used to generate the word embeddings on the classification performance has not been studied in the existing literature. In this paper, using a Twitter election classification task that aims to detect election-related tweets, we investigate the impact of the background dataset used to train the embedding models, as well as the parameters of the word embedding training process, namely the context window size, the dimensionality and the number of negative samples, on the attained classification performance. By comparing the classification results of word embedding models that have been trained using different background corpora (e.g. Wikipedia articles and Twitter microposts), we show that the background data should align with the Twitter classification dataset both in data type and time period to achieve significantly better performance compared to baselines such as SVM with TF-IDF. Moreover, by evaluating the results of word embedding models trained using various context window sizes and dimensionalities, we find that large context window and dimension sizes are preferable to improve the performance. However, the number of negative samples parameter does not significantly affect the performance of the CNN classifiers. Our experimental results also show that choosing the correct word embedding model for use with CNN leads to statistically significant improvements over various baselines such as random, SVM with TF-IDF and SVM with word embeddings. Finally, for out-of-vocabulary (OOV) words that are not available in the learned word embedding models, we show that a simple OOV strategy to randomly initialise the OOV words without any prior knowledge is sufficient to attain a good classification performance among the current OOV strategies (e.g. a random initialisation using statistics of the pre-trained word embedding models).",
"title": ""
},
{
"docid": "8c70f1af7d3132ca31b0cf603b7c5939",
"text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.",
"title": ""
},
{
"docid": "d0e45de6baf9665123a43a21d25c18c2",
"text": "This paper studies the problem of computing optimal journeys in dynamic public transit networks. We introduce a novel algorithmic framework, called Connection Scan Algorithm (CSA), to compute journeys. It organizes data as a single array of connections, which it scans once per query. Despite its simplicity, our algorithm is very versatile. We use it to solve earliest arrival and multi-criteria profile queries. Moreover, we extend it to handle the minimum expected arrival time (MEAT) problem, which incorporates stochastic delays on the vehicles and asks for a set of (alternative) journeys that in its entirety minimizes the user’s expected arrival time at the destination. Our experiments on the dense metropolitan network of London show that CSA computes MEAT queries, our most complex scenario, in 272ms on average.",
"title": ""
},
{
"docid": "f077dc076131748d97ec36b44c3feb6e",
"text": "The inspection, assessment, maintenance and safe operation of the existing civil infrastructure consists one of the major challenges facing engineers today. Such work requires either manual approaches, which are slow and yield subjective results, or automated approaches, which depend upon complex handcrafted features. Yet, for the latter case, it is rarely known in advance which features are important for the problem at hand. In this paper, we propose a fully automated tunnel assessment approach; using the raw input from a single monocular camera we hierarchically construct complex features, exploiting the advantages of deep learning architectures. Obtained features are used to train an appropriate defect detector. In particular, we exploit a Convolutional Neural Network to construct high-level features and as a detector we choose to use a Multi-Layer Perceptron due to its global function approximation properties. Such an approach achieves very fast predictions due to the feedforward nature of Convolutional Neural Networks and Multi-Layer Perceptrons.",
"title": ""
},
{
"docid": "056eaedfbf8c18418ea627f46fa8ac16",
"text": "The malleability of stereotyping matters in social psychology and in society. Previous work indicates rapid amygdala and cognitive responses to racial out-groups, leading some researchers to view these responses as inevitable. In this study, the methods of social-cognitive neuroscience were used to investigate how social goals control prejudiced responses. Participants viewed photographs of unfamiliar Black and White faces, under each of three social goals: social categorization (by age), social individuation (vegetable preference), and simple visual inspection (detecting a dot). One study recorded brain activity in the amygdala using functional magnetic resonance imaging, and another measured cognitive activation of stereotypes by lexical priming. Neither response to photos of the racial out-group was inevitable; instead, both responses depended on perceivers' current social-cognitive goal.",
"title": ""
}
] |
scidocsrr
|
856674a9cfddab31e33b4763dde2fce6
|
Load shedding for aggregation queries over data streams
|
[
{
"docid": "2eab78b8ec65340be1473086f31eb8c4",
"text": "We present a new family of join algorithms, called ripple joins, for online processing of multi-table aggregation queries in a relational database management system (DBMS). Such queries arise naturally in interactive exploratory decision-support applications.\nTraditional offline join algorithms are designed to minimize the time to completion of the query. In contrast, ripple joins are designed to minimize the time until an acceptably precise estimate of the query result is available, as measured by the length of a confidence interval. Ripple joins are adaptive, adjusting their behavior during processing in accordance with the statistical properties of the data. Ripple joins also permit the user to dynamically trade off the two key performance factors of on-line aggregation: the time between successive updates of the running aggregate, and the amount by which the confidence-interval length decreases at each update. We show how ripple joins can be implemented in an existing DBMS using iterators, and we give an overview of the methods used to compute confidence intervals and to adaptively optimize the ripple join “aspect-ratio” parameters. In experiments with an initial implementation of our algorithms in the POSTGRES DBMS, the time required to produce reasonably precise online estimates was up to two orders of magnitude smaller than the time required for the best offline join algorithms to produce exact answers.",
"title": ""
},
{
"docid": "f84e0d8892d0b9d0b108aa5dcf317037",
"text": "We present a continuously adaptive, continuous query (CACQ) implementation based on the eddy query processing framework. We show that our design provides significant performance benefits over existing approaches to evaluating continuous queries, not only because of its adaptivity, but also because of the aggressive cross-query sharing of work and space that it enables. By breaking the abstraction of shared relational algebra expressions, our Telegraph CACQ implementation is able to share physical operators --- both selections and join state --- at a very fine grain. We augment these features with a grouped-filter index to simultaneously evaluate multiple selection predicates. We include measurements of the performance of our core system, along with a comparison to existing continuous query approaches.",
"title": ""
}
] |
[
{
"docid": "630e8f538d566af9375c231dd5195a99",
"text": "The investigation of the human microbiome is the most rapidly expanding field in biomedicine. Early studies were undertaken to better understand the role of microbiota in carbohydrate digestion and utilization. These processes include polysaccharide degradation, glycan transport, glycolysis, and short-chain fatty acid production. Recent research has demonstrated that the intricate axis between gut microbiota and the host metabolism is much more complex. Gut microbiota—depending on their composition—have disease-promoting effects but can also possess protective properties. This review focuses on disorders of metabolic syndrome, with special regard to obesity as a prequel to type 2 diabetes, type 2 diabetes itself, and type 1 diabetes. In all these conditions, differences in the composition of the gut microbiota in comparison to healthy people have been reported. Mechanisms of the interaction between microbiota and host that have been characterized thus far include an increase in energy harvest, modulation of free fatty acids—especially butyrate—of bile acids, lipopolysaccharides, gamma-aminobutyric acid (GABA), an impact on toll-like receptors, the endocannabinoid system and “metabolic endotoxinemia” as well as “metabolic infection.” This review will also address the influence of already established therapies for metabolic syndrome and diabetes on the microbiota and the present state of attempts to alter the gut microbiota as a therapeutic strategy.",
"title": ""
},
{
"docid": "a86056ab9e6fc98247459e9798aa9949",
"text": "We address the problem of 3D rotation equivariance in convolutional neural networks. 3D rotations have been a challenging nuisance in 3D classification tasks requiring higher capacity and extended data augmentation in order to tackle it. We model 3D data with multivalued spherical functions and we propose a novel spherical convolutional network that implements exact convolutions on the sphere by realizing them in the spherical harmonic domain. Resulting filters have local symmetry and are localized by enforcing smooth spectra. We apply a novel pooling on the spectral domain and our operations are independent of the underlying spherical resolution throughout the network. We show that networks with much lower capacity and without requiring data augmentation can exhibit performance comparable to the state of the art in standard retrieval and classification benchmarks.",
"title": ""
},
{
"docid": "1a615a022c441f413fcbdb3dbff9e66d",
"text": "Narrowband Internet of Things (NB-IoT) is a new cellular technology introduced in 3GPP Release 13 for providing wide-area coverage for IoT. This article provides an overview of the air interface of NB-IoT. We describe how NB-IoT addresses key IoT requirements such as deployment flexibility, low device complexity, long battery lifetime, support of massive numbers of devices in a cell, and significant coverage extension beyond existing cellular technologies. We also share the various design rationales during the standardization of NB-IoT in Release 13 and point out several open areas for future evolution of NB-IoT.",
"title": ""
},
{
"docid": "846931a1e4c594626da26931110c02d6",
"text": "A large volume of research has been conducted in the cognitive radio (CR) area the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real world scenarios, hence, neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access. For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem no robust, implementable scheme has emerged. The set of assumptions that these schemes are built upon do not always hold in realistic, wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.",
"title": ""
},
{
"docid": "7cd4efb34472aa2e7f8019c14137bf4e",
"text": "In theory, the pose of a calibrated camera can be uniquely determined from a minimum of four coplanar but noncollinear points. In practice, there are many applications of camera pose tracking from planar targets and there is also a number of recent pose estimation algorithms which perform this task in real-time, but all of these algorithms suffer from pose ambiguities. This paper investigates the pose ambiguity for planar targets viewed by a perspective camera. We show that pose ambiguities - two distinct local minima of the according error function - exist even for cases with wide angle lenses and close range targets. We give a comprehensive interpretation of the two minima and derive an analytical solution that locates the second minimum. Based on this solution, we develop a new algorithm for unique and robust pose estimation from a planar target. In the experimental evaluation, this algorithm outperforms four state-of-the-art pose estimation algorithms",
"title": ""
},
{
"docid": "876bbee05b7838f4de218b424d895887",
"text": "Although it is commonplace to assume that the type or level of processing during the input of a verbal item determines the representation of that item in memory, which in turn influences later attempts to store, recognize, or recall that item or similar items, it is much less common to assume that the way in which an item is retrieved from memory is also a potent determiner of that item's subsequent representation in memory. Retrieval from memory is often assumed, implicitly or explicitly, as a process analogous to the way in which the contents of a memory location in a computer are read out, that is, as a process that does not, by itself, modify the state of the retrieved item in memory. In my opinion, however, there is ample evidence for a kind of Heisenberg principle with respect to retrieval processes: an item can seldom, if ever, be retrieved from memory without modifying the representation of that item in memory in significant ways. It is both appropriate and productive, I think, to analyze retrieval processes within the same kind of levels-of-processing framework formulated by Craik and Lockhart ( 1972) with respect to input processes; this chapter is an attempt to do so. In the first of the two main sections below, I explore the extent to which negative-recency phenomena in the long-term recall of a list of items is attributable to differences in levels of retrieval during initial recall. In the second section I present some recent results from ex-",
"title": ""
},
{
"docid": "bd9878ef264e27321b3e0fe6fe3f25cc",
"text": "There is a wide gap between symbolic reasoning and deep learning. In this research, we explore the possibility of using deep learning to improve symbolic reasoning. Briefly, in a reasoning system, a deep feedforward neural network is used to guide rewriting processes after learning from algebraic reasoning examples produced by humans. To enable the neural network to recognise patterns of algebraic expressions with non-deterministic sizes, reduced partial trees are used to represent the expressions. Also, to represent both top-down and bottom-up information of the expressions, a centralisation technique is used to improve the reduced partial trees. Besides, symbolic association vectors and rule application records are used to improve the rewriting processes. Experimental results reveal that the algebraic reasoning examples can be accurately learnt only if the feedforward neural network has enough hidden layers. Also, the centralisation technique, the symbolic association vectors and the rule application records can reduce error rates of reasoning. In particular, the above approaches have led to 4.6% error rate of reasoning on a dataset of linear equations, differentials and integrals.",
"title": ""
},
{
"docid": "af050605034c21df4eefa99104a881ff",
"text": "The self-balancing robot is an example of underactuated mechanical systems; a class of mechanisms which have been very popular as benchmarks for control methods in the last years. The Interconnection and Damping Assignment Passivity Based Control (IDA-PBC) is a control methodology designed to solve the stabilization problem of underactuated systems. In this paper, we present an application of this control methodology to a self-balancing robot, by using an extension of the IDA-PBC method, which includes actuator dynamics. Based on the Lyapunov direct method we carry out stability analysis and, by invoking the theorem of Barbashin---Krasovskii, we arrive to asymptotic stability conditions. Experimental results are shown to illustrate the performance and viability of the proposed control.",
"title": ""
},
{
"docid": "8f0801de787ccea72bb0c61aefbd0ec8",
"text": "Recent fMRI studies demonstrated that functional connectivity is altered following cognitive tasks (e.g., learning) or due to various neurological disorders. We tested whether real-time fMRI-based neurofeedback can be a tool to voluntarily reconfigure brain network interactions. To disentangle learning-related from regulation-related effects, we first trained participants to voluntarily regulate activity in the auditory cortex (training phase) and subsequently asked participants to exert learned voluntary self-regulation in the absence of feedback (transfer phase without learning). Using independent component analysis (ICA), we found network reconfigurations (increases in functional network connectivity) during the neurofeedback training phase between the auditory target region and (1) the auditory pathway; (2) visual regions related to visual feedback processing; (3) insula related to introspection and self-regulation and (4) working memory and high-level visual attention areas related to cognitive effort. Interestingly, the auditory target region was identified as the hub of the reconfigured functional networks without a-priori assumptions. During the transfer phase, we again found specific functional connectivity reconfiguration between auditory and attention network confirming the specific effect of self-regulation on functional connectivity. Functional connectivity to working memory related networks was no longer altered consistent with the absent demand on working memory. We demonstrate that neurofeedback learning is mediated by widespread changes in functional connectivity. In contrast, applying learned self-regulation involves more limited and specific network changes in an auditory setup intended as a model for tinnitus. Hence, neurofeedback training might be used to promote recovery from neurological disorders that are linked to abnormal patterns of brain connectivity.",
"title": ""
},
{
"docid": "3a6a97b2705d90b031ab1e065281465b",
"text": "Common (Cinnamomum verum, C. zeylanicum) and cassia (C. aromaticum) cinnamon have a long history of use as spices and flavouring agents. A number of pharmacological and clinical effects have been observed with their use. The objective of this study was to systematically review the scientific literature for preclinical and clinical evidence of safety, efficacy, and pharmacological activity of common and cassia cinnamon. Using the principles of evidence-based practice, we searched 9 electronic databases and compiled data according to the grade of evidence found. One pharmacological study on antioxidant activity and 7 clinical studies on various medical conditions were reported in the scientific literature including type 2 diabetes (3), Helicobacter pylori infection (1), activation of olfactory cortex of the brain (1), oral candidiasis in HIV (1), and chronic salmonellosis (1). Two of 3 randomized clinical trials on type 2 diabetes provided strong scientific evidence that cassia cinnamon demonstrates a therapeutic effect in reducing fasting blood glucose by 10.3%–29%; the third clinical trial did not observe this effect. Cassia cinnamon, however, did not have an effect at lowering glycosylated hemoglobin (HbA1c). One randomized clinical trial reported that cassia cinnamon lowered total cholesterol, low-density lipoprotein cholesterol, and triglycerides; the other 2 trials, however, did not observe this effect. There was good scientific evidence that a species of cinnamon was not effective at eradicating H. pylori infection. Common cinnamon showed weak to very weak evidence of efficacy in treating oral candidiasis in HIV patients and chronic",
"title": ""
},
{
"docid": "65471409c1e2580b657b5fc1fe92fc84",
"text": "Bioinspiration in robotics deals with applying biological principles to the design of better performing devices. In this article, we propose a novel bioinspired framework using motor primitives for locomotion assistance through a wearable cooperative exoskeleton. In particular, the use of motor primitives for assisting different locomotion modes (i.e., ground-level walking at several cadences and ascending and descending stairs) is explored by means of two different strategies. In the first strategy, identified motor primitives are combined through weights to directly produce the desired assistive torque profiles. In the second strategy, identified motor primitives are combined to serve as neural stimulations to a virtual model of the musculoskeletal system, which, in turn, produces the desired assistive torque profiles.",
"title": ""
},
{
"docid": "c6029c95b8a6b2c6dfb688ac049427dc",
"text": "This paper presents development of a two-fingered robotic device for amputees whose hands are partially impaired. In this research, we focused on developing a compact and lightweight robotic finger system, so the target amputee would be able to execute simple activities in daily living (ADL), such as grasping a bottle or a cup for a long time. The robotic finger module was designed by considering the impaired shape and physical specifications of the target patient's hand. The proposed prosthetic finger was designed using a linkage mechanism which was able to create underactuated finger motion. This underactuated mechanism contributes to minimizing the number of required actuators for finger motion. In addition, the robotic finger was not driven by an electro-magnetic rotary motor, but a shape-memory alloy (SMA) actuator. Having a driving method using SMA wire contributed to reducing the total weight of the prosthetic robot finger as it has higher energy density than that offered by the method using the electrical DC motor. In this paper, we confirmed the performance of the proposed robotic finger by fundamental driving tests and the characterization of the SMA actuator.",
"title": ""
},
{
"docid": "78bc13c6b86ea9a8fda75b66f665c39f",
"text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves the state-of-the-art results on three benchmarks: Stanford Natural Language Inference (SNLI) dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora Question Pairs dataset.",
"title": ""
},
{
"docid": "5d98548bc4f65d66a8ece7e70cb61bc4",
"text": "0140-3664/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.comcom.2011.09.003 ⇑ Corresponding author. Tel.: +86 10 62283240. E-mail address: liwenmin02@hotmail.com (W. Li). Value-added applications in vehicular ad hoc network (VANET) come with the emergence of electronic trading. The restricted connectivity scenario in VANET, where the vehicle cannot communicate directly with the bank for authentication due to the lack of internet access, opens up new security challenges. Hence a secure payment protocol, which meets the additional requirements associated with VANET, is a must. In this paper, we propose an efficient and secure payment protocol that aims at the restricted connectivity scenario in VANET. The protocol applies self-certified key agreement to establish symmetric keys, which can be integrated with the payment phase. Thus both the computational cost and communication cost can be reduced. Moreover, the protocol can achieve fair exchange, user anonymity and payment security. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "85af24a4bf3d5b8de83e2e144ce2b664",
"text": "One of the most prominent neuropsychologic theories of attention-deficit/hyperactivity disorder (ADHD) suggests that its symptoms arise from a primary deficit in executive functions (EF), defined as neurocognitive processes that maintain an appropriate problem-solving set to attain a later goal. To examine the validity of the EF theory, we conducted a meta-analysis of 83 studies that administered EF measures to groups with ADHD (total N = 3734) and without ADHD (N = 2969). Groups with ADHD exhibited significant impairment on all EF tasks. Effect sizes for all measures fell in the medium range (.46-.69), but the strongest and most consistent effects were obtained on measures of response inhibition, vigilance, working memory, and planning. Weaknesses in EF were significant in both clinic-referred and community samples and were not explained by group differences in intelligence, academic achievement, or symptoms of other disorders. ADHD is associated with significant weaknesses in several key EF domains. However, moderate effect sizes and lack of universality of EF deficits among individuals with ADHD suggest that EF weaknesses are neither necessary nor sufficient to cause all cases of ADHD. Difficulties with EF appear to be one important component of the complex neuropsychology of ADHD.",
"title": ""
},
{
"docid": "7a6bdbec098ac255214bf733c18eaeb0",
"text": "Aggregated search is that task of blending results from different search services, or verticals, into the core web results. Aggregated search coherence is the extent to which results from different sources focus on similar senses of an ambiguous or underspecified query. Prior research studied the effect of aggregated search coherence on search behavior and found that the query-senses in the vertical results can affect user interaction with the web results. In this work, we develop and evaluate algorithms for vertical results selection—deciding which results from a particular vertical to display. Results from a large-scale user study suggest that algorithms that improve the level of coherence between the vertical and web results influence users to make more productive decisions with respect to the web results—to engage with the web results when at least one of them is relevant and, to a lesser extent, to avoid engaging with the web results otherwise.",
"title": ""
},
{
"docid": "40f56ea7cb0894dde09729c98a038c93",
"text": "Software Defined Networking (SDN) provides an environment to test and use custom ideas in networking. One of the areas that needs this flexibility is routing in networking. In this study we design and implement a custom intra-domain routing approach in an SDN environment. In SDN routing can be implemented as part of a controller or as an application on top of a controller. In this study we implemented a module in Floodlight controller v1.1 with OpenFlow 1.3 support. This module interacts with another custom module that monitors active bandwidth use of inter-switch links inside a network. Using the information provided by monitoring module, routing module uses available capacity in inter-switch links to determine widest path between any given two points. We tested and evaluated the developed system to show its efficiency. Newly developed module can be used in traffic engineering with additional control options.",
"title": ""
},
{
"docid": "e2b95200b6da4d2ff8c69b55f023638e",
"text": "Phishing is the third cyber-security threat globally and the first cyber-security threat in China. There were 61.69 million phishing victims in China alone from June 2011 to June 2012, with the total annual monetary loss more than 4.64 billion US dollars. These phishing attacks were highly concentrated in targeting at a few major Websites. Many phishing Webpages had a very short life span. In this paper, we assume the Websites to protect against phishing attacks are known, and study the effectiveness of machine learning based phishing detection using only lexical and domain features, which are available even when the phishing Webpages are inaccessible. We propose several novel highly effective features, and use the real phishing attack data against Taobao and Tencent, two main phishing targets in China, in studying the effectiveness of each feature, and each group of features. We then select an optimal set of features in our phishing detector, which has achieved a detection rate better than 98%, with a false positive rate of 0.64% or less. The detector is still effective when the distribution of phishing URLs changes.",
"title": ""
},
{
"docid": "47be1d3a8540649073c6a9ed64d52f6c",
"text": "In this paper, we deal with the task of determining the audio segment that best represents a given music recording (similar to audio thumbnailing). Typically, such a segment has many (approximate) repetitions covering large parts of the music recording. As main contribution, we introduce a novel fitness measure that assigns to each segment a fitness value that expresses how much and how well the segment “explains” the repetitive structure of the recording. In co mbination with enhanced feature representations, we show that our fitness measure can cope even with strong variations in tempo, instrumentation, and modulations that may occur within and across related segments. We demonstrate the practicability of our approach by means of several challenging examples including field recordings of folk music and recordings of classical music.",
"title": ""
}
] |
scidocsrr
|
ae2ac9a324e8b141aca6fd1181cffb95
|
Stereo Matching Using Belief Propagation
|
[
{
"docid": "adb46bea91457f027c6040cd1d706a76",
"text": "Several new algorithms for visual correspondence based on graph cuts [6, 13, 16] have recently been developed. While these methods give very strong results in practice, they do not handle occlusions properly. Specifically, they treat the two input images asymmetrically, and they do not ensure that a pixel corresponds to at most one pixel in the other image. In this paper, we present two new methods which properly address occlusions, while preserving the advantages of graph cut algorithms. We give experimental results for stereo as well as motion, which demonstrate that our methods perform well both at detecting occlusions and computing disparities.",
"title": ""
},
{
"docid": "13f1b9cf251b3b37de00cb68b17652c0",
"text": "This is an updated and expanded version of TR2000-26, but it is still in draft form. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe’s approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe’s approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions of these Kikuchi approximations. These new message passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2001 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
}
] |
[
{
"docid": "6b8942948b3f23971254ba7b90dac6f0",
"text": "An important preprocess in computer-aided orthodontics is to segment teeth from the dental models accurately, which should involve manual interactions as few as possible. But fully automatic partition of all teeth is not a trivial task, since teeth occur in different shapes and their arrangements vary substantially from one individual to another. The difficulty is exacerbated when severe teeth malocclusion and crowding problems occur, which is a common occurrence in clinical cases. Most published methods in this area either are inaccurate or require lots of manual interactions. Motivated by the state-of-the-art general mesh segmentation methods that adopted the theory of harmonic field to detect partition boundaries, this paper proposes a novel, dental-targeted segmentation framework for dental meshes. With a specially designed weighting scheme and a strategy of a priori knowledge to guide the assignment of harmonic constraints, this method can identify teeth partition boundaries effectively. Extensive experiments and quantitative analysis demonstrate that the proposed method is able to partition high-quality teeth automatically with robustness and efficiency.",
"title": ""
},
{
"docid": "44895e24ca91db113a8c01d84bd5b83c",
"text": "In living organisms, nitrogen arise primarily as ammonia (NH3) and ammonium (NH4(+)), which is a main component of the nucleic acid pool and proteins. Although nitrogen is essential for growth and maintenance in animals, but when the nitrogenous compounds exceeds the normal range which can quickly lead to toxicity and death. Urea cycle is the common pathway for the disposal of excess nitrogen through urea biosynthesis. Hyperammonemia is a consistent finding in many neurological disorders including congenital urea cycle disorders, reye's syndrome and acute liver failure leads to deleterious effects. Hyperammonemia and liver failure results in glutamatergic neurotransmission which contributes to the alteration in the function of the glutamate-nitric oxide-cGMP pathway, modulates the important cerebral process. Even though ammonia is essential for normal functioning of the central nervous system (CNS), in particular high concentrations of ammonia exposure to the brain leads to the alterations of glutamate transport by the transporters. Several glutamate transporters have been recognized in the central nervous system and each has a unique physiological property and distribution. The loss of glutamate transporter activity in brain during acute liver failure and hyperammonemia is allied with increased extracellular brain glutamate concentrations which may be conscientious for the cerebral edema and ultimately cell death.",
"title": ""
},
{
"docid": "8f0da69d48c3d5098018b2e5046b6e8e",
"text": "Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.",
"title": ""
},
{
"docid": "745bbe075634f40e6c66716a6b877619",
"text": "Collaborative filtering, a widely-used user-centric recommendation technique, predicts an item’s rating by aggregating its ratings from similar users. User similarity is usually calculated by cosine similarity or Pearson correlation coefficient. However, both of them consider only the direction of rating vectors, and suffer from a range of drawbacks. To solve these issues, we propose a novel Bayesian similarity measure based on the Dirichlet distribution, taking into consideration both the direction and length of rating vectors. Further, our principled method reduces correlation due to chance. Experimental results on six real-world data sets show that our method achieves superior accuracy.",
"title": ""
},
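For reference, the two classical similarity measures that the passage above argues against are easy to state in a few lines; both depend only on the direction of the co-rated vectors. The proposed Dirichlet-based Bayesian similarity itself is not reproduced here.

```python
# Cosine and Pearson user similarity on the co-rated items of two users:
# the baselines the passage criticizes for ignoring vector length.
import numpy as np

def cosine_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pearson_sim(u, v):
    uc, vc = u - u.mean(), v - v.mean()
    return float(uc @ vc / (np.linalg.norm(uc) * np.linalg.norm(vc)))

# Example ratings on three commonly rated items (made-up values).
a = np.array([5.0, 4.0, 1.0])
b = np.array([5.0, 5.0, 2.0])
print("cosine :", round(cosine_sim(a, b), 3))
print("pearson:", round(pearson_sim(a, b), 3))
```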
{
"docid": "d338c807948016bf978aa7a03841f292",
"text": "Emotions accompany everyone in the daily life, playing a key role in non-verbal communication, and they are essential to the understanding of human behavior. Emotion recognition could be done from the text, speech, facial expression or gesture. In this paper, we concentrate on recognition of “inner” emotions from electroencephalogram (EEG) signals as humans could control their facial expressions or vocal intonation. The need and importance of the automatic emotion recognition from EEG signals has grown with increasing role of brain computer interface applications and development of new forms of human-centric and human-driven interaction with digital media. We propose fractal dimension based algorithm of quantification of basic emotions and describe its implementation as a feedback in 3D virtual environments. The user emotions are recognized and visualized in real time on his/her avatar adding one more so-called “emotion dimension” to human computer interfaces.",
"title": ""
},
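The passage above quantifies emotions through a fractal-dimension feature of the EEG signal but does not spell out the estimator; the sketch below uses Higuchi's method, a common choice for short EEG segments, purely as an illustration (the signal and k_max are arbitrary).

```python
# Sketch of Higuchi's fractal dimension for a 1-D signal. The passage does not
# specify which fractal-dimension algorithm is used; Higuchi's method is one
# common choice for EEG features.
import numpy as np

def higuchi_fd(x, k_max=8):
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_k, log_L = [], []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)                   # sub-sampled curve
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)      # curve-length normalisation
            Lk.append(length * norm / k)
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lk)))
    slope, _ = np.polyfit(log_k, log_L, 1)             # FD = slope of the log-log fit
    return slope

rng = np.random.default_rng(0)
print("white noise FD (close to 2):", round(higuchi_fd(rng.normal(size=1000)), 2))
print("sine wave  FD (close to 1):", round(higuchi_fd(np.sin(np.linspace(0, 20, 1000))), 2))
```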
{
"docid": "c90ec1e1ed379464da193f72b8b35b38",
"text": "Many quality metrics take as input gamma corrected images and assume that pixel code values are scaled perceptually uniform. Although this is a valid assumption for darker displays operating in the luminance range typical for CRT displays (from 0.1 to 80 cd/m), it is no longer true for much brighter LCD displays (typically up to 500 cd/m), plasma displays (small regions up to 1000 cd/m) and HDR displays (up to 3000 cd/m). The distortions that are barely visible on dark displays become clearly noticeable when shown on much brighter displays. To estimate quality of images shown on bright displays, we propose a straightforward extension to the popular quality metrics, such as PSNR and SSIM, that makes them capable of handling all luminance levels visible to the human eye without altering their results for typical CRT display luminance levels. Such extended quality metrics can be used to estimate quality of high dynamic range (HDR) images as well as account for display brightness.",
"title": ""
},
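The extension described above amounts to mapping absolute luminance through a perceptual transfer function before applying a standard metric such as PSNR. The sketch below shows plain PSNR and the same computation after a simple log-luminance encoding; the log mapping is only a stand-in for the perceptually-uniform encoding the passage refers to, and the luminance values are synthetic.

```python
# PSNR on linear luminance versus PSNR after a simple perceptual (log) encoding.
# The log encoding is illustrative only; the paper's perceptually-uniform
# transfer function is not reproduced here.
import numpy as np

def psnr(ref, test, peak):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
lum_ref = rng.uniform(0.1, 3000.0, size=(64, 64))          # cd/m^2, HDR-display range
lum_test = lum_ref + rng.normal(scale=5.0, size=lum_ref.shape)

print("linear PSNR :", round(psnr(lum_ref, lum_test, peak=3000.0), 2))

enc = lambda L: np.log10(np.clip(L, 1e-4, None))            # crude perceptual encoding
p_ref, p_test = enc(lum_ref), enc(lum_test)
print("encoded PSNR:", round(psnr(p_ref, p_test, peak=p_ref.max() - p_ref.min()), 2))
```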
{
"docid": "8792d60d2fd12a407091e7dc4e31ebaf",
"text": "Availability of handy RGB-D sensors has brought about a surge of gesture recognition research and applications. Among various approaches, one shot learning approach is advantageous because it requires minimum amount of data. Here, we provide a thorough review about one-shot learning gesture recognition from RGB-D data and propose a novel spatiotemporal feature extracted from RGB-D data, namely mixed features around sparse keypoints (MFSK). In the review, we analyze the challenges that we are facing, and point out some future research directions which may enlighten researchers in this field. The proposed MFSK feature is robust and invariant to scale, rotation and partial occlusions. To alleviate the insufficiency of one shot training samples, we augment the training samples by artificially synthesizing versions of various temporal scales, which is beneficial for coping with gestures performed at varying speed. We evaluate the proposed method on the Chalearn gesture dataset (CGD). The results show that our approach outperforms all currently published approaches on the challenging data of CGD, such as translated, scaled and occluded subsets. When applied to the RGB-D datasets that are not one-shot (e.g., the Cornell Activity Dataset-60 and MSR Daily Activity 3D dataset), the proposed feature also produces very promising results under leave-one-out cross validation or one-shot learning.",
"title": ""
},
{
"docid": "222eb13d4130746067e054425caf98f1",
"text": "Symmetric Positive Definite (SPD) matrix learning methods have become popular in many image and video processing tasks, thanks to their ability to learn appropriate statistical representations while respecting Riemannian geometry of underlying SPD manifolds. In this paper we build a Riemannian network architecture to open up a new direction of SPD matrix non-linear learning in a deep model. In particular, we devise bilinear mapping layers to transform input SPD matrices to more desirable SPD matrices, exploit eigenvalue rectification layers to apply a non-linear activation function to the new SPD matrices, and design an eigenvalue logarithm layer to perform Riemannian computing on the resulting SPD matrices for regular output layers. For training the proposed deep network, we exploit a new backpropagation with a variant of stochastic gradient descent on Stiefel manifolds to update the structured connection weights and the involved SPD matrix data. We show through experiments that the proposed SPD matrix network can be simply trained and outperform existing SPD matrix learning and state-of-the-art methods in three typical visual classification tasks.",
"title": ""
},
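A forward pass through the three layer types named above (bilinear mapping, eigenvalue rectification, eigenvalue logarithm) can be sketched with plain NumPy as below. The weights are random, the sizes are arbitrary, and the Stiefel-manifold training procedure is not reproduced.

```python
# Forward-pass sketch of the SPD layers described in the passage:
# BiMap (bilinear mapping), ReEig (eigenvalue rectification), LogEig (matrix log).
import numpy as np

def bimap(X, W):
    """Map an SPD matrix X to a smaller SPD matrix W^T X W."""
    return W.T @ X @ W

def reeig(X, eps=1e-4):
    """Clamp eigenvalues from below: a non-linearity that keeps X SPD."""
    w, U = np.linalg.eigh(X)
    return U @ np.diag(np.maximum(w, eps)) @ U.T

def logeig(X):
    """Matrix logarithm: flattens the SPD manifold so Euclidean layers can follow."""
    w, U = np.linalg.eigh(X)
    return U @ np.diag(np.log(w)) @ U.T

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))
X = A @ A.T + 1e-3 * np.eye(10)                   # random 10x10 SPD input
W = np.linalg.qr(rng.normal(size=(10, 5)))[0]     # random semi-orthogonal 10x5 weight

Y = logeig(reeig(bimap(X, W)))                    # one BiMap -> ReEig -> LogEig stack
features = Y[np.triu_indices(5)]                  # vectorise for a regular output layer
print(features.shape)                             # (15,)
```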
{
"docid": "5b021c0223ee25535508eb1d6f63ff55",
"text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via external I 2C master device such as universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications",
"title": ""
},
{
"docid": "37de72b0e9064d09fb6901b40d695c0a",
"text": "BACKGROUND AND OBJECTIVES\nVery little is known about the use of probiotics among pregnant women with gestational diabetes mellitus (GDM) especially its effect on oxidative stress and inflammatory indices. The aim of present study was to measure the effect of a probiotic supplement capsule on inflammation and oxidative stress biomarkers in women with newly-diagnosed GDM.\n\n\nMETHODS AND STUDY DESIGN\n64 pregnant women with GDM were enrolled in a double-blind placebo controlled randomized clinical trial in the spring and summer of 2014. They were randomly assigned to receive either a probiotic containing four bacterial strains of Lactobacillus acidophilus LA-5, Bifidobacterium BB-12, Streptococcus Thermophilus STY-31 and Lactobacillus delbrueckii bulgaricus LBY-27 or placebo capsule for 8 consecutive weeks. Blood samples were taken pre- and post-treatment and serum indices of inflammation and oxidative stress were assayed. The measured mean response scales were then analyzed using mixed effects model. All statistical analysis was performed using Statistical Package for Social Sciences (SPSS) software (version 16).\n\n\nRESULTS\nSerum high-sensitivity C-reactive protein and tumor necrosis factor-α levels improved in the probiotic group to a statistically significant level over the placebo group. Serum interleukin-6 levels decreased in both groups after intervention; however, neither within group nor between group differences interleukin-6 serum levels was statistically significant. Malondialdehyde, glutathione reductase and erythrocyte glutathione peroxidase levels improved significantly with the use of probiotics when compared with the placebo.\n\n\nCONCLUSIONS\nThe probiotic supplement containing L.acidophilus LA- 5, Bifidobacterium BB- 12, S.thermophilus STY-31 and L.delbrueckii bulgaricus LBY-2 appears to improve several inflammation and oxidative stress biomarkers in women with GDM.",
"title": ""
},
{
"docid": "7d42d3d197a4d62e1b4c0f3c08be14a9",
"text": "Links between issue reports and their corresponding commits in version control systems are often missing. However, these links are important for measuring the quality of a software system, predicting defects, and many other tasks. Several approaches have been designed to solve this problem by automatically linking bug reports to source code commits via comparison of textual information in commit messages and bug reports. Yet, the effectiveness of these techniques is oftentimes suboptimal when commit messages are empty or contain minimum information; this particular problem makes the process of recovering traceability links between commits and bug reports particularly challenging. In this work, we aim at improving the effectiveness of existing bug linking techniques by utilizing rich contextual information. We rely on a recently proposed approach, namely ChangeScribe, which generates commit messages containing rich contextual information by using code summarization techniques. Our approach then extracts features from these automatically generated commit messages and bug reports, and inputs them into a classification technique that creates a discriminative model used to predict if a link exists between a commit message and a bug report. We compared our approach, coined as RCLinker (Rich Context Linker), to MLink, which is an existing state-of-the-art bug linking approach. Our experiment results on bug reports from six software projects show that RCLinker outperforms MLink in terms of F-measure by 138.66%.",
"title": ""
},
{
"docid": "821e99985ec279659167996e620ce23a",
"text": "Information cascades are ubiquitous in both physical society and online social media, taking on large variations in structures, dynamics and semantics. Although the dynamics and semantics of information cascades have been studied, the structural patterns and their correlations with dynamics and semantics are largely unknown. Here we explore a large-scale dataset including 432 million information cascades with explicit records of spreading traces, spreading behaviors, information content as well as user profiles. We find that the structural complexity of information cascades is far beyond the previous conjectures. We first propose a ten-dimensional metric to quantify the structural characteristics of information cascades, reflecting cascade size, silhouette, direction and activity aspects. We find that bimodal law governs majority of the metrics, information flows in cascades have four directions, and the selfloop number and average activity of cascades follows power law. We then analyze the high-order structural patterns of information cascades. Finally, we evaluate to what extent the structural features of information cascades can explain its dynamic patterns and semantics, and finally uncover some notable implications of structural patterns in information cascades. Our discoveries also provide a foundation for the microscopic mechanisms for information spreading, potentially leading to implications for cascade prediction and outlier detection.",
"title": ""
},
{
"docid": "6a4844bf755830d14fb24caff1aa8442",
"text": "We present a stochastic first-order optimization algorithm, named BCSC, that adds a cyclic constraint to stochastic block-coordinate descent. It uses different subsets of the data to update different subsets of the parameters, thus limiting the detrimental effect of outliers in the training set. Empirical tests in benchmark datasets show that our algorithm outperforms state-of-the-art optimization methods in both accuracy as well as convergence speed. The improvements are consistent across different architectures, and can be combined with other training techniques and regularization methods.",
"title": ""
},
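The core idea above, different data subsets updating different parameter blocks and cycled so that every block eventually sees every subset, can be illustrated on a toy least-squares problem as below. This is only a sketch of the pairing scheme, not the paper's exact algorithm, schedule or step sizes.

```python
# Toy stochastic block-coordinate descent with a cyclic data/parameter pairing.
import numpy as np

rng = np.random.default_rng(0)
n, d, n_blocks = 1200, 12, 3
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=0.1, size=n)

data_parts  = np.array_split(rng.permutation(n), n_blocks)    # data subsets
param_parts = np.array_split(np.arange(d), n_blocks)          # parameter blocks
w, lr = np.zeros(d), 0.05

for epoch in range(200):
    for b in range(n_blocks):
        rows = data_parts[(b + epoch) % n_blocks]             # cyclic pairing of subsets/blocks
        cols = param_parts[b]
        grad = X[rows][:, cols].T @ (X[rows] @ w - y[rows]) / len(rows)
        w[cols] -= lr * grad                                  # update one parameter block only
print("parameter error:", round(float(np.linalg.norm(w - w_true)), 4))
```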
{
"docid": "c138221b620ef70466a7f27cdf235671",
"text": "A proper initialization of the weights in a neural network is critical to its convergence. Current insights into weight initialization come primarily from linear activation functions. In this paper, I develop a theory for weight initializations with non-linear activations. First, I derive a general weight initialization strategy for any neural network using activation functions differentiable at 0. Next, I derive the weight initialization strategy for the Rectified Linear Unit (RELU), and provide theoretical insights into why the Xavier initialization is a poor choice with RELU activations. My analysis provides a clear demonstration of the role of non-linearities in determining the proper weight initializations.",
"title": ""
},
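The two initialisation rules discussed above are easy to compare empirically: the sketch below propagates activations through a deep ReLU stack under Xavier and He scaling (layer widths and depth are arbitrary) and prints the resulting output variance.

```python
# Xavier vs. He initialisation, and how activation variance behaves in a deep ReLU net.
import numpy as np

def xavier(fan_in, fan_out, rng):
    return rng.normal(scale=np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))

def he(fan_in, fan_out, rng):
    return rng.normal(scale=np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def forward_variance(init, depth=30, width=256):
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1000, width))
    for _ in range(depth):
        x = np.maximum(x @ init(width, width, rng), 0.0)   # ReLU layer
    return x.var()

print("Xavier, 30-layer ReLU net, output variance:", forward_variance(xavier))  # collapses by orders of magnitude
print("He,     30-layer ReLU net, output variance:", forward_variance(he))      # stays at a stable order of magnitude
```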
{
"docid": "0d83203e0002c0342c2378d3e32502d4",
"text": "In a crisis ridden business environment, customers have become very averse to surprises. Business windows have become smaller; there is a heightened need for shorter development cycles and higher visibility. All this is translating into more and more customers specifically asking for agile. Service organizations such as Wipro Technologies need to adopt lean and agile methodologies to support the transition. As agile coaches, the biggest challenge we face is in transitioning the mindset of the team from that of a waterfall model to an agile thought pattern. Our experience in converting a waterfall team to agile is shared in this report.",
"title": ""
},
{
"docid": "72f307e6209f685442b7b194a28797e1",
"text": "It has been argued that creativity evolved, at least in part, through sexual selection to attract mates. Recent research lends support to this view and has also demonstrated a link between certain dimensions of schizotypy, creativity, and short-term mating. The current study delves deeper into these relationships by focusing on engagement in creative activity and employing an expansive set of personality and mental health measures (Five Factor Model, schizotypy, anxiety, and depression). A general tendency to engage in everyday forms of creative activity was related to number of sexual partners within the past year in males only. Furthermore, schizotypy, anxiety, and Neuroticism were all indirectly related to short-term mating success, again for males only. The study provides additional support for predictions made by sexual selection theory that men have a higher drive for creative display, and that creativity is linked with higher short-term mating success. The study also provides support for the contention that certain forms of mental illness may still exist in the gene pool because particular personality traits associated with milder forms of mental illness (i.e., Neuroticism & schizotypy) are also associated directly with creativity and indirectly with short-term mating success.",
"title": ""
},
{
"docid": "8055b2c65d5774000fe4fa81ff83efb7",
"text": "Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes, such as edge detection and shape from shading. Using physical models for charged-coupled device ( C C D ) video cameras and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and scene variation. This analysis forms the basis of algorithms for camera characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras. Index T e m s C C D cameras, computer vision, camera calibration, noise estimation, reflectance variation, sensor modeling.",
"title": ""
},
{
"docid": "ad5c10745cd12c0fa47e52eac05907e0",
"text": "Many currently deployed Reinforcement Learning agents work in an environment shared with humans, be them co-workers, users or clients. It is desirable that these agents adjust to people’s preferences, learn faster thanks to their help, and act safely around them. We argue that most current approaches that learn from human feedback are unsafe: rewarding or punishing the agent a-posteriori cannot immediately prevent it from wrong-doing. In this paper, we extend Policy Gradient to make it robust to external directives, that would otherwise break the fundamentally on-policy nature of Policy Gradient. Our technique, Directed Policy Gradient (DPG), allows a teacher or backup policy to override the agent before it acts undesirably, while allowing the agent to leverage human advice or directives to learn faster. Our experiments demonstrate that DPG makes the agent learn much faster than reward-based approaches, while requiring an order of magnitude less advice. .",
"title": ""
},
{
"docid": "0aa566453fa3bd4bedec5ac3249d410a",
"text": "The approach of using passage-level evidence for document retrieval has shown mixed results when it is applied to a variety of test beds with different characteristics. One main reason of the inconsistent performance is that there exists no unified framework to model the evidence of individual passages within a document. This paper proposes two probabilistic models to formally model the evidence of a set of top ranked passages in a document. The first probabilistic model follows the retrieval criterion that a document is relevant if any passage in the document is relevant, and models each passage independently. The second probabilistic model goes a step further and incorporates the similarity correlations among the passages. Both models are trained in a discriminative manner. Furthermore, we present a combination approach to combine the ranked lists of document retrieval and passage-based retrieval.\n An extensive set of experiments have been conducted on four different TREC test beds to show the effectiveness of the proposed discriminative probabilistic models for passage-based retrieval. The proposed algorithms are compared with a state-of-the-art document retrieval algorithm and a language model approach for passage-based retrieval. Furthermore, our combined approach has been shown to provide better results than both document retrieval and passage-based retrieval approaches.",
"title": ""
},
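The first model's retrieval criterion above, a document is relevant if any passage in it is relevant with passages treated independently, corresponds to a noisy-OR combination of per-passage probabilities. The sketch below shows that combination and a simple linear interpolation with a document-level score to illustrate the combined ranking; all probabilities and the interpolation weight are made-up examples.

```python
# Noisy-OR combination of passage-level relevance probabilities, plus a simple
# interpolation with a document-level score to illustrate the combined ranking.
import numpy as np

def doc_relevance_from_passages(passage_probs):
    """P(doc relevant) = 1 - prod_i (1 - p_i), passages treated independently."""
    p = np.asarray(passage_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

top_passage_probs = [0.35, 0.20, 0.10]        # top-3 passages of one document (example values)
p_passage_based = doc_relevance_from_passages(top_passage_probs)

p_document_based = 0.35                       # score from a whole-document retrieval model
alpha = 0.6                                   # interpolation weight (would be tuned on held-out data)
combined = alpha * p_document_based + (1 - alpha) * p_passage_based
print(round(p_passage_based, 3), round(combined, 3))
```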
{
"docid": "d779fa7abffd94dd83121b29aa367cfe",
"text": "The nexus of autonomous vehicle (AV) and electric vehicle (EV) technologies has important potential impacts on our transportation systems, particularly in the case of shared-use vehicles. There are natural synergies between shared AV fleets and EV technology, since fleets of AVs resolve the practical limitations of today’s non-autonomous EVs, including traveler range anxiety, access to charging infrastructure, and charging time management. Fleet-managed AVs relieve such concerns, managing range and charging activities based on real-time trip demand and established charging-station locations, as demonstrated in this paper. This work explores the management of a fleet of shared autonomous (battery-only) electric vehicles (SAEVs) in a regional discrete-time, agent-based model. The simulation examines the operation of SAEVs under various vehicle range and charging infrastructure scenarios in a gridded city modeled roughly after the densities of Austin, Texas. Results indicate that fleet size is sensitive to battery recharge time and vehicle range, with each 80-mile range SAEV replacing 3.7 privately owned vehicles and each 200-mile range SAEV replacing 5.5 privately owned vehicles, under Level II (240-volt AC) charging. With Level III 480-volt DC fast-charging infrastructure in place, these ratios rise to 5.4 vehicles for the 80-mile range SAEV and 6.8 vehicles for the 200-mile range SAEV. SAEVs can serve 96 to 98% of trip requests with average wait times between 7 and 10 minutes per trip. However, due to the need to travel while “empty” for charging and passenger pick-up, SAEV fleets are predicted to generate an additional 7.1 to 14.0% of travel miles. Financial analysis suggests that the combined cost of charging infrastructure, vehicle capital and maintenance, electricity, insurance, and registration for a fleet of SAEVs ranges from $0.42 to $0.49 per occupied mile traveled, which implies SAEV service can be offered at the equivalent per-mile cost of private vehicle ownership for low mileage households, and thus be competitive with current manually-driven carsharing services and significantly cheaper than on-demand driver-operated transportation services. The availability of inductive (wireless) charging infrastructure allows SAEVs to be price-competitive with nonelectric SAVs (when gasoline prices are between $2.18 and $3.50 per gallon). However, charging SAEVs at attendant-operated stations with traditional corded chargers incurs an additional $0.08 per mile compared to wireless charging, and as such would only be price-competitive with SAVs when gasoline reaches $4.35 to $5.70 per gallon.",
"title": ""
}
] |
scidocsrr
|
9aa79a1132792a0975f086b571c056f6
|
Secure Cyber-Physical Systems: Current trends, tools and open research problems
|
[
{
"docid": "f538089a72bcc5f6f9f944676b9f199d",
"text": "This paper focuses on the challenges of modeling cyber-physical systems (CPSs) that arise from the intrinsic heterogeneity, concurrency, and sensitivity to timing of such systems. It uses a portion of an aircraft vehicle management system (VMS), specifically the fuel management subsystem, to illustrate the challenges, and then discusses technologies that at least partially address the challenges. Specific technologies described include hybrid system modeling and simulation, concurrent and heterogeneous models of computation, the use of domain-specific ontologies to enhance modularity, and the joint modeling of functionality and implementation architectures.",
"title": ""
}
] |
[
{
"docid": "9a515a1266a868ca5680fc5676ca4b37",
"text": "To assure that an autonomous car is driving safely on public roads, its object detection module should not only work correctly, but show its prediction confidence as well. Previous object detectors driven by deep learning do not explicitly model uncertainties in the neural network. We tackle with this problem by presenting practical methods to capture uncertainties in a 3D vehicle detector for Lidar point clouds. The proposed probabilistic detector represents reliable epistemic uncertainty and aleatoric uncertainty in classification and localization tasks. Experimental results show that the epistemic uncertainty is related to the detection accuracy, whereas the aleatoric uncertainty is influenced by vehicle distance and occlusion. The results also show that we can improve the detection performance by 1%–5% by modeling the aleatoric uncertainty.",
"title": ""
},
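Given stochastic forward passes (for example via Monte-Carlo dropout), the two uncertainty types named above are commonly separated as in the sketch below: epistemic uncertainty as the spread of mean predictions across passes, and aleatoric uncertainty as the average of the variances the model predicts for its own output. The "network" here is a stand-in random function, not the paper's Lidar detector.

```python
# Separating epistemic and aleatoric uncertainty from stochastic forward passes.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward_pass(x):
    """Stand-in for one MC-dropout pass: returns (predicted value, predicted variance)."""
    mean = 2.0 * x + rng.normal(scale=0.3)     # pass-to-pass spread mimics epistemic uncertainty
    var = 0.5 + 0.1 * abs(x)                   # variance head mimics aleatoric (data) uncertainty
    return mean, var

def predictive_uncertainty(x, n_passes=50):
    means, variances = zip(*(stochastic_forward_pass(x) for _ in range(n_passes)))
    means, variances = np.array(means), np.array(variances)
    epistemic = means.var()                    # model uncertainty: spread across passes
    aleatoric = variances.mean()               # observation/data uncertainty: mean predicted variance
    return means.mean(), epistemic, aleatoric

pred, epi, ale = predictive_uncertainty(x=3.0)
print(f"prediction={pred:.2f}  epistemic={epi:.3f}  aleatoric={ale:.3f}")
```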
{
"docid": "d6dba7a89bc123bc9bb616df6faee2bc",
"text": "Continuing interest in digital games indicated that it would be useful to update [Authors’, 2012] systematic literature review of empirical evidence about the positive impacts an d outcomes of games. Since a large number of papers was identified in th e period from 2009 to 2014, the current review focused on 143 papers that provided higher quality evidence about the positive outcomes of games. [Authors’] multidimensional analysis of games and t heir outcomes provided a useful framework for organising the varied research in this area. The mo st frequently occurring outcome reported for games for learning was knowledge acquisition, while entertain me t games addressed a broader range of affective, behaviour change, perceptual and cognitive and phys iological outcomes. Games for learning were found across varied topics with STEM subjects and health the most popular. Future research on digital games would benefit from a systematic programme of experi m ntal work, examining in detail which game features are most effective in promoting engagement and supporting learning.",
"title": ""
},
{
"docid": "28115d61e528af469220651bcd7d592a",
"text": "There has been an increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships, i.e. they are not able to process temporal input sequences of arbitrary length. Fuzzy nite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike in the case of deterministic nite-state automata (DFAs), FFAs are not in one particular state, rather each state is occupied to some degree deened by a membership function. Based on previous work on encoding DFAs in discrete-time, second-order recurrent neural networks, we propose an algorithm that constructs an augmented recurrent neural network that encodes a FFA and recognizes a given fuzzy regular language with arbitrary accuracy. We then empirically verify the encoding methodology by measuring string recognition performance of recurrent neural networks which encode large randomly generated FFAs. In particular, we examine how the networks' performance varies as a function of synaptic weight strength.",
"title": ""
},
{
"docid": "4cb49a91b5a30909c99138a8e36badcd",
"text": "The main goal of Business Process Management (BPM) is conceptualising, operationalizing and controlling workflows in organisations based on process models. In this paper we discuss several limitations of the workflow paradigm and suggest that process models can also play an important role in analysing how organisations think about themselves through storytelling. We contrast the workflow paradigm with storytelling through a comparative analysis. We also report a case study where storytelling has been used to elicit and document the practices of an IT maintenance team. This research contributes towards the development of better process modelling languages and tools.",
"title": ""
},
{
"docid": "1afb10bf586f26417b66b942f8c26586",
"text": "A combination of surface energy-guided blade coating and inkjet printing is used to fabricate an all-printed high performance, high yield, and low variability organic thin film transistor (OTFT) array on a plastic substrate. Functional inks and printing processes were optimized to yield self-assembled homogenous thin films in every layer of the OTFT stack. Specifically, we investigated the effect of capillary number, semiconductor ink composition (small molecule-polymer ratio), and additive high boiling point solvent concentrations on film fidelity, pattern design, device performance and yields.",
"title": ""
},
{
"docid": "c5576f31a30011c005280419204a2070",
"text": "During the past few years, the development of wireless sensor network technologies has spurred the design of novel protocol paradigms capable of meeting the needs of a wide broad of applications while taking into account the inherent constraints of the underlying network technologies, e.g. limited energy and computational capacities. Geographic routing is one of such paradigms whose principles of operation are based on the geographic location of the network nodes. Even though the large number of works already reported in the literature, there are still many open issues towards the design of robust and scalable geographic routing algorithms. In this study, after an analysis of the most relevant solutions reported in the literature, we introduce Azimuth-Range ROuting for largescale Wireless (ARROW) sensor networks. ARROW goes a step further on the design of geographic routing protocols by defining a simple and robust routing protocol whose operation principles completely free the network nodes of the burden of keeping routing records. Under ARROW, nodes carry out all routing decisions exclusively using the information imbedded in the data packets while avoiding the risk of routing loops, a major challenge when designing routing protocols for large-scale networks. Moreover, ARROW is supplemented with a simple yet effective forwarder resolution protocol, also introduced in this study, allowing the fast and loop-free selection of the forwarding node in a hop-to-hop basis. Both protocols, ARROW and the proposed forwarder resolution protocol, are validated by extensive computer simulations. Our results show that both protocols exhibit excellent scalability properties by limiting the overhead.",
"title": ""
},
{
"docid": "222ab6804b3fe15fe23b27bc7f5ede5f",
"text": "Single-image super-resolution (SR) reconstruction via sparse representation has recently attracted broad interest. It is known that a low-resolution (LR) image is susceptible to noise or blur due to the degradation of the observed image, which would lead to a poor SR performance. In this paper, we propose a novel robust edge-preserving smoothing SR (REPS-SR) method in the framework of sparse representation. An EPS regularization term is designed based on gradient-domain-guided filtering to preserve image edges and reduce noise in the reconstructed image. Furthermore, a smoothing-aware factor adaptively determined by the estimation of the noise level of LR images without manual interference is presented to obtain an optimal balance between the data fidelity term and the proposed EPS regularization term. An iterative shrinkage algorithm is used to obtain the SR image results for LR images. The proposed adaptive smoothing-aware scheme makes our method robust to different levels of noise. Experimental results indicate that the proposed method can preserve image edges and reduce noise and outperforms the current state-of-the-art methods for noisy images.",
"title": ""
},
{
"docid": "1c2acb749d89626cd17fd58fd7f510e3",
"text": "The lack of control of the content published is broadly regarded as a positive aspect of the Web, assuring freedom of speech to its users. On the other hand, there is also a lack of control of the content accessed by users when browsing Web pages. In some situations this lack of control may be undesired. For instance, parents may not desire their children to have access to offensive content available on the Web. In particular, accessing Web pages with nude images is among the most common problem of this sort. One way to tackle this problem is by using automated offensive image detection algorithms which can filter undesired images. Recent approaches on nude image detection use a combination of features based on color, texture, shape and other low level features in order to describe the image content. These features are then used by a classifier which is able to detect offensive images accordingly. In this paper we propose SNIF - simple nude image finder - which uses a color based feature only, extracted by an effective and efficient algorithm for image description, the border/interior pixel classification (BIC), combined with a machine learning technique, namely support vector machines (SVM). SNIF uses a simpler feature model when compared to previously proposed methods, which makes it a fast image classifier. The experiments carried out depict that the proposed method, despite its simplicity, is capable to identify up to 98% of nude images from the test set. This indicates that SNIF is as effective as previously proposed methods for detecting nude images.",
"title": ""
},
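A BIC-style colour feature of the kind used above can be sketched in a few lines: quantise colours, label each pixel as border or interior depending on whether its 4-neighbours share its colour code, and concatenate the two histograms before feeding an SVM. The quantisation grid, image data and labels below are placeholders, so this is only an illustration of the feature's shape, not SNIF itself.

```python
# Border/interior pixel classification (BIC) colour feature + SVM, as a sketch.
import numpy as np
from sklearn.svm import SVC

def bic_feature(img, levels=4):
    """img: HxWx3 uint8. Returns concatenated, normalised border/interior colour histograms."""
    q = img.astype(int) * levels // 256                          # crude per-channel quantisation
    codes = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    interior = np.ones_like(codes, dtype=bool)                   # interior = same code as all 4-neighbours
    interior[1:, :]  &= codes[1:, :]  == codes[:-1, :]
    interior[:-1, :] &= codes[:-1, :] == codes[1:, :]
    interior[:, 1:]  &= codes[:, 1:]  == codes[:, :-1]
    interior[:, :-1] &= codes[:, :-1] == codes[:, 1:]
    n_bins = levels ** 3
    h_int = np.bincount(codes[interior],  minlength=n_bins)
    h_bor = np.bincount(codes[~interior], minlength=n_bins)
    feat = np.concatenate([h_int, h_bor]).astype(float)
    return feat / feat.sum()

rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(40, 32, 32, 3), dtype=np.uint8)  # placeholder "images"
labels = rng.integers(0, 2, size=40)                                # placeholder offensive/ok labels
X = np.array([bic_feature(im) for im in imgs])
clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```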
{
"docid": "341b6ae3f5cf08b89fb573522ceeaba1",
"text": "Neural parsers have benefited from automatically labeled data via dependencycontext word embeddings. We investigate training character embeddings on a word-based context in a similar way, showing that the simple method significantly improves state-of-the-art neural word segmentation models, beating tritraining baselines for leveraging autosegmented data.",
"title": ""
},
{
"docid": "b73a9a7770a2bbd5edcc991d7b848371",
"text": "This paper overviews various switched flux permanent magnet machines and their design and performance features, with particular emphasis on machine topologies with reduced magnet usage or without using magnet, as well as with variable flux capability. In addition, this paper also describes their relationships with doubly-salient permanent magnet machines and flux reversal permanent magnet machines.",
"title": ""
},
{
"docid": "9c798ee49b9243de0a851d686b4e197e",
"text": "Industry 4.0 combines the strengths of traditional industries with cutting edge internet technologies. It embraces a set of technologies enabling smart products integrated into intertwined digital and physical processes. Therefore, many companies face the challenge to assess the diversity of developments and concepts summarized under the term industry 4.0. The paper presents the result of a study on the potential of industry 4.0. The use of current technologies like Big Data or cloud-computing are drivers for the individual potential of use of Industry 4.0. Furthermore mass customization as well as the use of idle data and production time improvement are strong influence factors to the potential of Industry 4.0. On the other hand business process complexity has a negative influence.",
"title": ""
},
{
"docid": "7885cdfd33df957b6803d3d94c8ac212",
"text": "Ground penetrating radar (GPR) is non-destructive device used for monitoring underground structures. COST Action TU1208 promoted its use outside the civil engineering applications and provided a lot of free resources to the GPR community. In this paper, we built a low-cost GPR prototype for educational purposes according to the given resources and continued the work with focus on GPR antenna design. According to the required radiation characteristics, some antenna types are thoroughly discussed, fabricated and measured.",
"title": ""
},
{
"docid": "27c9ca50ac517c285bcb0f8b19f64ed3",
"text": "Traditional database management systems are best equipped to run onetime queries over finite stored data sets. However, many modern applications such as network monitoring, financial analysis, manufacturing, and sensor networks require long-running, or continuous, queries over continuous unbounded streams of data. In the STREAM project at Stanford, we are investigating data management and query processing for this class of applications. As part of the project we are building a general-purpose prototype Data Stream Management System (DSMS), also called STREAM, that supports a large class of declarative continuous queries over continuous streams and traditional stored data sets. The STREAM prototype targets environments where streams may be rapid, stream characteristics and query loads may vary over time, and system resources may be limited. Building a general-purpose DSMS poses many interesting challenges:",
"title": ""
},
{
"docid": "ca3a0e7bca08fc943d432179766f4ccf",
"text": "BACKGROUND\nMost errors in a clinical chemistry laboratory are due to preanalytical errors. Preanalytical variability of biospecimens can have significant effects on downstream analyses, and controlling such variables is therefore fundamental for the future use of biospecimens in personalized medicine for diagnostic or prognostic purposes.\n\n\nCONTENT\nThe focus of this review is to examine the preanalytical variables that affect human biospecimen integrity in biobanking, with a special focus on blood, saliva, and urine. Cost efficiency is discussed in relation to these issues.\n\n\nSUMMARY\nThe quality of a study will depend on the integrity of the biospecimens. Preanalytical preparations should be planned with consideration of the effect on downstream analyses. Currently such preanalytical variables are not routinely documented in the biospecimen research literature. Future studies using biobanked biospecimens should describe in detail the preanalytical handling of biospecimens and analyze and interpret the results with regard to the effects of these variables.",
"title": ""
},
{
"docid": "02d9153092f3cc2632810d4b46c272e8",
"text": "ion in concept learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11: 45–58. Chen, S., & Chaiken, S. 1999. The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology: 73–96. New York: Guilford Press. Chi, M. T. H., Glaser, R., & Farr, M. J. 1998. The nature of expertise. Hillsdale, NJ: Lawrence Erlbaum Associates. Claxton, G. 1998. Knowing without knowing why. Psychologist, 11(5): 217–220. Collins, H. M. 1982. The replication of experiments in physics. In B. Barnes & D. Edge (Eds.), Science in context: 94–116. Cambridge, MA: MIT Press. Cyert, R. M., & March, J. G. 1963. A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall. Dawes, R. M., Faust, D., & Meehl, P. E. 1989. Clinical versus actuarial judgment. Science, 31: 1668–1674. De Dreu, C. K. W. 2003. Time pressure and closing of the mind in negotiation. Organizational Behavior and Human Decision Processes, 91: 280–295. Denes-Raj, V., & Epstein, S. 1994. Conflict between intuitive and rational processing: When people behave against their better judgment. Journal of Personality and Social Psychology, 66: 819–829. Donaldson, T. 2003. Editor’s comments: Taking ethics seriously—A mission now more possible. Academy of Management Review, 28: 363–366. Dreyfus, H. L., & Dreyfus, S. E. 1986. Mind over machine: The power of human intuition and expertise in the era of the computer. New York: Free Press. Edland, A., & Svenson, O. 1993. Judgment and decision making under time pressure. In O. Svenson & A. J. Maule (Eds.), Time pressure and stress in human judgment and decision making: 27–40. New York: Plenum Press. Eisenhardt, K. 1989. Making fast strategic decisions in highvelocity environments. Academy of Management Jour-",
"title": ""
},
{
"docid": "32059170608532d89b2d20724f282f4a",
"text": "Functional near infrared spectroscopy (fNIRS) is a rapidly developing neuroimaging modality for exploring cortical brain behaviour. Despite recent advances, the quality of fNIRS experimentation may be compromised in several ways: firstly, by altering the optical properties of the tissues encountered in the path of light; secondly, through adulteration of the recovered biological signals (noise) and finally, by modulating neural activity. Currently, there is no systematic way to guide the researcher regarding these factors when planning fNIRS studies. Conclusions extracted from fNIRS data will only be robust if appropriate methodology and analysis in accordance with the research question under investigation are employed. In order to address these issues and facilitate the quality control process, a taxonomy of factors influencing fNIRS data have been established. For each factor, a detailed description is provided and previous solutions are reviewed. Finally, a series of evidence-based recommendations are made with the aim of improving consistency and quality of fNIRS research.",
"title": ""
},
{
"docid": "3ca04efcb370e8a30ab5ad42d1d2d047",
"text": "The exceptionally adhesive foot of the gecko remains clean in dirty environments by shedding contaminants with each step. Synthetic gecko-inspired adhesives have achieved similar attachment strengths to the gecko on smooth surfaces, but the process of contact self-cleaning has yet to be effectively demonstrated. Here, we present the first gecko-inspired adhesive that has matched both the attachment strength and the contact self-cleaning performance of the gecko's foot on a smooth surface. Contact self-cleaning experiments were performed with three different sizes of mushroom-shaped elastomer microfibres and five different sizes of spherical silica contaminants. Using a load-drag-unload dry contact cleaning process similar to the loads acting on the gecko foot during locomotion, our fully contaminated synthetic gecko adhesives could recover lost adhesion at a rate comparable to that of the gecko. We observed that the relative size of contaminants to the characteristic size of the microfibres in the synthetic adhesive strongly determined how and to what degree the adhesive recovered from contamination. Our approximate model and experimental results show that the dominant mechanism of contact self-cleaning is particle rolling during the drag process. Embedding of particles between adjacent fibres was observed for particles with diameter smaller than the fibre tips, and further studied as a temporary cleaning mechanism. By incorporating contact self-cleaning capabilities, real-world applications of synthetic gecko adhesives, such as reusable tapes, clothing closures and medical adhesives, would become feasible.",
"title": ""
},
{
"docid": "aa9d428d21a5cebee2990dede931953a",
"text": "A grand challenge of the 21 century cosmology is to accurately estimate the cosmological parameters of our Universe. A major approach in estimating the cosmological parameters is to use the large scale matter distribution of the Universe. Galaxy surveys provide the means to map out cosmic large-scale structure in three dimensions. Information about galaxy locations is typically summarized in a “single” function of scale, such as the galaxy correlation function or powerspectrum. We show that it is possible to estimate these cosmological parameters directly from the distribution of matter. This paper presents the application of deep 3D convolutional networks to volumetric representation of dark-matter simulations as well as the results obtained using a recently proposed distribution regression framework, showing that machine learning techniques are comparable to, and can sometimes outperform, maximum-likelihood point estimates using “cosmological models”. This opens the way to estimating the parameters of our Universe with higher accuracy.",
"title": ""
},
{
"docid": "30aaf753d3ec72f07d4838de391524ca",
"text": "The present study was aimed to determine the effect on liver, associated oxidative stress, trace element and vitamin alteration in dogs with sarcoptic mange. A total of 24 dogs with clinically established diagnosis of sarcoptic mange, divided into two groups, severely infested group (n=9) and mild/moderately infested group (n=15), according to the extent of skin lesions caused by sarcoptic mange and 6 dogs as control group were included in the present study. In comparison to healthy control hemoglobin, PCV, and TEC were significantly (P<0.05) decreased in dogs with sarcoptic mange however, significant increase in TLC along with neutrophilia and lymphopenia was observed only in severely infested dogs. The albumin, glucose and cholesterol were significantly (P<0.05) decreased and globulin, ALT, AST and bilirubin were significantly (P<0.05) increased in severely infested dogs when compared to other two groups. Malondialdehyde (MDA) levels were significantly (P<0.01) higher in dogs with sarcoptic mange, with levels highest in severely infested groups. Activity of superoxide dismutase (SOD) (P<0.05) and catalase were significantly (P<0.01) lower in sarcoptic infested dogs when compared with the healthy control group. Zinc and copper levels in dogs with sarcoptic mange were significantly (P<0.05) lower when compared with healthy control group with the levels lowest in severely infested group. Vitamin A and vitamin C levels were significantly (P<0.05) lower in sarcoptic infested dogs when compared to healthy control. From the present study, it was concluded that sarcoptic mange in dogs affects the liver and the infestation is associated with oxidant/anti-oxidant imbalance, significant alteration in trace elements and vitamins.",
"title": ""
},
{
"docid": "b866fc215dbae6538e998b249563e78d",
"text": "The term `heavy metal' is, in this context, imprecise. It should probably be reserved for those elements with an atomic mass of 200 or greater [e.g., mercury (200), thallium (204), lead (207), bismuth (209) and the thorium series]. In practice, the term has come to embrace any metal, exposure to which is clinically undesirable and which constitutes a potential hazard. Our intention in this review is to provide an overview of some general concepts of metal toxicology and to discuss in detail metals of particular importance, namely, cadmium, lead, mercury, thallium, bismuth, arsenic, antimony and tin. Poisoning from individual metals is rare in the UK, even when there is a known risk of exposure. Table 1 shows that during 1991±92 only 1 ́1% of male lead workers in the UK and 5 ́5% of female workers exceeded the legal limits for blood lead concentration. Collectively, however, poisoning with metals forms an important aspect of toxicology because of their widespread use and availability. Furthermore, hitherto unrecognized hazards and accidents continue to be described. The investigation of metal poisoning forms a distinct specialist area, since most metals are usually measured using atomic absorption techniques. Analyses require considerable expertise and meticulous attention to detail to ensure valid results. Different analytical performance standards may be required of assays used for environmental and occupational monitoring, or for solely toxicological purposes. Because of the high capital cost of good quality instruments, the relatively small numbers of tests required and the variety of metals, it is more cost-effective if such testing is carried out in regional, national or other centres having the necessary experience. Nevertheless, patients are frequently cared for locally, and clinical biochemists play a crucial role in maintaining a high index of suspicion and liaising with clinical colleagues to ensure the provision of correct samples for analysis and timely advice.",
"title": ""
}
] |
scidocsrr
|
2dddbe3dae06552bb4475ff6cd026805
|
The psycholinguistic and affective structure of words conveying pain
|
[
{
"docid": "1c7131fcb031497b2c1487f9b25d8d4e",
"text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.",
"title": ""
}
] |
[
{
"docid": "6db5c9bb64e5715c6ef88bc820167559",
"text": "In this paper, we propose a simple yet effective method of identifying traffic conditions on surface streets given location traces collected from on-road vehicles---this requires only GPS location data, plus infrequent low-bandwidth cellular updates. Unlike other systems, which simply display vehicle speeds on the road, our system characterizes unique traffic patterns on each road segment and identifies unusual traffic states on a segment-by-segment basis. We developed and evaluated the system by applying it to two sets of location traces. Evaluation results show that higher than 90% accuracy in characterization can be achieved after ten or more traversals are collected on a given road segment. We also show that traffic patterns on a road are very consistent over time, provided that the underlying road conditions do not change. This allows us to use a longer history in identifying traffic conditions with higher accuracy.",
"title": ""
},
{
"docid": "2796b923379ef29768e4b20019a2cbe1",
"text": "L-BFGS-B is a limited-memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables. It is intended for problems in which information on the Hessian matrix is difficult to obtain, or for large dense problems. L-BFGS-B can also be used for unconstrained problems and in this case performs similarly to its predessor, algorithm L-BFGS (Harwell routine VA15). The algorithm is implemented in Fortran 77.",
"title": ""
},
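L-BFGS-B is exposed directly in SciPy, so a typical bound-constrained call looks like the sketch below; the Rosenbrock objective is just a convenient test problem.

```python
# Bound-constrained minimisation with L-BFGS-B: only function values and
# gradients are supplied (no Hessian), which is the setting the routine targets.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

res = minimize(
    rosenbrock,
    x0=np.array([-1.2, 1.0]),
    jac=rosenbrock_grad,
    method="L-BFGS-B",
    bounds=[(-1.5, 1.5), (-0.5, 1.5)],     # simple box bounds on each variable
    options={"maxcor": 10},                # number of limited-memory corrections kept
)
print(res.x, res.fun)
```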
{
"docid": "d0f73e97a231098891faf93d1d8406b8",
"text": "With the explosion of video content on the Internet, there is a need for research on methods for video analysis which take human cognition into account. One such cognitive measure is memorability, or the ability to recall visual content after watching it. Prior research has looked into image memorability and shown that it is intrinsic to visual content, but the problem of modeling video memorability has not been addressed sufficiently. In this work, we develop a prediction model for video memorability, including complexities of video content in it. Detailed feature analysis reveals that the proposed method correlates well with existing findings on memorability. We also describe a novel experiment of predicting video sub-shot memorability and show that our approach improves over current memorability methods in this task. Experiments on standard datasets demonstrate that the proposed metric can achieve results on par or better than the state-of-the art methods for video summarization.",
"title": ""
},
{
"docid": "aa54e87bd6a2967ceda284975bdedfeb",
"text": "In this paper we present a method for segmentation of fingernail patterns and differentiate them as distinct nail parts; fingernail plate with lunula and distal free edge of nail plate. In the research work, focus is on fixed area of the fingernail plate plus lunula, as it remains unchanged in structure, where as the distal nail edge extends and changes in structure over a period of time. In order to segment fingernail parts, we have devised an algorithm that automatically separates unchanging region of fingernail plate from free distal edge of nail structure. The fingernail plate that includes lunula within (may or may not be prominently present in fingernails), is used as biometric in our advance study. Theory suggests, every fingernail within finger formation comprises of the brightest regions amongst the captured finger data set (in our system). Proposed method is of two stages. In first stage, color image is converted to gray scale and contrast enhancement is applied using adaptive histogram equalization. In second stage, we perform segmentation using watershed method that exercises maxima and minima properties of marker controlled watershed principles. In order to verify the results of the algorithm, we have constructed a confusion matrix where evaluation has been done with ground truth. Additionally, the segmented object's from both the methods is considered for quality metrics assessment. Similarity accuracy between the ground truth and watershed result is 84.0% correctness for fingernail plate. Initial fingernail segmentation results are promising, supporting its use for biometric application.",
"title": ""
},
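The second-stage segmentation above relies on marker-controlled watershed; the generic operation looks like the sketch below using scikit-image, with two hand-placed markers on a synthetic image of touching blobs. The nail-specific steps (grayscale conversion, adaptive histogram equalisation, marker selection from maxima/minima) are omitted.

```python
# Generic marker-controlled watershed with scikit-image on a synthetic image.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic "image": two overlapping bright discs on a dark background.
yy, xx = np.mgrid[0:100, 0:100]
image = ((xx - 35) ** 2 + (yy - 50) ** 2 < 400) | ((xx - 65) ** 2 + (yy - 50) ** 2 < 400)

distance = ndi.distance_transform_edt(image)   # high values at blob centres

# Markers placed by hand for determinism; in practice they would come from
# regional maxima of the distance map (or the intensity image).
markers = np.zeros(image.shape, dtype=int)
markers[50, 35] = 1                            # seed inside the left blob
markers[50, 65] = 2                            # seed inside the right blob

# Flood from the markers over the inverted distance map, constrained to the mask.
labels = watershed(-distance, markers, mask=image)
print("segments found:", labels.max())         # the two touching blobs are separated
```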
{
"docid": "35b025c508a568e0d73d9113c7cdb5e2",
"text": "Satire is an attractive subject in deception detection research: it is a type of deception that intentionally incorporates cues revealing its own deceptiveness. Whereas other types of fabrications aim to instill a false sense of truth in the reader, a successful satirical hoax must eventually be exposed as a jest. This paper provides a conceptual overview of satire and humor, elaborating and illustrating the unique features of satirical news, which mimics the format and style of journalistic reporting. Satirical news stories were carefully matched and examined in contrast with their legitimate news counterparts in 12 contemporary news topics in 4 domains (civics, science, business, and “soft” news). Building on previous work in satire detection, we proposed an SVMbased algorithm, enriched with 5 predictive features (Absurdity, Humor, Grammar, Negative Affect, and Punctuation) and tested their combinations on 360 news articles. Our best predicting feature combination (Absurdity, Grammar and Punctuation) detects satirical news with a 90% precision and 84% recall (F-score=87%). Our work in algorithmically identifying satirical news pieces can aid in minimizing the potential deceptive impact of satire.",
"title": ""
},
{
"docid": "32b4b275dc355dff2e3e168fe6355772",
"text": "The management of coupon promotions is an important issue for marketing managers since it still is the major promotion medium. However, the distribution of coupons does not go without problems. Although manufacturers and retailers are investing heavily in the attempt to convince as many customers as possible, overall coupon redemption rate is low. This study improves the strategy of retailers and manufacturers concerning their target selection since both parties often end up in a battle for customers. Two separate models are built: one model makes predictions concerning redemption behavior of coupons that are distributed by the retailer while another model does the same for coupons handed out by manufacturers. By means of the feature-selection technique ‘Relief-F’ the dimensionality of the models is reduced, since it searches for the variables that are relevant for predicting the outcome. In this way, redundant variables are not used in the model-building process. The model is evaluated on real-life data provided by a retailer in FMCG. The contributions of this study for retailers as well as manufacturers are threefold. First, the possibility to classify customers concerning their coupon usage is shown. In addition, it is demonstrated that retailers and manufacturers can stay clear of each other in their marketing campaigns. Finally, the feature-selection technique ‘Relief-F’ proves to facilitate and optimize the performance of the models.",
"title": ""
},
{
"docid": "2717779fa409f10f3a509e398dc24233",
"text": "Hallyu refers to the phenomenon of Korean popular culture which came into vogue in Southeast Asia and mainland China in late 1990s. Especially, hallyu is very popular among young people enchanted with Korean music (K-pop), dramas (K-drama), movies, fashion, food, and beauty in China, Taiwan, Hong Kong, and Vietnam, etc. This cultural phenomenon has been closely connected with multi-layered transnational movements of people, information and capital flows in East Asia. Since the 15 century, East and West have been the two subjects of cultural phenomena. Such East–West dichotomy was articulated by Westerners in the scholarly tradition known as “Orientalism.”During the Age of Exploration (1400–1600), West didn’t only take control of East by military force, but also created a new concept of East/Orient, as Edward Said analyzed it expertly in his masterpiece Orientalism in 1978. Throughout the history of imperialism for nearly 4-5 centuries, west was a cognitive subject, but East was an object being recognized by the former. Accordingly, “civilization and modernization” became the exclusive properties of which West had copyright (?!), whereas East was a “sub-subject” to borrow or even plagiarize from Western standards. In this sense, (making) modern history in East Asia was a compulsive imitation of Western civilization or a catch-up with the West in other wards. Thus, it is interesting to note that East Asian people, after gaining economic power through “compressed modernization,” are eager to be main agents of their cultural activities in and through the enjoyment of East Asian popular culture in a postmodern era. In this transition from Westerncentered into East Asian-based popular culture, they are no longer sub-subjects of modernity.",
"title": ""
},
{
"docid": "a4da82c9c98203810cdfcf5c1a2c7f0a",
"text": "Software producing organizations are frequently judged by others for being ‘open’ or ‘closed’, where a more ‘closed’ organization is seen as being detrimental to its software ecosystem. These qualifications can harm the reputation of these companies, for they are deemed to promote vendor lock-in, use closed data formats, and are seen as using intellectual property laws to harm others. These judgements, however, are frequently based on speculation and the need arises for a method to establish openness of an organization, such that decisions are no longer based on prejudices, but on an objective assessment of the practices of a software producing organization. In this article the open software enterprise model is presented that roduct software vendors",
"title": ""
},
{
"docid": "4fa73e04ccc8620c12aaea666ea366a6",
"text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike",
"title": ""
},
{
"docid": "54bd0eb63c80eec832be468d8bb4b129",
"text": "The impulse response and frequency response of indoor visible light communication diffuse channels are characterized experimentally in this paper. Both the short pulse technique and frequency sweep technique are adopted for experimental investigation. The iterative site-based modeling is also carried out to simulate the channel impulse response, and good conformity is observed between the experimental and simulation results. Finally, the impact of receiver pointing angles and field of view on the channel 3dB bandwidth is investigated.",
"title": ""
},
{
"docid": "777e3818dfeb25358dedd6f740e20411",
"text": "Chronic obstructive pulmonary, pneumonia, asthma, tuberculosis, lung cancer diseases are the most important chest diseases. These chest diseases are important health problems in the world. In this study, a comparative chest diseases diagnosis was realized by using multilayer, probabilistic, learning vector quantization, and generalized regression neural networks. The chest diseases dataset were prepared by using patient’s epicrisis reports from a chest diseases hospital’s database. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "af3af0a4102ea0fb555cad52e4cafa50",
"text": "The identification of the exact positions of the first and second heart sounds within a phonocardiogram (PCG), or heart sound segmentation, is an essential step in the automatic analysis of heart sound recordings, allowing for the classification of pathological events. While threshold-based segmentation methods have shown modest success, probabilistic models, such as hidden Markov models, have recently been shown to surpass the capabilities of previous methods. Segmentation performance is further improved when apriori information about the expected duration of the states is incorporated into the model, such as in a hidden semiMarkov model (HSMM). This paper addresses the problem of the accurate segmentation of the first and second heart sound within noisy real-world PCG recordings using an HSMM, extended with the use of logistic regression for emission probability estimation. In addition, we implement a modified Viterbi algorithm for decoding the most likely sequence of states, and evaluated this method on a large dataset of 10 172 s of PCG recorded from 112 patients (including 12 181 first and 11 627 second heart sounds). The proposed method achieved an average F1 score of 95.63 ± 0.85%, while the current state of the art achieved 86.28 ± 1.55% when evaluated on unseen test recordings. The greater discrimination between states afforded using logistic regression as opposed to the previous Gaussian distribution-based emission probability estimation as well as the use of an extended Viterbi algorithm allows this method to significantly outperform the current state-of-the-art method based on a two-sided paired t-test.",
"title": ""
},
{
"docid": "99e71a45374284cbcb28b3dbe69e175d",
"text": "Spatial event detection is an important and challenging problem. Unlike traditional event detection that focuses on the timing of global urgent event, the task of spatial event detection is to detect the spatial regions (e.g. clusters of neighboring cities) where urgent events occur. In this paper, we focus on the problem of spatial event detection using textual information in social media. We observe that, when a spatial event occurs, the topics relevant to the event are often discussed more coherently in cities near the event location than those far away. In order to capture this pattern, we propose a new method called Graph Topic Scan Statistic (Graph-TSS) that corresponds to a generalized log-likelihood ratio test based on topic modeling. We first demonstrate that the detection of spatial event regions under Graph-TSS is NP-hard due to a reduction from classical node-weighted prize-collecting Steiner tree problem (NW-PCST). We then design an efficient algorithm that approximately maximizes the graph topic scan statistic over spatial regions of arbitrary form. As a case study, we consider three applications using Twitter data, including Argentina civil unrest event detection, Chile earthquake detection, and United States influenza disease outbreak detection. Empirical evidence demonstrates that the proposed Graph-TSS performs superior over state-of-the-art methods on both running time and accuracy.",
"title": ""
},
{
"docid": "f4e67e19f5938f475a2757282082b695",
"text": "Classrooms are complex social systems, and student-teacher relationships and interactions are also complex, multicomponent systems. We posit that the nature and quality of relationship interactions between teachers and students are fundamental to understanding student engagement, can be assessed through standardized observation methods, and can be changed by providing teachers knowledge about developmental processes relevant for classroom interactions and personalized feedback/support about their interactive behaviors and cues. When these supports are provided to teachers’ interactions, student engagement increases. In this chapter, we focus on the theoretical and empirical links between interactions and engagement and present an approach to intervention designed to increase the quality of such interactions and, in turn, increase student engagement and, ultimately, learning and development. Recognizing general principles of development in complex systems, a theory of the classroom as a setting for development, and a theory of change specifi c to this social setting are the ultimate goals of this work. Engagement, in this context, is both an outcome in its own R. C. Pianta , Ph.D. (*) Curry School of Education , University of Virginia , PO Box 400260 , Charlottesville , VA 22904-4260 , USA e-mail: rcp4p@virginia.edu B. K. Hamre , Ph.D. Center for Advanced Study of Teaching and Learning , University of Virginia , Charlottesville , VA , USA e-mail: bkh3d@virginia.edu J. P. Allen , Ph.D. Department of Psychology , University of Virginia , Charlottesville , VA , USA e-mail: allen@virginia.edu Teacher-Student Relationships and Engagement: Conceptualizing, Measuring, and Improving the Capacity of Classroom Interactions* Robert C. Pianta , Bridget K. Hamre , and Joseph P. Allen *Preparation of this chapter was supported in part by the Wm. T. Grant Foundation, the Foundation for Child Development, and the Institute of Education Sciences. 366 R.C. Pianta et al.",
"title": ""
},
{
"docid": "90edad4c0a8209065638778e2cf28d1f",
"text": "Christopher J.C. Burges Advanced Technologies, Bell Laboratories, Lucent Technologies Holmdel, New Jersey burges@lucent.com We show that the recently proposed variant of the Support Vector machine (SVM) algorithm, known as v-SVM, can be interpreted as a maximal separation between subsets of the convex hulls of the data, which we call soft convex hulls. The soft convex hulls are controlled by choice of the parameter v. If the intersection of the convex hulls is empty, the hyperplane is positioned halfway between them such that the distance between convex hulls, measured along the normal, is maximized; and if it is not, the hyperplane's normal is similarly determined by the soft convex hulls, but its position (perpendicular distance from the origin) is adjusted to minimize the error sum. The proposed geometric interpretation of v-SVM also leads to necessary and sufficient conditions for the existence of a choice of v for which the v-SVM solution is nontrivial.",
"title": ""
},
{
"docid": "1ed3efac601cb0c85790079c4dc0280b",
"text": "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"title": ""
},
{
"docid": "c2c832689f0bfa9dec0b32203ae355d4",
"text": "Steve Jobs, one of the greatest visionaries of our time was quoted in 1996 saying “a lot of times, people don’t know what they want until you show it to them”[38] indicating he advocated products to be developed based on human intuition rather than research. With the advancements of mobile devices, social networks and the Internet of Things (IoT) enormous amounts of complex data, both structured & unstructured are being captured in hope to allow organizations to make better business decisions as data is now vital for an organizations success. These enormous amounts of data are referred to as Big Data, which enables a competitive advantage over rivals when processed and analyzed appropriately. However Big Data Analytics has a few concerns including Management of Datalifecycle, Privacy & Security, and Data Representation. This paper reviews the fundamental concept of Big Data, the Data Storage domain, the MapReduce programming paradigm used in processing these large datasets, and focuses on two case studies showing the effectiveness of Big Data Analytics and presents how it could be of greater good in the future if handled appropriately. Keywords—Big Data; Big Data Analytics; Big Data Inconsistencies; Data Storage; MapReduce; Knowledge-Space",
"title": ""
},
{
"docid": "215b02216c68ba6eb2d040e8e01c1ac1",
"text": "Numerous companies are expecting their knowledge management (KM) to be performed effectively in order to leverage and transform the knowledge into competitive advantages. However, here raises a critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. The KM strategy selection is a kind of multiple criteria decision-making (MCDM) problem, which requires considering a large number of complex factors as multiple evaluation criteria. A robust MCDM method should consider the interactions among criteria. The analytic network process (ANP) is a relatively new MCDM method which can deal with all kinds of interactions systematically. Moreover, the Decision Making Trial and Evaluation Laboratory (DEMATEL) not only can convert the relations between cause and effect of criteria into a visual structural model, but also can be used as a way to handle the inner dependences within a set of criteria. Hence, this paper proposes an effective solution based on a combined ANP and DEMATEL approach to help companies that need to evaluate and select KM strategies. Additionally, an empirical study is presented to illustrate the application of the proposed method. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f391d24622a123cf35c56693ac3b0044",
"text": "Web users are confronted with the daunting challenges of creating, remembering, and using more and more strong passwords than ever before in order to protect their valuable assets on different websites. Password manager is one of the most popular approaches designed to address these challenges by saving users' passwords and later automatically filling the login forms on behalf of users. Fortunately, all the five most popular Web browsers have provided password managers as a useful built-in feature. Unfortunately, the designs of all those Browser-based Password Managers (BPMs) have severe security vulnerabilities. In this paper, we uncover the vulnerabilities of existing BPMs and analyze how they can be exploited by attackers to crack users' saved passwords. Moreover, we propose a novel Cloud-based Storage-Free BPM (CSF-BPM) design to achieve a high level of security with the desired confidentiality, integrity, and availability properties. We have implemented a CSF-BPM system into Firefox and evaluated its correctness and performance. We believe CSF-BPM is a rational design that can also be integrated into other popular Web browsers.",
"title": ""
},
{
"docid": "7aaf5d401c410b3b82277dadcd3246b4",
"text": "We present a novel approach to segment text lines from handwritten document images. In contrast to existing approaches which mainly use hand-designed features or heuristic rules to estimate the location of text lines, we train a fully convolutional network (FCN) to predict text line structure in document images. By using the FCN, a line map which is a rough estimation of text line is obtained. From this line map, text strings that pass through characters in each text line are constructed. To deal with touching text lines, line adjacency graph (LAG) is used to separate the touching characters into different text strings. The testing result on ICDAR2013 Handwritten Segmentation Contest dataset shows high performance together with the robustness of our system with different types of languages and multi-skewed text lines.",
"title": ""
}
] |
scidocsrr
|
992ff9c265ed70c94eb4c4456e1d7407
|
On Learning and Learned Representation with Dynamic Routing in Capsule Networks
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
}
] |
[
{
"docid": "efa566cdd4f5fa3cb12a775126377cb5",
"text": "This paper deals with the electromagnetic emissions of integrated circuits. In particular, four measurement techniques to evaluate integrated circuit conducted emissions are described in detail and they are employed for the measurement of the power supply conducted emission delivered by a simple integrated circuit composed of six synchronous switching drivers. Experimental results obtained by employing such measurement methods are presented and the influence of each test setup on the measured quantities is discussed.",
"title": ""
},
{
"docid": "b9148f25ba143660cf38035425443ee9",
"text": "Humans tend to swing their arms when they walk, a curious behaviour since the arms play no obvious role in bipedal gait. It might be costly to use muscles to swing the arms, and it is unclear whether potential benefits elsewhere in the body would justify such costs. To examine these costs and benefits, we developed a passive dynamic walking model with free-swinging arms. Even with no torques driving the arms or legs, the model produced walking gaits with arm swinging similar to humans. Passive gaits with arm phasing opposite to normal were also found, but these induced a much greater reaction moment from the ground, which could require muscular effort in humans. We therefore hypothesized that the reduction of this moment may explain the physiological benefit of arm swinging. Experimental measurements of humans (n = 10) showed that normal arm swinging required minimal shoulder torque, while volitionally holding the arms still required 12 per cent more metabolic energy. Among measures of gait mechanics, vertical ground reaction moment was most affected by arm swinging and increased by 63 per cent without it. Walking with opposite-to-normal arm phasing required minimal shoulder effort but magnified the ground reaction moment, causing metabolic rate to increase by 26 per cent. Passive dynamics appear to make arm swinging easy, while indirect benefits from reduced vertical moments make it worthwhile overall.",
"title": ""
},
{
"docid": "179e9c0672086798e74fa1197a0fda21",
"text": "Narcissism is typically viewed as a dimensional construct in social psychology. Direct evidence supporting this position is lacking, however, and recent research suggests that clinical measures of narcissism exhibit categorical properties. It is therefore unclear whether social psychological researchers should conceptualize narcissism as a category or continuum. To help remedy this, the latent structure of narcissism—measured by the Narcissistic Personality Inventory (NPI)—was examined using 3895 participants and three taxometric procedures. Results suggest that NPI scores are distributed dimensionally. There is no apparent shift from ‘‘normal’’ to ‘‘narcissist’’ observed across the NPI continuum. This is consistent with the prevailing view of narcissism in social psychology and suggests that narcissism is structured similar to other aspects of general personality. This also suggests a difference in how narcissism is structured in clinical versus social psychology (134 words). 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7a02abe86e40d5157cb6c2a8d593cb24",
"text": "Preoperative planning is of paramount importance in obtaining reproducible results in modern hip arthroplasty. Planning helps the surgeon visualize the operation after careful review of the clinical and radiographic findings. A standardized radiograph with a known magnification should be used for templating. The cup template should be placed relative to the ilioischial line, the teardrop, and the superolateral acetabular margin, so that the removal of the supportive subchondral bone is minimal and the center of rotation of the hip is restored. When acetabular abnormalities are encountered, additional measures are necessary to optimize cup coverage and minimize the risk of malposition. Templating the femoral side for cemented and cementless implants should aim to optimize limb length and femoral offset, thereby improving the biomechanics of the hip joint. Meticulous preoperative planning allows the surgeon to perform the procedure expediently and precisely, anticipate potential intraoperative complications, and achieve reproducible results.",
"title": ""
},
{
"docid": "67bc6aa954413241827114fd20686355",
"text": "Hardware-based Trusted Execution Environments (TEEs) are widely deployed in mobile devices. Yet their use has been limited primarily to applications developed by the device vendors. Recent standardization of TEE interfaces by GlobalPlatform (GP) promises to partially address this problem by enabling GP-compliant trusted applications to run on TEEs from different vendors. Nevertheless ordinary developers wishing to develop trusted applications face significant challenges. Access to hardware TEE interfaces are difficult to obtain without support from vendors. Tools and software needed to develop and debug trusted applications may be expensive or non-existent. In this paper, we describe Open-TEE, a virtual, hardware-independent TEE implemented in software. Open-TEE conforms to GP specifications. It allows developers to develop and debug trusted applications with the same tools they use for developing software in general. Once a trusted application is fully debugged, it can be compiled for any actual hardware TEE. Through performance measurements and a user study we demonstrate that Open-TEE is efficient and easy to use. We have made Open-TEE freely available as open source.",
"title": ""
},
{
"docid": "a1444497114eadc1c90c1cfb85852641",
"text": "For several years it has been argued that neural synchronisation is crucial for cognition. The idea that synchronised temporal patterns between different neural groups carries information above and beyond the isolated activity of these groups has inspired a shift in focus in the field of functional neuroimaging. Specifically, investigation into the activation elicited within certain regions by some stimulus or task has, in part, given way to analysis of patterns of co-activation or functional connectivity between distal regions. Recently, the functional connectivity community has been looking beyond the assumptions of stationarity that earlier work was based on, and has introduced methods to incorporate temporal dynamics into the analysis of connectivity. In particular, non-invasive electrophysiological data (magnetoencephalography/electroencephalography (MEG/EEG)), which provides direct measurement of whole-brain activity and rich temporal information, offers an exceptional window into such (potentially fast) brain dynamics. In this review, we discuss challenges, solutions, and a collection of analysis tools that have been developed in recent years to facilitate the investigation of dynamic functional connectivity using these imaging modalities. Further, we discuss the applications of these approaches in the study of cognition and neuropsychiatric disorders. Finally, we review some existing developments that, by using realistic computational models, pursue a deeper understanding of the underlying causes of non-stationary connectivity.",
"title": ""
},
{
"docid": "f45d6d572325e20bad1eaffe5330f077",
"text": "Ongoing brain activity can be recorded as electroen-cephalograph (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subject self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. Support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an averaged classification accuracy of 82.29% ± 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of the emotional states in practical or clinical applications.",
"title": ""
},
{
"docid": "adf530152b474c2b6147da07acf3d70d",
"text": "One of the basic services in a distributed network is clock synchronization as it enables a palette of services, such as synchronized measurements, coordinated actions, or time-based access to a shared communication medium. The IEEE 1588 standard defines the Precision Time Protocol (PTP) and provides a framework to synchronize multiple slave clocks to a master by means of synchronization event messages. While PTP is capable for synchronization accuracies below 1 ns, practical synchronization approaches are hitting a new barrier due to asymmetric line delays. Although compensation fields for the asymmetry are present in PTP version 2008, no specific measures to estimate the asymmetry are defined in the standard. In this paper we present a solution to estimate the line asymmetry in 100Base-TX networks based on line swapping. This approach seems appealing for existing installations as most Ethernet PHYs have the line swapping feature built in, and it only delays the network startup, but does not alter the operation of the network. We show by an FPGA-based prototype system that our approach is able to improve the synchronization offset from more than 10 ns down to below 200 ps.",
"title": ""
},
{
"docid": "1cbc333cce4870cc0f465bb76b6e4d3c",
"text": "This note attempts to raise awareness within the network research community about the security of the interdomain routing infrastructure. We identify several attack objectives and mechanisms, assuming that one or more BGP routers have been compromised. Then, we review the existing and proposed countermeasures, showing that they are either generally ineffective (route filtering), or probably too heavyweight to deploy (S-BGP). We also review several recent proposals, and conclude by arguing that a significant research effort is urgently needed in the area of routing security.",
"title": ""
},
{
"docid": "4348c83744962fcc238e7f73abecfa5e",
"text": "We introduce MeSys, a meaning-based approach, for solving English math word problems (MWPs) via understanding and reasoning in this paper. It first analyzes the text, transforms both body and question parts into their corresponding logic forms, and then performs inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating an extracted math quantity with its associated context information (i.e., the physical meaning of this quantity). Statistical models are proposed to select the operator and operands. A noisy dataset is designed to assess if a solver solves MWPs mainly via understanding or mechanical pattern matching. Experimental results show that our approach outperforms existing systems on both benchmark datasets and the noisy dataset, which demonstrates that the proposed approach understands the meaning of each quantity in the text more.",
"title": ""
},
{
"docid": "aec560c27d4873674114bd5dd9d64625",
"text": "Caches consume a significant amount of energy in modern microprocessors. To design an energy-efficient microprocessor, it is important to optimize cache energy consumption. This paper examines performance and power trade-offs in cache designs and the effectiveness of energy reduction for several novel cache design techniques targeted for low power.",
"title": ""
},
{
"docid": "7d261a30fa2542ac8d7befdc10704433",
"text": "Regulating deep neural networks (DNNs) with human structured knowledge has shown to be of great benefit for improved accuracy and interpretability. We develop a general framework that enables learning knowledge and its confidence jointly with the DNNs, so that the vast amount of fuzzy knowledge can be incorporated and automatically optimized with little manual efforts. We apply the framework to sentence sentiment analysis, augmenting a DNN with massive linguistic constraints on discourse and polarity structures. Our model substantially enhances the performance using less training data, and shows improved interpretability. The principled framework can also be applied to posterior regularization for regulating other statistical models.",
"title": ""
},
{
"docid": "6fc44560d9784d22fc7d6ebcab756b10",
"text": "In this paper we employ probabilistic relational affordance models in a robotic manipulation task. Such affordance models capture the interdependencies between properties of multiple objects, executed actions, and effects of those actions on objects. Recently it was shown how to learn such models from observed video demonstrations of actions manipulating several objects. This paper extends that work and employs those models for sequential tasks. Our approach consists of two parts. First, we employ affordance models sequentially in order to recognize the individual actions making up a demonstrated sequential skill or high level concept. Second, we utilize the models of concepts to plan a suitable course of action to replicate the observed consequences of a demonstration. For this we adopt the framework of relational Markov decision processes. Empirical results show the viability of the affordance models for sequential manipulation skills for object placement.",
"title": ""
},
{
"docid": "f8878dd6e858f2acba35bf0f75168815",
"text": "BACKGROUND\nPsoriasis can be found at several different localizations which may be of various impact on patients' quality of life (QoL). One of the easy visible, and difficult to conceal localizations are the nails.\n\n\nOBJECTIVE\nTo achieve more insight into the QoL of psoriatic patients with nail psoriasis, and to characterize the patients with nail involvement which are more prone to the impact of the nail alterations caused by psoriasis.\n\n\nMETHOD\nA self-administered questionnaire was distributed to all members (n = 5400) of the Dutch Psoriasis Association. The Dermatology Life Quality Index (DLQI) and the Nail Psoriasis Quality of life 10 (NPQ10) score were included as QoL measures. Severity of cutaneous lesions was determined using the self-administered psoriasis area and severity index (SAPASI).\n\n\nRESULTS\nPatients with nail psoriasis scored significantly higher mean scores on the DLQI (4.9 vs. 3.7, P = <0.001) and showed more severe psoriasis (SAPASI, 6.6 vs. 5.3, P = <0.001). Patients with coexistence of nail bed and nail matrix features showed higher DLQI scores compared with patients with involvement of one of the two localizations exclusively (5.3 vs. 4.2 vs. 4.3, P = 0.003). Patients with only nail bed alterations scored significant higher NPQ10 scores when compared with patients with only nail matrix features. Patients with psoriatic arthritis (PsA) and nail psoriasis experiences more impairments compared with nail psoriasis patients without PsA (DLQI 5.5 vs. 4.3, NPQ10 13.3 vs. 7.0). Females scored higher mean scores on all QoL scores.\n\n\nCONCLUSION\nGreater attention should be paid to the possible impact nail abnormalities have on patients with nail psoriasis, which can be identified by nail psoriasis specific questionnaires such as the NPQ10. As improving the severity of disease may have a positive influence on QoL, the outcome of QoL measurements should be taken into account when deciding on treatment strategies.",
"title": ""
},
{
"docid": "32b860121b49bd3a61673b3745b7b1fd",
"text": "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.",
"title": ""
},
{
"docid": "5ca5cfcd0ed34d9b0033977e9cde2c74",
"text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of o¤-patent products. First, we construct a vertical di¤erentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several o¤-patent molecules before and after the policy reform. O¤-patent drugs not subject to RP serve as our control group. We
nd that RP signi
cantly reduces both brand-name and generic prices, and results in signi
cantly lower brand-name market shares. Finally, we show that RP has a strong negative e¤ect on average molecule prices, suggesting signi
cant cost-savings, and that patients copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classi
cations: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for
nancial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: kurt.brekke@nhh.no. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: tor.holmas@uni.no. Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. E-mail: o.r.straume@eeg.uminho.pt.",
"title": ""
},
{
"docid": "2b1eda1c5a0bb050b82f5fa42893466b",
"text": "In recent years researchers have achieved considerable success applying neural network methods to question answering (QA). These approaches have achieved state of the art results in simplified closed-domain settings such as the SQuAD (Rajpurkar et al. 2016) dataset, which provides a preselected passage, from which the answer to a given question may be extracted. More recently, researchers have begun to tackle open-domain QA, in which the model is given a question and access to a large corpus (e.g., wikipedia) instead of a pre-selected passage (Chen et al. 2017a). This setting is more complex as it requires large-scale search for relevant passages by an information retrieval component, combined with a reading comprehension model that “reads” the passages to generate an answer to the question. Performance in this setting lags well behind closed-domain performance. In this paper, we present a novel open-domain QA system called Reinforced Ranker-Reader (R), based on two algorithmic innovations. First, we propose a new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of likelihood of extracting the ground-truth answer to a given question. Second, we propose a novel method that jointly trains the Ranker along with an answer-extraction Reader model, based on reinforcement learning. We report extensive experimental results showing that our method significantly improves on the state of the art for multiple open-domain QA datasets. 2",
"title": ""
},
{
"docid": "17f719b2bfe2057141e367afe39d7b28",
"text": "Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.",
"title": ""
},
{
"docid": "9379cad59abab5e12c97a9b92f4aeb93",
"text": "SigTur/E-Destination is a Web-based system that provides personalized recommendations of touristic activities in the region of Tarragona. The activities are properly classified and labeled according to a specific ontology, which guides the reasoning process. The recommender takes into account many different kinds of data: demographic information, travel motivations, the actions of the user on the system, the ratings provided by the user, the opinions of users with similar demographic characteristics or similar tastes, etc. The system has been fully designed and implemented in the Science and Technology Park of Tourism and Leisure. The paper presents a numerical evaluation of the correlation between the recommendations and the user’s motivations, and a qualitative evaluation performed by end users. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5baf228a42c64a1728e2aa881844c021",
"text": "This article addresses the problem of managing Moving Objects Databases (MODs) which capture the inherent imprecision of the information about the moving object's location at a given time. We deal systematically with the issues of constructing and representing the trajectories of moving objects and querying the MOD. We propose to model an uncertain trajectory as a three-dimensional (3D) cylindrical body and we introduce a set of novel but natural spatio-temporal operators which capture the uncertainty and are used to express spatio-temporal range queries. We devise and analyze algorithms for processing the operators and demonstrate that the model incorporates the uncertainty in a manner which enables efficient querying, thus striking a balance between the modeling power and computational efficiency. We address some implementation aspects which we experienced in our DOMINO project, as a part of which the operators that we introduce have been implemented. We also report on some experimental observations of a practical relevance.",
"title": ""
}
] |
scidocsrr
|
27adc99b8fef204e5bc2d6b120930cb2
|
ARREST: A RSSI Based Approach for Mobile Sensing and Tracking of a Moving Object
|
[
{
"docid": "38dd8cb9f7509fef17a542d087e1cc35",
"text": "In this work, we develop an anchor-less relative localisation algorithm aimed to be used in multi-robot teams. The localisation is performed based on the Received Signal Strength Indicator (RSSI) readings collected from the messages exchanged between nodes. We use the RSSI as a rough estimate of the inverse of distance between any pair of communicating nodes, and we claim that such estimates provide a coarse information of the nodes relative localisation that is still suitable to support several coordination tasks. In addition, we introduce a relative velocity estimation framework based on the RSSI measurements. This framework uses consecutive distance measurements and position estimates to provide the relative velocity vectors for all the nodes in the network. To accomplish this, we propose using a Kalman filter and the Floyd–Warshall algorithm to generate smooth RSSI pairwise signal distance for all nodes. Then we use Multidimensional Scaling to obtain relative positions from the pairwise distances. Finally, due to anchor unavailability, relative positions are adjusted to reflect the continuous mobility by using geometric transformations, thus obtaining smoother trajectories for mobile nodes. This allows us to estimate velocity and to establish a correspondence between orientation in the physical world and in the relative coordinates system. Additionally, we study the impact of several parameters in calculating the network topology, namely different approaches to provide a symmetric distances matrix, the period of the matrix dissemination, the use of synchronisation of the transmissions, and the filtering of the RSSI data. Experimental results, with a set of MicaZ motes, show that the period of matrix dissemination is the most relevant of the parameters, specifically with larger periods providing the best results, however, shorter periods are shown to be possible as long as the transmissions are synchronised. 2013 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "9904461a5595d69bef0015fe34514293",
"text": "Wake-up radio is an emerging technology with the ambitious goal of reducing the communication power consumption in smart sensor networks and Internet of Things. This reduction in power consumption will enable a new generation of applications which could achieve a longer lifetime than is achievable today. Wake up radios are required to work with a low power budget and should exhibit low latency coupled with high sensitivity and addressing capabilities. Typically they are combined with existing radio transceiver and power management techniques to reduce the overall communication power while maintaining the same communication performance. This paper presents a dual band (2.4GHz and 868MHz) wake up radio with the above mentioned characteristics. The dual band solution is exploited to increase the flexibility of the wake up radio, allowing interoperability with the two most common frequencies used in Wireless Sensors Networks and Internet of Things. Simulation results present a system able to exploit the two bands with sensitivity as low as -53dBm at 868MHz and -45dBm at 2450MHz. Experimental results on power consumption demonstrate the low power consumption of the proposed solution with only 1.276μW of power consumption in listening mode. The addressing is performed by an ultra low power on board PIC microcontroller with 40nW of power consumption when the wake up radio is in listening mode and only 70 μW when the data are received and parsed.",
"title": ""
},
{
"docid": "2eabe3d3edbc9b57b1a13c41688b9d68",
"text": "This paper presents a design method of on-chip patch antenna integration in a standard CMOS technology without post processing. A 60 GHz on-chip patch antenna is designed utilizing the top metal layer and an intermediate metal layer as the patch and ground plane, respectively. Interference between the patch and digital baseband circuits located beneath the ground plane is analyzed. The 60 GHz on-chip antenna occupies an area of 1220 µm by 1580 µm with carefully placed fillers and slots to meet the design rules of the CMOS process. The antenna is centered at 60.51 GHz with 810 MHz bandwidth. The peak gain and radiation efficiency are −3.32 dBi and 15.87%, respectively. Analysis for mutual signal coupling between the antenna and the clock H-tree beneath the ground plane is reported, showing a −61 dB coupling from the antenna to the H-tree and a −95 dB coupling of 2 GHz clock signal from the H-tree to the antenna.",
"title": ""
},
{
"docid": "62376954e4974ea2d52e96b373c67d8a",
"text": "Imagine the following situation. You’re in your car, listening to the radio and suddenly you hear a song that catches your attention. It’s the best new song you have heard for a long time, but you missed the announcement and don’t recognize the artist. Still, you would like to know more about this music. What should you do? You could call the radio station, but that’s too cumbersome. Wouldn’t it be nice if you could push a few buttons on your mobile phone and a few seconds later the phone would respond with the name of the artist and the title of the music you’re listening to? Perhaps even sending an email to your default email address with some supplemental information. In this paper we present an audio fingerprinting system, which makes the above scenario possible. By using the fingerprint of an unknown audio clip as a query on a fingerprint database, which contains the fingerprints of a large library of songs, the audio clip can be identified. At the core of the presented system are a highly robust fingerprint extraction method and a very efficient fingerprint search strategy, which enables searching a large fingerprint database with only limited computing resources.",
"title": ""
},
{
"docid": "22727f9a6951582de1e98b522b40f68e",
"text": "High-speed electric machines are becoming increasingly important and utilized in many applications. This paper addresses the considerations and challenges of the rotor design of high-speed surface permanent magnet machines. The paper focuses particularly on mechanical aspects of the design. Special attention is given to the rotor sleeve design including thickness and material. Permanent magnet design parameters are discussed. Surface permanent magnet rotor dynamic considerations and challenges are also discussed.",
"title": ""
},
{
"docid": "ba7789e6e46186efe4d934e4ea2d081d",
"text": "Business users need to analyse changing sets of information to effectively support their working tasks. Due to the complexity of enterprise systems and available tools, especially technically unskilled users face considerable challenges when trying to flexibly retrieve needed data in an ad-hoc manner. As a consequence, available data is limited to information artefacts like queries or reports which have been predefined for them by IT experts. To improve information self-service capabilities of business users, we present an ontology-based architecture and end-user tool, enabling easy data access and query creation for business users. Our approach is based on a semantic middleware integrating data from heterogeneous information systems and providing a comprehensible data model in the form of a business level ontology (BO). We show how our end-user tool Semantic Query Designer (SQD) enables convenient navigation and query building upon the BO, and illustrate its usage and the processing of data over all layers of our system architecture in detail, using a comprehensible use case example. As flexible query creation is a crucial precondition of leveraging the usage of enterprise data, we contribute to the enablement of business users of making better informed decisions, thus increasing effectiveness and efficiency of business processes.",
"title": ""
},
{
"docid": "aa73df5eadafff7533994c05a8d3c415",
"text": "In this paper, we report on the outcomes of the European project EduWear. The aim of the project was to develop a construction kit with smart textiles and to examine its impact on young people. The construction kit, including a suitable programming environment and a workshop concept, was adopted by children in a number of workshops.\n The evaluation of the workshops showed that designing, creating, and programming wearables with a smart textile construction kit allows for creating personal meaningful projects which relate strongly to aspects of young people's life worlds. Through their construction activities, participants became more self-confident in dealing with technology and were able to draw relations between their own creations and technologies present in their environment. We argue that incorporating such constructionist processes into an appropriate workshop concept is essential for triggering thought processes about the character of digital media beyond the construction process itself.",
"title": ""
},
{
"docid": "040d39a7bf861a05cbd10fda9c0a1576",
"text": "Skin laceration repair is an important skill in family medicine. Sutures, tissue adhesives, staples, and skin-closure tapes are options in the outpatient setting. Physicians should be familiar with various suturing techniques, including simple, running, and half-buried mattress (corner) sutures. Although suturing is the preferred method for laceration repair, tissue adhesives are similar in patient satisfaction, infection rates, and scarring risk in low skin-tension areas and may be more cost-effective. The tissue adhesive hair apposition technique also is effective in repairing scalp lacerations. The sting of local anesthesia injections can be lessened by using smaller gauge needles, administering the injection slowly, and warming or buffering the solution. Studies have shown that tap water is safe to use for irrigation, that white petrolatum ointment is as effective as antibiotic ointment in postprocedure care, and that wetting the wound as early as 12 hours after repair does not increase the risk of infection. Patient education and appropriate procedural coding are important after the repair.",
"title": ""
},
{
"docid": "a129ad8154320f7be949527843207b89",
"text": "Availability of several web services having a similar functionality has led to using quality of service (QoS) attributes to support services selection and management. To improve these operations and be performed proactively, time series ARIMA models have been used to forecast the future QoS values. However, the problem is that in this extremely dynamic context the observed QoS measures are characterized by a high volatility and time-varying variation to the extent that existing ARIMA models cannot guarantee accurate QoS forecasting where these models are based on a homogeneity (constant variation over time) assumption, which can introduce critical problems such as proactively selecting a wrong service and triggering unrequired adaptations and thus leading to follow-up failures and increased costs. To address this limitation, we propose a forecasting approach that integrates ARIMA and GARCH models to be able to capture the QoS attributes' volatility and provide accurate forecasts. Using QoS datasets of real-world web services we evaluate the accuracy and performance aspects of the proposed approach. Results show that the proposed approach outperforms the popular existing ARIMA models and improves the forecasting accuracy of QoS measures and violations by on average 28.7% and 15.3% respectively.",
"title": ""
},
{
"docid": "0792abb24552f04c8b8c7cb71a4357ea",
"text": "Deformable part-based models [1, 2] achieve state-of-the-art performance for object detection, but rely on heuristic initialization during training due to the optimization of non-convex cost function. This paper investigates limitations of such an initialization and extends earlier methods using additional supervision. We explore strong supervision in terms of annotated object parts and use it to (i) improve model initialization, (ii) optimize model structure, and (iii) handle partial occlusions. Our method is able to deal with sub-optimal and incomplete annotations of object parts and is shown to benefit from semi-supervised learning setups where part-level annotation is provided for a fraction of positive examples only. Experimental results are reported for the detection of six animal classes in PASCAL VOC 2007 and 2010 datasets. We demonstrate significant improvements in detection performance compared to the LSVM [1] and the Poselet [3] object detectors.",
"title": ""
},
{
"docid": "7bf0b158d9fa4e62b38b6757887c13ed",
"text": "Examinations are the most crucial section of any educational system. They are intended to measure student's knowledge, skills and aptitude. At any institute, a great deal of manual effort is required to plan and arrange examination. It includes making seating arrangement for students as well as supervision duty chart for invigilators. Many institutes performs this task manually using excel sheets. This results in excessive wastage of time and manpower. Automating the entire system can help solve the stated problem efficiently saving a lot of time. This paper presents the automatic exam seating allocation. It works in two modules First as, Students Seating Arrangement (SSA) and second as, Supervision Duties Allocation (SDA). It assigns the classrooms and the duties to the teachers in any institution. An input-output data is obtained from the real system which is found out manually by the organizers who set up the seating arrangement and chalk out the supervision duties. The results obtained using the real system and these two models are compared. The application shows that the modules are highly efficient, low-cost, and can be widely used in various colleges and universities.",
"title": ""
},
{
"docid": "ff0c957939c46e325a8fc63870b0efbd",
"text": "Several human progerias, including Hutchinson-Gilford progeria syndrome (HGPS), are caused by the accumulation at the nuclear envelope of farnesylated forms of truncated prelamin A, a protein that is also altered during normal aging. Previous studies in cells from individuals with HGPS have shown that farnesyltransferase inhibitors (FTIs) improve nuclear abnormalities associated with prelamin A accumulation, suggesting that these compounds could represent a therapeutic approach for this devastating progeroid syndrome. We show herein that both prelamin A and its truncated form progerin/LAΔ50 undergo alternative prenylation by geranylgeranyltransferase in the setting of farnesyltransferase inhibition, which could explain the low efficiency of FTIs in ameliorating the phenotypes of progeroid mouse models. We also show that a combination of statins and aminobisphosphonates efficiently inhibits both farnesylation and geranylgeranylation of progerin and prelamin A and markedly improves the aging-like phenotypes of mice deficient in the metalloproteinase Zmpste24, including growth retardation, loss of weight, lipodystrophy, hair loss and bone defects. Likewise, the longevity of these mice is substantially extended. These findings open a new therapeutic approach for human progeroid syndromes associated with nuclear-envelope abnormalities.",
"title": ""
},
{
"docid": "1fa056e87c10811b38277d161c81c2ac",
"text": "In this study, six kinds of the drivetrain systems of electric motor drives for EVs are discussed. Furthermore, the requirements of EVs on electric motor drives are presented. The comparative investigation on the efficiency, weight, cost, cooling, maximum speed, and fault-tolerance, safety, and reliability is carried out for switched reluctance motor, induction motor, permanent magnet blushless DC motor, and brushed DC motor drives, in order to find most appropriate electric motor drives for electric vehicle applications. The study shows that switched reluctance motor drives are the prior choice for electric vehicles.",
"title": ""
},
{
"docid": "6bfc1850211819a2943c5cbff1355d0f",
"text": "Constrained image splicing detection and localization (CISDL) is a newly proposed challenging task for image forensics, which investigates two input suspected images and identifies whether one image has suspected regions pasted from the other. In this paper, we propose a novel adversarial learning framework to train the deep matching network for CISDL. Our framework mainly consists of three building blocks: 1) the deep matching network based on atrous convolution (DMAC) aims to generate two high-quality candidate masks which indicate the suspected regions of the two input images, 2) the detection network is designed to rectify inconsistencies between the two corresponding candidate masks, 3) the discriminative network drives the DMAC network to produce masks that are hard to distinguish from ground-truth ones. In DMAC, atrous convolution is adopted to extract features with rich spatial information, the correlation layer based on the skip architecture is proposed to capture hierarchical features, and atrous spatial pyramid pooling is constructed to localize tampered regions at multiple scales. The detection network and the discriminative network act as the losses with auxiliary parameters to supervise the training of DMAC in an adversarial way. Extensive experiments, conducted on 21 generated testing sets and two public datasets, demonstrate the effectiveness of the proposed framework and the superior performance of DMAC.",
"title": ""
},
{
"docid": "21f95ea645cf99b5664f956ea11adfc1",
"text": "Data mining methodology can analyze relevant information results and produce different perspectives to understand more about the students’ activities. When designing an educational environment, applying data mining techniques discovers useful information that can be used in formative evaluation to assist educators establish a pedagogical basis for taking important decisions. Mining in education environment is called Educational Data Mining. Educational Data Mining is concerned with developing new methods to discover knowledge from educational database and can used for decision making in educational system. In this study, we collected the student’s data that have different information about their previous and current academics records and then apply different classification algorithm using Data Mining tools (WEKA) for analysis the student’s academics performance for Training and placement. This study presents a proposed model based on classification approach to find an enhanced evaluation method for predicting the placement for students. This model can determine the relations between academic achievement of students and their placement in campus selection.",
"title": ""
},
{
"docid": "e3caf8dcb01139ae780616c022e1810d",
"text": "The relative age effect (RAE) and its relationships with maturation, anthropometry, and physical performance characteristics were examined across a representative sample of English youth soccer development programmes. Birth dates of 1,212 players, chronologically age-grouped (i.e., U9's-U18's), representing 17 professional clubs (i.e., playing in Leagues 1 & 2) were obtained and categorised into relative age quartiles from the start of the selection year (Q1 = Sep-Nov; Q2 = Dec-Feb; Q3 = Mar-May; Q4 = Jun-Aug). Players were measured for somatic maturation and performed a battery of physical tests to determine aerobic fitness (Multi-Stage Fitness Test [MSFT]), Maximal Vertical Jump (MVJ), sprint (10 & 20m), and agility (T-Test) performance capabilities. Odds ratio's (OR) revealed Q1 players were 5.3 times (95% confidence intervals [CI]: 4.08-6.83) more likely to be selected than Q4's, with a particularly strong RAE bias observed in U9 (OR: 5.56) and U13-U16 squads (OR: 5.45-6.13). Multivariate statistical models identified few between quartile differences in anthropometric and fitness characteristics, and confirmed chronological age-group and estimated age at peak height velocity (APHV) as covariates. Assessment of practical significance using magnitude-based inferences demonstrated body size advantages in relatively older players (Q1 vs. Q4) that were very-likely small (Effect Size [ES]: 0.53-0.57), and likely to very-likely moderate (ES: 0.62-0.72) in U12 and U14 squads, respectively. Relatively older U12-U14 players also demonstrated small advantages in 10m (ES: 0.31-0.45) and 20m sprint performance (ES: 0.36-0.46). The data identify a strong RAE bias at the entry-point to English soccer developmental programmes. RAE was also stronger circa-PHV, and relatively older players demonstrated anaerobic performance advantages during the pubescent period. Talent selectors should consider motor function and maturation status assessments to avoid premature and unwarranted drop-out of soccer players within youth development programmes.",
"title": ""
},
{
"docid": "ca659ea60b5d7c214460b32fe5aa3837",
"text": "Address Decoder is an important digital block in SRAM which takes up to half of the total chip access time and significant part of the total SRAM power in normal read/write cycle. To design address decoder need to consider two objectives, first choosing the optimal circuit technique and second sizing of their transistors. Novel address decoder circuit is presented and analysed in this paper. Address decoder using NAND-NOR alternate stages with predecoder and replica inverter chain circuit is proposed and compared with traditional and universal block architecture, using 90nm CMOS technology. Delay and power dissipation in proposed decoder is 60.49% and 52.54% of traditional and 82.35% and 73.80% of universal block architecture respectively.",
"title": ""
},
{
"docid": "687157db817e920e13b24d0d28a15a81",
"text": "Large lighting variation challenges all visual odometry methods, even with RGB-D cameras. Here we propose a line segment-based RGB-D indoor odometry algorithm robust to lighting variation. We know line segments are abundant indoors and less sensitive to lighting change than point features. However, depth data are often noisy, corrupted or even missing for line segments which are often found on object boundaries where significant depth discontinuities occur. Our algorithm samples depth data along line segments, and uses a random sample consensus approach to identify correct depth and estimate 3D line segments. We analyze 3D line segment uncertainties and estimate camera motion by minimizing the Mahalanobis distance. In experiments we compare our method with two state-of-the-art methods including a keypoint-based approach and a dense visual odometry algorithm, under both constant and varying lighting. Our method demonstrates superior robustness to lighting change by outperforming the competing methods on 6 out of 8 long indoor sequences under varying lighting. Meanwhile our method also achieves improved accuracy even under constant lighting when tested using public data.",
"title": ""
},
{
"docid": "670b35833f96a62bce9e2ddd58081fc4",
"text": "Although video summarization has achieved great success in recent years, few approaches have realized the influence of video structure on the summarization results. As we know, the video data follow a hierarchical structure, i.e., a video is composed of shots, and a shot is composed of several frames. Generally, shots provide the activity-level information for people to understand the video content. While few existing summarization approaches pay attention to the shot segmentation procedure. They generate shots by some trivial strategies, such as fixed length segmentation, which may destroy the underlying hierarchical structure of video data and further reduce the quality of generated summaries. To address this problem, we propose a structure-adaptive video summarization approach that integrates shot segmentation and video summarization into a Hierarchical Structure-Adaptive RNN, denoted as HSA-RNN. We evaluate the proposed approach on four popular datasets, i.e., SumMe, TVsum, CoSum and VTW. The experimental results have demonstrated the effectiveness of HSA-RNN in the video summarization task.",
"title": ""
},
{
"docid": "9e10e151b9e032e79296b35d09d45bbf",
"text": "PURPOSE\nAutomated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such atlas, template matching, or, edge and surface detection, have been applied to solve this task. However, applicability of these methods is usually limited by the characteristics of the images used in the study datasets, while breast MRI varies with respect to the different MRI protocols used, in addition to the variability in breast shapes. All this variability, in addition to various MRI artifacts, makes it a challenging task to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as \"U-net.\"\n\n\nMATERIALS AND METHODS\nWe used a dataset of 66 breast MRI's randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: nonbreast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used Dice Similarity Coefficient (DSC) to measure the performances of the automated methods, with respect to the manual segmentations. Additionally, we computed Pearson's correlation between the breast density values computed based on manual and automated segmentations.\n\n\nRESULTS\nThe average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 obtained from 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. The average DSC values for FGT segmentation obtained from 3C U-net, 2C U-nets, and atlas-based methods were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than 0.957 as obtained from 2C U-nets (P < 0.0001, Steiger's Z-test with Bonferoni correction) and 0.938 as obtained from atlas-based method (P = 0.0016).\n\n\nCONCLUSIONS\nIn conclusion, we applied a deep-learning method, U-net, for segmenting breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results showed that U-net-based methods significantly outperformed the existing algorithms and resulted in significantly more accurate breast density computation.",
"title": ""
}
] |
scidocsrr
|
96279a118c04ddb7c79a531d499f6927
|
Integrating Palmprint and Fingerprint for Identity Verification
|
[
{
"docid": "8571835aad236d639533680232cdca6c",
"text": "A new approach for the personal identification using hand images is presented. This paper attempts to improve the performance of palmprint-based verification system by integrating hand geometry features. Unlike other bimodal biometric systems, the users does not have to undergo the inconvenience of passing through two sensors since the palmprint and hand geometry features can be are acquired from the same image, using a digital camera, at the same time. Each of these gray level images are aligned and then used to extract palmprint and hand geometry features. These features are then examined for their individual and combined performance. The image acquisition setup used in this work was inherently simple and it does not employ any special illumination nor does it use any pegs to cause any inconvenience to the users. Our experimental results on the image dataset from 100 users confirm the utility of hand geometry features with those from palmprints and achieve promising results with a simple image acquisition setup.",
"title": ""
}
] |
[
{
"docid": "cd811b8c1324ca0fef6a25e1ca5c4ce9",
"text": "This commentary discusses why most IS academic research today lacks relevance to practice and suggests tactics, procedures, and guidelines that the IS academic community might follow in their research efforts and articles to introduce relevance to practitioners. The commentary begins by defining what is meant by relevancy in the context of academic research. It then explains why there is a lack of attention to relevance within the IS scholarly literature. Next, actions that can be taken to make relevance a more central aspect of IS research and to communicate implications of IS research more effectively to IS professionals are suggested.",
"title": ""
},
{
"docid": "942be0aa4dab5904139919351d6d63d4",
"text": "Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various topics in the field of computer science. However, there is a risk that we overlook other methods. Therefore, we take a recent end-to-end neural-network-based work (Dhingra et al., 2018) as a starting point and contrast this work with more classical techniques. This prior work focuses on the LAMBADA word prediction task, where broad context is used to predict the last word of a sentence. It is often assumed that neural networks are good at such tasks where feature extraction is important. We show that with simpler syntactic and semantic features (e.g. Across Sentence Boundary (ASB) N-grams) a state-ofthe-art neural network can be outperformed. Our discriminative language-model-based approach improves the word prediction accuracy from 55.6% to 58.9% on the LAMBADA task. As a next step, we plan to extend this work to other language modeling tasks.",
"title": ""
},
{
"docid": "20cfcfde25db033db8d54fe7ae6fcca1",
"text": "We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is stateof-the-art on a corpus of literary texts.",
"title": ""
},
{
"docid": "9b1a7f811d396e634e9cc5e34a18404e",
"text": "We introduce a novel colorization framework for old black-and-white cartoons which has been originally produced by a cel or paper based technology. In this case the dynamic part of the scene is represented by a set of outlined homogeneous regions that superimpose static background. To reduce a large amount of manual intervention we combine unsupervised image segmentation, background reconstruction and structural prediction. Our system in addition allows the user to specify the brightness of applied colors unlike the most of previous approaches which operate only with hue and saturation. We also present a simple but effective color modulation, composition and dust spot removal techniques able produce color images in broadcast quality without additional user intervention.",
"title": ""
},
{
"docid": "fcb175f1fb5bd1ab20acaa1a7460be53",
"text": "5G networks are expected to be able to satisfy users' different QoS requirements. Network slicing is a promising technology for 5G networks to provide services tailored for users' specific QoS demands. Driven by the increased massive wireless data traffic from different application scenarios, efficient resource allocation schemes should be exploited to improve the flexibility of network resource allocation and capacity of 5G networks based on network slicing. Due to the diversity of 5G application scenarios, new mobility management schemes are greatly needed to guarantee seamless handover in network-slicing-based 5G systems. In this article, we introduce a logical architecture for network-slicing-based 5G systems, and present a scheme for managing mobility between different access networks, as well as a joint power and subchannel allocation scheme in spectrum-sharing two-tier systems based on network slicing, where both the co-tier interference and cross-tier interference are taken into account. Simulation results demonstrate that the proposed resource allocation scheme can flexibly allocate network resources between different slices in 5G systems. Finally, several open issues and challenges in network-slicing-based 5G networks are discussed, including network reconstruction, network slicing management, and cooperation with other 5G technologies.",
"title": ""
},
{
"docid": "c3b953c74d41cfeae8d5f637ad402017",
"text": "A new technique for solving polynomial nonlinear constrained optimal control problems is presented. The problem is reformulated into a parametric optimization problem, which in turn is solved in a two step procedure. First, in a pre-computation step, the equation part of the corresponding first order optimality conditions is solved for a generic value of the parameter. Relying on the underlying algebraic geometry, this first solution makes it possible to solve efficiently and in real time the corresponding optimal control problem at the measured parameter value for each subsequent time step. This approach has a probability one guarantee of finding the global optimal solution at each step. Controller synthesis for two applications from the area of power electronics featuring a dc-ac converter and a dc-dc converter are discussed to motivate the proposed approach.",
"title": ""
},
{
"docid": "e81f197acf7e3b7590d93481a4a4b5b3",
"text": "Naive T cells have long been regarded as a developmentally synchronized and fairly homogeneous and quiescent cell population, the size of which depends on age, thymic output and prior infections. However, there is increasing evidence that naive T cells are heterogeneous in phenotype, function, dynamics and differentiation status. Current strategies to identify naive T cells should be adjusted to take this heterogeneity into account. Here, we provide an integrated, revised view of the naive T cell compartment and discuss its implications for healthy ageing, neonatal immunity and T cell reconstitution following haematopoietic stem cell transplantation. Evidence is increasing that naive T cells are heterogeneous in phenotype, function, dynamics and differentiation status. Here, van den Broek et al. provide a revised view of the naive T cell compartment and then discuss the implications for ageing, neonatal immunity and T cell reconstitution following haematopoietic stem cell transplantation.",
"title": ""
},
{
"docid": "9ed575b1ae41ddab041adeaf14e90735",
"text": "This paper presents a semi-autonomous controller for integrated design of an active safety system. A model of the driver’s nominal behavior is estimated based on observed behavior. A nonlinear model of the vehicle is developed that utilizes a coordinate transformation which allows for obstacles and road bounds to be modeled as constraints while still allowing the controller full steering and braking authority. A Nonlinear Model Predictive Controller (NMPC) is designed that utilizes the vehicle and driver models to predict a future threat of collision or roadway departure. Simulations are presented which demonstrate the ability of the suggested approach to successfully avoid multiple obstacles while staying safely within the road bounds.",
"title": ""
},
{
"docid": "ea2cdd42c18efb830d13c4d67b8a90d4",
"text": "One of the threats in the diversity loss of the primary gene pool of Vanilla planifolia is the lack of information on existing level of polymorphism in cultivated germplasm, and the different expressions of this polymorphism. For this reason, it is proposed to study the chemical polymorphism of the four phytochemicals that define the vanilla aroma quality in fruits (vanillin, vanillic acid, p-hydroxybenzaldehyde, p-hydroxybenzoic acid) by HPLC analysis (High Performance Liquid Chromatography) of 25 collections of unknown genotype, grown in the region Totonacapan Puebla-Veracruz, Mexico. The results identified a selection process, domestication in fruit aroma of vanilla, during which increased the participation of vanillin and reduced the presence of three minor compounds (vanillic acid, p-hydroxybenzaldehyde and p-hydroxybenzoic acid) in the global aroma. We distinguished a total of six chemotypes of V. planifolia in the Totonacapan region, some chemotypes with wild aromatic characteristics (low participation of vanillin) related to the material less cultivated in the region and domesticated chemotypes with high participation of vanillin, for the most cultivated material. The results show that the diversification of the chemotypes of V. planifolia is not related to environmental variation. The data indicate that in the possible center of origin of vanilla, there is phytochemical polymorphism, which indirectly suggests the existence of genetic polymorphism, essential for the design of a breeding program for optimizing the use and conservation of diversity of the primary gene pool of Vanilla planifolia.",
"title": ""
},
{
"docid": "2ead8dda09a272942657787371dbd768",
"text": "Some billiard tables in R2 contain crucial references to dynamical systems but can be analyzed with Euclidean geometry. In this expository paper, we will analyze billiard trajectories in circles, circular rings, and ellipses as well as relate their charactersitics to ergodic theory and dynamical systems.",
"title": ""
},
{
"docid": "3ea9d312027505fb338a1119ff01d951",
"text": "Many experiments provide evidence that practicing retrieval benefits retention relative to conditions of no retrieval practice. Nearly all prior research has employed retrieval practice requiring overt responses, but a few experiments have shown that covert retrieval also produces retention advantages relative to control conditions. However, direct comparisons between overt and covert retrieval are scarce: Does covert retrieval-thinking of but not producing responses-on a first test produce the same benefit as overt retrieval on a criterial test given later? We report 4 experiments that address this issue by comparing retention on a second test following overt or covert retrieval on a first test. In Experiment 1 we used a procedure designed to ensure that subjects would retrieve on covert as well as overt test trials and found equivalent testing effects in the 2 cases. In Experiment 2 we replicated these effects using a procedure that more closely mirrored natural retrieval processes. In Experiment 3 we showed that overt and covert retrieval produced equivalent testing effects after a 2-day delay. Finally, in Experiment 4 we showed that covert retrieval benefits retention more than restudying. We conclude that covert retrieval practice is as effective as overt retrieval practice, a conclusion that contravenes hypotheses in the literature proposing that overt responding is better. This outcome has an important educational implication: Students can learn as much from covert self-testing as they would from overt responding.",
"title": ""
},
{
"docid": "ee4c10d53be10ed1a68e85e6a8a14f31",
"text": "1 Center for Manufacturing Research, Tennessee Technological University (TTU), Cookeville, TN 38505, USA 2 Department of Electrical and Computer Engineering, Tennessee Technological University (TTU), Cookeville, TN 38505, USA 3 Panasonic Princeton Laboratory (PPRL), Panasonic R&D Company of America, 2 Research Way, Princeton, NJ 08540, USA 4 Network Development Center, Matsushita Electric Industrial Co., Ltd., 4-12-4 Higashi-shinagawa, Shinagawa-ku, Tokyo 140-8587, Japan",
"title": ""
},
{
"docid": "986b23f5c2a9df55c2a8c915479a282a",
"text": "Recurrent neural network language models (RNNLM) have recently demonstrated vast potential in modelling long-term dependencies for NLP problems, ranging from speech recognition to machine translation. In this work, we propose methods for conditioning RNNLMs on external side information, e.g., metadata such as keywords or document title. Our experiments show consistent improvements of RNNLMs using side information over the baselines for two different datasets and genres in two languages. Interestingly, we found that side information in a foreign language can be highly beneficial in modelling texts in another language, serving as a form of cross-lingual language modelling.",
"title": ""
},
{
"docid": "ea7add72d2f03d2c6a6c357609e41259",
"text": "Generally, phenomena of spontaneous pattern formation are random and repetitive, whereas elaborate devices are the deterministic product of human design. Yet, biological organisms and collective insect constructions are exceptional examples of complex systems that are both architectured and self-organized. Can we understand their precise self-formation capabilities and integrate them with technological planning? Can physical systems be endowed with information, or informational systems be embedded in physics, to create autonomous morphologies and functions? This book is the first initiative of its kind toward establishing a new field of research, Morphogenetic Engineering, to explore the modeling and implementation of “self-architecturing” systems. Particular emphasis is set on the programmability and computational abilities of self-organization, properties that are often underappreciated in complex systems science—while, conversely, the benefits of selforganization are often underappreciated in engineering methodologies.",
"title": ""
},
{
"docid": "58cdff7c56803a549bb17e52dabac166",
"text": "Many previous research studies on extractive text summarization consider a subset of words in a document as keywords and use a sentence ranking function that ranks sentences based on their similarities with the list of extracted keywords. But the use of key concepts in automatic text summarization task has received less attention in literature on summarization. The proposed work uses key concepts identified from a document for creating a summary of the document. We view single-word or multi-word keyphrases of a document as the important concepts that a document elaborates on. Our work is based on the hypothesis that an extract is an elaboration of the important concepts to some permissible extent and it is controlled by the given summary length restriction. In other words, our method of text summarization chooses a subset of sentences from a document that maximizes the important concepts in the final summary. To allow diverse information in the summary, for each important concept, we select one sentence that is the best possible elaboration of the concept. Accordingly, the most important concept will contribute first to the summary, then to the second best concept, and so on. To prove the effectiveness of our proposed summarization method, we have compared it to some state-of-the art summarization systems and the results show that the proposed method outperforms the existing systems to which it is compared. Keywords—Automatic Text Summarization, Key Concepts, Keyphrase Extraction",
"title": ""
},
{
"docid": "ca50f634d24d4cd00a079e496d00e4b2",
"text": "We designed and implemented a fork-type automatic guided vehicle (AGV) with a laser guidance system. Most previous AGVs have used two types of guidance systems: magnetgyro and wire guidance. However, these guidance systems have high costs, are difficult to maintain with changes in the operating environment, and can drive only a pre-determined path with installed sensors. A laser guidance system was developed for addressing these issues, but limitations including slow response time and low accuracy remain. We present a laser guidance system and control system for AGVs with laser navigation. For analyzing the performance of the proposed system, we designed and built a fork-type AGV, and performed repetitions of our experiments under the same working conditions. The results show an average positioning error of 51.76 mm between the simulated driving path and the driving path of the actual fork-type AGV. Consequently, we verified that the proposed method is effective and suitable for use in actual AGVs.",
"title": ""
},
{
"docid": "e2ee26af1fb425f8591b5b8689080fff",
"text": "In this paper, we focus on a recent Web trend called microblogging, and in particular a site called Twitter. The content of such a site is an extraordinarily large number of small textual messages, posted by millions of users, at random or in response to perceived events or situations. We have developed an algorithm that takes a trending phrase or any phrase specified by a user, collects a large number of posts containing the phrase, and provides an automatically created summary of the posts related to the term. We present examples of summaries we produce along with initial evaluation.",
"title": ""
},
{
"docid": "2ef6e4f1aca010a75d3e078491e40cbe",
"text": "In the last several years hundreds of thousands of SSDs have been deployed in the data centers of Baidu, China's largest Internet search company. Currently only 40\\% or less of the raw bandwidth of the flash memory in the SSDs is delivered by the storage system to the applications. Moreover, because of space over-provisioning in the SSD to accommodate non-sequential or random writes, and additionally, parity coding across flash channels, typically only 50-70\\% of the raw capacity of a commodity SSD can be used for user data. Given the large scale of Baidu's data center, making the most effective use of its SSDs is of great importance. Specifically, we seek to maximize both bandwidth and usable capacity.\n To achieve this goal we propose {\\em software-defined flash} (SDF), a hardware/software co-designed storage system to maximally exploit the performance characteristics of flash memory in the context of our workloads. SDF exposes individual flash channels to the host software and eliminates space over-provisioning. The host software, given direct access to the raw flash channels of the SSD, can effectively organize its data and schedule its data access to better realize the SSD's raw performance potential.\n Currently more than 3000 SDFs have been deployed in Baidu's storage system that supports its web page and image repository services. Our measurements show that SDF can deliver approximately 95% of the raw flash bandwidth and provide 99% of the flash capacity for user data. SDF increases I/O bandwidth by 300\\% and reduces per-GB hardware cost by 50% on average compared with the commodity-SSD-based system used at Baidu.",
"title": ""
},
{
"docid": "868bac47ca1f40605e347597f538d848",
"text": "Relay technologies have been actively studied and considered in the standardization process of next-generation mobile broadband communication systems such as 3GPP LTE-Advanced, IEEE 802.16j, and IEEE 802.16m. This article first introduces and compares different relay types in LTE-Advanced and WiMAX standards. Simulation results show that relay technologies can effectively improve service coverage and system throughput. Three relay transmission schemes are then summarized and evaluated in terms of transmission efficiency under different radio channel conditions. Finally, a centralized pairing scheme and a distributed pairing scheme are developed for effective relay selection. Simulation results show that the proposed schemes can maximize the number of served UE units and the overall throughput of a cell in a realistic multiple-RS-multiple-UE scenario.",
"title": ""
},
{
"docid": "25216b9a56bca7f8503aa6b2e5b9d3a9",
"text": "The study at hand is the first of its kind that aimed to provide a comprehensive analysis of the determinants of foreign direct investment (FDI) in Mongolia by analyzing their short-run, long-run, and Granger causal relationships. In doing so, we methodically used a series of econometric methods to ensure reliable and robust estimation results that included the augmented Dickey-Fuller and Phillips-Perron unit root tests, the most recently advanced autoregressive distributed lag (ARDL) bounds testing approach to cointegration, fully modified ordinary least squares, and the Granger causality test within the vector error-correction model (VECM) framework. Our findings revealed domestic market size and human capital to have a U-shaped relationship with FDI inflows, with an initial positive impact on FDI in the short-run, which then turns negative in the long-run. Macroeconomic instability was found to deter FDI inflows in the long-run. In terms of the impact of trade on FDI, imports were found to have a complementary relationship with FDI; while exports and FDI were found to be substitutes in the short-run. Financial development was also found to induce a deterring effect on FDI inflows in both the shortand long-run; thereby also revealing a substitutive relationship between the two. Infrastructure level was not found to have a significant impact on FDI on any conventional level, in either the shortor long-run. Furthermore, the results have exhibited significant Granger causal relationships between the variables; thereby, ultimately stressing the significance of policy choice in not only attracting FDI inflows, but also in translating their positive spill-over benefits into long-run economic growth. © 2017 AESS Publications. All Rights Reserved.",
"title": ""
}
] |
scidocsrr
|
281d8de53a486aa1fc78e8c5447887b8
|
A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem
|
[
{
"docid": "e509e4f36fedbcc1125368346aa4fb19",
"text": "Portfolio management is the decision-making process of allocating an amount of fund into different financial investment products. Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example of a cryptocurrency. This paper presents a model-less convolutional neural network with historic prices of a set of financial assets as its input, outputting portfolio weights of the set. The network is trained with 0.7 years' price data from a cryptocurrency exchange. The training is done in a reinforcement manner, maximizing the accumulative return, which is regarded as the reward function of the network. Back test trading experiments with trading period of 30 minutes is conducted in the same market, achieving 10-fold returns in 1.8 month's periods. Some recently published portfolio selection strategies are also used to perform the same back tests, whose results are compared with the neural network. The network is not limited to cryptocurrency, but can be applied to any other financial markets.",
"title": ""
},
{
"docid": "be692c1251cb1dc73b06951c54037701",
"text": "Can we train the computer to beat experienced traders for financial assert trading? In this paper, we try to address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model is inspired by two biological-related learning concepts of deep learning (DL) and reinforcement learning (RL). In the framework, the DL part automatically senses the dynamic market condition for informative feature learning. Then, the RL module interacts with deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both the deep and recurrent structures. Hence, we propose a task-aware backpropagation through time method to cope with the gradient vanishing issue in deep training. The robustness of the neural system is verified on both the stock and the commodity future markets under broad testing conditions.",
"title": ""
}
] |
[
{
"docid": "66e100e31b2c100d2428024513fc4953",
"text": "In order to make the search engine transfer information efficiently and accurately and do this optimization to improve the web search ranking, beginning with understanding the principle of search engine, this paper exports the specific explanation of search engine optimization. And then it introduces the new website building concepts and design concepts for the purpose of the construction of search engine optimization. Through an empirical research from the fields of the internal coding method, the website content realizable form and website overall architecture, the paper expounds search engine optimization tools, strategies and methods, and analysis the new thought that the enterprise and e-commerce sites with the search engine do the effective website promotion. And when the user through the search engine to search, the website can get a good rankings position in the search results, so as to improve the site traffic and finally enhance the website sales ability or advocacy capacity.",
"title": ""
},
{
"docid": "e0b1e38b08b6fb098808585a5a3c8753",
"text": "The decade since the Human Genome Project ended has witnessed a remarkable sequencing technology explosion that has permitted a multitude of questions about the genome to be asked and answered, at unprecedented speed and resolution. Here I present examples of how the resulting information has both enhanced our knowledge and expanded the impact of the genome on biomedical research. New sequencing technologies have also introduced exciting new areas of biological endeavour. The continuing upward trajectory of sequencing technology development is enabling clinical applications that are aimed at improving medical diagnosis and treatment.",
"title": ""
},
{
"docid": "c6360e4f9704d362d37d9da6146bd51e",
"text": "There are several tools and models found in machine learning that can be used to forecast a certain time series; however, it is not always clear which model is appropriate for selection, as different models are suited for different types of data, and domain-specific transformations and considerations are usually required. This research aims to examine the issue by modeling four types of machineand deep learning algorithms support vector machine, random forest, feed-forward neural network, and a LSTM neural network on a high-variance, multivariate time series to forecast trend changes one time step in the future, accounting for lag. The models were trained on clinical trial data of patients in an alcohol addiction treatment plan provided by a Uppsala-based company. The results showed moderate performance differences, with a concern that the models were performing a random walk or naive forecast. Further analysis was able to prove that at least one model, the feed-forward neural network, was not undergoing this and was able to make meaningful forecasts one time step into the future. In addition, the research also examined the effect of optimization processes by comparing a grid search, a random search, and a Bayesian optimization process. In all cases, the grid search found the lowest minima, though its slow runtimes were consistently beaten by Bayesian optimization, which contained only slightly lower performances than the grid search. Key words— Data science, alcohl abuse, time series, forecasting, machine learning, deep learning, neural networks, regression",
"title": ""
},
{
"docid": "480d3a528ffd5b3f327b60ef122a9582",
"text": "This paper proposes the multiple output of dual half bridge LLC resonant converter using PFM-PD control. For the main output, dual half bridge LLC resonant converter controlled by pulse frequency modulation (PFM) is used. For the sub output, multiple outputs scheme is configured and operated by phase delay (PD) control. Since the control variables, PFM and PD, have little mutual effect, the sub output voltage can be regulated for a wide input and load range. All MOSFETs achieved ZVS and all rectifier diodes attained ZCS for the whole load range. The modes of operation are investigated and then steady state characteristics of the proposed converter are analyzed. A 320V- 400V input, 24V/20A, 5V/16A hardware prototype is realized with dsPIC33FJ16GS502 and tested to verify the performances of the proposed multiple output converters‥",
"title": ""
},
{
"docid": "26142d27adc7a682d7e6698532578811",
"text": "X-ray imaging has been developed not only for its use in medical imaging for human beings, but also for materials or objects, where the aim is to analyze (nondestructively) those inner parts that are undetectable to the naked eye. Thus, X-ray testing is used to determine if a test object deviates from a given set of specifications. Typical applications are analysis of food products, screening of baggage, inspection of automotive parts, and quality control of welds. In order to achieve efficient and effective X-ray testing, automated and semi-automated systems are being developed to execute this task. In this paper, we present a general overview of computer vision methodologies that have been used in X-ray testing. In addition, we review some techniques that have been applied in certain relevant applications, and we introduce a public database of X-ray images that can be used for testing and evaluation of image analysis and computer vision algorithms. Finally, we conclude that the following: that there are some areas -like casting inspection- where automated systems are very effective, and other application areas -such as baggage screening- where human inspection is still used, there are certain application areas -like weld and cargo inspections- where the process is semi-automatic, and there is some research in areas -including food analysis- where processes are beginning to be characterized by the use of X-ray imaging.",
"title": ""
},
{
"docid": "737bc68c51d2ae7665c47a060da3e25f",
"text": "Self-regulatory strategies of goal setting and goal striving are analyzed in three experiments. Experiment 1 uses fantasy realization theory (Oettingen, in: J. Brandstätter, R.M. Lerner (Eds.), Action and Self Development: Theory and Research through the Life Span, Sage Publications Inc, Thousand Oaks, CA, 1999, pp. 315-342) to analyze the self-regulatory processes of turning free fantasies about a desired future into binding goals. School children 8-12 years of age who had to mentally elaborate a desired academic future as well as present reality standing in its way, formed stronger goal commitments than participants solely indulging in the desired future or merely dwelling on present reality (Experiment 1). Effective implementation of set goals is addressed in the second and third experiments (Gollwitzer, Am. Psychol. 54 (1999) 493-503). Adolescents who had to furnish a set educational goal with relevant implementation intentions (specifying where, when, and how they would start goal pursuit) were comparatively more successful in meeting the goal (Experiment 2). Linking anticipated si tuations with goal-directed behaviors (i.e., if-then plans) rather than the mere thinking about good opportunities to act makes implementation intentions facilitate action initiation (Experiment 3). ©2001 Elsevier Science Ltd. All rights reserved. _____________________________________________________________________________________ Successful goal attainment demands completing two different tasks. People have to first turn their desires into binding goals, and second they have to attain the set goal. Both tasks benefit from selfregulatory strategies. In this article we describe a series of experiments with children, adolescents, and young adults that investigate self-regulatory processes facilitating effective goal setting and successful goal striving. The experimental studies investigate (1) different routes to goal setting depending on how",
"title": ""
},
{
"docid": "3d20ba5dc32270cb75df7a2d499a70e4",
"text": "The Maximum Margin Planning (MMP) (Ratliff et al., 2006) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. These mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a lossscaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBOOST , based on the functional gradient descent view of boosting (Mason et al., 1999; Friedman, 1999a) that extends MMP by “boosting” in new features. This approach uses simple binary classification or regression to improve performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems. (Taskar et al., 2005) Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.",
"title": ""
},
{
"docid": "a1d97d822a8e1a72eec2a4524e8a522c",
"text": "Tags have been popularly utilized for better annotating, organizing and searching for desirable images. Image tagging is the problem of automatically assigning tags to images. One major challenge for image tagging is that the existing/training labels associated with image examples might be incomplete and noisy. Valuable prior work has focused on improving the accuracy of the assigned tags, but very limited work tackles the efficiency issue in image tagging, which is a critical problem in many large scale real world applications. This paper proposes a novel Binary Codes Embedding approach for Fast Image Tagging (BCE-FIT) with incomplete labels. In particular, we construct compact binary codes for both image examples and tags such that the observed tags are consistent with the constructed binary codes. We then formulate the problem of learning binary codes as a discrete optimization problem. An efficient iterative method is developed to solve the relaxation problem, followed by a novel binarization method based on orthogonal transformation to obtain the binary codes from the relaxed solution. Experimental results on two large scale datasets demonstrate that the proposed approach can achieve similar accuracy with state-of-the-art methods while using much less time, which is important for large scale applications.",
"title": ""
},
{
"docid": "2be238b18e500be9de6388832deccc2e",
"text": "Facial expression recognition (FER) is one of the most active areas of research in computer science, due to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in terms of classification accuracy. However, one major weakness found in the previous studies is that they have all used standard datasets for their evaluations and comparisons. Though this serves well given the needs of a fair comparison with existing systems, it is argued that this does not go in hand with the fact that these systems are built with a hope of eventually being used in the real-world. It is because these datasets assume a predefined camera setup, consist of mostly posed expressions collected in a controlled setting, using fixed background and static ambient settings, and having low variations in the face size and camera angles, which is not the case in a dynamic real-world. The contributions of this work are two-fold: firstly, using numerous online resources and also our own setup, we have collected a rich FER dataset keeping in mind the above mentioned problems. Secondly, we have chosen eleven state-of-the-art FER systems, implemented them and performed a rigorous evaluation of these systems using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real-world. We hope that our dataset would become a benchmark to assess the real-life performance of future FER systems.",
"title": ""
},
{
"docid": "5b07bc318cb0f5dd7424cdcc59290d31",
"text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.",
"title": ""
},
{
"docid": "9d37baf5ce33826a59cc7bd0fd7955c0",
"text": "A digital image analysis method previously used to evaluate leaf color changes due to nutritional changes was modified to measure the severity of several foliar fungal diseases. Images captured with a flatbed scanner or digital camera were analyzed with a freely available software package, Scion Image, to measure changes in leaf color caused by fungal sporulation or tissue damage. High correlations were observed between the percent diseased leaf area estimated by Scion Image analysis and the percent diseased leaf area from leaf drawings. These drawings of various foliar diseases came from a disease key previously developed to aid in visual estimation of disease severity. For leaves of Nicotiana benthamiana inoculated with different spore concentrations of the anthracnose fungus Colletotrichum destructivum, a high correlation was found between the percent diseased tissue measured by Scion Image analysis and the number of leaf spots. The method was adapted to quantify percent diseased leaf area ranging from 0 to 90% for anthracnose of lily-of-the-valley, apple scab, powdery mildew of phlox and rust of golden rod. In some cases, the brightness and contrast of the images were adjusted and other modifications were made, but these were standardized for each disease. Detached leaves were used with the flatbed scanner, but a method using attached leaves with a digital camera was also developed to make serial measurements of individual leaves to quantify symptom progression. This was successfully applied to monitor anthracnose on N. benthamiana leaves. Digital image analysis using Scion Image software is a useful tool for quantifying a wide variety of fungal interactions with plant leaves.",
"title": ""
},
{
"docid": "1bb7922806f921cd820b65982f0a5a74",
"text": "Web Search is one of the most rapidly growing applications on the internet today. However, the current practice followed by most search engines – of logging and analyzing users’ queries – raises serious privacy concerns. In this paper, we concentrate on two existing solutions which are relatively easy to deploy – namely Query Obfuscation and Anonymizing Networks. In query obfuscation, a client-side software attempts to mask real user queries via injection of certain noisy queries. Anonymizing networks route the user queries through a series of relay servers, hiding the actual query source from the search engine. A fundamental problem with these solutions, however, is that user queries are still obviously revealed to the search engine, although they are “mixed” among queries generated either by a machine or by other users. We focus on TrackMeNot (TMN), a popular query obfuscation tool, and the Tor anonymizing network, and try to analyse whether these solutions can actually preserve users’ privacy in practice against an adversarial search engine. We demonstrate that a search engine, equipped with only a short-term history of a user’s search queries, can break the privacy guarantees of TMN and Tor by only utilizing off-the-shelf machine learning techniques.",
"title": ""
},
{
"docid": "40773627971f35b0af1e5f8d325e8118",
"text": "This tutorial covers the Dirichlet distribution, Dirichlet process, Pólya urn (and the associated Chinese restaurant process), hierarchical Dirichlet Process, and the Indian buffet process. Apart from basic properties, we describe and contrast three methods of generating samples: stick-breaking, the Pólya urn, and drawing gamma random variables. For the Dirichlet process we first present an informal introduction, and then a rigorous description for those more comfortable with probability theory.",
"title": ""
},
{
"docid": "abda350daca4705e661d8e59a6946e08",
"text": "Concept definition is important in language understanding (LU) adaptation since literal definition difference can easily lead to data sparsity even if different data sets are actually semantically correlated. To address this issue, in this paper, a novel concept transfer learning approach is proposed. Here, substructures within literal concept definition are investigated to reveal the relationship between concepts. A hierarchical semantic representation for concepts is proposed, where a semantic slot is represented as a composition of atomic concepts. Based on this new hierarchical representation, transfer learning approaches are developed for adaptive LU. The approaches are applied to two tasks: value set mismatch and domain adaptation, and evaluated on two LU benchmarks: ATIS and DSTC 2&3. Thorough empirical studies validate both the efficiency and effectiveness of the proposed method. In particular, we achieve state-ofthe-art performance (F1-score 96.08%) on ATIS by only using lexicon features.",
"title": ""
},
{
"docid": "fde3c86c90cabfb6e35ec1310b62a8de",
"text": "The LSDSem’17 shared task is the Story Cloze Test, a new evaluation for story understanding and script learning. This test provides a system with a four-sentence story and two possible endings, and the system must choose the correct ending to the story. Successful narrative understanding (getting closer to human performance of 100%) requires systems to link various levels of semantics to commonsense knowledge. A total of eight systems participated in the shared task, with a variety of approaches including end-to-end neural networks, feature-based regression models, and rule-based methods. The highest performing system achieves an accuracy of 75.2%, a substantial improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "3542be5328af88e269222fbffb535187",
"text": "Information technology has tremendously stimulated expansion of the banking networks and range of the offered services during recent years. The information technology has become a critical business resource because its absence could result in poor decisions and ultimately business failure. This study intends to find out the information technology influence on accounting information production in the Nigerian banking industry. Both primary and secondary data were used and Analysis of Variance (ANOVA) was used to test the hypothesis. Judgmental sampling method was used to obtain a representative sample of the population. Although for all Nigerian banks the efficiency has increased, the improvement of cost of efficiency is relatively much smaller than in the case of profit efficiency. It is also observed that accounting information technology can improve banks performance by reducing operational cost and by facilitating transactions among customers within the same or different network. It is, therefore, concludedthat accounting information technology is relevant in simplifying issues and in the provision of quality information in the Nigerian banking industry. That explains why the banks spend a greater part of their resources on information technology and consider its application as a comparative edge in the competitive banking industry. This paper recommends that the impact of the progress in accounting information technology on banking service should not lead to a very strong increase of cost of their processing, which put in question possibility to achieve economy of scale by Nigerian banks. Also all Nigerian banks should continue to utilize and upgrade their information technology for efficient service delivery and profitability.",
"title": ""
},
{
"docid": "ccb5a426e9636186d2819f34b5f0d5e8",
"text": "MOTIVATION\nThe discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge on individual relations like e.g. physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases.\n\n\nRESULTS\nWe developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx on a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted approximately 150,000 relations with an estimated performance of both 80% precision and 80% recall.\n\n\nAVAILABILITY\nThe used natural language preprocessing tools are free for use for academic research. Test sets and relation term lists are available from our website (http://www.bio.ifi.lmu.de/publications/RelEx/).",
"title": ""
},
{
"docid": "9841491b497a821a86c0d381380d7ce8",
"text": "Recently the progress in technology and flourishing applications open up new forecast and defy for the image and video processing community. Compared to still images, video sequences afford more information about how objects and scenarios change over time. Quality of video is very significant before applying it to any kind of processing techniques. This paper deals with two major problems in video processing they are noise reduction and object segmentation on video frames. The segmentation of objects is performed using foreground segmentation based and fuzzy c-means clustering segmentation is compared with the proposed method Improvised fuzzy c – means segmentation based on color. This was applied in the video frame to segment various objects in the current frame. The proposed technique is a powerful method for image segmentation and it works for both single and multiple feature data with spatial information. The experimental result was conducted using various noises and filtering methods to show which is best suited among others and the proposed segmentation approach generates good quality segmented frames.",
"title": ""
},
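The record above compares fuzzy c-means variants for frame segmentation. Below is a minimal, generic fuzzy c-means sketch over pixel colors in plain NumPy; it is not the paper's improvised color-based variant, and the frame data is a random placeholder.

```python
# Generic fuzzy c-means over pixel colors (illustrative only; not the paper's improved variant).
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, m=2.0, n_iter=50, eps=1e-9, seed=0):
    """pixels: (N, 3) array of RGB values; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, len(pixels)))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = um @ pixels / um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(pixels[None, :, :] - centers[:, None, :], axis=2) + eps
        u = 1.0 / (dist ** (2.0 / (m - 1)))
        u /= u.sum(axis=0)
    return centers, u

# Hypothetical video frame: 120x160 RGB values in [0, 1].
frame = np.random.default_rng(1).random((120, 160, 3))
centers, u = fuzzy_c_means(frame.reshape(-1, 3))
labels = u.argmax(axis=0).reshape(frame.shape[:2])   # hard segmentation of the frame
```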
{
"docid": "6aa5b9ffcbecb624224ac0d8153ffcc8",
"text": "The successful implementation of new technologies is dependent on many factors including the efficient management of human resources. Furthermore, recent research indicates that intellectual assets and resources can be utilised much more efficiently and effectively if organisations apply knowledge management techniques for leveraging their human resources and enhancing their personnel management. The human resources departments are well positioned to ensure the success of knowledge management programs, which are directed at capturing, using and re-using employees' knowledge. Through human resources management a culture that encourages the free flow of knowledge for meeting organisational goals can be created. The strategic role of the human resources department in identifying strategic and knowledge gaps using knowledge mapping is discussed in this paper. In addition, the drivers and implementation strategies for knowledge management programs are proposed.",
"title": ""
}
] |
scidocsrr
|
2dbb1dd83109da6d38dfe5fc36f7aaa2
|
CRIME MAPPING AND THE TRAINING NEEDS OF LAW ENFORCEMENT
|
[
{
"docid": "62d21ddba64df488fc82e9558f2afc99",
"text": "The spatial analysis of crime and the current focus on hotspots has pushed the area of crime mapping to the fore, especially in regard to high volume offences such as vehicle theft and burglary. Hotspots also have a temporal component, yet police recorded crime databases rarely record the actual time of offence as this is seldom known. Police crime data tends, more often than not, to reflect the routine activities of the victims rather than the offence patterns of the offenders. This paper demonstrates a technique that uses police START and END crime times to generate a crime occurrence probability at any given time that can be mapped or visualized graphically. A study in the eastern suburbs of Sydney, Australia, demonstrates that crime hotspots with a geographical proximity can have distinctly different temporal patterns.",
"title": ""
}
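The crime-mapping record above builds a time-of-offence probability from police START and END times. A minimal aoristic-style sketch of that idea is shown below; the event intervals are invented examples, not data from the Sydney study.

```python
# Aoristic-style temporal weighting: spread each offence's probability evenly
# over the hours between its START and END report times (hypothetical events).
import numpy as np

events = [(22, 7), (1, 5), (23, 2), (9, 17)]   # (start_hour, end_hour), wrapping midnight

hour_prob = np.zeros(24)
for start, end in events:
    span = (end - start) % 24 or 24            # number of hours the offence could have occurred in
    weight = 1.0 / span                        # each event contributes total probability 1
    for h in range(span):
        hour_prob[(start + h) % 24] += weight

hour_prob /= hour_prob.sum()                   # normalise to a probability over the day
peak = int(hour_prob.argmax())
print(f"Most likely offence hour: {peak}:00")
```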
] |
[
{
"docid": "9b6191f96f096035429583e8799a2eb2",
"text": "Recognition of food images is challenging due to their diversity and practical for health care on foods for people. In this paper, we propose an automatic food image recognition system for 85 food categories by fusing various kinds of image features including bag-of-features~(BoF), color histogram, Gabor features and gradient histogram with Multiple Kernel Learning~(MKL). In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. In the experiment, we have achieved the 62.52% classification rate for 85 food categories.",
"title": ""
},
{
"docid": "62ca2853492b017a052b9bf5e9b955ff",
"text": "This paper describes our attempt to build a sentiment analysis system for Indonesian tweets. With this system, we can study and identify sentiments and opinions in a text or document computationally. We used four thousand manually labeled tweets collected in February and March 2016 to build the model. Because of the variety of content in tweets, we analyze tweets into eight groups in total, including pos(itive), neg(ative), and neu(tral). Finally, we obtained 73.2% accuracy with Long Short Term Memory (LSTM) without normalizer.",
"title": ""
},
{
"docid": "dd06708ab6f67287e213bdb7b4436491",
"text": "Here we present the design of a passive-dynamics based, fully autonomous, 3-D, bipedal walking robot that uses simple control, consumes little energy, and has human-like morphology and gait. Design aspects covered here include the freely rotating hip joint with angle bisecting mechanism; freely rotating knee joints with latches; direct actuation of the ankles with a spring, release mechanism, and reset motor; wide feet that are shaped to aid lateral stability; and the simple control algorithm. The biomechanics context of this robot is discussed in more detail in [1], and movies of the robot walking are available at Science Online and http://www.tam.cornell.edu/~ruina/powerwalk.html. This robot adds evidence to the idea that passive-dynamic approaches might help design walking robots that are simpler, more efficient and easier to control.",
"title": ""
},
{
"docid": "2afd6afa18653ab234533bc99db0b4d8",
"text": "Autophagy is a lysosomal degradation pathway that is essential for survival, differentiation, development, and homeostasis. Autophagy principally serves an adaptive role to protect organisms against diverse pathologies, including infections, cancer, neurodegeneration, aging, and heart disease. However, in certain experimental disease settings, the self-cannibalistic or, paradoxically, even the prosurvival functions of autophagy may be deleterious. This Review summarizes recent advances in understanding the physiological functions of autophagy and its possible roles in the causation and prevention of human diseases.",
"title": ""
},
{
"docid": "1dad20d7f19e20945e9ad28aa5a70d93",
"text": "Article history: Received 3 January 2016 Received in revised form 9 June 2017 Accepted 26 September 2017 Available online 16 October 2017",
"title": ""
},
{
"docid": "809046f2f291ce610938de209d98a6f2",
"text": "Pregnancy loss before 20 weeks’ gestation without outside intervention is termed spontaneous abortion and may be encountered in as many as 20% of clinically diagnosed pregnancies.1 It is said to be complete when all products of conception are expelled, the uterus is in a contracted state, and the cervix is closed. On the other hand, retention of part of products of conception inside the uterus, cervix, or vagina results in incomplete abortion. Although incomplete spontaneous miscarriages are commonly encountered in early pregnancy,2 traumatic fetal decapitation has not been mentioned in the medical literature as a known complication of spontaneous abortion. We report an extremely rare and unusual case of traumatic fetal decapitation due to self-delivery during spontaneous abortion in a 26-year-old woman who presented at 15 weeks’ gestation with gradually worsening vaginal bleeding and lower abdominal pain and with the fetal head still lying in the uterine cavity. During our search for similar cases, we came across just 1 other case report describing traumatic fetal decapitation after spontaneous abortion,3 although there are reports of fetal decapitation from amniotic band syndrome, vacuum-assisted deliveries, and destructive operations.4–8 A 26-year-old woman, gravida 2, para 0, presented to the emergency department with vaginal bleeding and cramping pain in her lower abdomen, both of which had gradually increased in severity over the previous 2 days. Her pulse and blood pressure were 86 beats per minute and 100/66 mm Hg, respectively, and her respiratory rate was 26 breaths per minute. She had a high-grade fever; her temperature was 103°F (39.4°C), recorded orally. There was suprapubic tenderness on palpation. About 8 or 9 days before presentation, she had severe pain in the lower abdomen, followed by vaginal bleeding. She gave a history of passing brown to black clots, one of which was particularly large, and she had to pull it out herself as if it was stuck. It resembled “an incomplete very small baby” in her own words. Although not sure, she could not make out the head of the “baby,” although she could appreciate the limbs and trunk. Thereafter, the bleeding gradually decreased over the next 2 days, but her lower abdominal pain persisted. However, after 1 day, she again started bleeding, and her pain increased in intensity. Meanwhile she also developed fever. She gave a history of recent cocaine use and alcohol drinking occasionally. No history of smoking was present. According to her last menstrual period, the gestational age was at 15 weeks, and during this pregnancy, she never had a sonographic examination. She reported taking a urine test for pregnancy at home 4 weeks before, which showed positive results. She gave a history of being pregnant 11⁄2 years before. At that time, also, she aborted spontaneously at 9 weeks’ gestation. No complications were seen at that time. She resumed her menses normally after about 2 months and was regular until 3 months back. The patient was referred for emergency sonography, which revealed that the fetal head was lying in the uterine cavity (Figure 1, A and B) along with the presence of fluid/ hemorrhage in the cervix and upper vagina (Figure 1C). No other definite fetal part could be identified. The placenta was also seen in the uterine cavity, and it was upper anterior and fundic (Figure 1D). No free fluid in abdomen was seen. 
Subsequently after stabilization, the patient underwent dilation and evacuation and had an uneventful postoperative course. As mentioned earlier, traumatic fetal decapitation accompanying spontaneous abortion is a very rare occurrence; we came across only 1 other case3 describing similar findings. Patients presenting to the emergency department with features suggestive of abortion, whether threatened, incomplete, or complete, should be thoroughly evaluated by both pelvic and sonographic examinations to check for any retained products of conception with frequent followups in case of threatened or incomplete abortions.",
"title": ""
},
{
"docid": "77951641fea1115aae1bafcd589dfb7e",
"text": "We provide an overview of current approaches to DNA-based storage system design and of accompanying synthesis, sequencing and editing methods. We also introduce and analyze a suite of new constrained coding schemes for both archival and random access DNA storage channels. The analytic contribution of our work is the construction and design of sequences over discrete alphabets that avoid pre-specified address patterns, have balanced base content, and exhibit other relevant substring constraints. These schemes adapt the stored signals to the DNA medium and thereby reduce the inherent error-rate of the system.",
"title": ""
},
{
"docid": "cd89ebdc0fe3cf878b616b3be2819506",
"text": "Water is a scarce resource worldwide. Yet, we have many opportunities to conserve it. One particular opportunity for water conservation is the shower, because depending on the shower head and shower habits, an individual can save many liters of fresh water each day. Feedback proved to be an effective method to promote sustainable behavior. Therefore, in this paper we suggest to promote water conservation by providing feedback in form of an ambient display that can easily be integrated in current shower types. We built a prototype to study the potential of such a feedback device. These shower water meter (show-me) display the amount of water, that is used during one shower in form of LEDs assembled on a stick. Thus, an increasing water level is visualized. The user study revealed two groups. The subjects who considered themselves as ecologically conscious changed their behavior and turned the water down or off while soaping. Also, they are willing to pursue this behavior. Other subjects who did not have the goal to act more sustainable, were surprised about their water consumption and tried to reduce it. However, after the removal of the show-me device they did not maintain their behavior and fell back into their previous habit.",
"title": ""
},
{
"docid": "611f7b5564c9168f73f778e7466d1709",
"text": "A fold-back current-limit circuit, with load-insensitive quiescent current characteristic for CMOS low dropout regulator (LDO), is proposed in this paper. This method has been designed in 0.35 µm CMOS technology and verified by Hspice simulation. The quiescent current of the LDO is 5.7 µA at 100-mA load condition. It is only 2.2% more than it in no-load condition, 5.58 µA. The maximum current limit is set to be 197 mA, and the short-current limit is 77 mA. Thus, the power consumption can be saved up to 61% at the short-circuit condition, which also decreases the risk of damaging the power transistor. Moreover, the thermal protection can be simplified and the LDO will be more reliable.",
"title": ""
},
{
"docid": "309a20834f17bd87e10f8f1c051bf732",
"text": "Tamper-resistant cryptographic processors are becoming the standard way to enforce data-usage policies. Their origins lie with military cipher machines and PIN processing in banking payment networks, expanding in the 1990s into embedded applications: token vending machines for prepayment electricity and mobile phone credit. Major applications such as GSM mobile phone identification and pay TV set-top boxes have pushed low-cost cryptoprocessors toward ubiquity. In the last five years, dedicated crypto chips have been embedded in devices such as game console accessories and printer ink cartridges, to control product and accessory after markets. The \"Trusted Computing\" initiative will soon embed cryptoprocessors in PCs so they can identify each other remotely. This paper surveys the range of applications of tamper-resistant hardware and the array of attack and defense mechanisms which have evolved in the tamper-resistance arms race.",
"title": ""
},
{
"docid": "8c31d750a503929a0776ae3b1e1d9f41",
"text": "Topic segmentation and labeling is often considered a prerequisite for higher-level conversation analysis and has been shown to be useful in many Natural Language Processing (NLP) applications. We present two new corpora of email and blog conversations annotated with topics, and evaluate annotator reliability for the segmentation and labeling tasks in these asynchronous conversations. We propose a complete computational framework for topic segmentation and labeling in asynchronous conversations. Our approach extends state-of-the-art methods by considering a fine-grained structure of an asynchronous conversation, along with other conversational features by applying recent graph-based methods for NLP. For topic segmentation, we propose two novel unsupervised models that exploit the fine-grained conversational structure, and a novel graph-theoretic supervised model that combines lexical, conversational and topic features. For topic labeling, we propose two novel (unsupervised) random walk models that respectively capture conversation specific clues from two different sources: the leading sentences and the fine-grained conversational structure. Empirical evaluation shows that the segmentation and the labeling performed by our best models beat the state-of-the-art, and are highly correlated with human annotations.",
"title": ""
},
{
"docid": "7b28505834de4346ef3c43e77a9444d6",
"text": "With the development of the modern aircraft, the large-scale thin walled parts have been used in aeronautics and astronautics. In NC milling process, the thin walled plates are very easy to deform which will influence the accuracy and quality. From the point of view of theoretically and numerical calculation, the paper proposes a new analytical deformation model suitable for static machining error prediction of low- rigidity components. The part deformation is predicted using a theoretical big deformation equations model, which is established on the basis of the equations of Von Karman when the linear load acts on thin-wall plates. The part big deformation is simulated using FE analysis. The simulating results shown that the diverse cutting forces, milling location and thickness of the plate may lead to various deformation results.",
"title": ""
},
{
"docid": "b44bf94943c26933b1d3cbab84c539f9",
"text": "2004;25;194 Pediatrics in Review David S. Rosen Physiologic Growth and Development During Adolescence http://pedsinreview.aappublications.org/content/25/6/194 located on the World Wide Web at: The online version of this article, along with updated information and services, is http://pedsinreview.aappublications.org/content/suppl/2005/01/26/25.6.194.DC1.html Data Supplement (unedited) at: Pediatrics. All rights reserved. Print ISSN: 0191-9601. Boulevard, Elk Grove Village, Illinois, 60007. Copyright © 2004 by the American Academy of published, and trademarked by the American Academy of Pediatrics, 141 Northwest Point publication, it has been published continuously since 1979. Pediatrics in Review is owned, Pediatrics in Review is the official journal of the American Academy of Pediatrics. A monthly",
"title": ""
},
{
"docid": "e95b4393e8b3a72723c123d13be5b76b",
"text": "On-road vehicle detection is a critical operation in automotive active safety systems such as collision avoidance, merge assist, lane change assistance, etc. In this paper, we present VeDAS-Vehicle Detection using Active learning and Symmetry. VeDAS is a multipart-based vehicle detection algorithm that employs Haar-like features and Adaboost classifiers for the detection of fully and partially visible rear views of vehicles. In order to train the classifiers, a modified active learning framework is proposed that selects positive and negative samples of multiple parts in an automated manner. Furthermore, the detected parts from the classifiers are associated by using a novel iterative window search algorithm and a symmetry-based regression model to extract fully visible vehicles. The proposed method is evaluated on seven different datasets that capture varying road, traffic, and weather conditions. Detailed evaluations show that the proposed method gives high true positive rates of over 95% and performs better than existing state-of-the-art rear-view-based vehicle detection methods. Additionally, VeDAS also detects partially visible rear views of vehicles using the residues left behind after detecting the fully visible vehicles. VeDAS is able to detect partial rear views with a detection rate of 87% on a new partially visible rear-view vehicle dataset that we release as part of this paper.",
"title": ""
},
{
"docid": "6bacccbba6bbb4a8d0b6c1de25399fef",
"text": "We propose a novel method to estimate a unique and repeatable reference frame in the context of 3D object recognition from a single viewpoint based on global descriptors. We show that the ability of defining a robust reference frame on both model and scene views allows creating descriptive global representations of the object view, with the beneficial effect of enhancing the spatial descriptiveness of the feature and its ability to recognize objects by means of a simple nearest neighbor classifier computed on the descriptor space. Moreover, the definition of repeatable directions can be deployed to efficiently retrieve the 6DOF pose of the objects in a scene. We experimentally demonstrate the effectiveness of the proposed method on a dataset including 23 scenes acquired with the Microsoft Kinect sensor and 25 full-3D models by comparing the proposed approach with state-of-the-art global descriptors. A substantial improvement is presented regarding accuracy in recognition and 6DOF pose estimation, as well as in terms of computational performance.",
"title": ""
},
{
"docid": "5eb4ba54e8f1288c8fa9222d664704b1",
"text": "Common Information Model (CIM) is widely adopted by many utilities since it offers interoperability through standard information models. Storing, processing, retrieving, and providing concurrent access of the large power network models to the various power system applications in CIM framework are the current challenges faced by utility operators. As the power network models resemble largely connected-data sets, the design of CIM oriented database has to support high-speed data retrieval of the connected-data and efficient storage for processing. The graph database is gaining wide acceptance for storing and processing of largely connected-data for various applications. This paper presents a design of CIM oriented graph database (CIMGDB) for storing and processing the largely connected-data of power system applications. Three significant advantages of the CIMGDB are efficient data retrieval and storage, agility to adapt dynamic changes in CIM profile, and greater flexibility of modeling CIM unified modeling language (UML) in GDB. The CIMGDB does not need a predefined database schema. Therefore, the CIM semantics needs to be added to the artifacts of GDB for every instance of CIM objects storage. A CIM based object-graph mapping methodology is proposed to automate the process. An integration of CIMGDB and power system applications is discussed by an implementation architecture. The data-intensive network topology processing (NTP) is implemented, and demonstrated for six IEEE test networks and one practical 400 kV Maharashtra network. Results such as computation time of executing network topology processing evaluate the performance of the CIMGDB.",
"title": ""
},
{
"docid": "22e21aab5d41c84a26bc09f9b7402efa",
"text": "Skeem for their thoughtful comments and suggestions.",
"title": ""
},
{
"docid": "4a817638751fdfe46dfccc43eea76cbd",
"text": "In this article we present a classification scheme for quantum computing technologies that is based on the characteristics most relevant to computer systems architecture. The engineering trade-offs of execution speed, decoherence of the quantum states, and size of systems are described. Concurrency, storage capacity, and interconnection network topology influence algorithmic efficiency, while quantum error correction and necessary quantum state measurement are the ultimate drivers of logical clock speed. We discuss several proposed technologies. Finally, we use our taxonomy to explore architectural implications for common arithmetic circuits, examine the implementation of quantum error correction, and discuss cluster-state quantum computation.",
"title": ""
},
{
"docid": "5e058857f04db407605212a3d21358ae",
"text": "False sharing is an insidious problem for multithreaded programs running on multicore processors, where it can silently degrade performance and scalability. Previous tools for detecting false sharing are severely limited: they cannot distinguish false sharing from true sharing, have high false positive rates, and provide limited assistance to help programmers locate and resolve false sharing.\n This paper presents two tools that attack the problem of false sharing: Sheriff-Detect and Sheriff-Protect. Both tools leverage a framework we introduce here called Sheriff. Sheriff breaks out threads into separate processes, and exposes an API that allows programs to perform per-thread memory isolation and tracking on a per-page basis. We believe Sheriff is of independent interest.\n Sheriff-Detect finds instances of false sharing by comparing updates within the same cache lines by different threads, and uses sampling to rank them by performance impact. Sheriff-Detect is precise (no false positives), runs with low overhead (on average, 20%), and is accurate, pinpointing the exact objects involved in false sharing. We present a case study demonstrating Sheriff-Detect's effectiveness at locating false sharing in a variety of benchmarks.\n Rewriting a program to fix false sharing can be infeasible when source is unavailable, or undesirable when padding objects would unacceptably increase memory consumption or further worsen runtime performance. Sheriff-Protect mitigates false sharing by adaptively isolating shared updates from different threads into separate physical addresses, effectively eliminating most of the performance impact of false sharing. We show that Sheriff-Protect can improve performance for programs with catastrophic false sharing by up to 9×, without programmer intervention.",
"title": ""
}
] |
scidocsrr
|
aadecff47e9f25ef2d2dbb889c650cfd
|
Design of Compact High-Isolation Four-Way Power Combiners
|
[
{
"docid": "762a8f8ad20799be02078a02b9bafd27",
"text": "A Ka-band broadband traveling-wave power divider based on low-loss septum unsymmetrical E-plane T-junction series has been designed and fabricated. The high isolation, which guarantees the graceful degradation of a modular solid-state device system, is realized by the septum T-junction series. The attractive features of the proposed structure are easy fabrication and low loss. Moreover, a wide operation band of 30%. The simulated isolation and reflection of the outputs are better than 20 dB and 15 dB respectively from 28 GHz to 38 GHz. The measured return loss of input port is better than 19 dB and a maximum transmission coefficient amplitude imbalance of ±1dB is achieved.",
"title": ""
}
] |
[
{
"docid": "98c64622f9a22f89e3f9dd77c236f310",
"text": "After a development process of many months, the TLS 1.3 specification is nearly complete. To prevent past mistakes, this crucial security protocol must be thoroughly scrutinised prior to deployment. In this work we model and analyse revision 10 of the TLS 1.3 specification using the Tamarin prover, a tool for the automated analysis of security protocols. We specify and analyse the interaction of various handshake modes for an unbounded number of concurrent TLS sessions. We show that revision 10 meets the goals of authenticated key exchange in both the unilateral and mutual authentication cases. We extend our model to incorporate the desired delayed client authentication mechanism, a feature that is likely to be included in the next revision of the specification, and uncover a potential attack in which an adversary is able to successfully impersonate a client during a PSK-resumption handshake. This observation was reported to, and confirmed by, the IETF TLS Working Group. Our work not only provides the first supporting evidence for the security of several complex protocol mode interactions in TLS 1.3, but also shows the strict necessity of recent suggestions to include more information in the protocol's signature contents.",
"title": ""
},
{
"docid": "75321b85809e5954e78675c8827fefd5",
"text": "Text annotations are of great use to researchers in the language sciences, and much effort has been invested in creating annotated corpora for an wide variety of purposes. Unfortunately, software support for these corpora tends to be quite limited: it is usually ad-hoc, poorly designed and documented, or not released for public use. I describe an annotation tool, the Story Workbench, which provides a generic platform for text annotation. It is free, open-source, cross-platform, and user friendly. It provides a number of common text annotation operations, including representations (e.g., tokens, sentences, parts of speech), functions (e.g., generation of initial annotations by algorithm, checking annotation validity by rule, fully manual manipulation of annotations) and tools (e.g., distributing texts to annotators via version control, merging doubly-annotated texts into a single file). The tool is extensible at many different levels, admitting new representations, algorithm, and tools. I enumerate ten important features and illustrate how they support the annotation process at three levels: (1) annotation of individual texts by a single annotator, (2) double-annotation of texts by two annotators and an adjudicator, and (3) annotation scheme development. The Story Workbench is scheduled for public release in March 2012. Text annotations are of great use to researchers in the language sciences: a large fraction of that work relies on annotated data to build, train, or test their systems. Good examples are the Penn Treebank, which catalyzed work in developing statistical syntactic parsers, and PropBank, which did the same for semantic role labeling. It is not an exaggeration to say that annotated corpora are a central resource for these fields, and are only growing in importance. Work on narrative shares many of the same problems, and as a consequence has much to gain from advances in language annotation tools and techniques. Despite the importance of annotated data, there remains a missing link: software support is not given nearly the same amount of attention as the annotations themselves. Researchers usually release only the data; if they release any tools at all, they are usually ad-hoc, poorly designed and Copyright c © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. documented, or just not released for public use. Tools do not build on one another. The language sciences need to move to a standard where, if annotated data is released, software for accessing and creating the data are released as a matter of course. Researchers should prepare for it, reviewers should demand it, and readers should expect it. One way of facilitating this is to lower the barrier for creating tools. Many of the phases of the annotation cycle are the same no matter what sort of annotation you are doing a freely available tool, or suite of tools, to support these phases would go a long way. I describe the Story Workbench (Finlayson 2008), a major step toward just such a tool suite. The Story Workbench is free, open-source, extensible, cross-platform, and user friendly. It is a working piece of software, having been in beta testing for over three years, with a public release scheduled for March 2012. It has been used by more than 12 annotators to annotate over 100k words across 17 representations. 
Two corpora have been created so far with it: the UMIREC corpus (Hervas and Finlayson 2010) comprising 25k words of news and folktales annotated for referring expression structure, and 18k words of Russian folktales annotated in all 17 different representations. The Story Workbench is especially interesting to researchers working on narrative. Understanding a narrative requires not just one representation, not just two, but a dozen or more. The Story Workbench was created specifically to overcome that problem, but is now finding application beyond the realm of narrative research. In particular, in the next section I describe three phases of the annotation process; many, if not most, annotation studies move through these phases. In the next section I enumerate some of the more important features of the Story Workbench, and show how these support the phases. Three Loops of the Annotation Process Conceptually, the process of producing a gold-standard annotated corpus can be split into at least three nested loops. In the widest, top-most loop the researchers design and vet the annotation scheme and annotation tool; embedded therein is the middle loop, where annotation teams produce goldannotated texts; embedded within that is the loop of the individual annotator working on individual texts. These nested loops are illustrated in Figure 1. 21 AAAI Technical Report WS-11-18",
"title": ""
},
{
"docid": "d6e565c0123049b9e11692b713674ccf",
"text": "Now days many research is going on for text summari zation. Because of increasing information in the internet, these kind of research are gaining more a nd more attention among the researchers. Extractive text summarization generates a brief summary by extracti ng proper set of sentences from a document or multi ple documents by deep learning. The whole concept is to reduce or minimize the important information prese nt in the documents. The procedure is manipulated by Rest rict d Boltzmann Machine (RBM) algorithm for better efficiency by removing redundant sentences. The res tricted Boltzmann machine is a graphical model for binary random variables. It consist of three layers input, hidden and output layer. The input data uni formly distributed in the hidden layer for operation. The experimentation is carried out and the summary is g enerated for three different document set from different kno wledge domain. The f-measure value is the identifie r to the performance of the proposed text summarization meth od. The top responses of the three different knowle dge domain in accordance with the f-measure are 0.85, 1 .42 and 1.97 respectively for the three document se t.",
"title": ""
},
{
"docid": "7dd9a917fc731dd0437626a9d8dfe53c",
"text": "With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.",
"title": ""
},
{
"docid": "07a1fca1b738cb550a7f384bd3e8de23",
"text": "American Library Association /ALA/ American Library Directory bibliographic record bibliography binding blanket order",
"title": ""
},
{
"docid": "af382dbadec10d8480b09e51503071d2",
"text": "Scent communication plays a central role in the mating behavior of many nonhuman mammals but has often been overlooked in the study of human mating. However, a growing body of evidence suggests that men may perceive women's high-fertility body scents (collected near ovulation) as more attractive than their low-fertility body scents. The present study provides a methodologically rigorous replication of this finding, while also examining several novel questions. Women collected samples of their natural body scent twice--once on a low-fertility day and once on a high-fertility day of the ovulatory cycle. Tests of luteinizing hormone confirmed that women experienced ovulation within two days of their high-fertility session. Men smelled each woman's high- and low-fertility scent samples and completed discrimination and preference tasks. At above-chance levels, men accurately discriminated between women's high- and low-fertility scent samples (61%) and chose women's high-fertility scent samples as more attractive than their low-fertility scent samples (56%). Men also rated each scent sample on sexiness, pleasantness, and intensity, and estimated the physical attractiveness of the woman who had provided the sample. Multilevel modeling revealed that, when high- and low-fertility scent samples were easier to discriminate from each other, high-fertility scent samples received even more favorable ratings compared with low-fertility scent samples. This study builds on a growing body of evidence indicating that men are attracted to cues of impending ovulation in women and raises the intriguing question of whether women's cycling hormones influence men's attraction and sexual approach behavior.",
"title": ""
},
{
"docid": "587ee07095b4bd1189e3bb0af215fa95",
"text": "This paper discusses dynamic factor analysis, a technique for estimating common trends in multivariate time series. Unlike more common time series techniques such as spectral analysis and ARIMA models, dynamic factor analysis can analyse short, non-stationary time series containing missing values. Typically, the parameters in dynamic factor analysis are estimated by direct optimisation, which means that only small data sets can be analysed if computing time is not to become prohibitively long and the chances of obtaining sub-optimal estimates are to be avoided. This paper shows how the parameters of dynamic factor analysis can be estimated using the EM algorithm, allowing larger data sets to be analysed. The technique is illustrated on a marine environmental data set.",
"title": ""
},
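The dynamic factor analysis described in the record above is commonly written as a state-space model. The form below is the standard generic DFA formulation (common trends as random walks plus a factor-loading observation equation), included as an illustration rather than a transcription of the paper's exact notation.

```latex
% Standard dynamic factor analysis state-space form (generic illustration).
\[
\begin{aligned}
\mathbf{x}_t &= \mathbf{x}_{t-1} + \mathbf{w}_t, & \mathbf{w}_t &\sim \mathcal{N}(\mathbf{0}, \mathbf{Q}) && \text{(common trends as random walks)}\\
\mathbf{y}_t &= \mathbf{Z}\,\mathbf{x}_t + \mathbf{a} + \mathbf{v}_t, & \mathbf{v}_t &\sim \mathcal{N}(\mathbf{0}, \mathbf{R}) && \text{(observations = loadings times trends)}
\end{aligned}
\]
```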
{
"docid": "d5a7b2c027679d016c7c1ed128e48fd8",
"text": "Figure 3: Example of phase correlation between two microphones. The peak of this function indicates the inter-channel delay. index associated with peak value of f(t). This delay estimator is computationally convenient and more robust to noise and reverberation than other approaches based on cross-correlation or adaptive ltering. In ideal conditions, the output of Equation (5) is a delta function centered on the correct delay. In real applications with a wide band signal, e.g., a speech signal, the outcome is not a perfect delta function. Rather it resembles a correlation function of a random process. The time index associated with the maximum value of the output of Equation (5) provides an estimation of the delay. The system can produce wrong answers when two or more peaks of similar amplitude are present, i.e., in highly reverber-ant conditions. The resolution in delay estimation is limited in discrete systems by the sampling frequency. In order to increase the accuracy, oversampling can be applied in the neighborhood of the peak, to achieve sub-sample precision. Fig. 3 demonstrates an example of the result of a cross-power spectrum time delay estimator. Once the relative delays associated with all considered microphone pairs are known, the source position (x s ; y s) is estimated as the point that would produce the most similar delay values to the observed ones. This optimization is performed by a downhill sim-plex algorithm 6] applied to minimize the Euclidean distance between M observed delays ^ i and the corresponding M theoretical delays i : An analysis of the impulse responses associated with all the microphones, given an acoustic source emitting at a speciic position, has shown that constructive interference phenomena occur in the presence of signiicant reverberation. In some cases, the direct wavefront happens to be weaker than a coincidence of reeections, inducing a wrong estimation of the arrival direction and leading to an incorrect result. Selecting only microphone pairs that show the highest peaks of phase correlation generally alleviates this problem. Location results obtained with this strategy show comparable performance (mean posi-Reverb. Time Average Error 10 mic pairs 4 mic pairs 0.1sec 38.4 cm 29.8 cm 0.6sec 51.3 cm 32.1 cm 1.7sec 105.0 cm 46.4 cm Table 1: Average location error using either all 10 pairs or 4 pairs of microphones. Three reverberation time conditions are considered. tion error of about 0.3 m) at reverberation times of 0.1 s and 0.6 s. …",
"title": ""
},
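The microphone-pair delay estimator described in the record above (cross-power spectrum phase, Equation (5) in the excerpt) can be sketched generically as GCC-PHAT. The signals below are synthetic and the implementation is an assumption-level illustration, not the paper's code.

```python
# Generalized cross-correlation with phase transform (GCC-PHAT) delay estimate
# on a synthetic signal pair; illustrative sketch, not the paper's implementation.
import numpy as np

fs = 16000                                   # sample rate (Hz), assumed
rng = np.random.default_rng(0)
true_delay = 23                              # samples; x2 is a delayed, noisy copy of x1
x1 = rng.standard_normal(4096)
x2 = np.roll(x1, true_delay) + 0.1 * rng.standard_normal(4096)

n = 2 * len(x1)                              # zero-pad so circular correlation acts like linear
X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
cross = X2 * np.conj(X1)
cross /= np.abs(cross) + 1e-12               # phase transform: keep only phase information
cc = np.fft.irfft(cross, n)
cc = np.concatenate((cc[-len(x1) + 1:], cc[:len(x1)]))   # re-center so index len(x1)-1 is zero lag

lag = np.argmax(np.abs(cc)) - (len(x1) - 1)  # positive lag means x2 lags x1
print(f"Estimated delay: {lag} samples ({lag / fs * 1e3:.2f} ms)")
```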
{
"docid": "7f5bc643261247c0f977130405c6440d",
"text": "In medical image analysis applications, the availability of the large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output into real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose to the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while being also anatomically consistent and displaying a reasonable visual quality.",
"title": ""
},
{
"docid": "597522575f1bc27394da2f1040e9eaa5",
"text": "Many natural language processing systems rely on machine learning models that are trained on large amounts of manually annotated text data. The lack of sufficient amounts of annotated data is, however, a common obstacle for such systems, since manual annotation of text is often expensive and time-consuming. The aim of “PAL, a tool for Pre-annotation and Active Learning” is to provide a ready-made package that can be used to simplify annotation and to reduce the amount of annotated data required to train a machine learning classifier. The package provides support for two techniques that have been shown to be successful in previous studies, namely active learning and pre-annotation. The output of the pre-annotation is provided in the annotation format of the annotation tool BRAT, but PAL is a stand-alone package that can be adapted to other formats.",
"title": ""
},
{
"docid": "4b9d994288fc555c89554cc2c7e41712",
"text": "The authors have been developing humanoid robots in order to develop new mechanisms and functions for a humanoid robot that has the ability to communicate naturally with a human by expressing human-like emotion. In 2004, we developed the emotion expression humanoid robot WE-4RII (Waseda Eye No.4 Refined II) by integrating the new humanoid robot hands RCH-I (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R. We confirmed that WE-4RII can effectively express its emotion.",
"title": ""
},
{
"docid": "48eacd86c14439454525e5a570db083d",
"text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.",
"title": ""
},
{
"docid": "94488dafad4441028a91d5802ec6e121",
"text": "Vulvovaginal atrophy is a common condition associated with decreased estrogenization of the vaginal tissue. Symptoms include vaginal dryness, irritation, itching, soreness, burning, dyspareunia, discharge, urinary frequency, and urgency. It can occur at any time in a woman's life cycle, although more commonly in the postmenopausal phase, during which the prevalence is approximately 50%. Despite the high prevalence and the substantial effect on quality of life, vulvovaginal atrophy often remains underreported and undertreated. This article aims to review the physiology, clinical presentation, assessment, and current recommendations for treatment, including aspects of effectiveness and safety of local vaginal estrogen therapies.",
"title": ""
},
{
"docid": "f1dbacae0f2b67555616bfc551e5a6ea",
"text": "The oscillating and swinging parts of a target observed by radar cause additional frequency modulation and induce sidebands in the target's Doppler frequency shift (micro-Doppler). This effect provides unique features for classification in radar systems. In this paper, the micro-Doppler spectra and range-Doppler matrices of single bird and bird flocks are obtained by simulations for linear FMCW radar. Obtained range-Doppler matrices are compared for single bird and bird flock under several scenarios and new features are proposed for classification.",
"title": ""
},
{
"docid": "903dc946b338c178634fcf9f14e1b1eb",
"text": "Detecting system anomalies is an important problem in many fields such as security, fault management, and industrial optimization. Recently, invariant network has shown to be powerful in characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariance network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detect causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible casual components, which have several limitations: (1) fault propagation in the network is ignored, (2) the root casual anomalies may not always be the nodes with a high percentage of vanishing correlations, (3) temporal patterns of vanishing correlations are not exploited for robust detection, and (4) prior knowledge on anomalous nodes are not exploited for (semi-)supervised detection. To address these limitations, in this article we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations and can compensate for unstructured measurement noise in the system. Moreover, when the prior knowledge on the anomalous status of some nodes are available at certain time points, our approach is able to leverage them to further enhance the anomaly inference accuracy. When the prior knowledge is noisy, our approach also automatically learns reliable information and reduces impacts from noises. By performing extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "061c67c967818b1a0ad8da55345c6dcf",
"text": "The paper aims at revealing the essence and connotation of Computational Thinking. It analyzed some of the international academia’s research results of Computational Thinking. The author thinks Computational Thinking is discipline thinking or computing philosophy, and it is very critical to understand Computational Thinking to grasp the thinking’ s computational features and the computing’s thinking attributes. He presents the basic rules of screening the representative terms of Computational Thinking and lists some representative terms based on the rules. He thinks Computational Thinking is contained in the commonalities of those terms. The typical thoughts of Computational Thinking are structuralization, formalization, association-and-interaction, optimization and reuse-and-sharing. Training Computational Thinking must base on the representative terms and the typical thoughts. There are three innovations in the paper: the five rules of screening the representative terms, the five typical thoughts and the formalized description of Computational Thinking.",
"title": ""
},
{
"docid": "5ce93a1c09b4da41f0cc920d5c7e6bdc",
"text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.",
"title": ""
},
{
"docid": "01a215b6e55fbb41c01d7443a814b8dc",
"text": "This study aims to gather and analyze published articles regarding the influence of electronic word-ofmouth (eWOM) on the hotel industry. Articles published in the last five years appearing in six different academically recognized journals of tourism have been reviewed in the present study. Analysis of these articles has identified two main lines of research: review-generating factors (previous factors that cause consumers to write reviews) and impacts of eWOM (impacts caused by online reviews) from consumer perspective and company perspective. A summary of each study’s description, methodology and main results are outlined below, as well as an analysis of findings. This study also seeks to facilitate understanding and provide baseline information for future articles related to eWOM and hotels with the intention that researchers have a “snapshot” of previous research and the results achieved to date. © 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "875b6d8155079460c89dec2a6fe75c5b",
"text": "1. Abstract The Common Lisp Object System is an object-oriented system that is based on the concepts of generic functions, multiple inheritance, and method combination. All objects in the Object System are instances of classes that form an extension to the Common Lisp type system. The Common Lisp Object System is based on a meta-object protocol that renders it possible to alter the fundamental structure of the Object System itself. The Common Lisp Object System has been proposed as a standard for ANSI Common Lisp and has been tentatively endorsed by X3J13. The Common Lisp Object System is an object-oriented programming paradigm designed for Common Lisp. The lack of a standardized object-oriented extension for Common Lisp has long been regarded as a shortcoming by the Common Lisp community. Two separate and independent groups began work on an object-oriented extension to Common Lisp several years ago. One group is Symbolics, Inc. with New Flavors, and the other is Xerox PARC with CommonLoops. During the summer of 1986, these two groups met to explore combining their designs for submission to X3J13, a technical working group charged with producing an ANSI standard for Common Lisp. At the time of the exploratory meetings between Symbolics and Xerox, the authors of this paper became involved in the technical design work. The major participants in this effort were David Moon and Sonya Keene from Symbolics, Daniel Bobrow and Gregor Kiczales from Xerox, and Richard Gabriel and Linda DeMichiel from Lucid.",
"title": ""
},
{
"docid": "af40c4fe439738a72ee6b476aeb75f82",
"text": "Object tracking is still a critical and challenging problem with many applications in computer vision. For this challenge, more and more researchers pay attention to applying deep learning to get powerful feature for better tracking accuracy. In this paper, a novel triplet loss is proposed to extract expressive deep feature for object tracking by adding it into Siamese network framework instead of pairwise loss for training. Without adding any inputs, our approach is able to utilize more elements for training to achieve more powerful feature via the combination of original samples. Furthermore, we propose a theoretical analysis by combining comparison of gradients and back-propagation, to prove the effectiveness of our method. In experiments, we apply the proposed triplet loss for three real-time trackers based on Siamese network. And the results on several popular tracking benchmarks show our variants operate at almost the same frame-rate with baseline trackers and achieve superior tracking performance than them, as well as the comparable accuracy with recent state-of-the-art real-time trackers.",
"title": ""
}
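The tracking record above adds a triplet loss on top of a Siamese matching network. The snippet below is a minimal NumPy version of a margin-based triplet loss on embedding vectors; the margin value and the random embeddings are assumptions for illustration, not the paper's training setup.

```python
# Margin-based triplet loss on embedding vectors (generic sketch, not the paper's exact formulation).
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Encourage d(anchor, positive) + margin <= d(anchor, negative)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(0)
a, p, n = (rng.standard_normal((8, 128)) for _ in range(3))   # hypothetical 128-d embeddings
print(f"loss = {triplet_loss(a, p, n):.3f}")
```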
] |
scidocsrr
|
c6acf2a4f84f17af6c7c08abf5c9b079
|
Object-Oriented Modeling and Coordination of Mobile Robots
|
[
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "d54168a9d8f10b43e24ff9d2cf87c2f0",
"text": "Mobile manipulators are of high interest to industry because of the increased flexibility and effectiveness they offer. The combination and coordination of the mobility provided by a mobile platform and of the manipulation capabilities provided by a robot arm leads to complex analytical problems for research. These problems can be studied very well on the KUKA youBot, a mobile manipulator designed for education and research applications. Issues still open in research include solving the inverse kinematics problem for the unified kinematics of the mobile manipulator, including handling the kinematic redundancy introduced by the holonomic platform of the KUKA youBot. As the KUKA youBot arm has only 5 degrees of freedom, a unified platform and manipulator system is needed to compensate for the missing degree of freedom. We present the KUKA youBot as an 8 degree of freedom serial kinematic chain, suggest appropriate redundancy parameters, and solve the inverse kinematics for the 8 degrees of freedom. This enables us to perform manipulation tasks more efficiently. We discuss implementation issues, present example applications and some preliminary experimental evaluation along with discussion about redundancies.",
"title": ""
}
] |
[
{
"docid": "e1a41e2c9ed279c0997c0ba87b8c2558",
"text": "Foot morphology and function has received increasing attention from both biomechanics researchers and footwear manufacturers. In this study, 168 habitually unshod runners (90 males whose age, weight & height were 23±2.4 years, 66±7.1 kg & 1.68±0.13 m and 78 females whose age, weight & height were 22±1.8 years, 55±4.7 kg & 1.6±0.11 m) (Indians) and 196 shod runners (130 males whose age, weight & height were 24±2.6 years, 66±8.2 kg & 1.72±0.18 m and 66 females whose age, weight & height were 23±1.5 years, 54±5.6 kg & 1.62±0.15 m) (Chinese) participated in a foot scanning test using the easy-foot-scan (a three-dimensional foot scanning system) to obtain 3D foot surface data and 2D footprint imaging. Foot length, foot width, hallux angle and minimal distance from hallux to second toe were calculated to analyze foot morphological differences. This study found that significant differences exist between groups (shod Chinese and unshod Indians) for foot length (female p = 0.001), width (female p = 0.001), hallux angle (male and female p = 0.001) and the minimal distance (male and female p = 0.001) from hallux to second toe. This study suggests that significant differences in morphology between different ethnicities could be considered for future investigation of locomotion biomechanics characteristics between ethnicities and inform last shape and design so as to reduce injury risks and poor performance from mal-fit shoes.",
"title": ""
},
{
"docid": "bfa05618da56c23cca87cd820c674fdf",
"text": "Mobile and location-based media refer to technologies that can openly and dynamically portray the characteristics of the users and their mundane life. Facebook check-ins highlights physical and informational mobility of the users relating individual activities into spaces. This study explored how personality traits like extraversion and narcissism function to influence self-disclosure that, in turn, impacts the intensity of check-ins on Facebook. Using survey data collected through Facebook check-in users in Taiwan (N 1⁄4 523), the results demonstrated that although extraversion and narcissism might not directly impact check-in intensity on Facebook, the indirect effects of selfdisclosure and exhibitionism were particularly salient. Moreover, a complete path from extraversion to Facebook check-in through self-disclosure and exhibitionism was discovered. Theoretical implications on human mobility and selective self-presentation are also discussed.",
"title": ""
},
{
"docid": "0939a703cb2eeb9396c4e681f95e1e4d",
"text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.",
"title": ""
},
{
"docid": "e6332552fb29765414020ee97184cc07",
"text": "In A History of God, Karen Armstrong describes a division, made by fourth century Christians, between kerygma and dogma: 'religious truth … capable of being expressed and defined clearly and logically,' versus 'religious insights [that] had an inner resonance that could only be apprehended by each individual in his own time during … contemplation' (Armstrong, 1993, p.114). This early dual-process theory had its roots in Plato and Aristotle, who suggested a division between 'philosophy,' which could be 'expressed in terms of reason and thus capable of proof,' and knowledge contained in myths, 'which eluded scientific demonstration' (Armstrong, 1993, 113–14). This division—between what can be known and reasoned logically versus what can only be experienced and apprehended—continued to influence Western culture through the centuries, and arguably underlies our current dual-process theories of reasoning. In psychology, the division between these two forms of understanding have been described in many different ways. The underlying theme of 'overtly reasoned' versus 'perceived, intuited' often ties these dual process theories together. In Western culture, the latter form of thinking has often been maligned (Dijksterhuis and Nordgren, 2006; Gladwell, 2005; Lieberman, 2000). Recently, cultural psychologists have suggested that although the distinction itself—between reasoned and intuited knowl-edge—may have precedents in the intellectual traditions of other cultures, the privileging of the former rather than the latter may be peculiar to Western cultures The Chinese philosophical tradition illustrates this difference of emphasis. Instead of an epistemology that was guided by abstract rules, 'the Chinese in esteeming what was immediately percepti-ble—especially visually perceptible—sought intuitive instantaneous understanding through direct perception' (Nakamura, 1960/1988, p.171). Taoism—the great Chinese philosophical school besides Confucianism—developed an epistemology that was particularly oriented towards concrete perception and direct experience (Fung, 1922; Nakamura, 1960/1988). Moreover, whereas the Greeks were concerned with definitions and devising rules for the purposes of classification, for many influential Taoist philosophers, such as Chuang Tzu, '… the problem of … how terms and attributes are to be delimited, leads one in precisely the wrong direction. Classifying or limiting knowledge fractures the greater knowledge' (Mote, 1971, p.102).",
"title": ""
},
{
"docid": "9c8e773dde5e999ac31a1a4bd279c24d",
"text": "The efficiency of wireless power transfer (WPT) systems is highly dependent on the load, which may change in a wide range in field applications. Besides, the detuning of WPT systems caused by the component tolerance and aging of inductors and capacitors can also decrease the system efficiency. In order to track the maximum system efficiency under varied loads and detuning conditions in real time, an active single-phase rectifier (ASPR) with an auxiliary measurement coil (AMC) and its corresponding control method are proposed in this paper. Both the equivalent load impedance and the output voltage can be regulated by the ASPR and the inverter, separately. First, the fundamental harmonic analysis model is established to analyze the influence of the load and the detuning on the system efficiency. Second, the soft-switching conditions and the equivalent input impedance of ASPR with different phase shifts and pulse widths are investigated in detail. Then, the analysis of the AMC and the maximum efficiency control strategy are provided in detail. Finally, an 800-W prototype is set up to validate the performance of the proposed method. The experimental results show that with 10% tolerance of the resonant capacitor in the receiver side, the system efficiency with the proposed approach reaches 91.7% at rated 800-W load and 91.1% at 300-W light load, which has an improvement by 2% and 10% separately compared with the traditional diode rectifier.",
"title": ""
},
{
"docid": "91cf217b2c5fa968bc4e893366ec53e1",
"text": "Importance\nPostpartum hypertension complicates approximately 2% of pregnancies and, similar to antepartum severe hypertension, can have devastating consequences including maternal death.\n\n\nObjective\nThis review aims to increase the knowledge and skills of women's health care providers in understanding, diagnosing, and managing hypertension in the postpartum period.\n\n\nResults\nHypertension complicating pregnancy, including postpartum, is defined as systolic blood pressure 140 mm Hg or greater and/or diastolic blood pressure 90 mm Hg or greater on 2 or more occasions at least 4 hours apart. Severe hypertension is defined as systolic blood pressure 160 mm Hg or greater and/or diastolic blood pressure 110 mm Hg or greater on 2 or more occasions repeated at a short interval (minutes). Workup for secondary causes of hypertension should be pursued, especially in patients with severe or resistant hypertension, hypokalemia, abnormal creatinine, or a strong family history of renal disease. Because severe hypertension is known to cause maternal stroke, women with severe hypertension sustained over 15 minutes during pregnancy or in the postpartum period should be treated with fast-acting antihypertension medication. Labetalol, hydralazine, and nifedipine are all effective for acute management, although nifedipine may work the fastest. For persistent postpartum hypertension, a long-acting antihypertensive agent should be started. Labetalol and nifedipine are also both effective, but labetalol may achieve control at a lower dose with fewer adverse effects.\n\n\nConclusions and Relevance\nProviders must be aware of the risks associated with postpartum hypertension and educate women about the symptoms of postpartum preeclampsia. Severe acute hypertension should be treated in a timely fashion to avoid morbidity and mortality. Women with persistent postpartum hypertension should be administered a long-acting antihypertensive agent.\n\n\nTarget Audience\nObstetricians and gynecologists, family physicians.\n\n\nLearning Objectives\nAfter completing this activity, the learner should be better able to assist patients and providers in identifying postpartum hypertension; provide a framework for the evaluation of new-onset postpartum hypertension; and provide instructions for the management of acute severe and persistent postpartum hypertension.",
"title": ""
},
{
"docid": "c42d1ee7a6b947e94eeb6c772e2b638f",
"text": "As mobile devices are equipped with more memory and computational capability, a novel peer-to-peer communication model for mobile cloud computing is proposed to interconnect nearby mobile devices through various short range radio communication technologies to form mobile cloudlets, where every mobile device works as either a computational service provider or a client of a service requester. Though this kind of computation offloading benefits compute-intensive applications, the corresponding service models and analytics tools are remaining open issues. In this paper we categorize computation offloading into three modes: remote cloud service mode, connected ad hoc cloudlet service mode, and opportunistic ad hoc cloudlet service mode. We also conduct a detailed analytic study for the proposed three modes of computation offloading at ad hoc cloudlet.",
"title": ""
},
{
"docid": "42af6ec7bc66a2ff9aa0d7bc90f9d76a",
"text": "In this paper, we propose a novel scene detection algorithm which employs semantic, visual, textual, and audio cues. We also show how the hierarchical decomposition of the storytelling video structure can improve retrieval results presentation with semantically and aesthetically effective thumbnails. Our method is built upon two advancements of the state of the art: first is semantic feature extraction which builds video-specific concept detectors; and second is multimodal feature embedding learning that maps the feature vector of a shot to a space in which the Euclidean distance has task specific semantic properties. The proposed method is able to decompose the video in annotated temporal segments which allow us for a query specific thumbnail extraction. Extensive experiments are performed on different data sets to demonstrate the effectiveness of our algorithm. An in-depth discussion on how to deal with the subjectivity of the task is conducted and a strategy to overcome the problem is suggested.",
"title": ""
},
{
"docid": "bd963a55c28304493118028fe5f47bab",
"text": "Tables are a common structuring element in many documents, s uch as PDF files. To reuse such tables, appropriate methods need to b e develop, which capture the structure and the content information. We have d e loped several heuristics which together recognize and decompose tables i n PDF files and store the extracted data in a structured data format (XML) for easi er reuse. Additionally, we implemented a prototype, which gives the user the ab ility of making adjustments on the extracted data. Our work shows that purel y heuristic-based approaches can achieve good results, especially for lucid t ables.",
"title": ""
},
{
"docid": "f59096137378d49c81bcb1de0be832b2",
"text": "Here the transformation related to the fast Fourier strategy mainly used in the field oriented well effective operations of the strategy elated to the scenario of the design oriented fashion in its implementation related to the well efficient strategy of the processing of the signal in the digital domain plays a crucial role in its analysis point of view in well oriented fashion respectively. It can also be applicable for the processing of the images and there is a crucial in its analysis in terms of the pixel wise process takes place in the system in well effective manner respectively. There is a vast number of the applications oriented strategy takes place in the system in w ell effective manner in the system based implementation followed by the well efficient analysis point of view in well stipulated fashion of the transformation related to the fast Fourier strategy plays a crucial role and some of them includes analysis of the signal, Filtering of the sound and also the compression of the data equations of the partial differential strategy plays a major role and the responsibility in its implementation scenario in a well oriented fashion respectively. There is a huge amount of the efficient analysis of the system related to the strategy of the transformation of the fast Fourier environment plays a crucial role and the responsibility for the effective implementation of the DFT in well respective fashion. Here in the present system oriented strategy DFT implementation takes place in a well explicit manner followed by the well effective analysis of the system where domain related to the time based strategy of the decimation plays a crucial role in its implementation aspect in well effective fashion respectively. Experiments have been conducted on the present method where there is a lot of analysis takes place on the large number of the huge datasets in a well oriented fashion with respect to the different environmental strategy and there is an implementation of the system in a well effective manner in terms of the improvement in the performance followed by the outcome of the entire system in well oriented fashion respectively.",
"title": ""
},
{
"docid": "4017069ba9b79f316d8cab584c06f853",
"text": "We examine the scenario in which a mobile network of robots must search, survey, or cover an environment and communication is restricted by relative location. While many algorithms choose to maintain a connected network at all times while performing such tasks, we relax this requirement and examine the use of periodic connectivity, where the network must regain connectivity at a fixed interval. We propose an online algorithm that scales linearly in the number of robots and allows for arbitrary periodic connectivity constraints. To complement the proposed algorithm, we provide theoretical inapproximability results for connectivity-constrained planning. Finally, we validate our approach in the coordinated search domain in simulation and in real-world experiments.",
"title": ""
},
{
"docid": "70260a7ce550830c7771b3e6004ebd41",
"text": "Due to the increasing requirements for transmission of images in computer, mobile environments, the research in the field of image compression has increased significantly. Image compression plays a crucial role in digital image processing, it is also very important for efficient transmission and storage of images. When we compute the number of bits per image resulting from typical sampling rates and quantization methods, we find that Image compression is needed. Therefore development of efficient techniques for image compression has become necessary .This paper is a survey for lossy image compression using Discrete Cosine Transform, it covers JPEG compression algorithm which is used for full-colour still image applications and describes all the components of it.",
"title": ""
},
{
"docid": "9bff76e87f4bfa3629e38621060050f7",
"text": "Non-textual components such as charts, diagrams and tables provide key information in many scientific documents, but the lack of large labeled datasets has impeded the development of data-driven methods for scientific figure extraction. In this paper, we induce high-quality training labels for the task of figure extraction in a large number of scientific documents, with no human intervention. To accomplish this we leverage the auxiliary data provided in two large web collections of scientific documents (arXiv and PubMed) to locate figures and their associated captions in the rasterized PDF. We share the resulting dataset of over 5.5 million induced labels---4,000 times larger than the previous largest figure extraction dataset---with an average precision of 96.8%, to enable the development of modern data-driven methods for this task. We use this dataset to train a deep neural network for end-to-end figure detection, yielding a model that can be more easily extended to new domains compared to previous work. The model was successfully deployed in Semantic Scholar,\\footnote\\urlhttps://www.semanticscholar.org/ a large-scale academic search engine, and used to extract figures in 13 million scientific documents.\\footnoteA demo of our system is available at \\urlhttp://labs.semanticscholar.org/deepfigures/,and our dataset of induced labels can be downloaded at \\urlhttps://s3-us-west-2.amazonaws.com/ai2-s2-research-public/deepfigures/jcdl-deepfigures-labels.tar.gz. Code to run our system locally can be found at \\urlhttps://github.com/allenai/deepfigures-open.",
"title": ""
},
{
"docid": "422183692a08138189271d4d7af407c7",
"text": "Scene flow describes the motion of 3D objects in real world and potentially could be the basis of a good feature for 3D action recognition. However, its use for action recognition, especially in the context of convolutional neural networks (ConvNets), has not been previously studied. In this paper, we propose the extraction and use of scene flow for action recognition from RGB-D data. Previous works have considered the depth and RGB modalities as separate channels and extract features for later fusion. We take a different approach and consider the modalities as one entity, thus allowing feature extraction for action recognition at the beginning. Two key questions about the use of scene flow for action recognition are addressed: how to organize the scene flow vectors and how to represent the long term dynamics of videos based on scene flow. In order to calculate the scene flow correctly on the available datasets, we propose an effective self-calibration method to align the RGB and depth data spatially without knowledge of the camera parameters. Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition. We adopt a channel transform kernel to transform the scene flow vectors to an optimal color space analogous to RGB. This transformation takes better advantage of the trained ConvNets models over ImageNet. Experimental results indicate that this new representation can surpass the performance of state-of-the-art methods on two large public datasets.",
"title": ""
},
{
"docid": "0815549f210c57b28a7e2fc87c20f616",
"text": "Portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time–frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using the two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.",
"title": ""
},
{
"docid": "d31c6830ee11fc73b53c7930ad0e638f",
"text": "This paper proposes two rectangular ring planar monopole antennas for wideband and ultra-wideband applications. Simple planar rectangular rings are used to design the planar antennas. These rectangular rings are designed in a way to achieve the wideband operations. The operating frequency band ranges from 1.85 GHz to 4.95 GHz and 3.12 GHz to 14.15 GHz. The gain varies from 1.83 dBi to 2.89 dBi for rectangular ring wideband antenna and 1.89 dBi to 5.2 dBi for rectangular ring ultra-wideband antenna. The design approach and the results are discussed.",
"title": ""
},
{
"docid": "b47d863479f1912ed8be154df188d4af",
"text": "This paper describes a new approach t o probabilistic roadmap planners (PRMs). The overall theme of the algorithm, called Lazy PRM, i s to minimize the number of collision checks performed during planning and hence minimize the running t ime of the planner. Our algorithm builds a roadmap in the configuration space, whose nodes are the user-defined initial and goal configurations and a number of randomly generated nodes. Neighboring nodes are connected by edges representing paths between the nodes. In contrast with PRMs, our planner initially assumes that all nodes and edges in the roadmap are collision-free, and searches the roadmap at hand for a shortest path between the initial and the goal node. The nodes and edges along the path are then checked for collision. If a collision with the obstacles occurs, the corresponding nodes and edges are removed fFom the roadmap. Our planner either finds a new shortest path, or first updates the roadmap with new nodes and edges, and then searches for a shortest path. The above process i s repeated until a collision-free path is returned. Lazy P R M is tailored to eficiently answer single planning queries, but can also be used for multiple queries. Experimental results presented in this paper show that our lazy method i s very eficient in practice.",
"title": ""
},
{
"docid": "0ff3e49a700a776c1a8f748d78bc4b73",
"text": "Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.",
"title": ""
},
{
"docid": "5d934dd45e812336ad12cee90d1e8cdf",
"text": "As research on the connection between narcissism and social networking site (SNS) use grows, definitions of SNS and measurements of their use continue to vary, leading to conflicting results. To improve understanding of the relationship between narcissism and SNS use, as well as the implications of differences in definition and measurement, we examine two ways of measuring Facebook and Twitter use by testing the hypothesis that SNS use is positively associated with narcissism. We also explore the relation between these types of SNS use and different components of narcissism within college students and general adult samples. Our findings suggest that for college students, posting on Twitter is associated with the Superiority component of narcissistic personality while Facebook posting is associated with the Exhibitionism component. Conversely, adults high in Superiority post on Facebook more rather than Twitter. For adults, Facebook and Twitter are both used more by those focused on their own appearances but not as a means of showing off, as is the case with college students. Given these differences, it is essential for future studies of SNS use and personality traits to distinguish between different types of SNS, different populations, and different types of use. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fe3dfb844ec09b743032c0475c669b2c",
"text": "The significant changes enabled by the fog computing had demonstrated that Internet of Things (IoT) urgently needs more evolutional reforms. Limited by the inflexible design philosophy; the traditional structure of a network is hard to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within fog computing paradigm. The proposed solution is supposed to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such combination are reviewed in depth. The details of building SCC, including basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.",
"title": ""
}
] |
scidocsrr
|
b6a50a62b6e3ebc34663addf15e1d994
|
Stable, open-loop precision manipulation with underactuated hands
|
[
{
"docid": "1a4d036d3a0074f8dce58a974bd7004e",
"text": "Humanoid robots found in research and commercial use today typically lack the ability to operate in unstructured and unknown environments. Force sensing and compliance at each robot joint can allow the robot to safely act in these environments. However, these features can be difficult to incorporate into robot designs. We present a new force sensing and compliant humanoid under development in the humanoid robotics group at MIT CSAIL. The robot, named Domo, is to be a research platform for exploring issues in general dexterous manipulation, visual perception, and learning. In this paper we describe aspects of the design, detail proposed research directions for the robot, and illustrate how the design of humanoid robots can be informed by the desired research goals.",
"title": ""
}
] |
[
{
"docid": "cfc15ed25912ac84f7c9afef93c4a0d6",
"text": "Lactate is an essential component of carbon metabolism in mammals. Recently, lactate was shown to signal through the G protein coupled receptor 81 (GPR81) and to thus modulate inflammatory processes. This study demonstrates that lactate inhibits pro-inflammatory signaling in a GPR81-independent fashion. While lipopolysaccharide (LPS) triggered expression of IL-6 and IL-12 p40, and CD40 in bone marrow-derived macrophages, lactate was able to abrogate these responses in a dose dependent manner in Gpr81-/- cells as well as in wild type cells. Macrophage activation was impaired when glycolysis was blocked by chemical inhibitors. Remarkably, lactate was found to inhibit LPS-induced glycolysis in wild type as well as in Gpr81-/- cells. In conclusion, our study suggests that lactate can induce GPR81-independent metabolic changes that modulate macrophage pro-inflammatory activation.",
"title": ""
},
{
"docid": "ffdd14d8d74a996971284a8e5e950996",
"text": "Ten years on from a review in the twentieth issue of this journal, this contribution assess the direction research in the field of glucose sensing for diabetes is headed and various technologies to be seen in the future. The emphasis of this review was placed on the home blood glucose testing market. After an introduction to diabetes and glucose sensing, this review analyses state of the art and pipeline devices; in particular their user friendliness and technological advancement. This review complements conventional reviews based on scholarly published papers in journals.",
"title": ""
},
{
"docid": "7f04ef4eb5dc53cbfa6c8b5379a95e0e",
"text": "Memory scanning is an essential component in detecting and deactivating malware while the malware is still active in memory. The content here is confined to user-mode memory scanning for malware on 32-bit and 64-bit Windows NT based systems that are memory resident and/or persistent over reboots. Malware targeting 32-bit Windows are being created and deployed at an alarming rate today. While there are not many malware targeting 64-bit Windows yet, many of the existing Win32 malware for 32-bit Windows will work fine on 64-bit Windows due to the underlying WoW64 subsystem. Here, we will present an approach to implement user-mode memory scanning for Windows. This essentially means scanning the virtual address space of all processes in memory. In case of an infection, while the malware is still active in memory, it can significantly limit detection and disinfection. The real challenge hence actually lies in fully disinfecting the machine and restoring back to its clean state. Today’s malware apply complex anti-disinfection techniques making the task of restoring the machine to a clean state extremely difficult. Here, we will discuss some of these techniques with examples from real-world malware scenarios. Practical approaches for user-mode disinfection will be presented. By leveraging the abundance of redundant information available via various Win32 and Native API from user-mode, certain techniques to detect hidden processes will also be presented. Certain challenges in porting the memory scanner to 64-bit Windows and Vista will be discussed. The advantages and disadvantages of implementing a memory scanner in user-mode (rather than kernel-mode) will also be discussed.",
"title": ""
},
{
"docid": "eee51fc5cd3bee512b01193fa396e19a",
"text": "Croston’s method is a widely used to predict inventory demand when it is inter mittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the prop erties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]",
"title": ""
},
{
"docid": "fb9c0650f5ac820eef3df65b7de1ff12",
"text": "Since 2013, a number of studies have enhanced the literature and have guided clinicians on viable treatment interventions outside of pharmacotherapy and surgery. Thirty-three randomized controlled trials and one large observational study on exercise and physiotherapy were published in this period. Four randomized controlled trials focused on dance interventions, eight on treatment of cognition and behavior, two on occupational therapy, and two on speech and language therapy (the latter two specifically addressed dysphagia). Three randomized controlled trials focused on multidisciplinary care models, one study on telemedicine, and four studies on alternative interventions, including music therapy and mindfulness. These studies attest to the marked interest in these therapeutic approaches and the increasing evidence base that places nonpharmacological treatments firmly within the integrated repertoire of treatment options in Parkinson's disease.",
"title": ""
},
{
"docid": "d1f89e14ff9382294b2597233b06b433",
"text": "Online referrals have become an important mechanism in leveraging consumers’ social networks to spread firms’ promotional campaigns and thus attract new customers. However, despite a robust understanding of the benefits and drivers of consumer referrals, only minimal attention has been paid towards the potential of classical promotional tactics in influencing referral behavior. Therefore, this study examines scarcity and social proof, two promotional cues which are linked to extant referral literature and are of great practical relevance, in the context of a randomized online experiment with the German startup Blinkist. Our analysis reveals that scarcity cues affect consumers' referral propensity regardless of the presence of social proof cues, but that social proof cues amplify scarcity’s effect on consumer referral propensity. We demonstrate that consumers’ perceptions of offer value drive the impact of scarcity on referral likelihood and illuminate how social proof moderates this mediating effect.",
"title": ""
},
{
"docid": "73284fdf9bc025672d3b97ca5651084a",
"text": "With continued scaling of NAND flash memory process technology and multiple bits programmed per cell, NAND flash reliability and endurance are degrading. Understanding, characterizing, and modeling the distribution of the threshold voltages across different cells in a modern multi-level cell (MLC) flash memory can enable the design of more effective and efficient error correction mechanisms to combat this degradation. We show the first published experimental measurement-based characterization of the threshold voltage distribution of flash memory. To accomplish this, we develop a testing infrastructure that uses the read retry feature present in some 2Y-nm (i.e., 20-24nm) flash chips. We devise a model of the threshold voltage distributions taking into account program/erase (P/E) cycle effects, analyze the noise in the distributions, and evaluate the accuracy of our model. A key result is that the threshold voltage distribution can be modeled, with more than 95% accuracy, as a Gaussian distribution with additive white noise, which shifts to the right and widens as P/E cycles increase. The novel characterization and models provided in this paper can enable the design of more effective error tolerance mechanisms for future flash memories.",
"title": ""
},
{
"docid": "c6e4ab5dd15f963b4594789bc2efa9a2",
"text": "This paper reports a preliminary attempt on data-driven modeling of segmental (phoneme) duration for two Indian languages Hindi and Telugu. Classification and Regression Tree (CART) based data-driven duration modeling for segmental duration prediction is presented. A number of features are proposed and their usefulness and relative contribution in segmental duration prediction is assessed. Objective evaluation of the duration models, by root mean squared prediction error (RMSE) and correlation between actual and predicted durations, is performed. The duration models developed have been implemented in an Indian language Textto-Speech synthesis system [1] being developed within Festival framework [2].",
"title": ""
},
{
"docid": "56ec8f3e88731992a028a9322dbc4890",
"text": "The term knowledge visualization has been used in many different fields with many different definitions. In this paper, we propose a new definition of knowledge visualization specifically in the context of visual analysis and reasoning. Our definition begins with the differentiation of knowledge as either explicit and tacit knowledge. We then present a model for the relationship between the two through the use visualization. Instead of directly representing data in a visualization, we first determine the value of the explicit knowledge associated with the data based on a cost/benefit analysis and display the knowledge in accordance to its importance. We propose that the displayed explicit knowledge leads us to create our own tacit knowledge through visual analytical reasoning and discovery.",
"title": ""
},
{
"docid": "b35efe68d99331d481e439ae8fbb4a64",
"text": "Semantic matching (SM) for textual information can be informally defined as the task of effectively modeling text matching using representations more complex than those based on simple and independent set of surface forms of words or stems (typically indicated as bag-of-words). In this perspective, matching named entities (NEs) implies that the associated model can both overcomes mismatch between different representations of the same entities, e.g., George H. W. Bush vs. George Bush, and carry out entity disambiguation to avoid incorrect matches between different but similar entities, e.g., the entity above with his son George W. Bush. This means that both the context and structure of NEs must be taken into account in the IR model. SM becomes even more complex when attempting to match the shared semantics between two larger pieces of text, e.g., phrases or clauses, as there is currently no theory indicating how words should be semantically composed for deriving the meaning of text. The complexity above has traditionally led to define IR models based on bag-of-word representations in the vector space model (VSM), where (i) the necessary structure is minimally taken into account by considering n-grams or phrases; and (ii) the matching coverage is increased by projecting text in latent semantic spaces or alternatively by applying query expansion. Such methods introduce a considerable amount of noise, which negatively balances the benefit of achieving better coverage in most cases, thus producing no IR system improvement. In the last decade, a new class of semantic matching approaches based on the so-called Kernel Methods (KMs) for structured data (see e.g., [4]) have been proposed. KMs also adopt scalar products (which, in this context, take the names of kernel functions) in VSM. However, KMs introduce two new important aspects: • the scalar product is implicitly computed using smart techniques, which enable the use of huge feature spaces, e.g., all possible skip n-grams; and • KMs are typically applied within supervised algorithms, e.g., SVMs, which, exploiting training data, can filter out irrelevant features and noise. In this talk, we will briefly introduce and summarize, the latest results on kernel methods for semantic matching by focusing on structural kernels. These can be applied to match syntactic and/or semantic representations of text shaped as trees. Several variants are available: the Syntactic Tree Kernels (STK), [2], the String Kernels (SK) [5] and the Partial Tree Kernels (PTK) [4]. Most interestingly, we will present tree kernels exploiting SM between words contained in a text structure, i.e., the Syntactic Semantic Tree Kernels (SSTK) [1] and the Smoothed Partial Tree Kernels (SPTK) [3]. These extend STK and PTK by allowing for soft matching (i.e., via similarity computation) between nodes associated with different but related labels, e.g., synonyms. The node similarity can be derived from manually annotated resources, e.g., WordNet or Wikipedia, as well as using corpus-based clustering approaches, e.g., latent semantic analysis (LSA). An example of the use of such kernels for question classification in the question answering domain will illustrate the potentials of their structural similarity approach.",
"title": ""
},
{
"docid": "1f3a41fc5202d636fcfe920603df57e4",
"text": "We present data on corporal punishment (CP) by a nationally representative sample of 991 American parents interviewed in 1995. Six types of CP were examined: slaps on the hand or leg, spanking on the buttocks, pinching, shaking, hitting on the buttocks with a belt or paddle, and slapping in the face. The overall prevalence rate (the percentage of parents using any of these types of CP during the previous year) was 35% for infants and reached a peak of 94% at ages 3 and 4. Despite rapid decline after age 5, just over half of American parents hit children at age 12, a third at age 14, and 13% at age 17. Analysis of chronicity found that parents who hit teenage children did so an average of about six times during the year. Severity, as measured by hitting the child with a belt or paddle, was greatest for children age 5-12 (28% of such children). CP was more prevalent among African American and low socioeconomic status parents, in the South, for boys, and by mothers. The pervasiveness of CP reported in this article, and the harmful side effects of CP shown by recent longitudinal research, indicates a need for psychology and sociology textbooks to reverse the current tendency to almost ignore CP and instead treat it as a major aspect of the socialization experience of American children; and for developmental psychologists to be cognizant of the likelihood that parents are using CP far more often than even advocates of CP recommend, and to inform parents about the risks involved.",
"title": ""
},
{
"docid": "2b095980aaccd7d35d079260738279c5",
"text": "Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance when embedded in large vocabulary continuous speech recognition (LVCSR) systems due to its capability of modeling local correlations and reducing translational variations. In all previous related works for ASR, only up to two convolutional layers are employed. In light of the recent success of very deep CNNs in image classification, it is of interest to investigate the deep structure of CNNs for speech recognition in detail. In contrast to image classification, the dimensionality of the speech feature, the span size of input feature and the relationship between temporal and spectral domain are new factors to consider while designing very deep CNNs. In this work, very deep CNNs are introduced for LVCSR task, by extending depth of convolutional layers up to ten. The contribution of this work is two-fold: performance improvement of very deep CNNs is investigated under different configurations; further, a better way to perform convolution operations on temporal dimension is proposed. Experiments showed that very deep CNNs offer a 8-12% relative improvement over baseline DNN system, and a 4-7% relative improvement over baseline CNN system, evaluated on both a 15-hr Callhome and a 51-hr Switchboard LVCSR tasks.",
"title": ""
},
{
"docid": "2a86c4904ef8059295f1f0a2efa546d8",
"text": "3D shape is a crucial but heavily underutilized cue in today’s computer vision system, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape model in the loop. Apart from object recognition on 2.5D depth maps, recovering these incomplete 3D shapes to full 3D is critical for analyzing shape variations. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses. It naturally supports joint object recognition and shape reconstruction from 2.5D depth maps, and further, as an additional application it allows active object recognition through view planning. We construct a largescale 3D CAD model dataset to train our model, and conduct extensive experiments to study our new representation.",
"title": ""
},
{
"docid": "ed8bd8f0898bae2ad0618f7bd9362c1b",
"text": "Human group activity recognition has drawn the attention of researchers worldwide because of the significant role it plays in many applications, including video surveillance and public security. Existing solutions for group activity recognition rely on human detection and tracking. To ensure high detection accuracy, current state-of-the-art tracking techniques require human supervision to identify objects of interest before automatic tracking can take place. This limitation has prevented existing approaches from being used in real-world applications. In scenarios when human supervision is unavailable, tracking algorithms could generate inaccurate trajectories and cause a decrease in performance for the existing group analysis methods. To address the aforementioned drawbacks, we investigate in this paper an end-to-end deep model, Differential Recurrent Convolutional Neural Networks (DRCNN). Our model consists of convolutional neural networks (CNN) and stacked differential long short-term memory (DLSTM) networks. It takes sequential raw video data as input and does not consider each group member as an individual object. Different from traditional non-end-to-end solutions which separate the steps of feature extraction and parameter learning, DRCNN utilizes a unified deep model to optimize the parameters of CNN and LSTM hand in hand. It thus has the potential of generating a more harmonious model. In addition, taking advantage of the semantic representation of CNN and the memory states of DLSTM, DRCNN has strong capabilities in understanding complex scene semantics and group dynamics. Extensive experimental studies indicate that the proposed technique can accomplish the task of fully automatic group activity recognition without sacrificing performance, and even outperforms the human-aided state-ofthe- art methods on two benchmark group activity datasets. To the best of our knowledge, this is the first end-to-end group activity recognition technique ever proposed.",
"title": ""
},
{
"docid": "db60a4111e93af76d55a36064c0fe0f7",
"text": "Technological advances over the last decade are changing the face of behavioral neuroscience research. Here we review recent work on the use of one such transformative tool in behavioral neuroscience research, chemogenetics (or Designer Receptors Exclusively Activated by Designer Drugs, DREADDS). As transformative technologies such as DREADDs are introduced, applied, and refined, their utility in addressing complex questions about behavior and cognition becomes clear and exciting. In the behavioral neuroscience field, remarkable new findings now regularly appear as a result of the ability to monitor and intervene in neural processes with high anatomical precision as animals behave in complex task environments. As these new tools are applied to behavioral questions, individualized procedures for their use find their way into diverse labs. Thus, \"tips of the trade\" become important for wide dissemination not only for laboratories that are using the tools but also for those who are interested in incorporating them into their own work. Our aim is to provide an up-to-date perspective on how the DREADD technique is being used for research on learning and memory, decision making, and goal-directed behavior, as well as to provide suggestions and considerations for current and future users based on our collective experience. (PsycINFO Database Record",
"title": ""
},
{
"docid": "54176f9184a42a9f92e0f3f529b20cd9",
"text": "In recent years, convolutional neural networks (CNNs) are leading the way in many computer vision tasks, such as image classification, object detection, and face recognition. In order to produce more refined semantic image segmentation, we survey the powerful CNNs and novel elaborate layers, structures and strategies, especially including those that have achieved the state-of-the-art results on the Pascal VOC 2012 semantic segmentation challenge. Moreover, we discuss their different working stages and various mechanisms to utilize the structural and contextual information in the image and feature spaces. Finally, combining some popular underlying referential methods in homologous problems, we propose several possible directions and approaches to incorporate existing effective methods as components to enhance CNNs for the segmentation of specific semantic objects.",
"title": ""
},
{
"docid": "011a9ac960aecc4a91968198ac6ded97",
"text": "INTRODUCTION\nPsychological empowerment is really important and has remarkable effect on different organizational variables such as job satisfaction, organizational commitment, productivity, etc. So the aim of this study was to investigate the relationship between psychological empowerment and productivity of Librarians in Isfahan Medical University.\n\n\nMETHODS\nThis was correlational research. Data were collected through two questionnaires. Psychological empowerment questionnaire and the manpower productivity questionnaire of Gold. Smith Hersey which their content validity was confirmed by experts and their reliability was obtained by using Cronbach's Alpha coefficient, 0.89 and 0.9 respectively. Due to limited statistical population, did not used sampling and review was taken via census. So 76 number of librarians were evaluated. Information were reported on both descriptive and inferential statistics (correlation coefficient tests Pearson, Spearman, T-test, ANOVA), and analyzed by using the SPSS19 software.\n\n\nFINDINGS\nIn our study, the trust between partners and efficacy with productivity had the highest correlation. Also there was a direct relationship between psychological empowerment and the productivity of labor (r =0.204). In other words, with rising of mean score of psychological empowerment, the mean score of efficiency increase too.\n\n\nCONCLUSIONS\nThe results showed that if development programs of librarian's psychological empowerment increase in order to their productivity, librarians carry out their duties with better sense. Also with using the capabilities of librarians, the development of creativity with happen and organizational productivity will increase.",
"title": ""
},
{
"docid": "cec40a58ec9562bad90a7d6aa1f7c286",
"text": "BACKGROUND\nFalls in older people have been characterized extensively in the literature, however little has been reported regarding falls in middle-aged and younger adults. The objective of this paper is to describe the perceived cause, environmental influences and resultant injuries of falls in 1497 young (20-45 years), middle-aged (46-65 years) and older (> 65 years) men and women from the Baltimore Longitudinal Study on Aging.\n\n\nMETHODS\nA descriptive study where participants completed a fall history questionnaire describing the circumstances surrounding falls in the previous two years.\n\n\nRESULTS\nThe reporting of falls increased with age from 18% in young, to 21% in middle-aged and 35% in older adults, with higher rates in women than men. Ambulation was cited as the cause of the fall most frequently in all gender and age groups. Our population reported a higher percentage of injuries (70.5%) than previous studies. The young group reported injuries most frequently to wrist/hand, knees and ankles; the middle-aged to their knees and the older group to their head and knees. Women reported a higher percentage of injuries in all age groups.\n\n\nCONCLUSION\nThis is the first study to compare falls in young, middle and older aged men and women. Significant differences were found between the three age groups with respect to number of falls, activities engaged in prior to falling, perceived causes of the fall and where they fell.",
"title": ""
},
{
"docid": "13ec9ea20812dd75b4947b395ef1a595",
"text": "Cameras are a natural fit for micro aerial vehicles (MAVs) due to their low weight, low power consumption, and two-dimensional field of view. However, computationally-intensive algorithms are required to infer the 3D structure of the environment from 2D image data. This requirement is made more difficult with the MAV’s limited payload which only allows for one CPU board. Hence, we have to design efficient algorithms for state estimation, mapping, planning, and exploration. We implement a set of algorithms on two different vision-based MAV systems such that these algorithms enable the MAVs to map and explore unknown environments. By using both self-built and off-the-shelf systems, we show that our algorithms can be used on different platforms. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main sensor, we maintain a tiled octree-based 3D occupancy map. The MAV uses this map for local navigation and frontier-based exploration. In addition, we use a wall-following algorithm as an alternative exploration algorithm in open areas where frontier-based exploration underperforms. During the exploration, data is transmitted to the ground station which runs ∗http://people.inf.ethz.ch/hengli large-scale visual SLAM. We estimate the MAV’s state with inertial data from an IMU together with metric velocity measurements from a custom-built optical flow sensor and pose estimates from visual odometry. We verify our approaches with experimental results, which to the best of our knowledge, demonstrate our MAVs to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.",
"title": ""
},
{
"docid": "1a5f56c7c7a9d44a762ba94297f3ca7a",
"text": "BACKGROUND\nFloods are the most common type of global natural disaster. Floods have a negative impact on mental health. Comprehensive evaluation and review of the literature are lacking.\n\n\nOBJECTIVE\nTo systematically map and review available scientific evidence on mental health impacts of floods caused by extended periods of heavy rain in river catchments.\n\n\nMETHODS\nWe performed a systematic mapping review of published scientific literature in five languages for mixed studies on floods and mental health. PUBMED and Web of Science were searched to identify all relevant articles from 1994 to May 2014 (no restrictions).\n\n\nRESULTS\nThe electronic search strategy identified 1331 potentially relevant papers. Finally, 83 papers met the inclusion criteria. Four broad areas are identified: i) the main mental health disorders-post-traumatic stress disorder, depression and anxiety; ii] the factors associated with mental health among those affected by floods; iii) the narratives associated with flooding, which focuses on the long-term impacts of flooding on mental health as a consequence of the secondary stressors; and iv) the management actions identified. The quantitative and qualitative studies have consistent findings. However, very few studies have used mixed methods to quantify the size of the mental health burden as well as exploration of in-depth narratives. Methodological limitations include control of potential confounders and short-term follow up.\n\n\nLIMITATIONS\nFloods following extreme events were excluded from our review.\n\n\nCONCLUSIONS\nAlthough the level of exposure to floods has been systematically associated with mental health problems, the paucity of longitudinal studies and lack of confounding controls precludes strong conclusions.\n\n\nIMPLICATIONS\nWe recommend that future research in this area include mixed-method studies that are purposefully designed, using more rigorous methods. Studies should also focus on vulnerable groups and include analyses of policy and practical responses.",
"title": ""
}
] |
scidocsrr
|
7de90d04268e51b946f9543ceba5ae6e
|
AutoVis: Automatic visualization
|
[
{
"docid": "db89d618c127dbf45cac1062ae5117ab",
"text": "A language-independent means of gauging topical similarity in unrestricted text is described. The method combines information derived from n-grams (consecutive sequences of n characters) with a simple vector-space technique that makes sorting, categorization, and retrieval feasible in a large multilingual collection of documents. No prior information about document content or language is required. Context, as it applies to document similarity, can be accommodated by a well-defined procedure. When an existing document is used as an exemplar, the completeness and accuracy with which topically related documents are retrieved is comparable to that of the best existing systems. The results of a formal evaluation are discussed, and examples are given using documents in English and Japanese.",
"title": ""
}
] |
[
{
"docid": "59ee62f5e0fc37156c5c1a5febc046ba",
"text": "The paper presents a method to estimate the detailed 3D body shape of a person even if heavy or loose clothing is worn. The approach is based on a space of human shapes, learned from a large database of registered body scans. Together with this database we use as input a 3D scan or model of the person wearing clothes and apply a fitting method, based on ICP (iterated closest point) registration and Laplacian mesh deformation. The statistical model of human body shapes enforces that the model stays within the space of human shapes. The method therefore allows us to compute the most likely shape and pose of the subject, even if it is heavily occluded or body parts are not visible. Several experiments demonstrate the applicability and accuracy of our approach to recover occluded or missing body parts from 3D laser scans.",
"title": ""
},
{
"docid": "e2d1af0a0e82e7bf92b9b9936d39e160",
"text": "Entity coreference is common in biomedical literature and it can affect text understanding systems that rely on accurate identification of named entities, such as relation extraction and automatic summarization. Coreference resolution is a foundational yet challenging natural language processing task which, if performed successfully, is likely to enhance such systems significantly. In this paper, we propose a semantically oriented, rule-based method to resolve sortal anaphora, a specific type of coreference that forms the majority of coreference instances in biomedical literature. The method addresses all entity types and relies on linguistic components of SemRep, a broad-coverage biomedical relation extraction system. It has been incorporated into SemRep, extending its core semantic interpretation capability from sentence level to discourse level. We evaluated our sortal anaphora resolution method in several ways. The first evaluation specifically focused on sortal anaphora relations. Our methodology achieved a F1 score of 59.6 on the test portion of a manually annotated corpus of 320 Medline abstracts, a 4-fold improvement over the baseline method. Investigating the impact of sortal anaphora resolution on relation extraction, we found that the overall effect was positive, with 50 % of the changes involving uninformative relations being replaced by more specific and informative ones, while 35 % of the changes had no effect, and only 15 % were negative. We estimate that anaphora resolution results in changes in about 1.5 % of approximately 82 million semantic relations extracted from the entire PubMed. Our results demonstrate that a heavily semantic approach to sortal anaphora resolution is largely effective for biomedical literature. Our evaluation and error analysis highlight some areas for further improvements, such as coordination processing and intra-sentential antecedent selection.",
"title": ""
},
{
"docid": "84c87c50659d18b130f4aaf8c1b3c7f1",
"text": "We describe initial work on an extension of the Kaldi toolkit that supports weighted finite-state transducer (WFST) decoding on Graphics Processing Units (GPUs). We implement token recombination as an atomic GPU operation in order to fully parallelize the Viterbi beam search, and propose a dynamic load balancing strategy for more efficient token passing scheduling among GPU threads. We also redesign the exact lattice generation and lattice pruning algorithms for better utilization of the GPUs. Experiments on the Switchboard corpus show that the proposed method achieves identical 1-best results and lattice quality in recognition and confidence measure tasks, while running 3 to 15 times faster than the single process Kaldi decoder. The above results are reported on different GPU architectures. Additionally we obtain a 46-fold speedup with sequence parallelism and multi-process service (MPS) in GPU.",
"title": ""
},
{
"docid": "04b9ced45b041360234256159cb41d95",
"text": "Because stochastic gradient descent (SGD) has shown promise optimizing neural networks with millions of parameters and few if any alternatives are known to exist, it has moved to the heart of leading approaches to reinforcement learning (RL). For that reason, the recent result from OpenAI showing that a particular kind of evolution strategy (ES) can rival the performance of SGD-based deep RL methods with large neural networks provoked surprise. This result is difficult to interpret in part because of the lingering ambiguity on how ES actually relates to SGD. The aim of this paper is to significantly reduce this ambiguity through a series of MNIST-based experiments designed to uncover their relationship. As a simple supervised problem without domain noise (unlike in most RL), MNIST makes it possible (1) to measure the correlation between gradients computed by ES and SGD and (2) then to develop an SGD-based proxy that accurately predicts the performance of different ES population sizes. These innovations give a new level of insight into the real capabilities of ES, and lead also to some unconventional means for applying ES to supervised problems that shed further light on its differences from SGD. Incorporating these lessons, the paper concludes by demonstrating that ES can achieve 99% accuracy on MNIST, a number higher than any previously published result for any evolutionary method. While not by any means suggesting that ES should substitute for SGD in supervised learning, the suite of experiments herein enables more informed decisions on the application of ES within RL and other paradigms.",
"title": ""
},
{
"docid": "51ba2c02aa4ad9b7cfb381ddae0f3dfe",
"text": "The dynamics of spontaneous fluctuations in neural activity are shaped by underlying patterns of anatomical connectivity. While numerous studies have demonstrated edge-wise correspondence between structural and functional connections, much less is known about how large-scale coherent functional network patterns emerge from the topology of structural networks. In the present study, we deploy a multivariate statistical technique, partial least squares, to investigate the association between spatially extended structural networks and functional networks. We find multiple statistically robust patterns, reflecting reliable combinations of structural and functional subnetworks that are optimally associated with one another. Importantly, these patterns generally do not show a one-to-one correspondence between structural and functional edges, but are instead distributed and heterogeneous, with many functional relationships arising from nonoverlapping sets of anatomical connections. We also find that structural connections between high-degree hubs are disproportionately represented, suggesting that these connections are particularly important in establishing coherent functional networks. Altogether, these results demonstrate that the network organization of the cerebral cortex supports the emergence of diverse functional network configurations that often diverge from the underlying anatomical substrate.",
"title": ""
},
{
"docid": "699f4b29e480d89b158326ec4c778f7b",
"text": "Much attention is currently being paid in both the academic and practitioner literatures to the value that organisations could create through the use of big data and business analytics (Gillon et al, 2012; Mithas et al, 2013). For instance, Chen et al (2012, p. 1166–1168) suggest that business analytics and related technologies can help organisations to ‘better understand its business and markets’ and ‘leverage opportunities presented by abundant data and domain-specific analytics’. Similarly, LaValle et al (2011, p. 22) report that topperforming organisations ‘make decisions based on rigorous analysis at more than double the rate of lower performing organisations’ and that in such organisations analytic insight is being used to ‘guide both future strategies and day-to-day operations’. We argue here that while there is some evidence that investments in business analytics can create value, the thesis that ‘business analytics leads to value’ needs deeper analysis. In particular, we argue here that the roles of organisational decision-making processes, including resource allocation processes and resource orchestration processes (Helfat et al, 2007; Teece, 2009), need to be better understood in order to understand how organisations can create value from the use of business analytics. Specifically, we propose that the firstorder effects of business analytics are likely to be on decision-making processes and that improvements in organisational performance are likely to be an outcome of superior decision-making processes enabled by business analytics. This paper is set out as follows. Below, we identify prior research traditions in the Information Systems (IS) literature that discuss the potential of data and analytics to create value. This is to put into perspective the current excitement around ‘analytics’ and ‘big data’, and to position those topics within prior research traditions. We then draw on a number of existing literatures to develop a research agenda to understand the relationship between business analytics, decision-making processes and organisational performance. Finally, we discuss how the three papers in this Special Issue advance the research agenda. Disciplines Engineering | Science and Technology Studies Publication Details Sharma, R., Mithas, S. and Kankanhalli, A. (2014). Transforming decision-making processes: a research agenda for understanding the impact of business analytics on organisations. European Journal of Information Systems, 23 (4), 433-441. This journal article is available at Research Online: http://ro.uow.edu.au/eispapers/3231 EJISEditorialFinal 16 May 2014 RS.docx 1 of 17",
"title": ""
},
{
"docid": "f34562a98d4a9768f08bc607aec796a5",
"text": "The greyfin croaker Pennahia anea is one of the most common croakers currently on retail sale in Hong Kong, but there are no regional studies on its biology or fishery. The reproductive biology of the species, based on 464 individuals obtained from local wet markets, was studied over 16 months (January 2008–April 2009) using gonadosomatic index (GSI) and gonad histology. Sizes used in this study ranged from 8.0 to 19.0 cm in standard length (SL). Both the larger and smaller size classes were missing from samples, implying that they are infrequently caught in the fishery. Based on GSI data, the approximate minimum sizes for male and female maturation were 12 cm SL. The size at 50% maturity for females was 14.3 cm SL, while all males in the samples were mature. Both GSI and gonad histology suggest that spawning activity occurred from March–April to June, with a peak in May. Since large croakers are declining in the local and regional fisheries, small species such as P. anea are becoming important, although they are mostly taken as bycatch. In view of unmanaged fishing pressure, and given the decline in large croakers and sizes of P. anea presently caught, proper management of the species is suggested.",
"title": ""
},
{
"docid": "e9e8fd42501c8e8f4e7d1c4ede40e758",
"text": "The paper uses a case example to present a novel way of building enterprise information systems. The objective is to bring forth the benefits of an item centric systems design in environments that require real-time material visibility, such as in logistics service provision. The methodology employed is case study and metadata modeling. Managers of SE Mäkinen, a Finnish car distribution company were interviewed on the implementation and operation of their award winning enterprise system. The case example was then analyzed using a generic metadata model of item-centric systems. The main finding of the paper is that introducing an item-centric enterprise-data model facilitated responsive service in the distribution of automobiles. The practical implications are that when starting to develop a new enterprise system, managers in logistics services should consider an item-centric design solution as an option to the conventional account-based design for enterprise-data models.",
"title": ""
},
{
"docid": "600a0b473a9396a9c098c40f83ec9273",
"text": "This paper presents two W-band waveguide bandpass filters, one fabricated using laser micromachining and the other 3-D printing. Both filters are based on coupled resonators and are designed to have a Chebyshev response. The first filter is for laser micromachining and it is designed to have a compact structure allowing the whole filter to be made from a single metal workpiece. This eliminates the need to split the filter into several layers and therefore yields an enhanced performance in terms of low insertion loss and good durability. The second filter is produced from polymer resin using a stereolithography 3-D printing technique and the whole filter is plated with copper. To facilitate the plating process, the waveguide filter consists of slots on both the broadside and narrow side walls. Such slots also reduce the weight of the filter while still retaining the filter's performance in terms of insertion loss. Both filters are fabricated and tested and have good agreement between measurements and simulations.",
"title": ""
},
{
"docid": "3125e3bdf7b3ea878a6ae054dfef49c6",
"text": "T test simulation models of pedestrian flows, we have performed experiments for corridors, bottleneck areas, and intersections. Our evaluations of video recordings show that the geometric boundary conditions are not only relevant for the capacity of the elements of pedestrian facilities, they also influence the time gap distribution of pedestrians, indicating the existence of self-organization phenomena. After calibration of suitable models, these findings can be used to improve design elements of pedestrian facilities and egress routes. It turns out that “obstacles” can stabilize flow patterns and make them more fluid. Moreover, intersecting flows can be optimized, utilizing the phenomenon of “stripe formation.” We also suggest increasing diameters of egress routes in stadia, theaters, and lecture halls to avoid long waiting times for people in the back, and shock waves due to impatience in cases of emergency evacuation. Moreover, zigzag-shaped geometries and columns can reduce the pressure in panicking crowds. The proposed design solutions are expected to increase the efficiency and safety of train stations, airport terminals, stadia, theaters, public buildings, and mass events in the future. As application examples we mention the evacuation of passenger ships and the simulation of pilgrim streams on the Jamarat bridge. Adaptive escape guidance systems, optimal way systems, and simulations of urban pedestrian flows are addressed as well.",
"title": ""
},
{
"docid": "359d3e06c221e262be268a7f5b326627",
"text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.",
"title": ""
},
{
"docid": "7579b5cb9f18e3dc296bcddc7831abc5",
"text": "Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets. The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.",
"title": ""
},
{
"docid": "41dc9d6fd6a0550cccac1bc5ba27b11d",
"text": "A low-power forwarded-clock I/O transceiver architecture is presented that employs a high degree of output/input multiplexing, supply-voltage scaling with data rate, and low-voltage circuit techniques to enable low-power operation. The transmitter utilizes a 4:1 output multiplexing voltage-mode driver along with 4-phase clocking that is efficiently generated from a passive poly-phase filter. The output driver voltage swing is accurately controlled from 100–200 <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm mV}_{\\rm ppd}$</tex></formula> using a low-voltage pseudo-differential regulator that employs a partial negative-resistance load for improved low frequency gain. 1:8 input de-multiplexing is performed at the receiver equalizer output with 8 parallel input samplers clocked from an 8-phase injection-locked oscillator that provides more than 1UI de-skew range. In the transmitter clocking circuitry, per-phase duty-cycle and phase-spacing adjustment is implemented to allow adequate timing margins at low operating voltages. Fabricated in a general purpose 65 nm CMOS process, the transceiver achieves 4.8–8 Gb/s at 0.47–0.66 pJ/b energy efficiency for <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm V}_{\\rm DD}=0.6$</tex></formula>–0.8 V.",
"title": ""
},
{
"docid": "c21392fd24107b2a187b4be820866c94",
"text": "Present-day manufacturing companies encounter a variety of challenges due to the dynamically changing industrial environment. Current control frameworks lack the adaptability and flexibility to effectively deal with challenges such as broken-down machines or altered customer orders. Multi-agent control has been proposed to improve the performance of manufacturing systems in uncertain or dynamic environments. Some multiagent architectures have been introduced with promising results. A key component of these architectures is the product agent, which is responsible for guiding a physical part through the manufacturing system based on the production requirements of the part. Even though the product agent has been previously used in multi-agent frameworks, a well-defined internal architecture for this agent has yet to be proposed. This work specifies a product agent architecture that can be utilized in multi-agent systems. The proposed architecture is tested using a manufacturing system simulation. The simulation results showcase the reactivity, proactiveness, and autonomy of the proposed product agent.",
"title": ""
},
{
"docid": "045ce09ddca696e2882413a8d251c5f6",
"text": "Predicting student performance in tertiary institutions has potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student's mark for a module, given the student's performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at University of KwaZulu- Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggests that Matrix Factorization performs better than both benchmarks.",
"title": ""
},
{
"docid": "880b4ce4c8fd19191cb996aceabdf5a7",
"text": "The study of the web as a graph is not only fascinating in its own right, but also yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution. We report on experiments on local and global properties of the web graph using two Altavista crawls each with over 200 million pages and 1.5 billion links. Our study indicates that the macroscopic structure of the web is considerably more intricate than suggested by earlier experiments on a smaller scale.",
"title": ""
},
{
"docid": "f54b00ada7026004482a921bf61b634f",
"text": "The purpose of this study was to examine how 1:1 laptop initiative affected student learning at a selected rural Midwestern high school. A total of 105 high school students enrolled in 10th–12th grades during the 2008–2009 school year participated in the study. A survey instrument created by the Mitchell Institute was modified and used to collect data on student perceptions and faculty perceptions of the impact of 1:1 laptop computing on student learning and instructional integration of technology in education. Study findings suggest that integration of 1:1 laptop computing positively impacts student academic engagement and student learning. Therefore, there is need for teachers to implement appropriate computing practices to enhance student learning. Additionally, teachers need to collaborate with their students to learn and understand various instructional technology applications beyond basic Internet browsing and word processing.",
"title": ""
},
{
"docid": "dda8427a6630411fc11e6d95dbff08b9",
"text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.",
"title": ""
},
{
"docid": "9bba22f8f70690bee5536820567546e6",
"text": "Graph clustering involves the task of dividing nodes into clusters, so that the edge density is higher within clusters as opposed to across clusters. A natural, classic, and popular statistical setting for evaluating solutions to this problem is the stochastic block model, also referred to as the planted partition model. In this paper, we present a new algorithm-a convexified version of maximum likelihood-for graph clustering. We show that, in the classic stochastic block model setting, it outperforms existing methods by polynomial factors when the cluster size is allowed to have general scalings. In fact, it is within logarithmic factors of known lower bounds for spectral methods, and there is evidence suggesting that no polynomial time algorithm would do significantly better. We then show that this guarantee carries over to a more general extension of the stochastic block model. Our method can handle the settings of semirandom graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated nodes, partially observed graphs, planted clique/coloring, and so on. In particular, our results provide the best exact recovery guarantees to date for the planted partition, planted k-disjoint-cliques and planted noisy coloring models with general cluster sizes; in other settings, we match the best existing results up to logarithmic factors.",
"title": ""
},
{
"docid": "742808e23275a17591e3700fe21319a8",
"text": "Digital games have become a key player in the entertainment industry, attracting millions of new players each year. In spite of that, novice players may have a hard time when playing certain types of games, such as MOBAs and MMORPGs, due to their steep learning curves and not so friendly online communities. In this paper, we present an approach to help novice players in MOBA games overcome these problems. An artificial intelligence agent plays alongside the player analysing his/her performance and giving tips about the game. Experiments performed with the game League of Legends show the potential of this approach.",
"title": ""
}
] |
scidocsrr
|
acea7dbeb2be08a799c5b4828fb2085e
|
Fuzzy logic based sentiment analysis of product review documents
|
[
{
"docid": "75fd1706bb96a1888dc9939dbe5359c2",
"text": "In this paper, we present a novel approach to ide ntify feature specific expressions of opinion in product reviews with different features and mixed emotions . The objective is realized by identifying a set of potential features in the review and extract ing opinion expressions about those features by exploiting their associatio ns. Capitalizing on the view that more closely associated words come togeth er to express an opinion about a certain feature, dependency parsing i used to identify relations between the opinion expressions. The syst em learns the set of significant relations to be used by dependency parsing and a threshold parameter which allows us to merge closely associated opinio n expressions. The data requirement is minimal as thi is a one time learning of the domain independent parameters . The associations are represented in the form of a graph which is partiti oned to finally retrieve the opinion expression describing the user specified feature. We show that the system achieves a high accuracy across all domains and performs at par with state-of-the-art systems despi t its data limitations.",
"title": ""
},
{
"docid": "cd89079c74f5bb0218be67bf680b410f",
"text": "This paper illustrates a sentiment analysis approach to extract sentiments associated with polarities of positive or negative for specific subjects from a document, instead of classifying the whole document into positive or negative.The essential issues in sentiment analysis are to identify how sentiments are expressed in texts and whether the expressions indicate positive (favorable) or negative (unfavorable) opinions toward the subject. In order to improve the accuracy of the sentiment analysis, it is important to properly identify the semantic relationships between the sentiment expressions and the subject. By applying semantic analysis with a syntactic parser and sentiment lexicon, our prototype system achieved high precision (75-95%, depending on the data) in finding sentiments within Web pages and news articles.",
"title": ""
}
] |
[
{
"docid": "f3f0eee17e874fc108a250271eab5851",
"text": "In a recent paper, “Why does deep and cheap learning work so well?”, Lin and Tegmark claim to show that the mapping between deep belief networks and the variational renormalization group derived in [1] is invalid, and present a “counterexample” that claims to show that this mapping does not hold. In this comment, we show that these claims are incorrect and stem from a misunderstanding of the variational RG procedure proposed by Kadanoff. We also explain why the “counterexample” of Lin and Tegmark is compatible with the mapping proposed in [1].",
"title": ""
},
{
"docid": "9a921d579e9a9a213939b6cf9fa2ac9a",
"text": "This paper presents a generic methodology to optimize constellations based on their geometrical shaping for bit-interleaved coded modulation (BICM) systems. While the method can be applicable to any wireless standard design it has been tailored to two delivery scenarios typical of broadcast systems: 1) robust multimedia delivery and 2) UHDTV quality bitrate services. The design process is based on maximizing the BICM channel capacity for a given power constraint. The major contribution of this paper is a low complexity optimization algorithm for the design of optimal constellation schemes. The proposal consists of a set of initial conditions for a particle swarm optimization algorithm, and afterward, a customized post processing procedure for further improving the constellation alphabet. According to the broadcast application cases, the sizes of the constellations proposed range from 16 to 4096 symbols. The BICM channel capacities and performance of the designed constellations are compared to conventional quadrature amplitude modulation constellations for different application scenarios. The results show a significant improvement in terms of system performance and BICM channel capacities under additive white Gaussian noise and Rayleigh independently and identically distributed channel conditions.",
"title": ""
},
{
"docid": "4709a4e1165abb5d0018b74495218fc7",
"text": "Network monitoring guides network operators in understanding the current behavior of a network. Therefore, accurate and efficient monitoring is vital to ensure that the network operates according to the intended behavior and then to troubleshoot any deviations. However, the current practice of network-monitoring largely depends on manual operations, and thus enterprises spend a significant portion of their budgets on the workforce that monitor their networks. We analyze present network-monitoring technologies, identify open problems, and suggest future directions. In particular, our findings are based on two different analyses. The first analysis assesses how well present technologies integrate with the entire cycle of network-management operations: design, deployment, and monitoring. Network operators first design network configurations, given a set of requirements, then they deploy the new design, and finally they verify it by continuously monitoring the network’s behavior. One of our observations is that the efficiency of this cycle can be greatly improved by automated deployment of pre-designed configurations, in response to changes in monitored network behavior. Our second analysis focuses on network-monitoring technologies and group issues in these technologies into five categories. Such grouping leads to the identification of major problem groups in network monitoring, e.g., efficient management of increasing amounts of measurements for storage, analysis, and presentation. We argue that continuous effort is needed in improving network-monitoring since the presented problems will become even more serious in the future, as networks grow in size and carry more data. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "98b9963b0b6a731184db8e60889ca86c",
"text": "This paper presents a new spatial spectrum-sharing strategy for massive multiple-input multiple-output (MIMO) cognitive radio (CR) systems, where two CR base stations (CBS) are employed at the adjacent sides of each cell to provide a full-space coverage for the whole cell. Thanks to the high spatial resolution of massive antennas, CRs are distinguished by their angular information and their uplink/downlink channels are also represented with reduced parameter dimensions by the proposed two-dimensional spatial basis expansion model (2D-SBEM). To improve the spectral efficiency and the scheduling probability of CRs, a greedy CR scheduling algorithm is designed for the dual CBS system. As the proposed strategy is mainly based on angular information and since the angle reciprocity holds for two frequency carriers with moderate distance, it can be applied for both TDD and FDD systems.",
"title": ""
},
{
"docid": "8e76f262cd8cfe37f0f52f3c59baf80a",
"text": "The growth of the Internet has created tremendous opportunities for online collaborations. These often involve collaborative optimizations where the two parties are, for example, jointly minimizing costs without violating their own particular constraints (e.g., one party may have too much inventory, another too little inventory but too much production capacity, etc). Many of these optimizations can be formulated as linear programming problems, or, rather, as collaborative linear programming, in which two parties need to jointly optimize based on their own private inputs. It is often important to have online collaboration techniques and protocols that carry this out without either party revealing to the other anything about their own private inputs to the optimization (other than, unavoidably, what can be deduced from the collaboratively computed optimal solution). For example, two organizations who jointly invest in a project may want to minimize some linear objective function while satisfying both organizations' private and confidential constraints. Constraints are usually private when they reveal too much about the organizations' financial health, its future business strategy, etc. Linear programming problems have been widely studied in the literature. However, the existing solutions (e.g., the simplex method) do not extend to the above-mentioned framework in which the linear constraints are shared by the two parties, who do not want to disclose their own to the other party. In this paper, we give an efficient protocol for solving linear programming problems in the honest-but-curious model, such that neither party reveals anything about their private input to the other party (other than what can be inferred from the result). The amount of communication and computation done by our protocol is proportional to the time complexity of the simplex method, a widely used linear programming algorithm. We also provide a practical solution that prevents certain malicious behavior of the participants. The use of the known general circuit-simulation solutions to secure function evaluation is unacceptable for the simplex method, as it implies an exponential size circuit",
"title": ""
},
{
"docid": "0d5ba680571a9051e70ababf0c685546",
"text": "• Current deep RL techniques require large amounts of data to find a good policy • Once found, the policy remains a black box to practitioners • Practitioners cannot verify that the policy is making decisions based on reasonable information • MOREL (Motion-Oriented REinforcement Learning) automatically detects moving objects and uses the relevant information for action selection • We gather a dataset using a uniform random policy • Train a network without supervision to capture a structured representation of motion between frames • Network predicts object masks, object motion, and camera motion to warp one frame into the next Introduction Learning to Segment Moving Objects Experiments Visualization",
"title": ""
},
{
"docid": "1f7c3215ccac1716cb659bcc92f49c49",
"text": "We present a novel approach for the compensation of temporal brightness variations (commonly referred to as flicker) in archived film sequences. The proposed method is motivated by fundamental principles of photographic image registration and provides a substantial level of adaptation to temporal but also spatial variations of picture brightness. Additionally, our scheme provides an efficient mechanism for the adaptive estimation of flicker compensation profile, which makes it suitable for the compensation of long duration film sequences while it addresses problems arising from scene motion and illumination using a novel motion-compensated graylevel tracing approach. We present experimental evidence which suggests that our method offers high levels of performance and compares favorably with competing state-of-the-art techniques for flicker compensation.",
"title": ""
},
{
"docid": "dbde47a4142bffc2bcbda988781e5229",
"text": "Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.",
"title": ""
},
{
"docid": "dddc0b6196a81de7c24c8bfc9dc0af7e",
"text": "Microblog, such as Weibo and Twitter, has become an important platform where people share their opinions. Much research has been done to detect topics and events in microblogs. Due to the dynamic nature of events, it is more crucial to monitor the evolution and trace the development of the events. People pay more attention to the whole evolution chain of the events rather than a single event. In this paper, we propose a method to automatically discover event evolution chain in microblogs based on multiple similarity measures including contents, locations and participants. We build a 5-tuple event description model specifically for events detected from microblogs and analyze their relationships. Inverted index and locality-sensitive hashing are used to improve the efficiency of the algorithm. Experiment shows that our method gain a 143.33% speed up against method without locality-sensitive hashing. In comparison with the ground truth and a baseline method, the result illustrates that it effectively covers ground truth and outperforms the baseline method especially in dealing with the long-term spanning events.",
"title": ""
},
{
"docid": "2abfa229fa2d315d9c1550549a9deb42",
"text": "Twenty-five adolescents reported their daily activities and the quality of their experiences for a total of 753 times during a normal week, in response to random beeps transmitted by an electronic paging device. In this sample adolescents were found to spend most of their time either in conversation with peers or in watching television. Negative affects were prevalent in most activities involving socialization into adult roles. Television viewing appears to be an affectless state associated with deviant behavior and antisocial personality traits. The research suggests the importance of a systemic approach which studies persons' activities and experiences in an ecological context. The experiential sampling method described in this paper provides a tool for collecting such systemic data.",
"title": ""
},
{
"docid": "9971017e8ad4cda361e98e3c56185587",
"text": "Improvements in stable, or dispositional, mindfulness are often assumed to accrue from mindfulness training and to account for many of its beneficial effects. However, research examining these assumptions has produced mixed findings, and the relation between dispositional mindfulness and mindfulness training is actively debated. A comprehensive meta-analysis was conducted on randomized controlled trials (RCTs) of mindfulness training published from 2003-2014 to investigate whether (a) different self-reported mindfulness scale dimensions change as a result of mindfulness training, (b) key aspects of study design (e.g., control condition type, population type, and intervention type) moderate training-related changes in dispositional mindfulness scale dimensions, and (c) changes in mindfulness scale dimensions are associated with beneficial changes in mental health outcomes. Scales from widely used dispositional mindfulness measures were combined into 5 categories for analysis: Attention, Description, Nonjudgment, Nonreactivity, and Observation. A total of 88 studies (n = 5,787) were included. Changes in scale dimensions of mindfulness from pre to post mindfulness training produced mean difference effect sizes ranging from small to moderate (g = 0.28-0.49). Consistent with the theorized role of improvements in mindfulness in training outcomes, changes in dispositional mindfulness scale dimensions were moderately correlated with beneficial intervention outcomes (r = .27-0.30), except for the Observation dimension (r = .16). Overall, moderation analyses revealed inconsistent results, and limitations of moderator analyses suggest important directions for future research. We discuss how the findings can inform the next generation of mindfulness assessment. (PsycINFO Database Record",
"title": ""
},
{
"docid": "b55a0ae61e2b0c36b5143ef2b7b2dbf0",
"text": "This study reports a comparison of screening tests for dyslexia, dyspraxia and Meares-Irlen (M-I) syndrome in a Higher Education setting, the University of Worcester. Using a sample of 74 volunteer students, we compared the current tutor-delivered battery of 15 subtests with a computerized test, the Lucid Adult Dyslexia Screening test (LADS), and both of these with data on assessment outcomes. The sensitivity of this tutor battery was higher than LADS in predicting dyslexia, dyspraxia or M-I syndrome (91% compared with 66%) and its specificity was lower (79% compared with 90%). Stepwise logistic regression on these tests was used to identify a better performing subset of tests, when combined with a change in practice for M-I syndrome screening. This syndrome itself proved to be a powerful discriminator for dyslexia and/or dyspraxia, and we therefore recommend it as the first stage in a two-stage screening process. The specificity and sensitivity of the new battery, the second part of which comprises LADS plus four of the original tutor delivered subtests, provided the best overall performance: 94% sensitivity and 92% specificity. We anticipate that the new two-part screening process would not take longer to complete.",
"title": ""
},
{
"docid": "9a52461cbd746e4e1df5748af37b58ed",
"text": "Irony is a pervasive aspect of many online texts, one made all the more difficult by the absence of face-to-face contact and vocal intonation. As our media increasingly become more social, the problem of irony detection will become even more pressing. We describe here a set of textual features for recognizing irony at a linguistic level, especially in short texts created via social media such as Twitter postings or ‘‘tweets’’. Our experiments concern four freely available data sets that were retrieved from Twitter using content words (e.g. ‘‘Toyota’’) and user-generated tags (e.g. ‘‘#irony’’). We construct a new model of irony detection that is assessed along two dimensions: representativeness and relevance. Initial results are largely positive, and provide valuable insights into the figurative issues facing tasks such as sentiment analysis, assessment of online reputations, or decision making.",
"title": ""
},
{
"docid": "7b239e83dea095bad2229d66596982c5",
"text": "In this paper, we discuss the application of concept of data quality to big data by highlighting how much complex is to define it in a general way. Already data quality is a multidimensional concept, difficult to characterize in precise definitions even in the case of well-structured data. Big data add two further dimensions of complexity: (i) being “very” source specific, and for this we adopt the interesting UNECE classification, and (ii) being highly unstructured and schema-less, often without golden standards to refer to or very difficult to access. After providing a tutorial on data quality in traditional contexts, we analyze big data by providing insights into the UNECE classification, and then, for each type of data source, we choose a specific instance of such a type (notably deep Web data, sensor-generated data, and Twitters/short texts) and discuss how quality dimensions can be defined in these cases. The overall aim of the paper is therefore to identify further research directions in the area of big data quality, by providing at the same time an up-to-date state of the art on data quality.",
"title": ""
},
{
"docid": "855b35e6e4c6f147de71bf0864184d56",
"text": "Leveraging large data sets, deep Convolutional Neural Networks (CNNs) achieve state-of-the-art recognition accuracy. Due to the substantial compute and memory operations, however, they require significant execution time. The massive parallel computing capability of GPUs make them as one of the ideal platforms to accelerate CNNs and a number of GPU-based CNN libraries have been developed. While existing works mainly focus on the computational efficiency of CNNs, the memory efficiency of CNNs have been largely overlooked. Yet CNNs have intricate data structures and their memory behavior can have significant impact on the performance. In this work, we study the memory efficiency of various CNN layers and reveal the performance implication from both data layouts and memory access patterns. Experiments show the universal effect of our proposed optimizations on both single layers and various networks, with up to 27.9× for a single layer and up to 5.6× on the whole networks.",
"title": ""
},
{
"docid": "59370193760b0bebaf530ce669e4ef80",
"text": "AlGaN/GaN HEMT using field plate and recessed gate for X-band application was developed on SiC substrate. Internal matching circuits were designed to achieve high gain at 8 GHz for the developed device with single chip and four chips combining, respectively. The internally matched 5.52 mm single chip AlGaN/GaN HEMT exhibited 36.5 W CW output power with a power added efficiency (PAE) of 40.1% and power density of 6.6 W/mm at 35 V drain bias voltage (Vds). The device with four chips combining demonstrated a CW over 100 W across the band of 7.7-8.2 GHz, and an maximum CW output power of 119.1 W with PAE of 38.2% at Vds =31.5 V. This is the highest output power for AlGaN/GaN HEMT operated at X-band to the best of our knowledge.",
"title": ""
},
{
"docid": "2931d312ab452a78f82bdd5e6709fb5e",
"text": "When a piece of malicious information becomes rampant in an information diffusion network, can we identify the source node that originally introduced the piece into the network and infer the time when it initiated this? Being able to do so is critical for curtailing the spread of malicious information, and reducing the potential losses incurred. This is a very challenging problem since typically only incomplete traces are observed and we need to unroll the incomplete traces into the past in order to pinpoint the source. In this paper, we tackle this problem by developing a twostage framework, which first learns a continuoustime diffusion network model based on historical diffusion traces and then identifies the source of an incomplete diffusion trace by maximizing the likelihood of the trace under the learned model. Experiments on both large synthetic and realworld data show that our framework can effectively “go back to the past”, and pinpoint the source node and its initiation time significantly more accurately than previous state-of-the-arts.",
"title": ""
},
{
"docid": "3c95e090ab4e57f2fd21543226ad55ae",
"text": "Increase in the area and neuron number of the cerebral cortex over evolutionary time systematically changes its computational properties. One of the fundamental developmental mechanisms generating the cortex is a conserved rostrocaudal gradient in duration of neuron production, coupled with distinct asymmetries in the patterns of axon extension and synaptogenesis on the same axis. A small set of conserved sensorimotor areas with well-defined thalamic input anchors the rostrocaudal axis. These core mechanisms organize the cortex into two contrasting topographic zones, while systematically amplifying hierarchical organization on the rostrocaudal axis in larger brains. Recent work has shown that variation in 'cognitive control' in multiple species correlates best with absolute brain size, and this may be the behavioral outcome of this progressive organizational change.",
"title": ""
},
{
"docid": "bb0ac3d88646bf94710a4452ddf50e51",
"text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. 
Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension",
"title": ""
},
{
"docid": "ee25e4acd98193e7dc3f89f3f98e42e0",
"text": "Kempe et al. [4] (KKT) showed the problem of influence maximization is NP-hard and a simple greedy algorithm guarantees the best possible approximation factor in PTIME. However, it has two major sources of inefficiency. First, finding the expected spread of a node set is #P-hard. Second, the basic greedy algorithm is quadratic in the number of nodes. The first source is tackled by estimating the spread using Monte Carlo simulation or by using heuristics[4, 6, 2, 5, 1, 3]. Leskovec et al. proposed the CELF algorithm for tackling the second. In this work, we propose CELF++ and empirically show that it is 35-55% faster than CELF.",
"title": ""
}
] |
scidocsrr
|
252e7dba8872f8e44c91746d91bb4531
|
Cachet: a decentralized architecture for privacy preserving social networking with caching
|
[
{
"docid": "f3ec87229acd0ec98c044ad42fd9fec1",
"text": "Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question \"Can users retain their privacy while still benefiting from these web services?\". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect.",
"title": ""
}
] |
[
{
"docid": "b716af4916ac0e4a0bf0b040dccd352b",
"text": "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future.",
"title": ""
},
{
"docid": "19b283a1438058088f9f9e337dd5aac7",
"text": "Analysis on Web search query logs has revealed that there is a large portion of entity-bearing queries, reflecting the increasing demand of users on retrieving relevant information about entities such as persons, organizations, products, etc. In the meantime, significant progress has been made in Web-scale information extraction, which enables efficient entity extraction from free text. Since an entity is expected to capture the semantic content of documents and queries more accurately than a term, it would be interesting to study whether leveraging the information about entities can improve the retrieval accuracy for entity-bearing queries. In this paper, we propose a novel retrieval approach, i.e., latent entity space (LES), which models the relevance by leveraging entity profiles to represent semantic content of documents and queries. In the LES, each entity corresponds to one dimension, representing one semantic relevance aspect. We propose a formal probabilistic framework to model the relevance in the high-dimensional entity space. Experimental results over TREC collections show that the proposed LES approach is effective in capturing latent semantic content and can significantly improve the search accuracy of several state-of-the-art retrieval models for entity-bearing queries.",
"title": ""
},
{
"docid": "a02cd3bccf9c318f0c7a01fa84bc0f8e",
"text": "In the last several years, differential privacy has become the leading framework for private data analysis. It provides bounds on the amount that a randomized function can change as the result of a modification to one record of a database. This requirement can be satisfied by using the exponential mechanism to perform a weighted choice among the possible alternatives, with better options receiving higher weights. However, in some situations the number of possible outcomes is too large to compute all weights efficiently. We present the subsampled exponential mechanism, which scores only a sample of the outcomes. We show that it still preserves differential privacy, and fulfills a similar accuracy bound. Using a clustering application, we show that the subsampled exponential mechanism outperforms a previously published private algorithm and is comparable to the full exponential mechanism but more scalable.",
"title": ""
},
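As a rough illustration of the idea described above, the sketch below scores only a uniform sample of candidate outcomes and then applies the usual exponential-mechanism weighting. Function names and the sampling-without-replacement choice are assumptions, and the paper's precise privacy and accuracy accounting is not reproduced here.

```python
import math
import random

def subsampled_exponential_mechanism(outcomes, score, sensitivity, epsilon, m):
    """Pick one outcome while scoring only m uniformly sampled candidates.

    Each sampled outcome o is selected with probability proportional to
    exp(epsilon * score(o) / (2 * sensitivity)), as in the exponential
    mechanism, but the normalization runs over the sample only.
    """
    sample = random.sample(list(outcomes), min(m, len(outcomes)))
    scores = [score(o) for o in sample]
    top = max(scores)  # shift scores for numerical stability; ratios are unchanged
    weights = [math.exp(epsilon * (s - top) / (2.0 * sensitivity)) for s in scores]
    return random.choices(sample, weights=weights, k=1)[0]
```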
{
"docid": "36371909115d45074f709b090f46b644",
"text": "For many years Round the World racers and leading yacht owners have appreciated the benefit of carbon. Carbon fiber spars are around 50% lighter and considerably stronger than traditional aluminum masts. The result is increased speed, and the lighter mast also gives the boat a lower centre of gravity and so heeling and pitching is reduced. The recent spate of carbon mast failures has left concerns amongst the general yachting public about the reliability of the concept and ultimately the material itself. The lack of knowledge about loads acting on the mast prevents designers from coming with an optimum design. But a new program, the \"Smart Mast\" program, developed by two of Britain's leading marine companies, has been able to monitor loads acting on a mast in real-time with an optical fiber system. This improvement could possibly be a revolution in the design of racing yachts carbon masts and fill the design data shortage. Some other evolutions in the rigging design also appeared to be of interest, like for example the free-standing mast or a video system helping the helmsman to use its sails at their maximum. Thesis supervisor: Jerome J. Connor Title: Professor of Civil and Environmental Engineering",
"title": ""
},
{
"docid": "4f59e141ffc88aaed620ca58522e8f03",
"text": "Undergraduate volunteers rated a series of words for pleasantness while hearing a particular background music. The subjects in Experiment 1 received, immediately or after a 48-h delay, an unexpected word-recall test in one of the following musical cue contexts: same cue (S), different cue (D), or no cue (N). For immediate recall, context dependency (S-D) was significant but same-cue facilitation (S-N) was not. No cue effects at all were found for delayed recall, and there was a significant interaction between cue and retention interval. A similar interaction was also found in Experiment 3, which was designed to rule out an alternative explanation with respect to distraction. When the different musical selection was changed specifically in either tempo or form (genre), only pieces having an altered tempo produced significantly lower immediate recall compared with the same pieces (Experiment 2). The results support a stimulus generalization view of music-dependent memory.",
"title": ""
},
{
"docid": "28d75588fdb4ff45929da124b001e8cc",
"text": "We present a novel training framework for neural sequence models, particularly for grounded dialog generation. The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce ‘safe’ and generic responses (‘I don’t know’, ‘I can’t tell’). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds – the practical usefulness of G and the strong performance of D – via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution – specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). The source code can be downloaded from https://github.com/jiasenlu/visDial.pytorch",
"title": ""
},
{
"docid": "28b23fc65a17b2b29e4e2a6b78ab401b",
"text": "In 1980, the N400 event-related potential was described in association with semantic anomalies within sentences. When, in 1992, a second waveform, the P600, was reported in association with syntactic anomalies and ambiguities, the story appeared to be complete: the brain respected a distinction between semantic and syntactic representation and processes. Subsequent studies showed that the P600 to syntactic anomalies and ambiguities was modulated by lexical and discourse factors. Most surprisingly, more than a decade after the P600 was first described, a series of studies reported that semantic verb-argument violations, in the absence of any violations or ambiguities of syntax can evoke robust P600 effects and no N400 effects. These observations have raised fundamental questions about the relationship between semantic and syntactic processing in the brain. This paper provides a comprehensive review of the recent studies that have demonstrated P600s to semantic violations in light of several proposed triggers: semantic-thematic attraction, semantic associative relationships, animacy and semantic-thematic violations, plausibility, task, and context. I then discuss these findings in relation to a unifying theory that attempts to bring some of these factors together and to link the P600 produced by semantic verb-argument violations with the P600 evoked by unambiguous syntactic violations and syntactic ambiguities. I suggest that normal language comprehension proceeds along at least two competing neural processing streams: a semantic memory-based mechanism, and a combinatorial mechanism (or mechanisms) that assigns structure to a sentence primarily on the basis of morphosyntactic rules, but also on the basis of certain semantic-thematic constraints. I suggest that conflicts between the different representations that are output by these distinct but interactive streams lead to a continued combinatorial analysis that is reflected by the P600 effect. I discuss some of the implications of this non-syntactocentric, dynamic model of language processing for understanding individual differences, language processing disorders and the neuroanatomical circuitry engaged during language comprehension. Finally, I suggest that that these two processing streams may generalize beyond the language system to real-world visual event comprehension.",
"title": ""
},
{
"docid": "e43242ed17a0b2fa9fca421179135ce1",
"text": "Direct digital synthesis (DDS) is a useful tool for generating periodic waveforms. In this two-part article, the basic idea of this synthesis technique is presented and then focused on the quality of the sinewave a DDS can create, introducing the SFDR quality parameter. Next effective methods to increase the SFDR are presented through sinewave approximations, hardware schemes such as dithering and noise shaping, and an extensive list of reference. When the desired output is a digital signal, the signal's characteristics can be accurately predicted using the formulas given in this article. When the desired output is an analog signal, the reader should keep in mind that the performance of the DDS is eventually limited by the performance of the digital-to-analog converter and the follow-on analog filter. Hoping that this article would incite engineers to use DDS either in integrated circuits DDS or software-implemented DDS. From the author's experience, this technique has proven valuable when frequency resolution is the challenge, particularly when using low-cost microcontrollers.",
"title": ""
},
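The core of a DDS is a phase accumulator driving a sine lookup table; the following is a minimal software model of that idea (bit widths, parameter names, and the example frequencies are illustrative assumptions, not taken from the article). Phase truncation into the LUT and amplitude quantization are exactly the error sources that limit the SFDR discussed above.

```python
import numpy as np

def dds_sine(f_out, f_clk, n_samples, acc_bits=32, lut_bits=10, amp_bits=12):
    """Phase-accumulator DDS: f_out = tuning_word * f_clk / 2**acc_bits."""
    tuning_word = int(round(f_out * 2**acc_bits / f_clk))
    # Sine lookup table quantized to amp_bits.
    lut = np.round((2**(amp_bits - 1) - 1) *
                   np.sin(2 * np.pi * np.arange(2**lut_bits) / 2**lut_bits))
    acc, out = 0, np.empty(n_samples)
    for i in range(n_samples):
        acc = (acc + tuning_word) % 2**acc_bits        # phase accumulator wraps around
        out[i] = lut[acc >> (acc_bits - lut_bits)]     # truncate phase to index the LUT
    return out

# Example: a roughly 1 kHz tone at a 48 kHz clock; frequency resolution is f_clk / 2**32.
tone = dds_sine(f_out=1000.0, f_clk=48000.0, n_samples=480)
```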
{
"docid": "38524d91bcff648f96f5d693425dff7f",
"text": "This paper presents a predictive current control method and its application to a voltage source inverter. The method uses a discrete-time model of the system to predict the future value of the load current for all possible voltage vectors generated by the inverter. The voltage vector which minimizes a quality function is selected. The quality function used in this work evaluates the current error at the next sampling time. The performance of the proposed predictive control method is compared with hysteresis and pulsewidth modulation control. The results show that the predictive method controls very effectively the load current and performs very well compared with the classical solutions",
"title": ""
},
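To make the scheme above concrete, here is a minimal sketch of one finite-set predictive current control step for a two-level voltage source inverter feeding an RL load. The load parameters, DC-link voltage, and sampling period are illustrative assumptions; the cost function mirrors the passage's quality function (current error at the next sampling instant).

```python
import numpy as np

# Illustrative two-level inverter and RL-load parameters (not from the paper).
R, L, Vdc, Ts = 0.5, 10e-3, 520.0, 25e-6

# The eight inverter voltage vectors in the alpha-beta frame:
# two zero vectors plus six active vectors.
V = np.concatenate([np.zeros(2, dtype=complex),
                    (2.0 / 3.0) * Vdc * np.exp(1j * np.pi / 3.0 * np.arange(6))])

def best_vector(i_meas, i_ref):
    """Predict the load current one step ahead for every candidate vector and
    pick the one minimizing the current error.

    Uses a forward-Euler discretization of di/dt = (v - R*i)/L, with currents
    expressed as complex numbers in the stationary alpha-beta frame.
    """
    i_pred = i_meas + (Ts / L) * (V - R * i_meas)   # predictions for all 8 vectors
    cost = np.abs(i_ref - i_pred)                    # quality function: |i* - i_pred|
    k = int(np.argmin(cost))
    return k, i_pred[k]
```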
{
"docid": "874973c7a28652d5d9859088b965e76c",
"text": "Recommender systems are commonly defined as applications that e-commerce sites exploit to suggest products and provide consumers with information to facilitate their decision-making processes.1 They implicitly assume that we can map user needs and constraints, through appropriate recommendation algorithms, and convert them into product selections using knowledge compiled into the intelligent recommender. Knowledge is extracted from either domain experts (contentor knowledge-based approaches) or extensive logs of previous purchases (collaborative-based approaches). Furthermore, the interaction process, which turns needs into products, is presented to the user with a rationale that depends on the underlying recommendation technology and algorithms. For example, if the system funnels the behavior of other users in the recommendation, it explicitly shows reviews of the selected products or quotes from a similar user. Recommender systems are now a popular research area2 and are increasingly used by e-commerce sites.1 For travel and tourism,3 the two most successful recommender system technologies (see Figure 1) are Triplehop’s TripMatcher (used by www. ski-europe.com, among others) and VacationCoach’s expert advice platform, MePrint (used by travelocity.com). Both of these recommender systems try to mimic the interactivity observed in traditional counselling sessions with travel agents when users search for advice on a possible holiday destination. From a technical viewpoint, they primarily use a content-based approach, in which the user expresses needs, benefits, and constraints using the offered language (attributes). The system then matches the user preferences with items in a catalog of destinations (described with the same language). VacationCoach exploits user profiling by explicitly asking the user to classify himself or herself in one profile (for example, as a “culture creature,” “beach bum,” or “trail trekker”), which induces implicit needs that the user doesn’t provide. The user can even input precise profile information by completing the appropriate form. TripleHop’s matching engine uses a more sophisticated approach to reduce user input. It guesses importance of attributes that the user does not explicitly mention. It then combines statistics on past user queries with a prediction computed as a weighted average of importance assigned by similar users.4",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "941e74dd8a4d9728daf9dff564186c07",
"text": "The recent application of RNN encoder-decoder models has re sulted in substantial progress in fully data-driven dialogue systems, but evalua tion remains a challenge. An adversarial loss could be a way to directly evaluate the ex t nt to which generated dialogue responses sound like they came from a human. Th is could reduce the need for human evaluation, while more directly evaluati ng on a generative task. In this work, we investigate this idea by training an RN N to discriminate a dialogue model’s samples from human-generated samples. A lthough we find some evidence this setup could be viable, we also note that ma ny issues remain in its practical application. We discuss both aspects and conc lude that future work is warranted.",
"title": ""
},
{
"docid": "45a98a82d462d8b12445cbe38f20849d",
"text": "Proliferative verrucous leukoplakia (PVL) is an aggressive form of oral leukoplakia that is persistent, often multifocal, and refractory to treatment with a high risk of recurrence and malignant transformation. This article describes the clinical aspects and histologic features of a case that demonstrated the typical behavior pattern in a long-standing, persistent lesion of PVL of the mandibular gingiva and that ultimately developed into squamous cell carcinoma. Prognosis is poor for this seemingly harmless-appearing white lesion of the oral mucosa.",
"title": ""
},
{
"docid": "a08ae7da309e4f34308fa627b231cdea",
"text": "The rapid development of social networks makes it easy for people to communicate online. However, social networks always suffer from social spammers due to their openness. Spammers deliver information for economic purposes, and they pose threats to the security of social networks. To maintain the long-term running of online social networks, many detection methods are proposed. But current methods normally use high dimension features with supervised learning algorithms to find spammers, resulting in low detection performance. To solve this problem, in this paper, we first apply the Laplacian score method, which is an unsupervised feature selection method, to obtain useful features. Based on the selected features, the semi-supervised ensemble learning is then used to train the detection model. Experimental results on the Twitter dataset show the efficiency of our approach after feature selection. Moreover, the proposed method remains high detection performance in the face of limited labeled data.",
"title": ""
},
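The Laplacian score mentioned above is a standard unsupervised feature-selection criterion; the sketch below is a minimal dense implementation of it (parameter names, the heat-kernel bandwidth, and the kNN construction details are assumptions, not taken from the paper).

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Laplacian score per feature; smaller scores mark more useful features.

    X is an (n_samples, n_features) array. A k-nearest-neighbor graph with
    heat-kernel weights is built, and each feature is scored by how well it
    preserves that local manifold structure.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    S = np.zeros((n, n))
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]               # k nearest neighbors of each point
    for i in range(n):
        S[i, nn[i]] = np.exp(-d2[i, nn[i]] / t)
    S = np.maximum(S, S.T)                                # symmetrize the graph
    D = np.diag(S.sum(axis=1))
    Lap = D - S                                           # graph Laplacian
    ones = np.ones(n)
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r] - (X[:, r] @ D @ ones) / (ones @ D @ ones) * ones  # remove weighted mean
        denom = f @ D @ f
        scores[r] = (f @ Lap @ f) / denom if denom > 0 else np.inf
    return scores
```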
{
"docid": "e90eb3e0104b3407df4ec00628cc5bba",
"text": "Microalgae are a promising alternative source of lipid for biodiesel production. One of the most important decisions is the choice of species to use. High lipid productivity is a key desirable characteristic of a species for biodiesel production. This paper reviews information available in the literature on microalgal growth rates, lipid content and lipid productivities for 55 species of microalgae, including 17 Chlorophyta, 11 Bacillariophyta and five Cyanobacteria as well as other taxa. The data available in the literature are far from complete and rigorous comparison across experiments carried out under different conditions is not possible. However, the collated information provides a framework for decision-making and a starting point for further investigation of species selection. Shortcomings in the current dataset are highlighted. The importance of lipid productivity as a selection parameter over lipid content and growth rate individually is demonstrated.",
"title": ""
},
{
"docid": "54fa080265b45a8a542bb47dce75ce11",
"text": "The aims of this research were to investigate the applicability of the Systematic Literature Review (SLR) process within the constraints of a 13-week master’s level project and to aggregate evidence about the effectiveness of pair programming for teaching introductory programming. It was found that, with certain modifications to the process, it was possible to undertake an SLR within a limited time period and to produce valid results. Based on pre-defined inclusion and exclusion criteria, the student found 28 publications reporting empirical studies of pair programming, of which nine publications were used for data extraction and analysis. Results of the review indicates that whilst pair programming has little effect on the marks obtained for examinations and assignments, it can significantly improve the pass and retention rates and the students’ confidence and enjoyment of programming. Following the student study, experienced reviewers re-applied the inclusion and exclusion criteria to the 28 publications and carried out data extraction and synthesis using the resulting papers. A comparison of the student’s results and those of the experienced reviewers is presented.",
"title": ""
},
{
"docid": "458470e18ce2ab134841f76440cfdc2b",
"text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.",
"title": ""
},
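As a small illustration of the graph convolution described above, the sketch below implements one generic GCN layer over a pruned dependency tree. It is not the paper's full model (which stacks layers, adds pooling, and uses learned embeddings); names and the mean-aggregation normalization are assumptions.

```python
import numpy as np

def gcn_layer(H, edges, W):
    """One graph convolution over a pruned dependency tree.

    H: (n_tokens, d_in) token vectors; edges: (head, dependent) index pairs
    that survive the path-centric pruning; W: (d_in, d_out) weight matrix.
    Computes ReLU(D^-1 (A + I) H W): mean aggregation over each token's
    neighbors plus a self-loop, the basic operation stacked in
    dependency-based GCNs.
    """
    n = H.shape[0]
    A = np.eye(n)
    for h, d in edges:                     # treat dependency arcs as undirected
        A[h, d] = A[d, h] = 1.0
    A /= A.sum(axis=1, keepdims=True)      # row-normalize by node degree
    return np.maximum(A @ H @ W, 0.0)
```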
{
"docid": "0c9bbeaa783b2d6270c735f004ecc47f",
"text": "This paper pulls together existing theory and evidence to assess whether international financial liberalization, by improving the functioning of domestic financial markets and banks, accelerates economic growth. The analysis suggests that the answer is yes. First, liberalizing restrictions on international portfolio flows tends to enhance stock market liquidity. In turn, enhanced stock market liquidity accelerates economic growth primarily by boosting productivity growth. Second, allowing greater foreign bank presence tends to enhance the efficiency of the domestic banking system. In turn, better-developed banks spur economic growth primarily by accelerating productivity growth. Thus, international financial integration can promote economic development by encouraging improvements in the domestic financial system. *Levine: Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu. I thank, without implicating, Maria Carkovic and two anonymous referees for very helpful comments. JEL Classification Numbers: F3, G2, O4 Abbreviations: GDP, TFP Number of Figures: 0 Number of Tables: 2 Date: September 5, 2000 Address of Contact Author: Ross Levine, Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: rlevine@csom.umn.edu.",
"title": ""
},
{
"docid": "f8cd8b54218350fa18d4d59ca0a58a05",
"text": "This study provides conceptual and empirical arguments why an assessment of applicants' procedural knowledge about interpersonal behavior via a video-based situational judgment test might be valid for academic and postacademic success criteria. Four cohorts of medical students (N = 723) were followed from admission to employment. Procedural knowledge about interpersonal behavior at the time of admission was valid for both internship performance (7 years later) and job performance (9 years later) and showed incremental validity over cognitive factors. Mediation analyses supported the conceptual link between procedural knowledge about interpersonal behavior, translating that knowledge into actual interpersonal behavior in internships, and showing that behavior on the job. Implications for theory and practice are discussed.",
"title": ""
},
{
"docid": "f4b5b71398e3a40c76b1f58d3f05a83d",
"text": "Creativity and innovation in any organization are vital to its successful performance. The authors review the rapidly growing body of research in this area with particular attention to the period 2002 to 2013, inclusive. Conceiving of both creativity and innovation as being integral parts of essentially the same process, we propose a new, integrative definition. We note that research into creativity has typically examined the stage of idea generation, whereas innovation studies have commonly also included the latter phase of idea implementation. The authors discuss several seminal theories of creativity and innovation, then apply a comprehensive levels-of-analysis framework to review extant research into individual, team, organizational, and multi-level innovation. Key measurement characteristics of the reviewed studies are then noted. In conclusion, we propose a guiding framework for future research comprising eleven major themes and sixty specific questions for future studies. INNOVATION AND CREATIVITY 3 INNOVATION AND CREATIVITY IN ORGANIZATIONS: A STATE-OF-THE-SCIENCE REVIEW, PROSPECTIVE COMMENTARY, AND",
"title": ""
}
] |
scidocsrr
|
1ade87b7ce7334e27e7a8e328e0febd8
|
Word forms - not just their lengths- are optimized for efficient communication
|
[
{
"docid": "d1d3607b8a5cb0158d00de9e6d366f85",
"text": "This paper investigates the role of resource allocation as a source of processing difficulty in human sentence comprehension. The paper proposes a simple information-theoretic characterization of processing difficulty as the work incurred by resource reallocation during parallel, incremental, probabilistic disambiguation in sentence comprehension, and demonstrates its equivalence to the theory of Hale [Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of NAACL (Vol. 2, pp. 159-166)], in which the difficulty of a word is proportional to its surprisal (its negative log-probability) in the context within which it appears. This proposal subsumes and clarifies findings that high-constraint contexts can facilitate lexical processing, and connects these findings to well-known models of parallel constraint-based comprehension. In addition, the theory leads to a number of specific predictions about the role of expectation in syntactic comprehension, including the reversal of locality-based difficulty patterns in syntactically constrained contexts, and conditions under which increased ambiguity facilitates processing. The paper examines a range of established results bearing on these predictions, and shows that they are largely consistent with the surprisal theory.",
"title": ""
}
] |
[
{
"docid": "39c0d4c998a81a5de43ff99646a67624",
"text": "Internet of Things (IoT) has recently emerged as an enabling technology for context-aware and interconnected “smart things.” Those smart things along with advanced power engineering and wireless communication technologies have realized the possibility of next generation electrical grid, smart grid, which allows users to deploy smart meters, monitoring their electric condition in real time. At the same time, increased environmental consciousness is driving electric companies to replace traditional generators with renewable energy sources which are already productive in user’s homes. One of the most incentive ways is for electric companies to institute electricity buying-back schemes to encourage end users to generate more renewable energy. Different from the previous works, we consider renewable energy buying-back schemes with dynamic pricing to achieve the goal of energy efficiency for smart grids. We formulate the dynamic pricing problem as a convex optimization dual problem and propose a day-ahead time-dependent pricing scheme in a distributed manner which provides increased user privacy. The proposed framework seeks to achieve maximum benefits for both users and electric companies. To our best knowledge, this is one of the first attempts to tackle the time-dependent problem for smart grids with consideration of environmental benefits of renewable energy. Numerical results show that our proposed framework can significantly reduce peak time loading and efficiently balance system energy distribution.",
"title": ""
},
{
"docid": "194c1a9a16ee6dad00c41544fca74371",
"text": "Computers are not (yet?) capable of being reasonable any more than is a Second Lieutenant. Against stupidity, the Gods themselves contend in vain. Banking systems include the back-end bookkeeping systems that record customers' account details and transaction processing systems such as cash machine networks and high-value interbank money transfer systems that feed them with data. They are important for a number of reasons. First, bookkeeping was for many years the main business of the computer industry, and banking was its most intensive area of application. Personal applications such as Netscape and Powerpoint might now run on more machines, but accounting is still the critical application for the average business. So the protection of bookkeeping systems is of great practical importance. It also gives us a well-understood model of protection in which confidentiality plays almost no role, but where the integrity of records (and their immutability once made) is of paramount importance. Second, transaction processing systems—whether for small debits such as $50 cash machine withdrawals or multimillion-dollar wire transfers—were the applications that launched commercial cryptography. Banking applications drove the development not just of encryption algorithms and protocols, but also of the supporting technologies, such as tamper-resistant cryptographic processors. These processors provide an important and interesting example of a trusted computing base that is quite different from",
"title": ""
},
{
"docid": "945cf1645df24629842c5e341c3822e7",
"text": "Cloud computing economically enables the paradigm of data service outsourcing. However, to protect data privacy, sensitive cloud data have to be encrypted before outsourced to the commercial public cloud, which makes effective data utilization service a very challenging task. Although traditional searchable encryption techniques allow users to securely search over encrypted data through keywords, they support only Boolean search and are not yet sufficient to meet the effective data utilization need that is inherently demanded by large number of users and huge amount of data files in cloud. In this paper, we define and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by enabling search result relevance ranking instead of sending undifferentiated results, and further ensures the file retrieval accuracy. Specifically, we explore the statistical measure approach, i.e., relevance score, from information retrieval to build a secure searchable index, and develop a one-to-many order-preserving mapping technique to properly protect those sensitive score information. The resulting design is able to facilitate efficient server-side ranking without losing keyword privacy. Thorough analysis shows that our proposed solution enjoys “as-strong-as-possible” security guarantee compared to previous searchable encryption schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.",
"title": ""
},
{
"docid": "3c3d3f63f39230fa337f5faa704026a0",
"text": "Multirate adaptive filtering is related to the problem of reconstructing a high-resolution signal from two or more observations that are sampled at different rates. A popular existing method for solving this problem uses the multirate adaptive filter structure that is based on the least mean squares (LMS) approach. However, its low convergence rate restricts the use of this method. In this study, a multirate normalized LMS (NLMS) filter is proposed as an alternative to that of LMS based filter, for the reconstruction of the high-resolution signal from several low-resolution noisy observations. In the simulation example performed on an audio signal, it is observed that the proposed method leads to the better results than the existing method especially in the convergence rate.",
"title": ""
},
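The single-rate NLMS core that the multirate structure above builds on is short enough to sketch directly; the parameter names and defaults below are illustrative assumptions, and the up/down-sampling wrapper of the multirate filter is omitted.

```python
import numpy as np

def nlms(x, d, n_taps=32, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive filter: returns (output, error, final weights).

    The step size is divided by the instantaneous input power, which is what
    gives NLMS its faster, signal-level-independent convergence compared with
    plain LMS.
    """
    x, d = np.asarray(x, float), np.asarray(d, float)
    w = np.zeros(n_taps)
    y, e = np.zeros_like(d), np.zeros_like(d)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]                  # most recent n_taps input samples
        y[n] = w @ u
        e[n] = d[n] - y[n]
        w = w + (mu / (eps + u @ u)) * e[n] * u    # normalized weight update
    return y, e, w
```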
{
"docid": "f34af647319436085ab8e667bab795b0",
"text": "In the transition from industrial to service robotics, robo ts will have to deal with increasingly unpredictable and variable environments. We present a system that is able to recognize objects of a certain class in an image and to identify their parts for potential interactions. The metho d can recognize objects from arbitrary viewpoints and generalizes to instances that have never been observed during training, even if they are partially occluded and appear against cluttered backgrounds. Our approach builds on the Implicit Shape Model of Leibe et al. (2008). We extend it to couple recognition to the provision of meta-data useful for a task and to the case of multiple viewpoints by integrating it with the dense multi-view correspondence finder of Ferrari et al. (2006). Meta-data can be part labels but also depth estimates, information on material types, or any other pixelwise annotation. We present experimental results on wheelchairs, cars, and motorbikes.",
"title": ""
},
{
"docid": "107436d5f38f3046ef28495a14cc5caf",
"text": "There is a universal standard for facial beauty regardless of race, age, sex and other variables. Beautiful faces have ideal facial proportion. Ideal proportion is directly related to divine proportion, and that proportion is 1 to 1.618. All living organisms, including humans, are genetically encoded to develop to this proportion because there are extreme esthetic and physiologic benefits. The vast majority of us are not perfectly proportioned because of environmental factors. Establishment of a universal standard for facial beauty will significantly simplify the diagnosis and treatment of facial disharmonies and abnormalities. More important, treating to this standard will maximize facial esthetics, TMJ health, psychologic and physiologic health, fertility, and quality of life.",
"title": ""
},
{
"docid": "fc3d4b4ac0d13b34aeadf5806013689d",
"text": "Internet of Things (IoT) is one of the emerging technologies of this century and its various aspects, such as the Infrastructure, Security, Architecture and Privacy, play an important role in shaping the future of the digitalised world. Internet of Things devices are connected through sensors which have significant impacts on the data and its security. In this research, we used IoT five layered architecture of the Internet of Things to address the security and private issues of IoT enabled services and applications. Furthermore, a detailed survey on Internet of Things infrastructure, architecture, security, and privacy of the heterogeneous objects were presented. The paper identifies the major challenge in the field of IoT; one of them is to secure the data while accessing the objects through sensing machines. This research advocates the importance of securing the IoT ecosystem at each layer resulting in an enhanced overall security of the connected devices as well as the data generated. Thus, this paper put forwards a security model to be utilised by the researchers, manufacturers and developers of IoT devices, applications and services.",
"title": ""
},
{
"docid": "60736095287074c8a81c9ce5afa93f75",
"text": "The visualization of high-quality isosurfaces at interactive rates is an important tool in many simulation and visualization applications. Today, isosurfaces are most often visualized by extracting a polygonal approximation that is then rendered via graphics hardware or by using a special variant of preintegrated volume rendering. However, these approaches have a number of limitations in terms of the quality of the isosurface, lack of performance for complex data sets, or supported shading models. An alternative isosurface rendering method that does not suffer from these limitations is to directly ray trace the isosurface. However, this approach has been much too slow for interactive applications unless massively parallel shared-memory supercomputers have been used. In this paper, we implement interactive isosurface ray tracing on commodity desktop PCs by building on recent advances in real-time ray tracing of polygonal scenes and using those to improve isosurface ray tracing performance as well. The high performance and scalability of our approach will be demonstrated with several practical examples, including the visualization of highly complex isosurface data sets, the interactive rendering of hybrid polygonal/isosurface scenes, including high-quality ray traced shading effects, and even interactive global illumination on isosurfaces.",
"title": ""
},
{
"docid": "fd3dd59550806b93a625f6e6750e888f",
"text": "Location-based services have become widely available on mobile devices. Existing methods employ a pull model or user-initiated model, where a user issues a query to a server which replies with location-aware answers. To provide users with instant replies, a push model or server-initiated model is becoming an inevitable computing model in the next-generation location-based services. In the push model, subscribers register spatio-textual subscriptions to capture their interests, and publishers post spatio-textual messages. This calls for a high-performance location-aware publish/subscribe system to deliver publishers' messages to relevant subscribers.In this paper, we address the research challenges that arise in designing a location-aware publish/subscribe system. We propose an rtree based index structure by integrating textual descriptions into rtree nodes. We devise efficient filtering algorithms and develop effective pruning techniques to improve filtering efficiency. Experimental results show that our method achieves high performance. For example, our method can filter 500 tweets in a second for 10 million registered subscriptions on a commodity computer.",
"title": ""
},
{
"docid": "34e73a1b7bb2f2c9549219d8194c924b",
"text": "•At the beginning of training, a high learning rate or small batch size influences SGD to visit flatter loss regions. •The evolution of the largest eigenvalues always follow a similar pattern, with a fast increase in the first epochs and a steady decrease thereafter, where the peak value is determined by the learning rate and batch size. •By altering the learning rate in the sharpest direction, SGD can be steered towards regions which are an order of magnitude sharper with similar generalization.",
"title": ""
},
{
"docid": "0884651e01add782a7d58b40f6ba078f",
"text": "Several statistics have been published dealing with failure causes of high voltage rotating machines i n general and power generators in particular [1 4]. Some of the se statistics only specify the part of the machine which failed without giving any deeper insight in the failure mechanism. Other publications distinguish between the damage which caused the machine to fail and the root cause which effect ed the damage. The survey of 1199 hydrogenerators c ar ied out by the CIGRE study committee SC11, EG11.02 provides an ex mple of such an investigation [5]. It gives det ail d results of 69 incidents. 56% of the failed machines showed an insulation damage, other major types being mecha ni al, thermal and bearing damages (Figure 1a). Root causes which led to these damages are subdivided into 7 differen t groups (Figure 1b).",
"title": ""
},
{
"docid": "232e86b8786e188c4de32c740c5e78e4",
"text": "We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min x∈ℝ n ‖Ax - b‖2, where A ∈ ℝ m × n with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster.",
"title": ""
},
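The preconditioning idea described above can be sketched compactly for the strongly overdetermined, full-rank, dense case (the paper also covers sparse matrices, linear operators, the underdetermined case, and rank deficiency, none of which is handled below; function names and tolerances are assumptions).

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsrn_like_solve(A, b, gamma=2.0, seed=0):
    """Randomly preconditioned least squares for a strongly overdetermined A.

    A Gaussian sketch G @ A (about gamma*n rows) is factored with a small SVD,
    and N = V diag(1/s) is used as a right preconditioner so that A @ N is
    well-conditioned; LSQR then solves min ||(A N) y - b|| and x = N @ y.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((int(np.ceil(gamma * n)), m))   # random normal projection
    _, s, Vt = np.linalg.svd(G @ A, full_matrices=False)     # SVD of a (gamma*n) x n matrix
    N = Vt.T / s                                             # right preconditioner
    y = lsqr(A @ N, b, atol=1e-12, btol=1e-12)[0]            # iterative solve, few iterations
    return N @ y
```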
{
"docid": "1caaac35c25cd9efb729b57e59c41be5",
"text": "The design of elastic file synchronization services like Dropbox is an open and complex issue yet not unveiled by the major commercial providers, as it includes challenges like fine-grained programmable elasticity and efficient change notification to millions of devices. In this paper, we propose a novel architecture for file synchronization which aims to solve the above two major challenges. At the heart of our proposal lies ObjectMQ, a lightweight framework for providing programmatic elasticity to distributed objects using messaging. The efficient use of indirect communication: i) enables programmatic elasticity based on queue message processing, ii) simplifies change notifications offering simple unicast and multicast primitives; and iii) provides transparent load balancing based on queues.\n Our reference implementation is StackSync, an open source elastic file synchronization Cloud service developed in the context of the FP7 project CloudSpaces. StackSync supports both predictive and reactive provisioning policies on top of ObjectMQ that adapt to real traces from the Ubuntu One service. The feasibility of our approach has been extensively validated with an open benchmark, including commercial synchronization services like Dropbox or OneDrive.",
"title": ""
},
{
"docid": "b990a21742a1db59811d636368527ab0",
"text": "We describe a high-performance implementation of the lattice Boltzmann method (LBM) for sparse geometries on graphic processors. In our implementation we cover the whole geometry with a uniform mesh of small tiles and carry out calculations for each tile independently with proper data synchronization at the tile edges. For this method, we provide both a theoretical analysis of complexity and the results for real implementations involving two-dimensional (2D) and three-dimensional (3D) geometries. Based on the theoretical model, we show that tiles offer significantly smaller bandwidth overheads than solutions based on indirect addressing. For 2D lattice arrangements, a reduction in memory usage is also possible, although at the cost of diminished performance. We achieved a performance of 682 MLUPS on GTX Titan (72 percent of peak theoretical memory bandwidth) for the D3Q19 lattice arrangement and double-precision data.",
"title": ""
},
{
"docid": "b62dac4ee86feccd03d4878c4dbfb2d2",
"text": "We propose a novel framework for abnormal event detection in video that requires no training sequences. Our framework is based on unmasking, a technique previously used for authorship verification in text documents, which we adapt to our task. We iteratively train a binary classifier to distinguish between two consecutive video sequences while removing at each step the most discriminant features. Higher training accuracy rates of the intermediately obtained classifiers represent abnormal events. To the best of our knowledge, this is the first work to apply unmasking for a computer vision task. We compare our method with several state-of-the-art supervised and unsupervised methods on four benchmark data sets. The empirical results indicate that our abnormal event detection framework can achieve state-of-the-art results, while running in real-time at 20 frames per second.",
"title": ""
},
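The unmasking procedure referred to above can be sketched in a few lines: train a binary classifier to separate two consecutive sequences, repeatedly drop the most discriminant features, and inspect how fast the training accuracy decays. The feature extraction, classifier choice, and parameter values below are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def unmasking_curve(X1, X2, n_iters=10, n_remove=2):
    """Unmasking between two consecutive video sequences.

    X1, X2: (n_frames, n_features) feature matrices for the two sequences.
    A binary classifier is retrained while the most discriminant features are
    removed each round; a training-accuracy curve that stays high suggests the
    two sequences differ strongly, i.e. a candidate abnormal event.
    """
    X = np.vstack([X1, X2])
    y = np.r_[np.zeros(len(X1)), np.ones(len(X2))]
    keep = np.arange(X.shape[1])
    accuracies = []
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
        accuracies.append(clf.score(X[:, keep], y))
        strongest = np.argsort(np.abs(clf.coef_[0]))[-n_remove:]  # most discriminant
        keep = np.delete(keep, strongest)
    return accuracies
```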
{
"docid": "c09d57ca9130dc39bd51acb5628e99d0",
"text": "The goal of the DECODA project is to reduce the development cost of Speech Analytics systems by reducing the need for manual annotation. This project aims to propose robust speech data mining tools in the framework of call-center monitoring and evaluation, by means of weakly supervised methods. The applicative framework of the project is the call-center of the RATP (Paris public transport authority). This project tackles two very important open issues in the development of speech mining methods from spontaneous speech recorded in call-centers : robustness (how to extract relevant information from very noisy and spontaneous speech messages) and weak supervision (how to reduce the annotation effort needed to train and adapt recognition and classification models). This paper describes the DECODA corpus collected at the RATP during the project. We present the different annotation levels performed on the corpus, the methods used to obtain them, as well as some evaluation of the quality of the annotations produced.",
"title": ""
},
{
"docid": "a133a0fe8c4edd7ca6f9dc1689550794",
"text": "Although research on interpersonal forgiveness is burgeoning, there is little conceptual or empirical scholarship on self–forgiveness. To stimulate research on this topic, a conceptual analysis of self–forgiveness is offered in which self–forgiveness is defined and distinguished from interpersonal forgiveness and pseudo self–forgiveness. The conditions under which self–forgiveness is appropriate also are identified. A theoretical model describing the processes involved in self–forgiveness following the perpetration of an interpersonal transgression is outlined and the proposed emotional, social–cognitive, and offense–related determinants of self–forgiveness are described. The limitations of the model and its implications for future research are explored.",
"title": ""
},
{
"docid": "60f2baba7922543e453a3956eb503c05",
"text": "Pylearn2 is a machine learning research library. This does n t just mean that it is a collection of machine learning algorithms that share a comm n API; it means that it has been designed for flexibility and extensibility in ord e to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summar y of the library’s architecture, and a description of how the Pylearn2 communi ty functions socially.",
"title": ""
},
{
"docid": "b2b78842769d602fd38173c3e4acb247",
"text": "Recent advancements in generative adversarial networks (GANs), using deep convolutional models, have supported the development of image generation techniques able to reach satisfactory levels of realism. Further improvements have been proposed to condition GANs to generate images matching a specific object category or a short text description. In this work, we build on the latter class of approaches and investigate the possibility of driving and conditioning the image generation process by means of brain signals recorded, through an electroencephalograph (EEG), while users look at images from a set of 40 ImageNet object categories with the objective of generating the seen images. To accomplish this task, we first demonstrate that brain activity EEG signals encode visually-related information that allows us to accurately discriminate between visual object categories and, accordingly, we extract a more compact class-dependent representation of EEG data using recurrent neural networks. Afterwards, we use the learned EEG manifold to condition image generation employing GANs, which, during inference, will read EEG signals and convert them into images. We tested our generative approach using EEG signals recorded from six subjects while looking at images of the aforementioned 40 visual classes. The results show that for classes represented by well-defined visual patterns (e.g., pandas, airplane, etc.), the generated images are realistic and highly resemble those evoking the EEG signals used for conditioning GANs, resulting in an actual reading-the-mind process.",
"title": ""
},
{
"docid": "dc72881043c7aa01ecec7bb7edfa8daf",
"text": "Image colorization is the task to color a grayscale image with limited color cues. In this work, we present a novel method to perform image colorization using sparse representation. Our method first trains an over-complete dictionary in YUV color space. Then taking a grayscale image and a small subset of color pixels as inputs, our method colorizes overlapping image patches via sparse representation; it is achieved by seeking sparse representations of patches that are consistent with both the grayscale image and the color pixels. After that, we aggregate the colorized patches with weights to get an intermediate result. This process iterates until the image is properly colorized. Experimental results show that our method leads to high-quality colorizations with small number of given color pixels. To demonstrate one of the applications of the proposed method, we apply it to transfer the color of one image onto another to obtain a visually pleasing image.",
"title": ""
}
] |
scidocsrr
|
c40420cdb325d1b6aece4bf97c6d93c7
|
Energy-efficient image compression algorithm for high-frame rate multi-view wireless capsule endoscopy
|
[
{
"docid": "ad20d4392d675241323e0da179c89038",
"text": "This paper presents new concepts and techniques for implementing encoders for Reed-Solomon codes, with or without interleaving. Reed-Solomon encoders based on these concepts and techniques often require substantially less hardware than even linear cyclic binary codes of comparable redundancy. A CODEWORD of a cyclic code is a sequence of characters which can be viewed as the coefficients of a polynomial n-1 c(x) = 2 c,x’. i=o The characters C,,, C,,-2, Cn-s,. . . , C,, Co are elements in a finite field. In this paper, we consider only fields of order 2”, where m m ight be any integer. A sequence of n characters is a codeword if and only if its corresponding polynomial, C(x), is a mu ltiple of the code’s generator polynomial, g(x). Let deg g(x) = n k. The common method of encoding a cyclic code is to regard q-,9 cn-2,* * * 9 C,-, as message characters, and to divide the polynomial",
"title": ""
}
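The passage ends mid-sentence; as a concrete illustration of the "encode by dividing the message polynomial by g(x)" step it describes, here is a minimal systematic Reed-Solomon encoder in software. The field GF(2^8), the primitive polynomial 0x11d, and the choice of α^0 as the first root of g(x) are assumptions for illustration only; the paper itself is about hardware encoder structures, which this sketch does not reproduce.

```python
def gf_mul(a, b, prim=0x11d):
    """Carry-less multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= prim
    return r

def gf_poly_mul(p, q):
    """Multiply two polynomials with GF(2^8) coefficients (highest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator_poly(nsym):
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1)) with a = 2; '-' is XOR in GF(2^m)."""
    g, a = [1], 1
    for _ in range(nsym):
        g = gf_poly_mul(g, [1, a])
        a = gf_mul(a, 2)
    return g

def rs_encode(msg, nsym):
    """Systematic encoding: append the remainder of msg(x) * x^nsym divided by g(x)."""
    gen = rs_generator_poly(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = rem[i]
        if coef:
            for j in range(1, len(gen)):
                rem[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + rem[len(msg):]   # codeword = message followed by parity characters
```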
] |
[
{
"docid": "c2cd6967d28547139c4cfdb2468c6b2d",
"text": "Palletizing tasks are necessary to promote efficiency of storage and shipping. These tasks, however, involve some of the most monotonous and physically demanding labor in the factory. Thus, many types of robot palletizing systems have been developed, although many robot motion commands still depend on the teach pendent. That is, the operator inputs the motion command lines one by one. This is very troublesome and most importantly, the user must know how to type the code. We propose a new GUI for the palletizing system that can be used more conveniently. To do this, we used the PLP \"Fast Algorithm\" and 3-D auto-patterning visualization. The 3-D patterning process includes the following. First, an operator can identify the results of the task and edit them. Second, the operator passes the position values of objects to a robot simulator. Using those positions, a palletizing operation can be simulated. We used the wide used industrial model and analyzed the kinematics and dynamics to create a robot simulator. In this paper we propose a 3-D patterning algorithm, 3-D robot-palletizing simulator, and modified trajectory generation algorithm, \"Overlapped method\" to reduce the computing load.",
"title": ""
},
{
"docid": "140a9255e8ee104552724827035ee10a",
"text": "Our goal is to design architectures that retain the groundbreaking performance of CNNs for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks",
"title": ""
},
{
"docid": "9837d57d6c8be2ecd9440deee8990d17",
"text": "Named entity recognition (NER) is an important task in NLP, which is all the more challenging in conversational domain with their noisy facets. Moreover, conversational texts are often available in limited amount, making supervised tasks infeasible. To learn from small data, strong inductive biases are required. Previous work relied on hand-crafted features to encode these biases until transfer learning emerges. Here, we explore a transfer learning method, namely language model pretraining, on NER task in Indonesian conversational texts. We utilize large unlabeled data (generic domain) to be transferred to conversational texts, enabling supervised training on limited in-domain data. We report two transfer learning variants, namely supervised model fine-tuning and unsupervised pretrained LM fine-tuning. Our experiments show that both variants outperform baseline neural models when trained on small data (100 sentences), yielding an absolute improvement of 32 points of test F1 score. Furthermore, we find that the pretrained LM encodes part-of-speech information which is a strong predictor for NER.",
"title": ""
},
{
"docid": "aa418cfd93eaba0d47084d0b94be69b8",
"text": "Single-trial classification of Event-Related Potentials (ERPs) is needed in many real-world brain-computer interface (BCI) applications. However, because of individual differences, the classifier needs to be calibrated by using some labeled subject specific training samples, which may be inconvenient to obtain. In this paper we propose a weighted adaptation regularization (wAR) approach for offline BCI calibration, which uses data from other subjects to reduce the amount of labeled data required in offline single-trial classification of ERPs. Our proposed model explicitly handles class-imbalance problems which are common in many real-world BCI applications. War can improve the classification performance, given the same number of labeled subject-specific training samples, or, equivalently, it can reduce the number of labeled subject-specific training samples, given a desired classification accuracy. To reduce the computational cost of wAR, we also propose a source domain selection (SDS) approach. Our experiments show that wARSDS can achieve comparable performance with wAR but is much less computationally intensive. We expect wARSDS to find broad applications in offline BCI calibration.",
"title": ""
},
{
"docid": "9b55e6dc69517848ae5e5040cd9d0d55",
"text": "In this paper, we utilize distributed word representations (i.e., word embeddings) to analyse the representation of semantics in brain activity. The brain activity data were recorded using functional magnetic resonance imaging (fMRI) when subjects were viewing words. First, we analysed the functional selectivity of different cortex areas by calculating the correlations between neural responses and several types of word representations, including skipgram word embeddings, visual semantic vectors, and primary visual features. The results demonstrated consistency with existing neuroscientific knowledge. Second, we utilized behavioural data as the semantic ground truth to measure their relevance with brain activity. A method to estimate word embeddings under the constraints of brain activity similarities is further proposed based on the semantic word embedding (SWE) model. The experimental results show that the brain activity data are significantly correlated with the behavioural data of human judgements on semantic similarity. The correlations between the estimated word embeddings and the semantic ground truth can be effectively improved after integrating the brain activity data for learning, which implies that semantic patterns in neural representations may exist that have not been fully captured by state-of-the-art word embeddings derived from text corpora.",
"title": ""
},
{
"docid": "892a469512f840ced1bf0fe82243b369",
"text": "The HIV-1 pandemic affecting over 37 million people worldwide continues, with nearly one-half of the infected population on highly active antiretroviral therapy (HAART). Major therapeutic challenges remain because of the emergence of drug-resistant HIV-1 strains, limitations because of safety and toxicity with current HIV-1 drugs, and patient compliance for lifelong, daily treatment regimens. Nonnucleoside reverse transcriptase inhibitors (NNRTIs) that target the viral polymerase have been a key component of the current HIV-1 combination drug regimens; however, these issues hamper them. Thus, the development of novel more effective NNRTIs as anti-HIV-1 agents with fewer long-term liabilities, efficacy on new drug-resistant HIV-1 strains, and less frequent dosing is crucial. Using a computational and structure-based design strategy to guide lead optimization, a 5 µM virtual screening hit was transformed to a series of very potent nanomolar to picomolar catechol diethers. One representative, compound I, was shown to have nanomolar activity in HIV-1-infected T cells, potency on clinically relevant HIV-1 drug-resistant strains, lack of cytotoxicity and off-target effects, and excellent in vivo pharmacokinetic behavior. In this report, we show the feasibility of compound I as a late-stage preclinical candidate by establishing synergistic antiviral activity with existing HIV-1 drugs and clinical candidates and efficacy in HIV-1-infected humanized [human peripheral blood lymphocyte (Hu-PBL)] mice by completely suppressing viral loads and preventing human CD4+ T-cell loss. Moreover, a long-acting nanoformulation of compound I [compound I nanoparticle (compound I-NP)] in poly(lactide-coglycolide) (PLGA) was developed that shows sustained maintenance of plasma drug concentrations and drug efficacy for almost 3 weeks after a single dose.",
"title": ""
},
{
"docid": "d90a66cf63abdc1d0caed64812de7043",
"text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.",
"title": ""
},
{
"docid": "59e02bc986876edc0ee0a97fd4d12a28",
"text": "CONTEXT\nSocial anxiety disorder is thought to involve emotional hyperreactivity, cognitive distortions, and ineffective emotion regulation. While the neural bases of emotional reactivity to social stimuli have been described, the neural bases of emotional reactivity and cognitive regulation during social and physical threat, and their relationship to social anxiety symptom severity, have yet to be investigated.\n\n\nOBJECTIVE\nTo investigate behavioral and neural correlates of emotional reactivity and cognitive regulation in patients and controls during processing of social and physical threat stimuli.\n\n\nDESIGN\nParticipants were trained to implement cognitive-linguistic regulation of emotional reactivity induced by social (harsh facial expressions) and physical (violent scenes) threat while undergoing functional magnetic resonance imaging and providing behavioral ratings of negative emotion experience.\n\n\nSETTING\nAcademic psychology department.\n\n\nPARTICIPANTS\nFifteen adults with social anxiety disorder and 17 demographically matched healthy controls.\n\n\nMAIN OUTCOME MEASURES\nBlood oxygen level-dependent signal and negative emotion ratings.\n\n\nRESULTS\nBehaviorally, patients reported greater negative emotion than controls during social and physical threat but showed equivalent reduction in negative emotion following cognitive regulation. Neurally, viewing social threat resulted in greater emotion-related neural responses in patients than controls, with social anxiety symptom severity related to activity in a network of emotion- and attention-processing regions in patients only. Viewing physical threat produced no between-group differences. Regulation during social threat resulted in greater cognitive and attention regulation-related brain activation in controls compared with patients. Regulation during physical threat produced greater cognitive control-related response (ie, right dorsolateral prefrontal cortex) in patients compared with controls.\n\n\nCONCLUSIONS\nCompared with controls, patients demonstrated exaggerated negative emotion reactivity and reduced cognitive regulation-related neural activation, specifically for social threat stimuli. These findings help to elucidate potential neural mechanisms of emotion regulation that might serve as biomarkers for interventions for social anxiety disorder.",
"title": ""
},
{
"docid": "c26eabb377db5f1033ec6d354d890a6f",
"text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.",
"title": ""
},
{
"docid": "377cab312d5e262a5363e6cf5b5c64de",
"text": "Electroencephalography (EEG) has been instrumental in making discoveries about cognition, brain function, and dysfunction. However, where do EEG signals come from and what do they mean? The purpose of this paper is to argue that we know shockingly little about the answer to this question, to highlight what we do know, how important the answers are, and how modern neuroscience technologies that allow us to measure and manipulate neural circuits with high spatiotemporal accuracy might finally bring us some answers. Neural oscillations are perhaps the best feature of EEG to use as anchors because oscillations are observed and are studied at multiple spatiotemporal scales of the brain, in multiple species, and are widely implicated in cognition and in neural computations.",
"title": ""
},
{
"docid": "9b644efe43114fc93e3dd5d591699d31",
"text": "Previous research has shown that hidden Markov model (HMM) analysis is useful for detecting certain challenging classes of malware. In this research, we consider the related problem of malware classification based on HMMs. We train multiple HMMs on a variety of compilers and malware generators. More than 8,000 malware samples are then scored against these models and separated into clusters based on the resulting scores. We observe that the clustering results could be used to classify the malware samples into their appropriate families with good accuracy. Since none of the malware families in the test set were used to generate the HMMs, these results indicate that our approach can effective classify previously unknown malware, at least in some cases. Thus, such a clustering strategy could serve as a useful tool in malware analysis and classification.",
"title": ""
},
{
"docid": "6f45b4858c33d88472c131f379fd3edf",
"text": "Shadow maps are the current technique for generating high quality real-time dynamic shadows. This article gives a ‘practical’ introduction to shadow mapping (or projection mapping) with numerous simple examples and source listings. We emphasis some of the typical limitations and common pitfalls when implementing shadow mapping for the first time and how the reader can overcome these problems using uncomplicated debugging techniques. A scene without shadowing is life-less and flat objects seem decoupled. While different graphical techniques add a unique effect to the scene, shadows are crucial and when not present create a strange and mood-less aura.",
"title": ""
},
{
"docid": "50b91bfdedbf9435761433b944d0f965",
"text": "This paper describes a control scheme of a high frequency, high power current source inverter using static induction transistors to suppress the surge voltage and to reduce the switching loss during the commutation of current. The inverter is operated at a leading power factor, which requires the phase angle of the output current to be adjusted to each specific load point by the controller. The stable operation is verified by the experiments under the commutation inductance, 1.8µH, i.e., 18% reactance (130kHz, 250V, 30A base). As a result, the switching loss is estimated to be half of the conducting loss and the efficiency of the inverter to be 95%.",
"title": ""
},
{
"docid": "63c1080df773ff57e3af8468e8d31d35",
"text": "This report refers to a body of investigations performed in support of experiments aboard the Space Shuttle, and designed to counteract the symptoms of Space Adapatation Syndrome, which resemble those of motion sickness on Earth. For these supporting studies we examined the autonomic manifestations of earth-based motion sickness. Heart rate, respiration rate, finger pulse volume and basal skin resistance were measured on 127 men and women before, during and after exposure to nauseogenic rotating chair tests. Significant changes in all autonomic responses were observed across the tests (p<.05). Significant differences in autonomic responses among groups divided according to motion sickness susceptibility were also observed (p<.05). Results suggest that the examination of autonomic responses as an objective indicator of motion sickness malaise is warranted and may contribute to the overall understanding of the syndrome on Earth and in Space. DESCRIPTORS: heart rate, respiration rate, finger pulse volume, skin resistance, biofeedback, motion sickness.",
"title": ""
},
{
"docid": "1e4cf4cce07a24916e99c43aa779ac54",
"text": "Video captioning which automatically translates video clips into natural language sentences is a very important task in computer vision. By virtue of recent deep learning technologies, video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space is still a challenging problem due to the long-term multimodal dependency modelling and semantic misalignment. Inspired by the facts that memory modelling poses potential advantages to longterm sequential problems [35] and working memory is the key factor of visual attention [33], we propose a Multimodal Memory Model (M) to describe videos, which builds a visual and textual shared memory to model the longterm visual-textual dependency and further guide visual attention on described visual targets to solve visual-textual alignments. Specifically, similar to [10], the proposed M attaches an external memory to store and retrieve both visual and textual contents by interacting with video and sentence with multiple read and write operations. To evaluate the proposed model, we perform experiments on two public datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms most of the stateof-the-art methods in terms of BLEU and METEOR.",
"title": ""
},
{
"docid": "e7eb22e4ac65696e3bb2a2611a28e809",
"text": "Cuckoo search (CS) is an efficient swarm-intelligence-based algorithm and significant developments have been made since its introduction in 2009. CS has many advantages due to its simplicity and efficiency in solving highly non-linear optimisation problems with real-world engineering applications. This paper provides a timely review of all the state-of-the-art developments in the last five years, including the discussions of theoretical background and research directions for future development of this powerful algorithm.",
"title": ""
},
{
"docid": "16924ee2e6f301d962948884eeafc934",
"text": "Companies have realized they need to hire data scientists, academic institutions are scrambling to put together data-science programs, and publications are touting data science as a hot-even \"sexy\"-career choice. However, there is confusion about what exactly data science is, and this confusion could lead to disillusionment as the concept diffuses into meaningless buzz. In this article, we argue that there are good reasons why it has been hard to pin down exactly what is data science. One reason is that data science is intricately intertwined with other important concepts also of growing importance, such as big data and data-driven decision making. Another reason is the natural tendency to associate what a practitioner does with the definition of the practitioner's field; this can result in overlooking the fundamentals of the field. We believe that trying to define the boundaries of data science precisely is not of the utmost importance. We can debate the boundaries of the field in an academic setting, but in order for data science to serve business effectively, it is important (i) to understand its relationships to other important related concepts, and (ii) to begin to identify the fundamental principles underlying data science. Once we embrace (ii), we can much better understand and explain exactly what data science has to offer. Furthermore, only once we embrace (ii) should we be comfortable calling it data science. In this article, we present a perspective that addresses all these concepts. We close by offering, as examples, a partial list of fundamental principles underlying data science.",
"title": ""
},
{
"docid": "d6976361b44aab044c563e75056744d6",
"text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).",
"title": ""
},
{
"docid": "8f750438e7d78873fd33174d2e347ea5",
"text": "This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had some limited performance because they focused only on one part between the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify so varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing the artificial neural network based on the Allen's temporal relations. The experimental results show that our combined method provides the higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.",
"title": ""
}
] |
scidocsrr
|
9fbc5689a724b70baf9e878d57039171
|
FEXT Crosstalk Cancellation for High-Speed Serial Link Design
|
[
{
"docid": "6883add239f58223ef1941d5044d4aa8",
"text": "A novel jitter equalization circuit is presented that addresses crosstalk-induced jitter in high-speed serial links. A simple model of electromagnetic coupling demonstrates the generation of crosstalk-induced jitter. The analysis highlights unique aspects of crosstalk-induced jitter that differ from far-end crosstalk. The model is used to predict the crosstalk-induced jitter in 2-PAM and 4-PAM, which is compared to measurement. Furthermore, the model suggests an equalizer that compensates for the data-induced electromagnetic coupling between adjacent links and is suitable for pre- or post-emphasis schemes. The circuits are implemented using 130-nm MOSFETs and operate at 5-10 Gb/s. The results demonstrate reduced deterministic jitter and lower bit-error rate (BER). At 10 Gb/s, the crosstalk-induced jitter equalizer opens the eye at 10/sup -12/ BER from 17 to 45 ps and lowers the rms jitter from 8.7 to 6.3 ps.",
"title": ""
}
] |
[
{
"docid": "736a454a8aa08edf645312cecc7925c3",
"text": "This paper describes an <i>analogy ontology</i>, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's <i>structure-mapping</i> theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services. Queries that include analogical operations can be formulated in the same way as standard logical inference, and analogical processing systems in turn can call on the services of first-principles reasoners for creating cases and validating their conjectures. We illustrate the utility of the analogy ontology by demonstrating how it has been used in three systems: A crisis management analogical reasoner that answers questions about international incidents, a course of action analogical critiquer that provides feedback about military plans, and a comparison question-answering system for knowledge capture. These systems rely on large, general-purpose knowledge bases created by other research groups, thus demonstrating the generality and utility of these ideas.",
"title": ""
},
{
"docid": "ef742ded3107fe9c5812a7c866835117",
"text": "Much commentary has been circulating in academe regarding the research skills, or lack thereof, in members of ‘‘Generation Y,’’ the generation born between 1980 and 1994. The students currently on college campuses, as well as those due to arrive in the next few years, have grown up in front of electronic screens: television, movies, video games, computer monitors. It has been said that student critical thinking and other cognitive skills (as well as their physical well-being) are suffering because of the large proportion of time spent in sedentary pastimes, passively absorbing words and images, rather than in reading. It may be that students’ cognitive skills are not fully developing due to ubiquitous electronic information technologies. However, it may also be that academe, and indeed the entire world, is currently in the middle of a massive and wideranging shift in the way knowledge is disseminated and learned.",
"title": ""
},
{
"docid": "9ca90172c5beff5922b4f5274ef61480",
"text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.",
"title": ""
},
{
"docid": "e632dfe8a37846339ceb44ae4f406a1a",
"text": "Search engines are increasingly relying on large knowledge bases of facts to provide direct answers to users’ queries. However, the construction of these knowledge bases is largely manual and does not scale to the long and heavy tail of facts. Open information extraction tries to address this challenge, but typically assumes that facts are expressed with verb phrases, and therefore has had difficulty extracting facts for noun-based relations. We describe ReNoun, an open information extraction system that complements previous efforts by focusing on nominal attributes and on the long tail. ReNoun’s approach is based on leveraging a large ontology of noun attributes mined from a text corpus and from user queries. ReNoun creates a seed set of training data by using specialized patterns and requiring that the facts mention an attribute in the ontology. ReNoun then generalizes from this seed set to produce a much larger set of extractions that are then scored. We describe experiments that show that we extract facts with high precision and for attributes that cannot be extracted with verb-based techniques.",
"title": ""
},
{
"docid": "ac6410d8891491d050b32619dc2bdd50",
"text": "Due to the increase of generation sources in distribution networks, it is becoming very complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks with the use of dynamic measurements (time series) from phasor measurement units (PMUs). Since phasor measurement unit (PMU) measurements may contain errors and bad data, this paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system's response at different time-scales, which are generated by different types of power system events; the presented Kalman filter has been improved to extract the steady-state component of the PMU measurements to be fed to the steady-state model synthesis application. Performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.",
"title": ""
},
{
"docid": "203f34a946e00211ebc6fce8e2a061ed",
"text": "We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm can produce more superior personalized document summaries than all the other methods in that the summaries generated by our algorithm can better satisfy a user's personal preferences.",
"title": ""
},
{
"docid": "02647d7ab54cc2ae1af5ce156e63f742",
"text": "In intelligent transportation systems (ITS), transportation infrastructure is complimented with information and communication technologies with the objectives of attaining improved passenger safety, reduced transportation time and fuel consumption and vehicle wear and tear. With the advent of modern communication and computational devices and inexpensive sensors it is possible to collect and process data from a number of sources. Data fusion (DF) is collection of techniques by which information from multiple sources are combined in order to reach a better inference. DF is an inevitable tool for ITS. This paper provides a survey of how DF is used in different areas of ITS. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7a18b4e266cb353e523addfacbdf5bdf",
"text": "The field of image composition is constantly trying to improve the ways in which an image can be altered and enhanced. While this is usually done in the name of aesthetics and practicality, it also provides tools that can be used to maliciously alter images. In this sense, the field of digital image forensics has to be prepared to deal with the influx of new technology, in a constant arms-race. In this paper, the current state of this armsrace is analyzed, surveying the state-of-the-art and providing means to compare both sides. A novel scale to classify image forensics assessments is proposed, and experiments are performed to test composition techniques in regards to different forensics traces. We show that even though research in forensics seems unaware of the advanced forms of image composition, it possesses the basic tools to detect it.",
"title": ""
},
{
"docid": "5d13c7c50cb43de80df7b6f02c866dab",
"text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, even in the black-box case, where the attacker is limited to solely query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or estimating gradients from the output scores. We introduce GenAttack, a gradient-free optimization technique which uses genetic algorithms for synthesizing adversarial examples in the black-box setting. Our experiments on the MNIST, CIFAR-10, and ImageNet datasets show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than existing approaches. For example, in our CIFAR-10 experiments, GenAttack required roughly 2,568 times less queries than the current state-of-the-art black-box attack. Furthermore, we show that GenAttack can successfully attack both the state-of-the-art ImageNet defense, ensemble adversarial training, and non-differentiable, randomized input transformation defenses. GenAttack’s success against ensemble adversarial training demonstrates that its query efficiency enables it to exploit the defense’s weakness to direct black-box attacks. GenAttack’s success against non-differentiable input transformations indicates that its gradient-free nature enables it to be applicable against defenses which perform gradient masking/obfuscation to confuse the attacker. Our results suggest that evolutionary algorithms open up a promising area of research into effective gradient-free black-box attacks.",
"title": ""
},
{
"docid": "5cac184d3eb964a51722321096918ffb",
"text": "We propose an effective technique to solving review-level sentiment classification problem by using sentence-level polarity correction. Our polarity correction technique takes into account the consistency of the polarities (positive and negative) of sentences within each product review before performing the actual machine learning task. While sentences with inconsistent polarities are removed, sentences with consistent polarities are used to learn state-of-the-art classifiers. The technique achieved better results on different types of products reviews and outperforms baseline models without the correction technique. Experimental results show an average of 82% F-measure on four different product review domains.",
"title": ""
},
{
"docid": "d9a113b6b09874a4cbd9bf2f006504a6",
"text": "Attracting, motivating and retaining knowledge workers have become important in a knowledge-based and tight labour market, where changing knowledge management practices and global convergence of technology has redefined the nature of work. While individualisation of employment practices and team-based work may provide personal and organisational flexibilities, aligning HR and organisational strategies for competitive advantage has become more prominent. This exploratory study identifies the most and least effective HR strategies used by knowledge intensive firms (KIFs) in Singapore for attracting, motivating and retaining these workers. The most popular strategies were not always the most effective, and there appear to be distinctive ‘bundles’ of HR practices for managing knowledge workers. These vary according to whether ownership is foreign or local. A schema, based on statistically significant findings, for improving the effectiveness of these practices in managing knowledge workers is proposed. Cross-cultural research is necessary to establish the extent of diffusion of these practices. Contact: Frank M. Horwitz, Graduate School of Business, Breakwater Campus, University of Cape Town, Private Bag Rondebosch, Cape Town 7700 South Africa. Email: fhorwitz@gsb.uct.ac.za",
"title": ""
},
{
"docid": "d43a673f2731e5eff98313ff4b574de0",
"text": "Modern Information Retrieval (IR) systems, such as search engines, recommender systems, and conversational agents, are best thought of as interactive systems. And their development is best thought of as a two-stage development process: offline development followed by continued online adaptation and development based on interactions with users. In this opinion paper, we take a closer look at existing IR textbooks and teaching materials, and examine to which degree they cover the offline and online stages of the IR system development process. We notice that current teaching materials in IR focus mostly on search and on the offline development phase. Other scenarios of interacting with information are largely absent from current IR teaching materials, as is the (interactive) online development phase. We identify a list of scenarios and a list of topics that we believe are essential to any modern set of IR teaching materials that claims to fully cover IR system development. In particular, we argue for more attention, in basic IR teaching materials, to scenarios such as recommender systems, and to topics such as query and interaction mining and understanding, online evaluation, and online learning to rank.",
"title": ""
},
{
"docid": "c906700f507ee49361ef9b67aad29fed",
"text": "We propose a novel algorithm for visual question answering based on a recurrent deep neural network, where every module in the network corresponds to a complete answering unit with attention mechanism by itself. The network is optimized by minimizing loss aggregated from all the units, which share model parameters while receiving different information to compute attention probability. For training, our model attends to a region within image feature map, updates its memory based on the question and attended image feature, and answers the question based on its memory state. This procedure is performed to compute loss in each step. The motivation of this approach is our observation that multi-step inferences are often required to answer questions while each problem may have a unique desirable number of steps, which is difficult to identify in practice. Hence, we always make the first unit in the network solve problems, but allow it to learn the knowledge from the rest of units by backpropagation unless it degrades the model. To implement this idea, we early-stop training each unit as soon as it starts to overfit. Note that, since more complex models tend to overfit on easier questions quickly, the last answering unit in the unfolded recurrent neural network is typically killed first while the first one remains last. We make a single-step prediction for a new question using the shared model. This strategy works better than the other options within our framework since the selected model is trained effectively from all units without overfitting. The proposed algorithm outperforms other multi-step attention based approaches using a single step prediction in VQA dataset.",
"title": ""
},
{
"docid": "8108f8c3d53f44ca3824f4601aacdce1",
"text": "This paper presents a robust multi-class multi-object tracking (MCMOT) formulated by a Bayesian filtering framework. Multiobject tracking for unlimited object classes is conducted by combining detection responses and changing point detection (CPD) algorithm. The CPD model is used to observe abrupt or abnormal changes due to a drift and an occlusion based spatiotemporal characteristics of track states. The ensemble of convolutional neural network (CNN) based object detector and Lucas-Kanede Tracker (KLT) based motion detector is employed to compute the likelihoods of foreground regions as the detection responses of different object classes. Extensive experiments are performed using lately introduced challenging benchmark videos; ImageNet VID and MOT benchmark dataset. The comparison to state-of-the-art video tracking techniques shows very encouraging results.",
"title": ""
},
{
"docid": "8066246656f6a9a3060e42efae3b197f",
"text": "The paper describes the engineering and design of a doubly fed induction generator (DFIG), using back-to-back PWM voltage-source converters in the rotor circuit. A vector-control scheme for the supply-side PWM converter results in independent control of active and reactive power drawn from the supply, while ensuring sinusoidal supply currents. Vector control of the rotor-connected converter provides for wide speed-range operation; the vector scheme is embedded in control loops which enable optimal speed tracking for maximum energy capture from the wind. An experimental rig, which represents a 1.5 kW variable speed wind-energy generation system is described, and experimental results are given that illustrate the excellent performance characteristics of the system. The paper considers a grid-connected system; a further paper will describe a stand-alone system.",
"title": ""
},
{
"docid": "62ea6783f6a3e6429621286b4a1f068d",
"text": "Aviation delays inconvenience travelers and result in financial losses for stakeholders. Without complex data pre-processing, delay data collected by the existing IATA delay coding system are inadequate to support advanced delay analytics, e.g. large-scale delay propagation tracing in an airline network. Consequently, we developed three new coding schemes aiming at improving the current IATA system. These schemes were tested with specific analysis tasks using simulated delay data and were benchmarked against the IATA system. It was found that a coding scheme with a well-designed reporting style can facilitate automated data analytics and data mining, and an improved grouping of delay codes can minimise potential confusion at the data entry and recording stages. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9c262b845fff31abd1cbc2932957030d",
"text": "Dixon's method for computing multivariate resultants by simultaneously eliminating many variables is reviewed. The method is found to be quite restrictive because often the Dixon matrix is singular, and the Dixon resultant vanished identically yielding no information about solutions for many algebraic and geometry problems. We extend Dixon's method for the case when the Dixon matrix is singular, but satisfies a condition. An efficient algorithm is developed based on the proposed extension for extracting conditions for the existence of affine solutions of a finite set of polynomials. Using this algorithm, numerous geometric and algebraic identities are derived for examples which appear intractable with other techniques of triangulation such as the successive resultant method, the Gro¨bner basis method, Macaulay resultants and Characteristic set method. Experimental results suggest that the resultant of a set of polynomials which are symmetric in the variables is relatively easier to compute using the extended Dixon's method.",
"title": ""
},
{
"docid": "a8a998c3cf52a205ee2c4fd5b93ed9e6",
"text": "ive Text Summarization with Quasi-Recurrent Neural Networks Peter Adelson Department of Computer Science Stanford University University padelson@stanford.edu Sho Arora Department of Computer Science Stanford University University shoarora@stanford.edu Jeff Hara Department of Computer Science Stanford University University jhara18@stanford.edu",
"title": ""
},
{
"docid": "b6d655df161d6c47675e9cb17173a521",
"text": "Nigeria is considered as one of the many countries in sub-Saharan Africa with a weak economy and gross deficiencies in technology and engineering. Available data from international monitoring and regulatory organizations show that technology is pivotal to determining the economic strengths of nations all over the world. Education is critical to technology acquisition, development, dissemination and adaptation. Thus, this paper seeks to critically assess and discuss issues and challenges facing technological advancement in Nigeria, particularly in the education sector, and also proffers solutions to resuscitate the Nigerian education system towards achieving national technological and economic sustainability such that Nigeria can compete favourably with other technologicallydriven economies of the world in the not-too-distant future. Keywords—Economically weak countries, education, globalization and competition, technological advancement.",
"title": ""
}
] |
scidocsrr
|
015763e31c5b099dd25f0da5b04a766a
|
Automatic Image Cropping: A Computational Complexity Study
|
[
{
"docid": "c35619bf5830f6415a1c2f80cbaea31b",
"text": "Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search.",
"title": ""
},
{
"docid": "8e1c820f4981b5ef8b8ec25be25d2ecc",
"text": "As one of the most basic photo manipulation processes, photo cropping is widely used in the printing, graphic design, and photography industries. In this paper, we introduce graphlets (i.e., small connected subgraphs) to represent a photo's aesthetic features, and propose a probabilistic model to transfer aesthetic features from the training photo onto the cropped photo. In particular, by segmenting each photo into a set of regions, we construct a region adjacency graph (RAG) to represent the global aesthetic feature of each photo. Graphlets are then extracted from the RAGs, and these graphlets capture the local aesthetic features of the photos. Finally, we cast photo cropping as a candidate-searching procedure on the basis of a probabilistic model, and infer the parameters of the cropped photos using Gibbs sampling. The proposed method is fully automatic. Subjective evaluations have shown that it is preferred over a number of existing approaches.",
"title": ""
}
] |
[
{
"docid": "d3c83c600637d9aedd293f2d1b20caaa",
"text": "We introduce an end-to-end private deep learning framework, applied to the task of predicting 30-day readmission from electronic health records. By using differential privacy during training and homomorphic encryption during inference, we demonstrate that our proposed pipeline could maintain high performance while providing robust privacy guarantees against information leak from data transmission or attacks against the model. We also explore several techniques to address the privacy-utility trade-off in deploying neural networks with privacy mechanisms, improving the accuracy of differentially-private training and the computation cost of encrypted operations using ideas from both machine learning and cryptography.",
"title": ""
},
{
"docid": "2ada0c045f1f844063629889c6eef679",
"text": "Fine-grained address space layout randomization has recently been proposed as a method of efficiently mitigating ROP attacks. In this paper, we introduce a design and implementation of a framework based on a runtime strategy that undermines the benefits of fine-grained ASLR. Specifically, we abuse a memory disclosure to map an application’s memory layout on-the-fly, dynamically discover gadgets and construct the desired exploit payload, and finish our goals by using virtual function call mechanism—all with a script environment at the time an exploit is launched. We demonstrate the effectiveness of our framework by using it in conjunction with a real-world exploit against Internet Explorer and other applications protected by fine-grained ASLR. Moreover, we provide evaluations that demonstrate the practicality of run-time code reuse attacks. Our work shows that such a framework is effective and fine-grained ASLR may not be as promising as first thought. Keywords-code reuse; security; dynamic; fine-grained ASLR",
"title": ""
},
{
"docid": "3b09ca926dc51289d96935ec69aa70a8",
"text": "It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small group of preschool children with autism spectrum disorders (n = 6) and a control group (n = 6). Children in both groups spent more time looking at the humanoid robot and were able to achieve a high level of accuracy across trials. However, across groups, children required higher levels of prompting to successfully orient within robot-administered trials. The results highlight both the potential benefits of closed-loop adaptive robotic systems as well as current limitations of existing humanoid-robotic platforms.",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "a6e09b646c68dec48b003060f402d427",
"text": "This research explores the relationship between permeability and crack width in cracked, steel fiber-reinforced con addition, it inspects the influence of steel fiber reinforcement on concrete permeability. The feedback-controlled splitting tension ~also known as the Brazilian test ! is used to induce cracks of up to 500 mm ~0.02 in.! in concrete specimens without reinforcement, and w steel fiber reinforcement volumes of both 0.5 and 1%. The cracks relax after induced cracking. The steel fibers decrease the pe of specimens with relaxed cracks larger than 100 mm. DOI: 10.1061/ ~ASCE!0899-1561~2002!14:4~355! CE Database keywords: Permeability; Cracking; Fiber reinforced materials; Concrete. ular ties ani gh an rela rtie ect ue es, bar bilto be ica tly tee de elp in ding utful ucete, ber s. 00, n then",
"title": ""
},
{
"docid": "810a4573ca075d83e8bf2ece4fafe236",
"text": "IN this chapter we analyze four paradigms that currently are competing, or have until recently competed, for acceptance as the paradigm of choice in informing and guiding inquiry, especially qualitative inquiry: positivism, postpositivism, critical theory and related ideological positions, and constructivism. We acknowledge at once our own commitment to constructivism (which we earlier called \"naturalistic inquiry\"; Lincoln & Guba, 1985); the reader may wish to take that fact into account in judging the appropriateness and usefulness of our analysis. Although the title of this volume, Handbook of Qualitative Research, implies that the term qualitative is an umbrella term superior to the term paradigm (and, indeed, that usage is not uncommon), it is our position that it is a term that ought to be reserved for a description of types of methods. From our perspective, both qualitative and quantitative methods may be used appropriately with any research paradigm. Questions of method are secondary to questions of paradigm, which we define as the basic belief system or worldview that guides the investigator, not only in choices of method but in ontologicallyandepistemologicallyfundamentalways. It is certainly the case that interest in alternative paradigms has been stimulated by a growing dissatisfaction with the patent overemphasis on quantitative methods. But as efforts were made to build a case for a renewed interest in qualitative approaches, it became clear that the metaphysical assumptions undergirding the conventional paradigm (the \"received view\") must be seriously questioned. Thus the emphasis of this chapter is on paradigms, their assumptions, and the implications of those assumptions for a variety of research issues, not on the relative utility of qualitative versus quantitative methods. Nevertheless, as discussions of paradigms/methods over the past decade have often begun with a consideration of problems associated with overquantification, we will also begin there, shifting only later to our predominant interest.",
"title": ""
},
{
"docid": "4b25c7e58f49784d525398f4611b7ffa",
"text": "In this work, we studied the extraction process of papain, present in the latex of papaya fruit (Carica papaya L.) cv. Maradol. The variables studied in the extraction of papain were: latex:alcohol ratio (1:2.1 and 1:3) and drying method (vacuum and refractance window). Papain enzyme responses were obtained in terms of enzymatic activity and yield of the extraction process. The best result in terms of enzyme activity and yield was obtained by vacuum drying and a latex:alcohol ratio of 1:3. The enzyme obtained was characterized by physicochemical and microbiological properties and, enzymatic activity when compared with a commercial sample used as standard.",
"title": ""
},
{
"docid": "d0bacaa267599486356c175ca5419ede",
"text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.",
"title": ""
},
{
"docid": "5a1f1e50dd67b60e9582061f2ec4cc41",
"text": "This paper is about the role of the operating system (OS) within computer nodes of network audio systems. While many efforts in the network-audio community focus on low-latency network protocols, here we highlight the importance of the OS for network audio applications. We present Tessellation, an experimental OS tailored to multicore processors. We show how specific OS features, such as guaranteed resource allocation and customizable user-level runtimes, can help ensure quality-of-service (QoS) guarantees for data transmission and audio signal processing, especially in scenarios where network bandwidth and processing resources are shared between applications. To demonstrate performance isolation and service guarantees, we benchmark Tessellation under different conditions using a resource-demanding network audio application. Our results show that Tessellation can be used to create low-latency network audio systems.",
"title": ""
},
{
"docid": "ddc18f2d129d95737b8f0591560d202d",
"text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.",
"title": ""
},
{
"docid": "5591247b2e28f436da302757d3f82122",
"text": "This paper proposes LPRNet end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA R © GeForceTMGTX 1080 and 1.3 ms/plate on Intel R © CoreTMi7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.",
"title": ""
},
{
"docid": "916ed07d14ce5fc4fc6531889e91673a",
"text": "In this experience sampling study, the authors examined the role of organizational leaders in employees' emotional experiences. Data were collected from health care workers 4 times a day for 2 weeks. Results indicate supervisors were associated with employee emotions in 3 ways: (a) Employees experienced fewer positive emotions when interacting with their supervisors as compared with interactions with coworkers and customers; (b) employees with supervisors high on transformational leadership experienced more positive emotions throughout the workday, including interactions with coworkers and customers; and (c) employees who regulated their emotions experienced decreased job satisfaction and increased stress, but those with supervisors high on transformational leadership were less likely to experience decreased job satisfaction. The results also suggest that the effects of emotional regulation on stress are long lasting (up to 2 hr) and not easily reduced by leadership behaviors.",
"title": ""
},
{
"docid": "6888ef53d5a1496608d6bb103a2c4603",
"text": "Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners’ average unassisted diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally-generated features. These results suggest new avenues for early screening and detection of mental illness.",
"title": ""
},
{
"docid": "ef31d8b3cd83aeb109f62fde4cd8bc8a",
"text": "Many existing knowledge bases (KBs), including Freebase, Yago, and NELL, rely on a fixed ontology, given as an input to the system, which defines the data to be cataloged in the KB, i.e., a hierarchy of categories and relations between them. The system then extracts facts that match the predefined ontology. We propose an unsupervised model that jointly learns a latent ontological structure of an input corpus, and identifies facts from the corpus that match the learned structure. Our approach combines mixed membership stochastic block models and topic models to infer a structure by jointly modeling text, a latent concept hierarchy, and latent semantic relationships among the entities mentioned in the text. As a case study, we apply the model to a corpus of Web documents from the software domain, and evaluate the accuracy of the various components of the learned ontology.",
"title": ""
},
{
"docid": "48d6e8658a2b8b13510426a6da9a5095",
"text": "A double-discone antenna for an ultra-wideband frequency scan is presented. An exquisite assembly of two inverse-feeding discone antennas shows a 30:1 broad bandwidth with VSWR below 2.5 and an omnidirectional radiation pattern. These features make the proposed antenna very suitable for both the UWB system antenna and the wideband scan antenna. © 2004 Wiley Periodicals, Inc. Microwave Opt Technol Lett 42: 113–115, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.20224",
"title": ""
},
{
"docid": "738ef4264c3901bbbd1c5963e5bd0c31",
"text": "This paper introduces an optimized multi-task novel 4 DOF pole climbing/manipulating robot for construction works. The robot can travel along poles with bends, branches and step changes in cross section. It is also able to perform manipulation, repair, testing and maintenance tasks after reaching the target point on the pole. A hybrid serial/parallel mechanism, providing 2 translations and 2 rotations, have been designed as the main part of the mechanism. Optimization of this robot contains workspace optimization of the proposed mechanism and decreasing the total time of reaching the target point, has been established with genetic algorithm method.",
"title": ""
},
{
"docid": "7175aec1d7bb360ff6f1ca8c276e01f3",
"text": "Plastic pollution and its environmental effects has received global attention the recent years. However, limited attention has so far been directed towards how plastics are regulated in a life cycle perspective and how regulatory gaps can be addressed in order to limit and prevent environmental exposure and hazards of macro- and microplastics. In this paper, we map European regulation taking outset in the life cycle perspective of plastic carrier bags: from plastic bag production to when it enters the environment. Relevant regulatory frameworks, directives and authorities along the life cycle are identified and their role in regulation of plastics is discussed. Most important regulations were identified as: the EU chemical Regulation, the Packaging and Packaging Waste Directive including the amending Directive regarding regulation of the consumption of lightweight plastic carrier bags, the Waste Framework Directive and the Directive on the Landfill of Waste. The main gaps identified relate to lack of clear definitions of categories of polymers, unambitious recycling rates and lack of consideration of macro- and microplastics in key pieces of legislation. We recommend that polymers are categorized according to whether they are polymers with the same monomer constituents (homopolymers) or with different monomer constituents (copolymers) and that polymers are no longer exempt from registration and evaluation under REACH. Plastics should furthermore have the same high level of monitoring and reporting requirements as hazardous waste involving stricter requirements to labelling, recordkeeping, monitoring and control over the whole lifecycle. Finally, we recommend that more ambitious recycle and recovery targets are set across the EU. Regulation of the consumption of lightweight plastic carrier bags should also apply to heavyweight plastic carrier bags. Last, the Marine and Water Framework Directives should specifically address plastic waste affecting water quality.",
"title": ""
},
{
"docid": "5a9b5313575208b0bdf8ffdbd4e271f5",
"text": "A new method for the design of predictive controllers for SISO systems is presented. The proposed technique allows uncertainties and constraints to be concluded in the design of the control law. The goal is to design, at each sample instant, a predictive feedback control law that minimizes a performance measure and guarantees of constraints are satisfied for a set of models that describes the system to be controlled. The predictive controller consists of a finite horizon parametric-optimization problem with an additional constraint over the manipulated variable behavior. This is an end-constraint based approach that ensures the exponential stability of the closed-loop system. The inclusion of this additional constraint, in the on-line optimization algorithm, enables robust stability properties to be demonstrated for the closedloop system. This is the case even though constraints and disturbances are present. Finally, simulation results are presented using a nonlinear continuous stirred tank reactor model.",
"title": ""
},
{
"docid": "a21b903991c17ae608ede08284bd484f",
"text": "In most conventional EV applications, a central high speed electric motor is mechanically coupled to the wheels by a single speed reduction gearbox and a mechanical differential. An innovative alternative utilizes low speed, high torque, gearless, electric motors, mounted completely inside the rim of the wheels, to provide instantaneous torque and eliminate driveline transmission losses. These in-wheel motors have many advantages, including no mechanical linkages and independent and precise torque control of each wheel. Furthermore, advanced control functions like Antilock Braking System (ABS), Anti Slip Regulation (ASR), Electronic Stability Program (ESP), and steering assistance can be easily integrated. In this paper, various motors and control strategies for such in-wheel motor drives for 2-wheel and 4-wheel drive vehicles have been presented.",
"title": ""
},
{
"docid": "7e8f116433e530032d31938703af1cd3",
"text": "Background. This systematic review and meta-analysis Tathiane Larissa Lenzi, MSc, PhD; Anelise Fernandes Montagner, MSc, PhD; Fabio Zovico Maxnuck Soares, PhD; Rachel de Oliveira Rocha, MSc, PhD evaluated the effectiveness of professional topical fluoride application (gels or varnishes) on the reversal treatment of incipient enamel carious lesions in primary or permanent",
"title": ""
}
] |
scidocsrr
|
f4d28aca55780377c404ac40188843e7
|
A boosted decision tree approach using Bayesian hyper-parameter optimization for credit scoring
|
[
{
"docid": "a9dd71d336baa0ea78ceb0435be67f67",
"text": "In current credit ratings models, various accounting-based information are usually selected as prediction variables, based on historical information rather than the market’s assessment for future. In the study, we propose credit rating prediction model using market-based information as a predictive variable. In the proposed method, Moody’s KMV (KMV) is employed as a tool to evaluate the market-based information of each corporation. To verify the proposed method, using the hybrid model, which combine random forests (RF) and rough set theory (RST) to extract useful information for credit rating. The results show that market-based information does provide valuable information in credit rating predictions. Moreover, the proposed approach provides better classification results and generates meaningful rules for credit ratings. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4eda5bc4f8fa55ae55c69f4233858fc7",
"text": "In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least square support vector machines and random forests for loan default prediction. Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly undersampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman’s statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques. The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers. 2011 Elsevier Ltd.",
"title": ""
}
] |
[
{
"docid": "1c832140fce684c68fd91779d62596e3",
"text": "The safety and antifungal efficacy of amphotericin B lipid complex (ABLC) were evaluated in 556 cases of invasive fungal infection treated through an open-label, single-patient, emergency-use study of patients who were refractory to or intolerant of conventional antifungal therapy. All 556 treatment episodes were evaluable for safety. During the course of ABLC therapy, serum creatinine levels significantly decreased from baseline (P < .02). Among 162 patients with serum creatinine values > or = 2.5 mg/dL at the start of ABLC therapy (baseline), the mean serum creatinine value decreased significantly from the first week through the sixth week (P < or = .0003). Among the 291 mycologically confirmed cases evaluable for therapeutic response, there was a complete or partial response to ABLC in 167 (57%), including 42% (55) of 130 cases of aspergillosis, 67% (28) of 42 cases of disseminated candidiasis, 71% (17) of 24 cases of zygomycosis, and 82% (9) of 11 cases of fusariosis. Response rates varied according to the pattern of invasive fungal infection, underlying condition, and reason for enrollment (intolerance versus progressive infection). These findings support the use of ABLC in the treatment of invasive fungal infections in patients who are intolerant of or refractory to conventional antifungal therapy.",
"title": ""
},
{
"docid": "f11db33a0eb2ab985189866e2a57c7e2",
"text": "Age estimation based on the human face remains a significant problem in computer vision and pattern recognition. In order to estimate an accurate age or age group of a facial image, most of the existing algorithms require a huge face data set attached with age labels. This imposes a constraint on the utilization of the immensely unlabeled or weakly labeled training data, e.g., the huge amount of human photos in the social networks. These images may provide no age label, but it is easy to derive the age difference for an image pair of the same person. To improve the age estimation accuracy, we propose a novel learning scheme to take advantage of these weakly labeled data through the deep convolutional neural networks. For each image pair, Kullback–Leibler divergence is employed to embed the age difference information. The entropy loss and the cross entropy loss are adaptively applied on each image to make the distribution exhibit a single peak value. The combination of these losses is designed to drive the neural network to understand the age gradually from only the age difference information. We also contribute a data set, including more than 100 000 face images attached with their taken dates. Each image is both labeled with the timestamp and people identity. Experimental results on two aging face databases show the advantages of the proposed age difference learning system, and the state-of-the-art performance is gained.",
"title": ""
},
{
"docid": "7a8fbfe463f6d5c61df7db1c1d2670c9",
"text": "State-of-the-art autonomous driving systems rely heavily on detailed and highly accurate prior maps. However, outside of small urban areas, it is very challenging to build, store, and transmit detailed maps since the spatial scales are so large. Furthermore, maintaining detailed maps of large rural areas can be impracticable due to the rapid rate at which these environments can change. This is a significant limitation for the widespread applicability of autonomous driving technology, which has the potential for an incredibly positive societal impact. In this paper, we address the problem of autonomous navigation in rural environments through a novel mapless driving framework that combines sparse topological maps for global navigation with a sensor-based perception system for local navigation. First, a local navigation goal within the sensor view of the vehicle is chosen as a waypoint leading towards the global goal. Next, the local perception system generates a feasible trajectory in the vehicle frame to reach the waypoint while abiding by the rules of the road for the segment being traversed. These trajectories are updated to remain in the local frame using the vehicle's odometry and the associated uncertainty based on the least-squares residual and a recursive filtering approach, which allows the vehicle to navigate road networks reliably, and at high speed, without detailed prior maps. We demonstrate the performance of the system on a full-scale autonomous vehicle navigating in a challenging rural environment and benchmark the system on a large amount of collected data.",
"title": ""
},
{
"docid": "a4c76e58074a42133a59a31d9022450d",
"text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.",
"title": ""
},
{
"docid": "7ecfea8abc9ba29719cdd4bf02e99d5d",
"text": "The literature shows an increase in blended learning implementations (N = 74) at faculties of education in Turkey whereas pre-service and in-service teachers’ ICT competencies have been identified as one of the areas where they are in need of professional development. This systematic review was conducted to find out the impact of blended learning on academic achievement and attitudes at teacher education programs in Turkey. 21 articles and 10 theses complying with all pre-determined criteria (i.e., studies having quantitative research design or at least a quantitative aspect conducted at pre-service teacher education programs) included within the scope of this review. With regard to academic achievement, it was synthesized that majority of the studies confirmed its positive impact on attaining course outcomes. Likewise, blended learning environment was revealed to contribute pre-service teachers to develop positive attitudes towards the courses. It was also concluded that face-to-face aspect of the courses was favoured considerably as it enhanced social interaction between peers and teachers. Other benefits of blended learning were listed as providing various materials, receiving prompt feedback, and tracking progress. Slow internet access, connection failure and anxiety in some pre-service teachers on using ICT were reported as obstacles. Regarding the positive results of blended learning and the significance of ICT integration, pre-service teacher education curricula are suggested to be reconstructed by infusing ICT into entire program through blended learning rather than delivering isolated ICT courses which may thus serve for prospective teachers as catalysts to integrate the use of ICT in their own teaching.",
"title": ""
},
{
"docid": "dcd82cbb2b89585c69b6483b6e77050f",
"text": "In recent years, the technological advances in mapping genes have made it increasingly easy to store and use a wide variety of biological data. Such data are usually in the form of very long strings for which it is difficult to determine the most relevant features for a classification task. For example, a typical DNA string may be millions of characters long, and there may be thousands of such strings in a database. In many cases, the classification behavior of the data may be hidden in the compositional behavior of certain segments of the string which cannot be easily determined apriori. Another problem which complicates the classification task is that in some cases the classification behavior is reflected in global behavior of the string, whereas in others it is reflected in local patterns. Given the enormous variation in the behavior of the strings over different data sets, it is useful to develop an approach which is sensitive to both the global and local behavior of the strings for the purpose of classification. For this purpose, we will exploit the multi-resolution property of wavelet decomposition in order to create a scheme which can mine classification characteristics at different levels of granularity. The resulting scheme turns out to be very effective in practice on a wide range of problems.",
"title": ""
},
{
"docid": "9015694bb7ce25a0fa9684636e8b9380",
"text": "To provide efficient tools for the capture and modeling of acceptable virtual human poses, we propose a method for constraining the underlying joint structures based on real life data. Current tools for delimiting valid postures often employ techniques that do not represent joint limits in an intuitively satisfying manner, and furthermore are seldom directly derived from experimental data. Here, we propose a semi-automatic scheme for determining ball-and-socket joint limits by actual measurement and apply it to modeling the shoulder complex, which—along with the hip complex—can be approximated by a 3 degree-of-freedom ball-and-socket joint. Our first step is to measure the joint motion range using optical motion capture. We next convert the recorded values to joint poses encoded using a coherent quaternion field representation of the joint orientation space. Finally, we obtain a closed, continuous implicit surface approximation for the quaternion orientation-space boundary whose interior represents the complete space of valid orientations, enabling us to project invalid postures to the closest valid ones. The work reported here was supported in part by the Swiss National Science Foundation.",
"title": ""
},
{
"docid": "3f3a017d93588f19eb59a93ccd587902",
"text": "n this work we propose a novel Hough voting approach for the detection of free-form shapes in a 3D space, to be used for object recognition tasks in 3D scenes with a significant degree of occlusion and clutter. The proposed method relies on matching 3D features to accumulate evidence of the presence of the objects being sought in a 3D Hough space. We validate our proposal by presenting a quantitative experimental comparison with state-of-the-art methods as well as by showing how our method enables 3D object recognition from real-time stereo data.",
"title": ""
},
{
"docid": "acacc206bd12bf787026c1cc0ff41ab9",
"text": "This paper presents a fruit size detecting and grading system based on image processing. After capturing the fruit side view image, some fruit characters is extracted by using detecting algorithms. According to these characters, grading is realized. Experiments show that this embedded grading system has the advantage of high accuracy of grading, high speed and low cost. It will have a good prospect of application in fruit quality detecting and grading areas.",
"title": ""
},
{
"docid": "996eb4470d33f00ed9cb9bcc52eb5d82",
"text": "Andrew is a distributed computing environment that is a synthesis of the personal computing and timesharing paradigms. When mature, it is expected to encompass over 5,000 workstations spanning the Carnegie Mellon University campus. This paper examines the security issues that arise in such an environment and describes the mechanisms that have been developed to address them. These mechanisms include the logical and physical separation of servers and clients, support for secure communication at the remote procedure call level, a distributed authentication service, a file-protection scheme that combines access lists with UNIX mode bits, and the use of encryption as a basic building block. The paper also discusses the assumptions underlying security in Andrew and analyzes the vulnerability of the system. Usage experience reveals that resource control, particularly of workstation CPU cycles, is more important than originally anticipated and that the mechanisms available to address this issue are rudimentary.",
"title": ""
},
{
"docid": "7be6ee5dee7fc6b64da29e0b60814fee",
"text": "J. P. Guilford (1950) asked in his inaugural address to the American Psychological Association why schools were not producing more creative persons. He also asked, “Why is there so little apparent correlation between education and creative productiveness” (p. 444)? This article presents a review of past and current research on the relation of education to creativity in students of preschool age through age 16 in U.S. public schools. Several models of creative thinking are presented (e.g., Guilford, 1985; Renzulli, 1992; Runco & Chand, 1995), as well as techniques for developing creativity (e.g., Davis, 1982; Sternberg & Williams, 1996). Some research presented indicates a relation between creativity and learning (e.g., Karnes et al., 1961; Torrance, 1981). Implications for research and practice",
"title": ""
},
{
"docid": "351faf9d58bd2a2010766acff44dadbc",
"text": "صلاخلا ـ ة : ىلع قوفي ةيبرعلا ةغللاب نيثدحتملا ددع نأ نم مغرلا يتئام تنإ يف ةلوذبملا دوهجلا نأ لاإ ،صخش نويلم ةليلق ةيبوساحلا ةيبرعلا ةيوغللا رداصملا جا ادج ب ةيبوساحلا ةيبرعلا مجاعملا لاجم يف ةصاخ . بلغأ نإ ةيفاآ تسيل يهف اذلو ،ةيبنجأ تاغلل امنإ ،ةيبرعلا ةغلل لصلأا يف ممصت مل ةدوجوملا دوهجلا يبرعلا عمتجملا تاجايتحا دسل . فدهي حرتقم ضرع ىلإ ثحبلا اذه لأ جذومن ساح مجعم ةينقت ىلع ينبم يبو \" يجولوتنلأا \" اهيلع دمتعت يتلا ةيساسلأا تاينقتلا نم ةثيدح ةينقت يهو ، ةينقت \" ةيللادلا بيولا \" ام لاجم يف تاقلاعلاو ميهافملل يللادلا يفرعملا ليثمتلاب ىنعت ، . دقو ءانب مت لأا جذومن ةيرظن ساسأ ىلع \" ةيللادلا لوقحلا \" تايوغللا لاجم يف ةفورعملا ، و ت م اهساسأ ىلع ينب يتلا تانايبلا ءاقتسا لأا جذومن نم \" نامزلا ظافلأ \" يف \" ميركلا نآرقلا \" ، يذلا اهلامآو اهيقر يف ةيبرعلا هيلإ تلصو ام قدأ دعي . اذه لثم رفوت نإ لأا جذومن اعفان نوكيس ةيبرعلا ةغلل ةيبرعلا ةغللا لاجم يف ةيبوساحلا تاقيبطتلل . مت دقو م ضرع ثحبلا اذه يف ءانب ةيجهنمل لصف لأا جذومن اهيلإ لصوتلا مت يتلا جئاتنلاو .",
"title": ""
},
{
"docid": "6480f98a792ca9cdb961e85357a73461",
"text": "Since its first use in the steroid field in the late 1950s, the use of fluorine in medicinal chemistry has become commonplace, with the small electronegative fluorine atom being a key part of the medicinal chemist's repertoire of substitutions used to modulate all aspects of molecular properties including potency, physical chemistry and pharmacokinetics. This review will highlight the special nature of fluorine, drawing from a survey of marketed fluorinated pharmaceuticals and the medicinal chemistry literature, to illustrate key concepts exploited by medicinal chemists in their attempts to optimize drug molecules. Some of the potential pitfalls in the use of fluorine will also be highlighted.",
"title": ""
},
{
"docid": "2e167507f8b44e783d60312c0d71576d",
"text": "The goal of this paper is to study different techniques to predict stock price movement using the sentiment analysis from social media, data mining. In this paper we will find efficient method which can predict stock movement more accurately. Social media offers a powerful outlet for people’s thoughts and feelings it is an enormous ever-growing source of texts ranging from everyday observations to involved discussions. This paper contributes to the field of sentiment analysis, which aims to extract emotions and opinions from text. A basic goal is to classify text as expressing either positive or negative emotion. Sentiment classifiers have been built for social media text such as product reviews, blog posts, and even twitter messages. With increasing complexity of text sources and topics, it is time to re-examine the standard sentiment extraction approaches, and possibly to redefine and enrich the definition of sentiment. Next, unlike sentiment analysis research to date, we examine sentiment expression and polarity classification within and across various social media streams by building topical datasets within each stream. Different data mining methods are used to predict market more efficiently along with various hybrid approaches. We conclude that stock prediction is very complex task and various factors should be considered for forecasting the market more accurately and efficiently.",
"title": ""
},
{
"docid": "67265d70b2d704c0ab2898c933776dc2",
"text": "The intima-media thickness (IMT) of the common carotid artery (CCA) is widely used as an early indicator of cardiovascular disease (CVD). Typically, the IMT grows with age and this is used as a sign of increased risk of CVD. Beyond thickness, there is also clinical interest in identifying how the composition and texture of the intima-media complex (IMC) changed and how these textural changes grow into atherosclerotic plaques that can cause stroke. Clearly though texture analysis of ultrasound images can be greatly affected by speckle noise, our goal here is to develop effective despeckle noise methods that can recover image texture associated with increased rates of atherosclerosis disease. In this study, we perform a comparative evaluation of several despeckle filtering methods, on 100 ultrasound images of the CCA, based on the extracted multiscale Amplitude-Modulation Frequency-Modulation (AM-FM) texture features and visual image quality assessment by two clinical experts. Texture features were extracted from the automatically segmented IMC for three different age groups. The despeckle filters hybrid median and the homogeneous mask area filter showed the best performance by improving the class separation between the three age groups and also yielded significantly improved image quality.",
"title": ""
},
{
"docid": "86cb943d46574ee94a4e1ceaf36a9759",
"text": "Yearly there's an influx of over three million Muslims to Makkah., Saudi Arabia to perform Hajj. As this large group of pilgrims move between the different religious sites safety and security becomes an issue of main concern. This research looks into the integration of different mobile technologies to serve the purpose of crowd management., people tracking and location based services. It explores the solution to track the movement of pilgrims via RFID technology. A location aware mobile solution will also be integrated into this. This will be made available to pilgrims with smartphones to enhance the accuracy and tracking time of the pilgrims and provide them with location based services for Hajj.",
"title": ""
},
{
"docid": "488e0161ee2a95c1c4082fc6981ae414",
"text": "Information networks that can be extracted from many domains are widely studied recently. Different functions for mining these networks are proposed and developed, such as ranking, community detection, and link prediction. Most existing network studies are on homogeneous networks, where nodes and links are assumed from one single type. In reality, however, heterogeneous information networks can better model the real-world systems, which are typically semi-structured and typed, following a network schema. In order to mine these heterogeneous information networks directly, we propose to explore the meta structure of the information network, i.e., the network schema. The concepts of meta-paths are proposed to systematically capture numerous semantic relationships across multiple types of objects, which are defined as a path over the graph of network schema. Meta-paths can provide guidance for search and mining of the network and help analyze and understand the semantic meaning of the objects and relations in the network. Under this framework, similarity search and other mining tasks such as relationship prediction and clustering can be addressed by systematic exploration of the network meta structure. Moreover, with user’s guidance or feedback, we can select the best meta-path or their weighted combination for a specific mining task.",
"title": ""
},
{
"docid": "59733877083c5d22bef27af90ac79907",
"text": "We review the past 25 years of research into time series forecasting. In this silver jubilee issue, we naturally highlight results published in journals managed by the International Institute of Forecasters (Journal of Forecasting 1982–1985 and International Journal of Forecasting 1985–2005). During this period, over one third of all papers published in these journals concerned time series forecasting. We also review highly influential works on time series forecasting that have been published elsewhere during this period. Enormous progress has been made in many areas, but we find that there are a large number of topics in need of further development. We conclude with comments on possible future research directions in this field. D 2006 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c34b29ded9da50d97d8f9077386dcc48",
"text": "Following much work in linguistic theory, it is hypothesized that the language faculty has a modular structure and consists of two basic components, a lexicon of (structured) entries and a computational system of combinatorial operations to form larger linguistic expressions from lexical entries. This target article provides evidence for the dual nature of the language faculty by describing recent results of a multidisciplinary investigation of German inflection. We have examined: (1) its linguistic representation, focussing on noun plurals and verb inflection (participles), (2) processes involved in the way adults produce and comprehend inflected words, (3) brain potentials generated during the processing of inflected words, and (4) the way children acquire and use inflection. It will be shown that the evidence from all these sources converges and supports the distinction between lexical entries and combinatorial operations. Our experimental results indicate that adults have access to two distinct processing routes, one accessing (irregularly) inflected entries from the mental lexicon and another involving morphological decomposition of (regularly) inflected words into stem + affix representations. These two processing routes correspond to the dual structure of the linguistic system. Results from event-related potentials confirm this linguistic distinction at the level of brain structures. In children's language, we have also found these two processes to be clearly dissociated; regular and irregular inflection are used under different circumstances, and the constraints under which children apply them are identical to those of the adult linguistic system. Our findings will be explained in terms of a linguistic model that maintains the distinction between the lexicon and the computational system but replaces the traditional view of the lexicon as a simple list of idiosyncrasies with the notion of internally structured lexical representations.",
"title": ""
},
{
"docid": "0793d82c1246c777dce673d8f3146534",
"text": "CONTEXT\nMedical schools are known to be stressful environments for students and hence medical students have been believed to experience greater incidences of depression than others. We evaluated the global prevalence of depression amongst medical students, as well as epidemiological, psychological, educational and social factors in order to identify high-risk groups that may require targeted interventions.\n\n\nMETHODS\nA systematic search was conducted in online databases for cross-sectional studies examining prevalences of depression among medical students. Studies were included only if they had used standardised and validated questionnaires to evaluate the prevalence of depression in a group of medical students. Random-effects models were used to calculate the aggregate prevalence and pooled odds ratios (ORs). Meta-regression was carried out when heterogeneity was high.\n\n\nRESULTS\nFindings for a total of 62 728 medical students and 1845 non-medical students were pooled across 77 studies and examined. Our analyses demonstrated a global prevalence of depression amongst medical students of 28.0% (95% confidence interval [CI] 24.2-32.1%). Female, Year 1, postgraduate and Middle Eastern medical students were more likely to be depressed, but the differences were not statistically significant. By year of study, Year 1 students had the highest rates of depression at 33.5% (95% CI 25.2-43.1%); rates of depression then gradually decreased to reach 20.5% (95% CI 13.2-30.5%) at Year 5. This trend represented a significant decline (B = - 0.324, p = 0.005). There was no significant difference in prevalences of depression between medical and non-medical students. The overall mean frequency of suicide ideation was 5.8% (95% CI 4.0-8.3%), but the mean proportion of depressed medical students who sought treatment was only 12.9% (95% CI 8.1-19.8%).\n\n\nCONCLUSIONS\nDepression affects almost one-third of medical students globally but treatment rates are relatively low. The current findings suggest that medical schools and health authorities should offer early detection and prevention programmes, and interventions for depression amongst medical students before graduation.",
"title": ""
}
] |
scidocsrr
|
5c3ca8edfdc934af6b0dbc53a5618913
|
Neural networks for sentiment analysis on Twitter
|
[
{
"docid": "1dd8fdb5f047e58f60c228e076aa8b66",
"text": "Recurrent Neural Network Language Models (RNN-LMs) have recently shown exceptional performance across a variety of applications. In this paper, we modify the architecture to perform Language Understanding, and advance the state-of-the-art for the widely used ATIS dataset. The core of our approach is to take words as input as in a standard RNN-LM, and then to predict slot labels rather than words on the output side. We present several variations that differ in the amount of word context that is used on the input side, and in the use of non-lexical features. Remarkably, our simplest model produces state-of-the-art results, and we advance state-of-the-art through the use of bagof-words, word embedding, named-entity, syntactic, and wordclass features. Analysis indicates that the superior performance is attributable to the task-specific word representations learned by the RNN.",
"title": ""
},
{
"docid": "4bcdc83f93bec38616eea1acec30d512",
"text": "Sentiment analysis deals with identifying and classifying opinions or sentiments expressed in source text. Social media is generating a vast amount of sentiment rich data in the form of tweets, status updates, blog posts etc. Sentiment analysis of this user generated data is very useful in knowing the opinion of the crowd. Twitter sentiment analysis is difficult compared to general sentiment analysis due to the presence of slang words and misspellings. The maximum limit of characters that are allowed in Twitter is 140. Knowledge base approach and Machine learning approach are the two strategies used for analyzing sentiments from the text. In this paper, we try to analyze the twitter posts about electronic products like mobiles, laptops etc using Machine Learning approach. By doing sentiment analysis in a specific domain, it is possible to identify the effect of domain information in sentiment classification. We present a new feature vector for classifying the tweets as positive, negative and extract peoples' opinion about products.",
"title": ""
},
{
"docid": "5dc4dfc2d443c31332c70a56c2d70c7d",
"text": "Sentiment analysis or opinion mining is an important type of text analysis that aims to support decision making by extracting and analyzing opinion oriented text, identifying positive and negative opinions, and measuring how positively or negatively an entity (i.e., people, organization, event, location, product, topic, etc.) is regarded. As more and more users express their political and religious views on Twitter, tweets become valuable sources of people's opinions. Tweets data can be efficiently used to infer people's opinions for marketing or social studies. This paper proposes a Tweets Sentiment Analysis Model (TSAM) that can spot the societal interest and general people's opinions in regard to a social event. In this paper, Australian federal election 2010 event was taken as an example for sentiment analysis experiments. We are primarily interested in the sentiment of the specific political candidates, i.e., two primary minister candidates - Julia Gillard and Tony Abbot. Our experimental results demonstrate the effectiveness of the system.",
"title": ""
}
] |
[
{
"docid": "c81967de1aee76b9937cbdcba3e07996",
"text": "The combination of strength (ST) and plyometric training (PT) has been shown to be effective for improving sport-specific performance. However, there is no consensus about the most effective way to combine these methods in the same training session to produce greater improvements in neuromuscular performance of soccer players. Thus, the purpose of this study was to compare the effects of different combinations of ST and PT sequences on strength, jump, speed, and agility capacities of elite young soccer players. Twenty-seven soccer players (age: 18.9 ± 0.6 years) participated in an 8-week resistance training program and were divided into 3 groups: complex training (CP) (ST before PT), traditional training (TD) (PT before ST), and contrast training (CT) (ST and PT performed alternately, set by set). The experimental design took place during the competitive period of the season. The ST composed of half-squat exercises performed at 60-80% of 1 repetition maximum (1RM); the PT composed of drop jump exercises executed in a range from 30 to 45 cm. After the experimental period, the maximum dynamic strength (half-squat 1RM) and vertical jump ability (countermovement jump height) increased similarly and significantly in the CP, TD, and CT (48.6, 46.3, and 53% and 13, 14.2, and 14.7%, respectively). Importantly, whereas the TD group presented a significant decrease in sprinting speed in 10 (7%) and 20 m (6%), the other groups did not show this response. Furthermore, no significant alterations were observed in agility performance in any experimental group. In conclusion, in young soccer players, different combinations and sequences of ST and PT sets result in similar performance improvements in muscle strength and jump ability. However, it is suggested that the use of the CP and CT methods is more indicated to maintain/maximize the sprint performance of these athletes.",
"title": ""
},
{
"docid": "897a6d208785b144b5d59e4f346134cd",
"text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.",
"title": ""
},
{
"docid": "1274656b97db1f736944c174a174925d",
"text": "In full-duplex systems, due to the strong self-interference signal, system nonlinearities become a significant limiting factor that bounds the possible cancellable self-interference power. In this paper, a self-interference cancellation scheme for full-duplex orthogonal frequency division multiplexing systems is proposed. The proposed scheme increases the amount of cancellable self-interference power by suppressing the distortion caused by the transmitter and receiver nonlinearities. An iterative technique is used to jointly estimate the self-interference channel and the nonlinearity coefficients required to suppress the distortion signal. The performance is numerically investigated showing that the proposed scheme achieves a performance that is less than 0.5dB off the performance of a linear full-duplex system.",
"title": ""
},
{
"docid": "352bb91483434e2a43cefb4f4a2c06e7",
"text": "Formal Semantics and Distributional Semantics are two very influential semantic frameworks in Computational Linguistics. Formal Semantics is based on a symbolic tradition and centered around the inferential properties of language. Distributional Semantics is statistical and data-driven, and focuses on aspects of meaning related to descriptive content. The two frameworks are complementary in their strengths, and this has motivated interest in combining them into an overarching semantic framework: a “Formal Distributional Semantics.” Given the fundamentally different natures of the two paradigms, however, building an integrative framework poses significant theoretical and engineering challenges. The present issue of Computational Linguistics advances the state of the art in Formal Distributional Semantics; this introductory article explains the motivation behind it and summarizes the contributions of previous work on the topic, providing the necessary background for the articles that follow.",
"title": ""
},
{
"docid": "a4227042d8601ba03601ea48f251c57a",
"text": "Published data is prone to privacy attacks. Sanitization methods aim to prevent these attacks while maintaining usefulness of the data for legitimate users. Quantifying the trade-off between usefulness and privacy of published data has been the subject of much research in recent years. We propose a pragmatic framework for evaluating sanitization systems in real-life and use data mining utility as a universal measure of usefulness and privacy. We propose a definition for data mining utility that can be tuned to capture the needs of data users and the adversaries' intentions in a setting that is specified by a database, a candidate sanitization method, and privacy and utility concerns of data owner. We use this framework to evaluate and compare privacy and utility offered by two well-known sanitization methods, namely k-anonymity and ε-differential privacy, when UCI's \"Adult\" dataset and the Weka data mining package is used, and utility and privacy measures are defined for users and adversaries. In the case of k-anonymity, we compare our results with the recent work of Brickell and Shmatikov (KDD 2008), and show that using data mining algorithms increases their proposed adversarial gains.",
"title": ""
},
{
"docid": "de2ed315762d3f0ac34fe0b77567b3a2",
"text": "A study in vitro of specimens of human aortic and common carotid arteries was carried out to determine the feasibility of direct measurement (i.e., not from residual lumen) of arterial wall thickness with B mode real-time imaging. Measurements in vivo by the same technique were also obtained from common carotid arteries of 10 young normal male subjects. Aortic samples were classified as class A (relatively normal) or class B (with one or more atherosclerotic plaques). In all class A and 85% of class B arterial samples a characteristic B mode image composed of two parallel echogenic lines separated by a hypoechoic space was found. The distance between the two lines (B mode image of intimal + medial thickness) was measured and correlated with the thickness of different combinations of tunicae evaluated by gross and microscopic examination. On the basis of these findings and the results of dissection experiments on the intima and adventitia we concluded that results of B mode imaging of intimal + medial thickness did not differ significantly from the intimal + medial thickness measured on pathologic examination. With respect to the accuracy of measurements obtained by B mode imaging as compared with pathologic findings, we found an error of less than 20% for measurements in 77% of normal and pathologic aortic walls. In addition, no significant difference was found between B mode-determined intimal + medial thickness in the common carotid arteries evaluated in vitro and that determined by this method in vivo in young subjects, indicating that B mode imaging represents a useful approach for the measurement of intimal + medial thickness of human arteries in vivo.",
"title": ""
},
{
"docid": "be3d420dee60602b50a5ae5923c86a88",
"text": "We introduce the concept of dynamically growing a neural network during training. In particular, an untrainable deep network starts as a trainable shallow network and newly added layers are slowly, organically added during training, thereby increasing the network's depth. This is accomplished by a new layer, which we call DropIn. The DropIn layer starts by passing the output from a previous layer (effectively skipping over the newly added layers), then increasingly including units from the new layers for both feedforward and backpropagation. We show that deep networks, which are untrainable with conventional methods, will converge with DropIn layers interspersed in the architecture. In addition, we demonstrate that DropIn provides regularization during training in an analogous way as dropout. Experiments are described with the MNIST dataset and various expanded LeNet architectures, CIFAR-10 dataset with its architecture expanded from 3 to 11 layers, and on the ImageNet dataset with the AlexNet architecture expanded to 13 layers and the VGG 16-layer architecture.",
"title": ""
},
{
"docid": "20e504a115a1448ea366eae408b6391f",
"text": "Clustering algorithms have emerged as an alternative powerful meta-learning tool to accurately analyze the massive volume of data generated by modern applications. In particular, their main goal is to categorize data into clusters such that objects are grouped in the same cluster when they are similar according to specific metrics. There is a vast body of knowledge in the area of clustering and there has been attempts to analyze and categorize them for a larger number of applications. However, one of the major issues in using clustering algorithms for big data that causes confusion amongst practitioners is the lack of consensus in the definition of their properties as well as a lack of formal categorization. With the intention of alleviating these problems, this paper introduces concepts and algorithms related to clustering, a concise survey of existing (clustering) algorithms as well as providing a comparison, both from a theoretical and an empirical perspective. From a theoretical perspective, we developed a categorizing framework based on the main properties pointed out in previous studies. Empirically, we conducted extensive experiments where we compared the most representative algorithm from each of the categories using a large number of real (big) data sets. The effectiveness of the candidate clustering algorithms is measured through a number of internal and external validity metrics, stability, runtime, and scalability tests. In addition, we highlighted the set of clustering algorithms that are the best performing for big data.",
"title": ""
},
{
"docid": "8bea5537a77141073b95182d71c73d15",
"text": "Recent advances in technology have enabled social media services to support space-time indexed data, and internet users from all over the world have created a large volume of time-stamped, geo-located data. Such spatiotemporal data has immense value for increasing situational awareness of local events, providing insights for investigations and understanding the extent of incidents, their severity, and consequences, as well as their time-evolving nature. In analyzing social media data, researchers have mainly focused on finding temporal trends according to volume-based importance. Hence, a relatively small volume of relevant messages may easily be obscured by a huge data set indicating normal situations. In this paper, we present a visual analytics approach that provides users with scalable and interactive social media data analysis and visualization including the exploration and examination of abnormal topics and events within various social media data sources, such as Twitter, Flickr and YouTube. In order to find and understand abnormal events, the analyst can first extract major topics from a set of selected messages and rank them probabilistically using Latent Dirichlet Allocation. He can then apply seasonal trend decomposition together with traditional control chart methods to find unusual peaks and outliers within topic time series. Our case studies show that situational awareness can be improved by incorporating the anomaly and trend examination techniques into a highly interactive visual analysis process.",
"title": ""
},
{
"docid": "dffee91cca8a8f2cf95e30d84fc104fa",
"text": "It is possible to associate to a hybrid system a single topological space its underlying topological space. Simultaneously, every hybrid system has a graph as its indexing object its underlying graph. Here we discuss the relationship between the underlying topological space of a hybrid system, its underlying graph and Zeno behavior. When each domain is contractible and the reset maps are homotopic to the identity map, the homology of the underlying topological space is isomorphic to the homology of the underlying graph; the nonexistence of Zeno is implied when the first homology is trivial. Moreover, the first homology is trivial when the null space of the incidence matrix is trivial. The result is an easy way to verify the nonexistence of Zeno behavior.",
"title": ""
},
{
"docid": "d642a490c3a4bd8e97d2e78e98dc577a",
"text": "We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture (Kingma & Welling, 2014; Rezende et al., 2014) with priors that encourage independence between sensitive and latent factors of variation. Any subsequent processing, such as classification, can then be performed on this purged latent representation. To remove any remaining dependencies we incorporate an additional penalty term based on the “Maximum Mean Discrepancy” (MMD) (Gretton et al., 2006) measure. We discuss how these architectures can be efficiently trained on data and show in experiments that this method is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.",
"title": ""
},
{
"docid": "3e1b4fb4ac5222c70b871ebb7ea43408",
"text": "Modern graph embedding procedures can efficiently extract features of nodes from graphs with millions of nodes. The features are later used as inputs for downstream predictive tasks. In this paper we propose GEMSEC a graph embedding algorithm which learns a clustering of the nodes simultaneously with computing their features. The procedure places nodes in an abstract feature space where the vertex features minimize the negative log likelihood of preserving sampled vertex neighborhoods, while the nodes are clustered into a fixed number of groups in this space. GEMSEC is a general extension of earlier work in the domain as it is an augmentation of the core optimization problem of sequence based graph embedding procedures and is agnostic of the neighborhood sampling strategy. We show that GEMSEC extracts high quality clusters on real world social networks and is competitive with other community detection algorithms. We demonstrate that the clustering constraint has a positive effect on representation quality and also that our procedure learns to embed and cluster graphs jointly in a robust and scalable manner.",
"title": ""
},
{
"docid": "3d7eb095e68a9500674493ee58418789",
"text": "Hundreds of scholarly studies have investigated various aspects of the immensely popular Wikipedia. Although a number of literature reviews have provided overviews of this vast body of research, none of them has specifically focused on the readers of Wikipedia and issues concerning its readership. In this systematic literature review, we review 99 studies to synthesize current knowledge regarding the readership of Wikipedia and also provide an analysis of research methods employed. The scholarly research has found that Wikipedia is popular not only for lighter topics such as entertainment, but also for more serious topics such as health information and legal background. Scholars, librarians and students are common users of Wikipedia, and it provides a unique opportunity for educating students in digital",
"title": ""
},
{
"docid": "14679a23d6f0d7b8652c74b7ab9a4a03",
"text": "The JPEG baseline standard for image compression employs a block Discrete Cosine Transform (DCT) and uniform quantization. For a monochrome image, a single quantization matrix is allowed, while for a color image, distinct matrices are allowed for each color channel.. Here we describe a method, called DCTune, for design of color quantization matrices that is based on a model of the visibility of quantization artifacts. The model describes artifact visibility as a function of DCT frequency, color channel, and display resolution and brightness. The model also describes summation of artifacts over space and frequency, and masking of artifacts by the image itself. The DCTune matrices are different from the de facto JPEG matrices, and appear to provide superior visual quality at equal bit-rates.",
"title": ""
},
{
"docid": "2e5ce96ba3c503704a9152ae667c24ec",
"text": "We use methods of classical and quantum mechanics for mathematical modeling of price dynamics at the financial market. The Hamiltonian formalism on the price/price-change phase space is used to describe the classical-like evolution of prices. This classical dynamics of prices is determined by ”hard” conditions (natural resources, industrial production, services and so on). These conditions as well as ”hard” relations between traders at the financial market are mathematically described by the classical financial potential. At the real financial market ”hard” conditions are not the only source of price changes. The information exchange and market psychology play important (and sometimes determining) role in price dynamics. We propose to describe this ”soft” financial factors by using the pilot wave (Bohmian) model of quantum mechanics. The theory of financial mental (or psychological) waves is used to take into account market psychology. The real trajectories of prices are determined (by the financial analogue of the second Newton law) by two financial potentials: classical-like (”hard” market conditions) and quantum-like (”soft” market conditions).",
"title": ""
},
{
"docid": "c7add5ca57003fc82c0a4c8be7e15373",
"text": "Along with advances in information technology, cybercrime techniques also increased. There are several forms of attacks on data and information, such as hackers, crackers, Trojans, etc. The Symantec Intelligence report edition on August 2012 indicated that the attacker selected the target of attacks. The type of data is valuable and confidential. The Hackers selected the target to attack or steal interest information the first and they did not just taking random from a large amount of data. This indication worried because hackers stealing the data more planned. Therefore, today many systems reinforced with various efforts to maintain data security and overcome these attacks. Necessary methods to secure electronic messages that do not fall on those who are not authorized. One alternative is steganography. Cryptography and Steganography are the two major techniques for secret communication. Cryptography converts information from its original form (plaintext) into unreadable form (cipher text); where as in steganography is the art of hiding messages within other data without changing the data to it attaches, so data before and after the process of hiding almost look like the same. There are many different techniques are available for cryptography and steganography. The cryptography suspicion against disguised message is easily recognizable, because of the message disguised by changing the original message becomes as if illegible. While further reduce suspicion steganography disguised as a message hidden in the file. The research designed the application of steganography using Least Significant Bit (LSB) in which the previous message is encrypted using the Advanced Encryption Standard algorithm (AES) and it can restore the previously hidden data. The messages in this form application and hidden text on media digital image so as not to arouse suspicion. The result of research shown the steganography is expected to hide the secret message, so the message is not easy to know other people who are not eligible.",
"title": ""
},
{
"docid": "85f9eb1b79ba0bc11e275c8a48731e8f",
"text": "OBJECTIVES\nThe long-term effects of amino acid-based formula (AAF) in the treatment of cow's milk allergy (CMA) are largely unexplored. The present study comparatively evaluates body growth and protein metabolism in CMA children treated with AAF or with extensively hydrolyzed whey formula (eHWF), and healthy controls.\n\n\nMETHODS\nA 12-month multicenter randomized control trial was conducted in outpatients with CMA (age 5-12 m) randomized in 2 groups, treated with AAF (group 1) and eHWF (group 2), and compared with healthy controls (group 3) fed with follow-on (if age <12 months) or growing-up formula (if age >12 months). At enrolment (T0), after 3 (T3), 6 (T6), and 12 months (T12) a clinical evaluation was performed. At T0 and T3, in subjects with CMA serum levels of albumin, urea, total protein, retinol-binding protein, and insulin-like growth factor 1 were measured.\n\n\nRESULTS\nTwenty-one subjects in group 1 (61.9% boys, age 6.5 ± 1.5 months), 19 in group 2 (57.9% boys, age 7 ± 1.7 months) and 25 subjects in group 3 (48% boys, age 5.5 ± 0.5 months) completed the study. At T0, the weight z score was similar in group 1 (-0.74) and 2 (-0.76), with differences compared to group 3 (-0.17, P < 0.05). At T12, the weight z score value was similar between the 3 groups without significant differences. There were no significant changes in protein metabolism in children in groups 1 and 2.\n\n\nCONCLUSION\nLong-term treatment with AAF is safe and allows adequate body growth in children with CMA.",
"title": ""
},
{
"docid": "a9f2acbe4bd04abc678316970828ef6d",
"text": "— Choosing a university is one of the most important decisions that affects future of young student. This decision requires considering a number of criteria not only numerical but also linguistic. Istanbul is the first alternative for young students' university choice in Turkey. As well as the state universities, the private universities are also so popular in this city. In this paper, a ranking method that manages to choice of university selection is created by using technique for order preference by similarity to ideal solution (TOPSIS) method based on type-2 fuzzy set. This method has been used for ranking private universities in Istanbul.",
"title": ""
},
{
"docid": "e0b3aaf13df7c2eaed13bccb3fac89dd",
"text": "This study examines the adoption of biometrics technology in the Canadian banking industry. By comparing Canadian banks with the financial institutions in other countries in terms of their adoption to biometrics, it explores current status of adoption to biometrics technology by Canadian banks and the potential future consequences as a result of their delayed adoption. Through literature review, this study first provides various aspects of biometrics technologies; technical specifications, performance metrics, types, current applications, and some issues and concerns with this technology. Second, it discusses current and future potential applications of biometrics in the banking industry. Then, the study provides a context with which to analyze the extent of the popularity and practicality of the technology in Canadian banks. Finally, based on qualitative interviews with biometric researchers and professionals in Canada, this study reports the research findings with regards to the following four topics: 1) Integration and accessibility to biometrics, 2) Public opinion and concern, 3) Transition of technologies, and 4) Current accommodations for biometrics, followed by discussion of the contributions of this study.",
"title": ""
}
] |
scidocsrr
|
95c08a0b74af9ce74406f32ceee019af
|
Analyzing costs and accuracy of validation mechanisms for crowdsourcing platforms
|
[
{
"docid": "75a1832a5fdd9c48f565eb17e8477b4b",
"text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.",
"title": ""
}
] |
[
{
"docid": "16c87d75564404d52fc2abac55297931",
"text": "SHADE is an adaptive DE which incorporates success-history based parameter adaptation and one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as the state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.",
"title": ""
},
{
"docid": "e3be398845434f3cd927a38bc4d4455f",
"text": "Purpose Although extensive research exists regarding job satisfaction, many previous studies used a more restrictive, quantitative methodology. The purpose of this qualitative study is to capture the perceptions of hospital nurses within generational cohorts regarding their work satisfaction. Design/methodology/approach A preliminary qualitative, phenomenological study design explored hospital nurses' work satisfaction within generational cohorts - Baby Boomers (1946-1964), Generation X (1965-1980) and Millennials (1981-2000). A South Florida hospital provided the venue for the research. In all, 15 full-time staff nurses, segmented into generational cohorts, participated in personal interviews to determine themes related to seven established factors of work satisfaction: pay, autonomy, task requirements, administration, doctor-nurse relationship, interaction and professional status. Findings An analysis of the transcribed interviews confirmed the importance of the seven factors of job satisfaction. Similarities and differences between the generational cohorts related to a combination of stages of life and generational attributes. Practical implications The results of any qualitative research relate only to the specific venue studied and are not generalizable. However, the information gleaned from this study is transferable and other organizations are encouraged to conduct their own research and compare the results. Originality/value This study is unique, as the seven factors from an extensively used and highly respected quantitative research instrument were applied as the basis for this qualitative inquiry into generational cohort job satisfaction in a hospital setting.",
"title": ""
},
{
"docid": "f9a3645848af9620d35c2163e3b4cbf9",
"text": "Our professional services was released with a hope to function as a complete on-line digital catalogue that gives access to multitude of PDF file e-book collection. You might find many different types of e-publication as well as other literatures from your papers data base. Particular preferred subject areas that distribute on our catalog are popular books, answer key, examination test question and solution, information paper, training information, test sample, end user manual, user manual, support instructions, fix guide, and many others.",
"title": ""
},
{
"docid": "8404b6b5abcbb631398898e81beabea1",
"text": "As a result of agricultural intensification, more food is produced today than needed to feed the entire world population and at prices that have never been so low. Yet despite this success and the impact of globalization and increasing world trade in agriculture, there remain large, persistent and, in some cases, worsening spatial differences in the ability of societies to both feed themselves and protect the long-term productive capacity of their natural resources. This paper explores these differences and develops a countryxfarming systems typology for exploring the linkages between human needs, agriculture and the environment, and for assessing options for addressing future food security, land use and ecosystem service challenges facing different societies around the world.",
"title": ""
},
{
"docid": "b91833ae4e659fc1a0943eadd5da955d",
"text": "In this paper, we present a factor graph framework to solve both estimation and deterministic optimal control problems, and apply it to an obstacle avoidance task on Unmanned Aerial Vehicles (UAVs). We show that factor graphs allow us to consistently use the same optimization method, system dynamics, uncertainty models and other internal and external parameters, which potentially improves the UAV performance as a whole. To this end, we extended the modeling capabilities of factor graphs to represent nonlinear dynamics using constraint factors. For inference, we reformulate Sequential Quadratic Programming as an optimization algorithm on a factor graph with nonlinear constraints. We demonstrate our framework on a simulated quadrotor in an obstacle avoidance application.",
"title": ""
},
{
"docid": "866abb0de36960fba889282d67ce9dbd",
"text": "We present our experience with the use of local fasciocutaneous V-Y advancement flaps in the reconstruction of 10 axillae in 6 patients for large defects following wide excision of long-standing Hidradenitis suppurativa of the axilla. The defects were closed with local V-Y subcutaneous island flaps. A single flap from the chest wall was sufficient for moderate defects. However, for larger defects, an additional flap was taken from the medial side of the ipsilateral arm. The donor defects could be closed primarily in all the patients. The local areas of the lateral chest wall and the medial side of the arm have a plentiful supply of cutaneous perforators and the flaps can be designed in a V-Y fashion without resorting to preoperative marking of the perforator. The flaps were freed sufficiently to allow adequate movement for closure of the defects. Although no attempt was made to identify the perforators specifically, many perforators were seen entering the flap. Some perforators can be safely divided to increase reach of the flap. All the flaps survived completely. A follow up of 2.5 years is presented.",
"title": ""
},
{
"docid": "144b42f486b8148e2a019cbab611e83c",
"text": "As traditional horn antennas that could be used as feeds for reflector antenna systems, substrate integrate waveguide (SIW) horns could be used as feeds for planar antennas and arrays. In this letter, a phase-and-amplitude-corrected SIW horn by metal-via arrays is presented as a planar feeding structure. The compact SIW horn is applied to feed two 1×8 antipodal linearly tapered slot antenna (ALTSA) arrays, forming sum and difference beams at X-band. The measured gain of the sum beam is 12.84 dBi, the half-power beamwidth is 18.6°, the FTBR is 15.07 dB, and the sidelobe level is -20.79 dB at 10.1 GHz. The null depth of the difference beam is -44.24 dB. Good agreement between the simulation and the measured results is obtained.",
"title": ""
},
{
"docid": "02621546c67e6457f350d0192b616041",
"text": "Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d) to O(d log d), and the space complexity from O(d) to O(d) where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.",
"title": ""
},
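As a rough sketch of the circulant projection described in the abstract above (my reading of it, not the authors' released code), the product of a circulant matrix with a vector can be computed entirely in the Fourier domain, which is where the O(d log d) time and O(d) space come from; the random sign flipping and the function name below are illustrative assumptions:

```python
import numpy as np

def circulant_binary_embedding(x: np.ndarray, r: np.ndarray, signs: np.ndarray) -> np.ndarray:
    """Binary code sign(Circ(r) @ (signs * x)), computed via FFT instead of a d x d matrix."""
    proj = np.real(np.fft.ifft(np.fft.fft(r) * np.fft.fft(signs * x)))
    return (proj >= 0).astype(np.uint8)   # one bit per dimension

rng = np.random.default_rng(0)
d = 1024
x = rng.standard_normal(d)                # input vector
r = rng.standard_normal(d)                # first column of the circulant matrix
signs = rng.choice([-1.0, 1.0], size=d)   # random sign flips, common in such schemes
code = circulant_binary_embedding(x, r, signs)
```

Only the length-d vector r is stored, never the full projection matrix.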
{
"docid": "3c75d05e1b6abf2cb03573e1162954a7",
"text": "With the increasing popularity of portable camera devices and embedded visual processing, text extraction from natural scene images has become a key problem that is deemed to change our everyday lives via novel applications such as augmented reality. Text extraction from natural scene images algorithms is generally composed of the following three stages: (i) detection and localization, (ii) text enhancement to variations in the font size and color, text alignment, illumination change and reflections. This paper aims to classify and assess the latest algorithms. More specifically, we draw attention to studies on the first two steps in the extraction process, since OCR is a well-studied area where powerful algorithms already exist. This paper offers to the researchers a link to public image database for the algorithm assessment of text extraction from natural scene images. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e71535db0a0501dbd605259187a2e3b6",
"text": "This paper reports on a shared task involving the assignment of emotions to suicide notes. Two features distinguished this task from previous shared tasks in the biomedical domain. One is that it resulted in the corpus of fully anonymized clinical text and annotated suicide notes. This resource is permanently available and will (we hope) facilitate future research. The other key feature of the task is that it required categorization with respect to a large set of labels. The number of participants was larger than in any previous biomedical challenge task. We describe the data production process and the evaluation measures, and give a preliminary analysis of the results. Many systems performed at levels approaching the inter-coder agreement, suggesting that human-like performance on this task is within the reach of currently available technologies.",
"title": ""
},
{
"docid": "84a5b8425a25a599372af026ba60a29e",
"text": "OBJECTIVES\nMindfulness-based stress reduction (MBSR) has been found to reduce psychological distress and improve psychological adjustment in medical, psychiatric, and nonclinical samples. We examined its effects on several processes, attitudes, and behavior patterns related to emotion regulation.\n\n\nDESIGN\nFifty-six adults were randomly assigned to MBSR or to a waiting list (WL).\n\n\nRESULTS\nCompared with WL completers (n = 21), MBSR completers (n = 20) reported significantly greater increases in trait mindfulness and decreases in absent-mindedness, greater increases in self-compassion, and decreases in fear of emotions, suppression of anger, aggressive anger expression, worry, and difficulties regulating emotions. The WL group subsequently received MBSR, and the two groups combined showed significant changes on all of these variables from pre-MBSR to post-MBSR, and on all except the 2 anger variables from pre-test to 2-month follow-up, as well as significant reductions in rumination.\n\n\nCONCLUSION\nAn 8-week mindfulness training program might increase mindful awareness in daily life and have beneficial impact on clinically relevant emotion regulation processes.",
"title": ""
},
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
{
"docid": "7eb150a364984512de830025a6e93e0c",
"text": "The mobile ecosystem is characterized by a large and complex network of companies interacting with each other, directly and indirectly, to provide a broad array of mobile products and services to end-customers. With the convergence of enabling technologies, the complexity of the mobile ecosystem is increasing multifold as new actors are emerging, new relations are formed, and the traditional distribution of power is shifted. Drawing on theories of complex systems, interfirm relationships, and the creative art and science of network visualization, this paper identifies key catalysts and develops a method to effectively map the complex structure and dynamics of over 7,000 global companies and 18,000 relationships in the mobile ecosystem. Our visual approach enables decision makers to explore the complexity of interfirm relations in the mobile ecosystem, understand their firmpsilas competitive position in a network context, and identify patterns that may influence their choice of innovation strategy or business models.",
"title": ""
},
{
"docid": "3d8f937692b9c0e2bb2c5b0148e1ef2c",
"text": "BACKGROUND\nAttenuated peripheral perfusion in patients with advanced chronic heart failure (CHF) is partially the result of endothelial dysfunction. This has been causally linked to an impaired endogenous regenerative capacity of circulating progenitor cells (CPC). The aim of this study was to elucidate whether exercise training (ET) affects exercise intolerance and left ventricular (LV) performance in patients with advanced CHF (New York Heart Association class IIIb) and whether this is associated with correction of peripheral vasomotion and induction of endogenous regeneration.\n\n\nMETHODS AND RESULTS\nThirty-seven patients with CHF (LV ejection fraction 24+/-2%) were randomly assigned to 12 weeks of ET or sedentary lifestyle (control). At the beginning of the study and after 12 weeks, maximal oxygen consumption (Vo(2)max) and LV ejection fraction were determined; the number of CD34(+)/KDR(+) CPCs was quantified by flow cytometry and CPC functional capacity was determined by migration assay. Flow-mediated dilation was assessed by ultrasound. Capillary density was measured in skeletal muscle tissue samples. In advanced CHF, ET improved Vo(2)max by +2.7+/-2.2 versus -0.8+/-3.1 mL/min/kg in control (P=0.009) and LV ejection fraction by +9.4+/-6.1 versus -0.8+/-5.2% in control (P<0.001). Flow-mediated dilation improved by +7.43+/-2.28 versus +0.09+/-2.18% in control (P<0.001). ET increased the number of CPC by +83+/-60 versus -6+/-109 cells/mL in control (P=0.014) and their migratory capacity by +224+/-263 versus -12+/-159 CPC/1000 plated CPC in control (P=0.03). Skeletal muscle capillary density increased by +0.22+/-0.10 versus -0.02+/-0.16 capillaries per fiber in control (P<0.001).\n\n\nCONCLUSIONS\nTwelve weeks of ET in patients with advanced CHF is associated with augmented regenerative capacity of CPCs, enhanced flow-mediated dilation suggestive of improvement in endothelial function, skeletal muscle neovascularization, and improved LV function. Clinical Trial Registration- http://www.clinicaltrials.gov. Unique Identifier: NCT00176384.",
"title": ""
},
{
"docid": "00632bdf7d05bf2365549fa6c59a4ea4",
"text": "BACKGROUND\nLabial adhesion is relatively common, but the condition is little known among doctors and parents. The article assesses treatment in the specialist health service.\n\n\nMATERIAL AND METHOD\nThe treatment and course are assessed in 105 girls in the age group 0 – 15 years who were referred to St. Olavs Hospital in the period 2004 – 14.\n\n\nRESULTS\nThe majority of the girls (n = 63) were treated topically with oestrogen cream. In 26 of 51 girls (51 %) for whom the final result is known, the adhesion opened after one treatment. When 1 – 4 oestrogen treatments were administered, the introitus had opened completely in two out of three (65 %). Fewer than half of those who received supplementary surgical treatment achieved permanent opening.\n\n\nINTERPRETATION\nTreatment for labial adhesion had a limited effect in this study. As the literature suggests that the condition results in few symptoms and resolves spontaneously in virtually all girls in puberty, no compelling medical reason exists for opening the adhesion in asymptomatic girls. It is important that doctors are aware of the condition in order to prevent misdiagnosis and to provide parents with adequate information. For parents it is important to know that spontaneous resolution may result in soreness and dysuria. Knowledge of the condition can most likely prevent unnecessary worry.",
"title": ""
},
{
"docid": "30821343b881c7fcdbe6ddacb820089b",
"text": "For construction projects involving transient ‘virtual organisations’ composed of non-collocated team-members, the adoption of concurrent engineering principles is seen as vital. An important aspect of concurrent engineering in construction is the need for an effective communications infrastructure between team members. Traditionally, such communication has been handled through person-to-person meetings, however the complexity of construction projects has grown and, as a result, reliance on new information and communications technologies is becoming increasingly necessary. Hence, within a concurrent engineering setting, there is the need for an integrated information and collaboration environment that will create a persistent space to support interaction between project personnel throughout all phases of construction projects. This joint initiative between the Massachusetts Institute of Technology (MIT), Loughborough University, British Telecommunications plc. (BT) and Kajima Corporation explores computer-supported mechanisms for enhancing distributed engineering collaboration. The goal of this paper is to develop a set of requirements, a system architecture and a system prototype to facilitate computer-supported collaboration among distributed teams. The prototype consists of a comprehensive working collaborative system built from the integration of complementary standalone applications. These applications are the CAIRO system, developed at the Massachusetts Institute of Technology and the Telepresence system developed by Loughborough University and BT.",
"title": ""
},
{
"docid": "bf11d9a1ef46b24f5d13dc119e715005",
"text": "This paper explores the relationship between the three beliefs about online shopping ie. perceived usefulness, perceived ease of use and perceived enjoyment and intention to shop online. A sample of 150 respondents was selected using a purposive sampling method whereby the respondents have to be Internet users to be included in the survey. A structured, self-administered questionnaire was used to elicit responses from these respondents. The findings indicate that perceived ease of use (β = 0.70, p<0.01) and perceived enjoyment (β = 0.32, p<0.05) were positively related to intention to shop online whereas perceived usefulness was not significantly related to intention to shop online. Furthermore, perceived ease of use (β = 0.78, p<0.01) was found to be a significant predictor of perceived usefulness. This goes to show that ease of use and enjoyment are the 2 main drivers of intention to shop online. Implications of the findings for developers are discussed further.",
"title": ""
},
{
"docid": "5691ca09e609aea46b9fd5e7a83d165a",
"text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.",
"title": ""
},
{
"docid": "b71a1bbe3c13e8619eab45009aecead4",
"text": "Fraud detection is interesting research topic and it not only needs data mining techniques but also needs a lot of inputs from domain experts. In health care claims, relationships between physicians and patients form complex communities structures and these communities could lead to potential fraud discoveries. Traditionally, researchers have focused on clustering physicians and patients and tried to find the suspicious communities. In this paper, we studied and discussed different types of relationships and focus on small but exclusive relationships that are suspicious and may indicate potential health care frauds. We developed two algorithms to detect these small and exclusive communities. These algorithms can be applied to larger dataset and are highly scalable. We tested these algorithms with a set of synthesized datasets. These synthesized datasets were created to resemble the real health care claims datasets and used to test the fraud detection algorithms. The test results show the these algorithms are very efficient and can evaluate the communities structures of 50,000 providers in about 1 minute.",
"title": ""
},
{
"docid": "77a247205e5dc5de0d179b8313adfc9d",
"text": "Social media such as tweets are emerging as platforms contributing to situational awareness during disasters. Information shared on Twitter by both affected population (e.g., requesting assistance, warning) and those outside the impact zone (e.g., providing assistance) would help first responders, decision makers, and the public to understand the situation first-hand. Effective use of such information requires timely selection and analysis of tweets that are relevant to a particular disaster. Even though abundant tweets are promising as a data source, it is challenging to automatically identify relevant messages since tweet are short and unstructured, resulting to unsatisfactory classification performance of conventional learning-based approaches. Thus, we propose a simple yet effective algorithm to identify relevant messages based on matching keywords and hashtags, and provide a comparison between matching-based and learning-based approaches. To evaluate the two approaches, we put them into a framework specifically proposed for analyzing diaster-related tweets. Analysis results on eleven datasets with various disaster types show that our technique provides relevant tweets of higher quality and more interpretable results of sentiment analysis tasks when compared to learning approach.",
"title": ""
}
] |
scidocsrr
|
89415a8777033ae0381ac42c715413f8
|
An efficient P300-based brain–computer interface for disabled subjects
|
[
{
"docid": "2c3bdb3dc3bf4aedc36a49e82a2dca50",
"text": "We report the implementation of a text input application (speller) based on the P300 event related potential. We obtain high accuracies by using an SVM classifier and a novel feature. These techniques enable us to maintain fast performance without sacrificing the accuracy, thus making the speller usable in an online mode. In order to further improve the usability, we perform various studies on the data with a view to minimizing the training time required. We present data collected from nine healthy subjects, along with the high accuracies (of the order of 95% or more) measured online. We show that the training time can be further reduced by a factor of two from its current value of about 20 min. High accuracy, fast learning, and online performance make this P300 speller a potential communication tool for severely disabled individuals, who have lost all other means of communication and are otherwise cut off from the world, provided their disability does not interfere with the performance of the speller.",
"title": ""
}
] |
[
{
"docid": "72e1c5690f20c47a63ebbb1dd3fc7f2c",
"text": "In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.",
"title": ""
},
{
"docid": "68d1308ae2d1bdb4604d7d90c10166f1",
"text": "Smoothing splines are well known to provide nice curves which smooth discrete, noisy data. We obtain a practical, effective method for estimating the optimum amount of smoothing from the data. Derivatives can be estimated from the data by differentiating the resulting (nearly) optimally smoothed spline. We consider the model yi=g(ti)+e~, i= 1, 2 . . . . . n, tie[0 , 1], where geW2 ~'~) = {f: j; f , , .... f(mi~ abs. cont., f(m~ ~2 [0, 1 ] }, and the {el} are random errors with E e i=0, E eie~=a z 6~j. The error variance a 2 may be unknown. As an estimate ofg we take the solution g,, a to the problem: Find f ~ W2 (\"~ to minimize 1 1_ ~ (f(t j) y~)2 + 2 S (f(\")(u)) 2 du. The function g,, a is a smoothing polynomial n j = l 0 spline of degree 2m-1 . The,parameter 2 controls the tradeoff between the 1 \"roughness\" of the solution, as measured by S [f(m)(u)]2 du, and the infidelity to 0",
"title": ""
},
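The abstract above only says that the amount of smoothing is estimated from the data; the criterion usually associated with this line of work is generalized cross-validation (an assumption on my part, since the abstract does not name it). With A(λ) the influence (hat) matrix that maps the observations y to the fitted values, λ is chosen to minimize

```latex
V(\lambda) = \frac{\tfrac{1}{n}\,\lVert (I - A(\lambda))\,y \rVert^{2}}
                  {\left[\tfrac{1}{n}\,\operatorname{tr}\bigl(I - A(\lambda)\bigr)\right]^{2}}
```

which requires no knowledge of the error variance σ^2.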
{
"docid": "3744510fa3cec75c1ccb5abbdb9d71ed",
"text": "49 Abstract— Typically, computer viruses and other malware are detected by searching for a string of bits found in the virus or malware. Such a string can be viewed as a \" fingerprint \" of the virus identified as the signature of the virus. The technique of detecting viruses using signatures is known as signature based detection. Today, virus writers often camouflage their viruses by using code obfuscation techniques in an effort to defeat signature-based detection schemes. So-called metamorphic viruses transform their code as they propagate, thus evading detection by static signature-based virus scanners, while keeping their functionality but differing in internal structure. Many dynamic analysis based detection have been proposed to detect metamorphic viruses but dynamic analysis technique have limitations like difficult to learn normal behavior, high run time overhead and high false positive rate compare to static detection technique. A similarity measure method has been successfully applied in the field of document classification problem. We want to apply similarity measures methods on static feature, API calls of executable to classify it as malware or benign. In this paper we present limitations of signature based detection for detecting metamorphic viruses. We focus on statically analyzing an executable to extract API calls and count the frequency this API calls to generate the feature set. These feature set is used to classify unknown executable as malware or benign by applying various similarity function. I. INTRODUCTION In today's age, where a majority of the transactions involving sensitive information access happen on computers and over the internet, it is absolutely imperative to treat information security as a concern of paramount importance. Computer viruses and other malware have been in existence from the very early days of the personal computer and continue to pose a threat to home and enterprise users alike. A computer virus by definition is \" A program that recursively and explicitly copies a possibly evolved version of itself \" [1]. A virus copies itself to a host file or system area. Once it gets control, it multiplies itself to form newer generations. A virus may carry out damaging activities on the host machine such as corrupting or erasing files, overwriting the whole hard disk, or crashing the computer. These viruses remain harmless but",
"title": ""
},
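As a hedged sketch of the scheme the abstract above describes (API-call frequencies as a static feature, compared with a similarity function), and not the authors' actual system, the hypothetical helper below labels an unknown executable by its most similar known sample under cosine similarity:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two API-call frequency vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(unknown_apis, labelled):
    """labelled maps sample name -> (list of API calls, 'malware' or 'benign')."""
    u = Counter(unknown_apis)
    best = max(labelled.values(), key=lambda s: cosine(u, Counter(s[0])))
    return best[1]

known = {
    "sample_a": (["CreateFileA", "WriteFile", "RegSetValueExA", "WriteFile"], "malware"),
    "sample_b": (["CreateFileA", "ReadFile", "CloseHandle"], "benign"),
}
print(classify(["WriteFile", "RegSetValueExA", "CreateFileA"], known))  # -> malware
```

Other similarity functions (for example, Jaccard over the sets of observed API names) drop into the same structure.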
{
"docid": "6087e066b04b9c3ac874f3c58979f89a",
"text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.",
"title": ""
},
{
"docid": "64d53035eb919d5e27daef6b666b7298",
"text": "The 3L-NPC (Neutral-Point-Clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performances of 3L-NPC structure were improved by developing the 3L-ANPC (Active-NPC) converter which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies in a STATCOM application. The PSIM simulation results are shown in order to validate the PWM strategies studied for 3L-ANPC converter.",
"title": ""
},
{
"docid": "a4b1a04647b8d4f8a9cc837304c7cbae",
"text": "The human brain automatically attempts to interpret the physical visual inputs from our eyes in terms of plausible motion of the viewpoint and/or of the observed object or scene [Ellis 1938; Graham 1965; Giese and Poggio 2003]. In the physical world, the rules that define plausible motion are set by temporal coherence, parallax, and perspective projection. Our brain, however, refuses to feel constrained by the unrelenting laws of physics in what it deems plausible motion. Image metamorphosis experiments, in which unnatural, impossible in-between images are interpolated, demonstrate that under certain circumstances, we willingly accept chimeric images as plausible transition stages between images of actual, known objects [Beier and Neely 1992; Seitz and Dyer 1996]. Or think of cartoon animations which for the longest time were hand-drawn pieces of art that didn't need to succumb to physical correctness. The goal of our work is to exploit this freedom of perception for space-time interpolation, i.e., to generate transitions between still images that our brain accepts as plausible motion in a moving 3D world.",
"title": ""
},
{
"docid": "0faa6fa4bd010586c24630278592492b",
"text": "Packet classification on multiple header fields is one of the basic techniques used in network devices such as routers and firewalls, and usually the most computation intensive task among others. To determine what action needs to be taken to a packet, a network device responsible for packet classification must identify the packet's property, such as associated packet flow, based on multiple fields of its header. Fast packet classification on multiple fields is known to be difficult mathematically and expensive practically. In this paper, we describe and discuss a fast packet classification algorithm using a multiple stage reduction scheme similar to the previously well-known recursive flow classification (RFC) algorithm. The proposed hierarchical space mapping (HSM) algorithm requires much less memory usage than RFC while keeps average search time on the same order. HSM has been proved to be very effective with commercial products in real networks.",
"title": ""
},
{
"docid": "0cd5813a069c8955871784cd3e63aa83",
"text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.",
"title": ""
},
{
"docid": "ee9f21361d01a8c678fece3c425f35c2",
"text": "Probabilistic model-based clustering, based on nite mixtures of multivariate models, is a useful framework for clustering data in a statistical context. This general framework can be directly extended to clustering of sequential data, based on nite mixtures of sequential models. In this paper we consider the problem of tting mixture models where both multivariate and sequential observations are present. A general EM algorithm is discussed and experimental results demonstrated on simulated data. The problem is motivated by the practical problem of clustering individuals into groups based on both their static characteristics and their dynamic behavior.",
"title": ""
},
{
"docid": "22dbd531b0769ad678533beba78fe12b",
"text": "A axial-force/torque motor (AFTM) establishes a completely new bearingless drive concept. The presented Lorentz-force-type actuator features a compact and integrated design using a very specific permanent-magnet excitation system and a concentric nonoverlapping air-gap stator winding. The end windings of the bent air-core coils, which are shaped in a circumferential rotor direction, provide active axial suspension forces. Thus, no additional (bearing) coils are needed for stable axial levitation. The four remaining degrees of freedom of the rotor are stabilized by passive magnetic ring bearings. This paper concentrates on the determination of the lumped parameters for the dynamic system modeling of the AFTM. After introducing a coordinate transformation for the decoupling of the control variables, the axial suspension force, and the drive torque, the relations for coil dimensioning are developed, followed by a discussion of the coil turn number selection process. Active levitation forces and drive torque specifications both must be concurrently fulfilled at a nominal rotor speed with only one common winding system, respecting several electrical, thermal, and mechanical boundaries likewise. Provided that the stator winding topology is designed properly, a simple closed-loop control strategy permits the autonomous manipulation of both control variables. A short presentation of the first experimental setup highlights the possible fields of application for the compact drive concept.",
"title": ""
},
{
"docid": "82f3404012290778ef6392ec240c358b",
"text": "A ball segway is a ballbot-type robot that has a car-like structure. It can move with three omnidirectional-wheel mechanisms to drive the ball while maintaining balance. To obtain stable balancing and transferring simultaneously of the 2D ball segway which is an underactuated system, a control law is designed based on energy method. The energy storage function is formulated to prove the passivity property of the system. Simulation results show the effectiveness of our approach.",
"title": ""
},
{
"docid": "b79b32bd7a4809cb45bfbaa7d5381648",
"text": "We projected future prevalence and BMI distribution based on national survey data (National Health and Nutrition Examination Study) collected between 1970s and 2004. Future obesity-related health-care costs for adults were estimated using projected prevalence, Census population projections, and published national estimates of per capita excess health-care costs of obesity/overweight. The objective was to illustrate potential burden of obesity prevalence and health-care costs of obesity and overweight in the United States that would occur if current trends continue. Overweight and obesity prevalence have increased steadily among all US population groups, but with notable differences between groups in annual increase rates. The increase (percentage points) in obesity and overweight in adults was faster than in children (0.77 vs. 0.46-0.49), and in women than in men (0.91 vs. 0.65). If these trends continue, by 2030, 86.3% adults will be overweight or obese; and 51.1%, obese. Black women (96.9%) and Mexican-American men (91.1%) would be the most affected. By 2048, all American adults would become overweight or obese, while black women will reach that state by 2034. In children, the prevalence of overweight (BMI >/= 95th percentile, 30%) will nearly double by 2030. Total health-care costs attributable to obesity/overweight would double every decade to 860.7-956.9 billion US dollars by 2030, accounting for 16-18% of total US health-care costs. We continue to move away from the Healthy People 2010 objectives. Timely, dramatic, and effective development and implementation of corrective programs/policies are needed to avoid the otherwise inevitable health and societal consequences implied by our projections .",
"title": ""
},
{
"docid": "315fe02072069d3fe7f2a03f251dde31",
"text": "We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depthfirst search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the leftto-right restriction in classical transitionbased parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-theart performance on 21 of them.",
"title": ""
},
{
"docid": "0e12ea5492b911c8879cc5e79463c9fa",
"text": "In this paper, we propose a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. The method fills a gap in current cloud-based mobile reconstruction services as it ensures at capture time that the acquired image set fulfills desired quality and completeness criteria. In contrast to existing systems, the developed framework offers multiple innovative solutions. In particular, we investigate the usability of the available on-device inertial sensors to make the tracking and mapping process more resilient to rapid motions and to estimate the metric scale of the captured scene. Moreover, we propose an efficient and accurate scheme for dense stereo matching which allows to reduce the processing time to interactive speed. We demonstrate the performance of the reconstruction pipeline on multiple challenging indoor and outdoor scenes of different size and depth variability.",
"title": ""
},
{
"docid": "269e2f8bca42d5369f9337aea6191795",
"text": "Today, exposure to new and unfamiliar environments is a necessary part of daily life. Effective communication of location-based information through location-based services has become a key concern for cartographers, geographers, human-computer interaction and professional designers alike. Recently, much attention was directed towards Augmented Reality (AR) interfaces. Current research, however, focuses primarily on computer vision and tracking, or investigates the needs of urban residents, already familiar with their environment. Adopting a user-centred design approach, this paper reports findings from an empirical mobile study investigating how tourists acquire knowledge about an unfamiliar urban environment through AR browsers. Qualitative and quantitative data was used in the development of a framework that shifts the perspective towards a more thorough understanding of the overall design space for such interfaces. The authors analysis provides a frame of reference for the design and evaluation of mobile AR interfaces. The authors demonstrate the application of the framework with respect to optimization of current design of AR.",
"title": ""
},
{
"docid": "fbff176c8731cdb9dcbf354cf72b3148",
"text": "Polar code, newly formulated by Erdal Arikan, has got a wide recognition from the information theory community. Polar code achieves the capacity of the class of symmetric binary memory less channels. In this paper, we propose efficient hardware architecture on a FPGA platform using Xilinx Virtex VI for implementing the advanced encoding and decoding schemes. The performance of the proposed architecture out performs the existing techniques such as: successive cancellation decoder, list successive cancellation, belief propagation etc; with respect to bit error rate and resource utilization.",
"title": ""
},
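For context on the encoding scheme the abstract above builds hardware for, the sketch below is the standard Arikan polar transform x = u · F^{⊗n} over GF(2), with F = [[1, 0], [1, 1]]; this is textbook material rather than the paper's FPGA architecture, and the function name is mine:

```python
def polar_encode(u):
    """Return u multiplied by the n-fold Kronecker power of [[1,0],[1,1]] over GF(2).

    u is a list of 0/1 bits whose length is a power of two.
    """
    x = list(u)
    n = len(x)
    step = 1
    while step < n:                         # one pass per butterfly stage
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]         # upper branch gets the XOR of the pair
        step *= 2
    return x

print(polar_encode([1, 0, 1, 1]))  # -> [1, 1, 0, 1]
```

Successive cancellation decoding operates over the same butterfly structure, which the list and belief-propagation variants mentioned in the abstract then build on.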
{
"docid": "0305bac1e39203b49b794559bfe0b376",
"text": "The emerging field of semantic web technologies promises new stimulus for Software Engineering research. However, since the underlying concepts of the semantic web have a long tradition in the knowledge engineering field, it is sometimes hard for software engineers to overlook the variety of ontology-enabled approaches to Software Engineering. In this paper we therefore present some examples of ontology applications throughout the Software Engineering lifecycle. We discuss the advantages of ontologies in each case and provide a framework for classifying the usage of ontologies in Software Engineering.",
"title": ""
},
{
"docid": "340f64ed182a54ef617d7aa2ffeac138",
"text": "Compared with animals, plants generally possess a high degree of developmental plasticity and display various types of tissue or organ regeneration. This regenerative capacity can be enhanced by exogenously supplied plant hormones in vitro, wherein the balance between auxin and cytokinin determines the developmental fate of regenerating organs. Accumulating evidence suggests that some forms of plant regeneration involve reprogramming of differentiated somatic cells, whereas others are induced through the activation of relatively undifferentiated cells in somatic tissues. We summarize the current understanding of how plants control various types of regeneration and discuss how developmental and environmental constraints influence these regulatory mechanisms.",
"title": ""
},
{
"docid": "89eaafb816877a6c4139c30aea0ac8d8",
"text": "We have developed several digital heritage interfaces that utilize Web3D, virtual and augmented reality technologies for visualizing digital heritage in an interactive manner through the use of several different input devices. We propose in this paper an integration of these technologies to provide a novel multimodal mixed reality interface that facilitates the implementation of more interesting digital heritage exhibitions. With such exhibitions participants can switch dynamically between virtual web-based environments to indoor augmented reality environments as well as make use of various multimodal interaction techniques to better explore heritage information in the virtual museum. The museum visitor can potentially experience their digital heritage in the physical sense in the museum, then explore further through the web, visualize this heritage in the round (3D on the web), take that 3D artifact into the augmented reality domain (the real world) and explore it further using various multimodal interfaces.",
"title": ""
},
{
"docid": "94e386866e9e934d53405921963e483a",
"text": "Population pharmacokinetics is the study of pharmacokinetics at the population level, in which data from all individuals in a population are evaluated simultaneously using a nonlinear mixedeffects model. “Nonlinear” refers to the fact that the dependent variable (e.g., concentration) is nonlinearly related to the model parameters and independent variable(s). “Mixed-effects” refers to the parameterization: parameters that do not vary across individuals are referred to as “fixed effects,” parameters that vary across individuals are called “random effects.” There are five major aspects to developing a population pharmacokinetic model: (i) data, (ii) structural model, (iii) statistical model, (iv) covariate models, and (v) modeling software. Structural models describe the typical concentration time course within the population. Statistical models account for “unexplainable” (random) variability in concentration within the population (e.g., betweensubject, between-occasion, residual, etc.). Covariate models explain variability predicted by subject characteristics (covariates). Nonlinear mixed effects modeling software brings data and models together, implementing an estimation method for finding parameters for the structural, statistical, and covariate models that describe the data.1 A primary goal of most population pharmacokinetic modeling evaluations is finding population pharmacokinetic parameters and sources of variability in a population. Other goals include relating observed concentrations to administered doses through identification of predictive covariates in a target population. Population pharmacokinetics does not require “rich” data (many observations/subject), as required for analysis of single-subject data, nor is there a need for structured sampling time schedules. “Sparse” data (few observations/ subject), or a combination, can be used. We examine the fundamentals of five key aspects of population pharmacokinetic modeling together with methods for comparing and evaluating population pharmacokinetic models. DATA CONSIDERATIONS",
"title": ""
}
] |
scidocsrr
|
ecd2a5d47504b3f494baa4534d21fb0b
|
The relationship of school breakfast to psychosocial and academic functioning: cross-sectional and longitudinal observations in an inner-city school sample.
|
[
{
"docid": "bceaae2a05d673bc576f365d6a0254ee",
"text": "OBJECTIVE\nResults from a recent series of surveys from 9 states and the District of Columbia by the Community Childhood Hunger Identification Project (CCHIP) provide an estimate that 4 million American children experience prolonged periodic food insufficiency and hunger each year, 8% of the children under the age of 12 in this country. The same studies show that an additional 10 million children are at risk for hunger. The current study examined the relationship between hunger as defined by the CCHIP measure (food insufficiency attributable to constrained resources) and variables reflecting the psychosocial functioning of low-income, school-aged children.\n\n\nMETHODS\nThe study group included 328 parents and children from a CCHIP study of families with at least 1 child under the age of 12 years living in the city of Pittsburgh and the surrounding Allegheny County. A two-stage area probability sampling design with standard cluster techniques was used. All parents whose child was between the ages of 6 and 12 years at the time of interview were asked to complete a Pediatric Symptom Checklist, a brief parent-report questionnaire that assesses children's emotional and behavioral symptoms. Hunger status was defined by parent responses to the standard 8 food-insufficiency questions from the CCHIP survey that are used to classify households and children as \"hungry,\" \"at-risk for hunger,\" or \"not hungry.\"\n\n\nRESULTS\nIn an area probability sample of low-income families, those defined as hungry on the CCHIP measure were significantly more likely to have clinical levels of psychosocial dysfunction on the Pediatric Symptom Checklist than children defined as at-risk for hunger or not hungry. Analysis of individual items and factor scores on the Pediatric Symptom Checklist showed that virtually all behavioral, emotional, and academic problems were more prevalent in hungry children, but that aggression and anxiety had the strongest degree of association with experiences of hunger.\n\n\nCONCLUSION\nChildren from families that report multiple experiences of food insufficiency and hunger are more likely to show behavioral, emotional, and academic problems on a standardized measure of psychosocial dysfunction than children from the same low-income communities whose families do not report experiences of hunger. Although causality cannot be determined from a cross-sectional design, the strength of these findings suggests the importance of greater awareness on the part of health care providers and public health officials of the role of food insufficiency and hunger in the lives of poor children.",
"title": ""
}
] |
[
{
"docid": "f4cb0eb6d39c57779cf9aa7b13abef14",
"text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.",
"title": ""
},
{
"docid": "459dc066960760010b1157e4929d09f8",
"text": "A dynamical extension that makes possible the integration of a kinematic controller and a torque controller for nonholonomic mobile robots is presented. A combined kinematic/torque control law is developed using backstepping, and asymptotic stability is guaranteed by Lyapunov theory. Moreover, this control algorithm can be applied to the three basic nonholonomic navigation problems: tracking a reference trajectory, path following, and stabilization about a desired posture. The result is a general structure for controlling a mobile robot that can accommodate different control techniques, ranging from a conventional computed-torque controller, when all dynamics are known, to robust-adaptive controllers if this is not the case. A robust-adaptive controller based on neural networks (NNs) is proposed in this work. The NN controller can deal with unmodeled bounded disturbances and/or unstructured unmodeled dynamics in the vehicle. On-line NN weight tuning algorithms that do not require off-line learning yet guarantee small tracking errors and bounded control signals are utilized. 1997 John Wiley & Sons, Inc.",
"title": ""
},
{
"docid": "1566ef8b6b9c21a22d9259e0ff21c71b",
"text": "Reusable model design becomes desirable with the rapid expansion of machine learning applications. In this paper, we focus on the reusability of pre-trained deep convolutional models. Specifically, different from treating pre-trained models as feature extractors, we reveal more treasures beneath convolutional layers, i.e., the convolutional activations could act as a detector for the common object in the image colocalization problem. We propose a simple but effective method, named Deep Descriptor Transforming (DDT), for evaluating the correlations of descriptors and then obtaining the category-consistent regions, which can accurately locate the common object in a set of images. Empirical studies validate the effectiveness of the proposed DDT method. On benchmark image co-localization datasets, DDT consistently outperforms existing state-of-the-art methods by a large margin. Moreover, DDT also demonstrates good generalization ability for unseen categories and robustness for dealing with noisy data.",
"title": ""
},
{
"docid": "cbf5c00229e9ac591183f4877006cf2b",
"text": "OBJECTIVE\nTo statistically analyze the long-term results of alar base reduction after rhinoplasty.\n\n\nMETHODS\nAmong a consecutive series of 100 rhinoplasty cases, 19 patients required alar base reduction. The mean (SD) follow-up time was 11 (9) months (range, 2 months to 3 years). Using preoperative and postoperative photographs, comparisons were made of the change in the base width (width of base between left and right alar-facial junctions), flare width (width on base view between points of widest alar flare), base height (distance from base to nasal tip on base view), nostril height (distance from base to anterior edge of nostril), and vertical flare (vertical distance from base to the widest alar flare). Notching at the nasal sill was recorded as none, minimal, mild, moderate, and severe.\n\n\nRESULTS\nChanges in vertical flare (P<.05) and nostril height (P<.05) were the only significant differences seen in the patients who required alar reduction. No significant change was seen in base width (P=.92), flare width (P=.41), or base height (P=.22). No notching was noted.\n\n\nCONCLUSIONS\nIt would have been preferable to study patients undergoing alar reduction without concomitant rhinoplasty procedures, but this approach is not practical. To our knowledge, the present study represents the most extensive attempt in the literature to characterize and quantify the postoperative effects of alar base reduction.",
"title": ""
},
{
"docid": "1a5f95c2ea414bce2c74a82782d369f2",
"text": "BACKGROUND\nEctodermal dysplasia (ED) represents a disorder group characterised by abnormal development of the ectodermal derivatives. Removable partial dentures (RPD), complete dentures (CD) or overdentures (OD) are most often the treatment of choice for young affected patients. Prosthetic intervention is of utmost importance in the management of ED patients, as it resolves problems associated with functional, aesthetic, and psychological issues, and improves a patient's quality of life. However, few studies present the principles and guidelines that can assist in the decision-making process of the most appropriate removable prosthesis. The purpose of this study was to suggest a simple treatment decision-making algorithm for selecting an effective and individualised rehabilitative treatment plan, considering different parameters.\n\n\nCASE REPORTS\nThe cases and treatment of two young ED patients are described and each one was treated with either RPDs or ODs.\n\n\nFOLLOW-UP\nPeriodic recalls were employed to manage problems, and monitor the changes associated with occlusion and fit of the prostheses in relation to each patient's growth. Both patients were followed up for more than 2 years and reported significant improvement in their appearance, masticatory function, and social behaviour as a result of the prosthetic rehabilitation.\n\n\nCONCLUSION\nThe main factors guiding the decision process towards the choice of an RPD or an OD are the presence of posterior natural teeth, facial aesthetics, lip support, number and size of existing natural teeth, and the occlusal vertical dimension.",
"title": ""
},
{
"docid": "c0636509e222bf844b76cf88e696a4bd",
"text": "The emerging Spin Torque Transfer memory (STT-RAM) is a promising candidate for future on-chip caches due to STT-RAM's high density, low leakage, long endurance and high access speed. However, one of the major challenges of STT-RAM is its high write current, which is disadvantageous when used as an on-chip cache since the dynamic power generated is too high.\n In this paper, we propose Early Write Termination (EWT), a novel technique to significantly reduce write energy with no performance penalty. EWT can be implemented with low complexity and low energy overhead. Our evaluation shows that up to 80% of write energy reduction can be achieved through EWT, resulting 33% less total energy consumption, and 34% reduction in ED2. These results indicate that EWT is an effective and practical scheme to improve the energy efficiency of a STT-RAM cache.",
"title": ""
},
{
"docid": "925d0a4b4b061816c540f2408ea593d1",
"text": "It is believed that eye movements in free-viewing of natural scenes are directed by both bottom-up visual saliency and top-down visual factors. In this paper, we propose a novel computational framework to simultaneously learn these two types of visual features from raw image data using a multiresolution convolutional neural network (Mr-CNN) for predicting eye fixations. The Mr-CNN is directly trained from image regions centered on fixation and non-fixation locations over multiple resolutions, using raw image pixels as inputs and eye fixation attributes as labels. Diverse top-down visual features can be learned in higher layers. Meanwhile bottom-up visual saliency can also be inferred via combining information over multiple resolutions. Finally, optimal integration of bottom-up and top-down cues can be learned in the last logistic regression layer to predict eye fixations. The proposed approach achieves state-of-the-art results over four publically available benchmark datasets, demonstrating the superiority of our work.",
"title": ""
},
{
"docid": "739788a91526e41ea8db63837b61135d",
"text": "Much work in Natural Language Processing (NLP) has been for resource-rich languages, making generalization to new, less-resourced languages challenging. We present two approaches for improving generalization to lowresourced languages by adapting continuous word representations using linguistically motivated subword units: phonemes, morphemes and graphemes. Our method requires neither parallel corpora nor bilingual dictionaries and provides a significant gain in performance over previous methods relying on these resources. We demonstrate the effectiveness of our approaches onNamedEntity Recognition for four languages, namely Uyghur, Turkish, Bengali and Hindi, of which Uyghur and Bengali are low resource languages, and also perform experiments on Machine Translation. Exploiting subwords with transfer learning gives us a boost of +15.2 NER F1 for Uyghur and +9.7 F1 for Bengali. We also show improvements in the monolingual setting where we achieve (avg.) +3 F1 and (avg.) +1.35 BLEU.",
"title": ""
},
{
"docid": "bc33f06340e652336ef2abb875937d5a",
"text": "WORKING PAPERS Examining the Impact of Contextual Ambiguity on Search Advertising Keyword Performance: A Topic Model Approach (with Abhishek, Vibhanshu and Beibei Li), Job Market Paper, invited for resubmission to Marketing Science. Substitution or Promotion? The Impact of Price Discounts on Cross-Channel Sales of Digital Movies (with Michael D. Smith, and Rahul Telang), conditionally accepted at the Journal of Retailing.",
"title": ""
},
{
"docid": "8442995acf05044fc74817802c99ea1a",
"text": "Fumaric acid is a platform chemical with many applications in bio-based chemical and polymer production. Fungal cell morphology is an important factor that affects fumaric acid production via fermentation. In the present study, pellet and dispersed mycelia morphology of Rhizopus arrhizus NRRL 2582 was analysed using image analysis software and the impact on fumaric acid production was evaluated. Batch experiments were carried out in shake flasks using glucose as carbon source. The highest fumaric acid yield of 0.84 g/g total sugars was achieved in the case of dispersed mycelia with a final fumaric acid concentration of 19.7 g/L. The fumaric acid production was also evaluated using a nutrient rich feedstock obtained from soybean cake, as substitute of the commercial nitrogen sources. Solid state fermentation was performed in order to produce proteolytic enzymes, which were utilised for soybean cake hydrolysis. Batch fermentations were conducted using 50 g/L glucose and soybean cake hydrolysate achieving up to 33 g/L fumaric acid concentration. To the best of our knowledge the influence of R. arrhizus morphology on fumaric acid production has not been reported previously. The results indicated that dispersed clumps were more effective in fumaric acid production than pellets and renewable resources could be alternatively valorised for the biotechnological production of platform chemicals.",
"title": ""
},
{
"docid": "19f3720d0077783554b6d9cd71e95c48",
"text": "Radical prostatectomy is performed on approximately 40% of men with organ-confined prostate cancer. Pathologic information obtained from the prostatectomy specimen provides important prognostic information and guides recommendations for adjuvant treatment. The current pathology protocol in most centers involves primarily qualitative assessment. In this paper, we describe and evaluate our system for automatic prostate cancer detection and grading on hematoxylin & eosin-stained tissue images. Our approach is intended to address the dual challenges of large data size and the need for high-level tissue information about the locations and grades of tumors. Our system uses two stages of AdaBoost-based classification. The first provides high-level tissue component labeling of a superpixel image partitioning. The second uses the tissue component labeling to provide a classification of cancer versus noncancer, and low-grade versus high-grade cancer. We evaluated our system using 991 sub-images extracted from digital pathology images of 50 whole-mount tissue sections from 15 prostatectomy patients. We measured accuracies of 90% and 85% for the cancer versus noncancer and high-grade versus low-grade classification tasks, respectively. This system represents a first step toward automated cancer quantification on prostate digital histopathology imaging, which could pave the way for more accurately informed postprostatectomy patient care.",
"title": ""
},
{
"docid": "244c9b12647c64da1eff784942f06591",
"text": "Level set methods have been widely used in image processing and computer vision. In conventional level set formulations, the level set function typically develops irregularities during its evolution, which may cause numerical errors and eventually destroy the stability of the evolution. Therefore, a numerical remedy, called reinitialization, is typically applied to periodically replace the degraded level set function with a signed distance function. However, the practice of reinitialization not only raises serious problems as when and how it should be performed, but also affects numerical accuracy in an undesirable way. This paper proposes a new variational level set formulation in which the regularity of the level set function is intrinsically maintained during the level set evolution. The level set evolution is derived as the gradient flow that minimizes an energy functional with a distance regularization term and an external energy that drives the motion of the zero level set toward desired locations. The distance regularization term is defined with a potential function such that the derived level set evolution has a unique forward-and-backward (FAB) diffusion effect, which is able to maintain a desired shape of the level set function, particularly a signed distance profile near the zero level set. This yields a new type of level set evolution called distance regularized level set evolution (DRLSE). The distance regularization effect eliminates the need for reinitialization and thereby avoids its induced numerical errors. In contrast to complicated implementations of conventional level set formulations, a simpler and more efficient finite difference scheme can be used to implement the DRLSE formulation. DRLSE also allows the use of more general and efficient initialization of the level set function. In its numerical implementation, relatively large time steps can be used in the finite difference scheme to reduce the number of iterations, while ensuring sufficient numerical accuracy. To demonstrate the effectiveness of the DRLSE formulation, we apply it to an edge-based active contour model for image segmentation, and provide a simple narrowband implementation to greatly reduce computational cost.",
"title": ""
},
{
"docid": "04d70ccc95828205f8073767acd20374",
"text": "In this paper, we study the problem of detecting and tracking multiple objects of various types in outdoor urban traffic scenes. This problem is especially challenging due to the large variation of road user appearances. To handle that variation, our system uses background subtraction to detect moving objects. In order to build the object tracks, an object model is built and updated through time inside a state machine using feature points and spatial information. When an occlusion occurs between multiple objects, the positions of feature points at previous observations are used to estimate the positions and sizes of the individual occluded objects. Our Urban Tracker algorithm is validated on four outdoor urban videos involving mixed traffic that includes pedestrians, cars, large vehicles, etc. Our method compares favorably to a current state of the art feature-based tracker for urban traffic scenes on pedestrians and mixed traffic.",
"title": ""
},
{
"docid": "569ae662a71c3484e7c53e6cf8dda50d",
"text": "Node mobility and end-to-end disconnections in Delay Tolerant Networks (DTNs) greatly impair the effectiveness of data dissemination. Although social-based approaches can be used to address the problem, most existing solutions only focus on forwarding data to a single destination. In this paper, we are the first to study multicast in DTNs from the social network perspective. We study multicast in DTNs with single and multiple data items, investigate the essential difference between multicast and unicast in DTNs, and formulate relay selections for multicast as a unified knapsack problem by exploiting node centrality and social community structures. Extensive trace-driven simulations show that our approach has similar delivery ratio and delay to the Epidemic routing, but can significantly reduce the data forwarding cost measured by the number of relays used.",
"title": ""
},
{
"docid": "66c5004ce3c7e361f335afa18ab67cea",
"text": "Research suggests that police work is among the most stressful occupations in the world and officers typically suffer a variety of physiological, psychological, and behavioral effects and symptoms. Officers operating under severe or chronic stress are likely to be at greater risk of error, accidents, and overreactions that can compromise their performance, jeopardize public safety, and pose significant liability costs to the organization. Therefore, this study explored the nature and degree of physiological activation typically experienced of officers on the job and the impact of the Coherence Advantage resilience and performance enhancement training on a group of police officers from Santa Clara County, California. Areas assessed included vitality, emotional well-being, stress coping and interpersonal skills, work performance, workplace effectiveness and climate, family relationships, and physiological recalibration following acute stressors. Physiological measurements were obtained to determine the real-time cardiovascular impact of acutely stressful situations encountered in highly realistic simulated police calls used in police training and to identify officers at increased risk of future health challenges. The resilience-building training improved officers' capacity to recognize and self-regulate their responses to stressors in both work and personal contexts. Officers experienced reductions in stress, negative emotions, depression, and increased peacefulness and vitality as compared to a control group. Improvements in family relationships, more effective communication and cooperation within work teams, and enhanced work performance also were noted. Heart rate and blood pressure measurements taken during simulated police call scenarios showed that acutely stressful circumstances typically encountered on the job result in a tremendous degree of physiological activation, from which it takes a considerable amount of time to recover. Autonomic nervous system assessment based on heart rate variability (HRV) analysis of 24-hour electrocardiogram (ECG) recordings revealed that 11% of the officers were at higher risk for sudden cardiac death and other serious health challenges. This is more than twice the percentage typically found in the general population and is consistent with epidemiological data indicating that police officers have more than twice the average incidence of cardiovascular-related disease. The data suggest that training in resilience building and self-regulation skills could significantly benefit police organizations by improving judgment and decision making and decreasing the frequency of onthe-job driving accidents and the use of excessive force in high-stress situations. Potential outcomes include fewer citizens' complaints, fewer lawsuits, decreased organizational liabilities, and increased community safety. Finally, this study highlights the value of 24-hour HRV analysis as a useful screening tool to identify officers who are at increased risk, so that efforts can be made to reverse or prevent the onset of disease in these individuals.",
"title": ""
},
{
"docid": "7d604a9daef9b10c31ac74ecc60bd690",
"text": "Sentiment analysis is treated as a classification task as it classifies the orientation of a text into either positive or negative. This paper describes experimental results that applied Support Vector Machine (SVM) on benchmark datasets to train a sentiment classifier. N-grams and different weighting scheme were used to extract the most classical features. It also explores Chi-Square weight features to select informative features for the classification. Experimental analysis reveals that by using Chi-Square feature selection may provide significant improvement on classification accuracy.",
"title": ""
},
{
"docid": "5213ed67780b194a609220677b9c1dd4",
"text": "Cardiovascular diseases (CVD) are initiated by endothelial dysfunction and resultant expression of adhesion molecules for inflammatory cells. Inflammatory cells secrete cytokines/chemokines and growth factors and promote CVD. Additionally, vascular cells themselves produce and secrete several factors, some of which can be useful for the early diagnosis and evaluation of disease severity of CVD. Among vascular cells, abundant vascular smooth muscle cells (VSMCs) secrete a variety of humoral factors that affect vascular functions in an autocrine/paracrine manner. Among these factors, we reported that CyPA (cyclophilin A) is secreted mainly from VSMCs in response to Rho-kinase activation and excessive reactive oxygen species (ROS). Additionally, extracellular CyPA augments ROS production, damages vascular functions, and promotes CVD. Importantly, a recent study in ATVB demonstrated that ambient air pollution increases serum levels of inflammatory cytokines. Moreover, Bell et al reported an association of air pollution exposure with high-density lipoprotein (HDL) cholesterol and particle number. In a large, multiethnic cohort study of men and women free of prevalent clinical CVD, they found that higher concentrations of PM2.5 over a 3-month time period was associated with lower HDL particle number, and higher annual concentrations of black carbon were associated with lower HDL cholesterol. Together with the authors’ previous work on biomarkers of oxidative stress, they provided evidence for potential pathways that may explain the link between air pollution exposure and acute cardiovascular events. The objective of this review is to highlight the novel research in the field of biomarkers for CVD.",
"title": ""
},
{
"docid": "0b024671e04090051292b5e76a4690ae",
"text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.",
"title": ""
},
{
"docid": "428069c804c035e028e9047d6c1f70f7",
"text": "We present a co-designed scheduling framework and platform architecture that together support compositional scheduling of real-time systems. The architecture is built on the Xen virtualization platform, and relies on compositional scheduling theory that uses periodic resource models as component interfaces. We implement resource models as periodic servers and consider enhancements to periodic server design that significantly improve response times of tasks and resource utilization in the system while preserving theoretical schedulability results. We present an extensive evaluation of our implementation using workloads from an avionics case study as well as synthetic ones.",
"title": ""
},
{
"docid": "b93c2265b4420b62ffbbf1a5e6d773f8",
"text": "The objective of this paper is to instill in students motivation and interest for what they are studying a little bit further of the theory they learn in classroom. Sometimes students prefer more interactive classes and want to know why the material given in class is helpful in their career. Do we have a solution for this? Autonomous Robotic Vehicle (ARV) projects can be the solution to this problem. This type of project are interdisciplinary and offer students varieties of challenges, while involving different areas of interest like programming, design, assembling and testing. One of the best ARV project is the Micromouse. The Micromouse is a small robot that solves mazes. The basic idea is that the student makes his own Micromouse from scratch using knowledge acquired from different classes and the research done. This project also helps the student to develop teamwork skills and creativity to complete the different challenges and objectives that appear when building a Micromouse. The student learns the importance of working with students from other engineering concentrations, which allows him to experience how a career in engineer will be.",
"title": ""
}
] |
scidocsrr
|
cf806ade5666d4287f695ad5a30324a9
|
A Wideband Circularly Polarized Magnetoelectric Dipole Antenna
|
[
{
"docid": "99a6907dd03efee6b8bdddfc9a8d5920",
"text": "A wideband circularly polarized antenna with a single feed is achieved by adjusting the shapes and dimensions of a linearly polarized magneto-electric (ME) dipole antenna. Experimentally, an antenna prototype operating at around 2 GHz exhibits a wide impedance bandwidth of 73.3% for SWR ≤ 2 and 46.6% for SWR ≤ 1.5, a 3-dB axial ratio bandwidth of 47.7% and an antenna gain of 6.8 ± 1.8 dBic. Based on the design operating at lower microwave frequencies, a millimeter-wave antenna is designed on a dielectric substrate. The antenna has a wide impedance bandwidth (SWR ≤ 2) of 56.7% covering 38.5 to 69 GHz and a 3-dB axial ratio bandwidth of 41% covering 45.8 to 69.4 GHz, over which the antenna boresight gain varies from 5 to 9.9 dBic. This validates that the design can be realized on a single-layer printed circuit board.",
"title": ""
}
] |
[
{
"docid": "271f6291ab2c97b5e561cf06b9131f9d",
"text": "Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of a purely attention based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effect of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets. Our model achieves competitive results across all of these. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single model accuracy of 79.4% in terms of the top-1 and 94.0% in terms of the top-5 accuracy on the validation set.",
"title": ""
},
{
"docid": "4f747c2fb562be4608d1f97ead32e00b",
"text": "With rapid development of the Internet, the web contents become huge. Most of the websites are publicly available and anyone can access the contents everywhere such as workplace, home and even schools. Nevertheless, not all the web contents are appropriate for all users, especially children. An example of these contents is pornography images which should be restricted to certain age group. Besides, these images are not safe for work (NSFW) in which employees should not be seen accessing such contents. Recently, convolutional neural networks have been successfully applied to many computer vision problems. Inspired by these successes, we propose a mixture of convolutional neural networks for adult content recognition. Unlike other works, our method is formulated on a weighted sum of multiple deep neural network models. The weights of each CNN models are expressed as a linear regression problem learnt using Ordinary Least Squares (OLS). Experimental results demonstrate that the proposed model outperforms both single CNN model and the average sum of CNN models in adult content recognition.",
"title": ""
},
{
"docid": "054b5be56ae07c58b846cf59667734fc",
"text": "Optical motion capture systems have become a widely used technology in various fields, such as augmented reality, robotics, movie production, etc. Such systems use a large number of cameras to triangulate the position of optical markers. The marker positions are estimated with high accuracy. However, especially when tracking articulated bodies, a fraction of the markers in each timestep is missing from the reconstruction. In this paper, we propose to use a neural network approach to learn how human motion is temporally and spatially correlated, and reconstruct missing markers positions through this model. We experiment with two different models, one LSTM-based and one time-window-based. Both methods produce state-of-the-art results, while working online, as opposed to most of the alternative methods, which require the complete sequence to be known. The implementation is publicly available at https://github.com/Svitozar/NN-for-Missing-Marker-Reconstruction.",
"title": ""
},
{
"docid": "e5bc5b3e4f2833bc8e49a360c050cd8a",
"text": "Research into workplace bullying has continued to grow and mature since emerging from Scandinavian investigations into school bullying in the late 1970s. Research communities now exist well beyond Scandinavia, including Europe, the UK, Australia, Asia and the USA. While the terms ‘harassment’ and ‘mobbing’ are often used to describe bullying behaviour, ‘workplace bullying’ tends to be the most consistently used term throughout the research community. In the past two decades especially, researchers have made considerable advances in developing conceptual clarity, frameworks and theoretical explanations that help explain and address this very complex, but often oversimplified and misunderstood, phenomenon. Indeed, as a phenomenon, workplace bullying is now better understood with reasonably consistent research findings in relation to its prevalence; its negative effects on targets, bystanders and organizational effectiveness; and some of its likely antecedents. However, as highlighted in this review, many challenges remain, particularly in relation to its theoretical foundations and efficacy of prevention and management strategies. Drawing on Affective Events Theory, this review advances understanding through the development of a new conceptual model and analysis of its interrelated components, which explain the dynamic and complex nature of workplace bullying and emphasize current and future debates. Gaps in the literature and future research directions are discussed, including the vexing problem of developing an agreed definition of workplace bullying among the research community, the emergence of cyberbullying, the importance of bystanders in addressing the phenomenon and the use of both formal and informal approaches to prevention and intervention.",
"title": ""
},
{
"docid": "142b1f178ade5b7ff554eae9cad27f69",
"text": "It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g. , artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.",
"title": ""
},
{
"docid": "a90909570959ade87dd46186a0990a9e",
"text": "DNA methylation is among the best studied epigenetic modifications and is essential to mammalian development. Although the methylation status of most CpG dinucleotides in the genome is stably propagated through mitosis, improvements to methods for measuring methylation have identified numerous regions in which it is dynamically regulated. In this Review, we discuss key concepts in the function of DNA methylation in mammals, stemming from more than two decades of research, including many recent studies that have elucidated when and where DNA methylation has a regulatory role in the genome. We include insights from early development, embryonic stem cells and adult lineages, particularly haematopoiesis, to highlight the general features of this modification as it participates in both global and localized epigenetic regulation.",
"title": ""
},
{
"docid": "3a35170197fb05c59609fb0aa8344bcb",
"text": "Stevioside, an ent-kaurene type of diterpenoid glycoside, is a natural sweetener extracted from leaves of Stevia rebaudiana (Bertoni) Bertoni. Stevioside and a few related compounds are regarded as the most common active principles of the plant. Such phytochemicals have not only been established as non-caloric sweeteners, but reported to exhibit some other pharmacological activities also. In this article, natural distribution of stevioside and related compounds, their structural features, plausible biosynthetic pathways along with an insight into the structure-sweetness relationship are presented. Besides, the pharmacokinetics, wide-range of pharmacological potentials, safety evaluation and clinical trials of these ent-kaurene glycosides are revisited.",
"title": ""
},
{
"docid": "9d9e9dc553245c57c892f393f6275ab5",
"text": "Dimensionality reduction methods are preprocessing techniques used for coping with high dimensionality. They have the aim of projecting the original data set of dimensionality N, without information loss, onto a lower M-dimensional submanifold. Since the value of M is unknown, techniques that allow knowing in advance the value of M, called intrinsic dimension (ID), are quite useful. The aim of the paper is to review state-of-the-art of the methods of ID estimation, underlining the recent advances and the open problems. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "dc2cd7e4da0254e940618011f511590f",
"text": "Privacy considerations of individuals becomes more and more popular issue in recommender systems due to the increasing need for protecting confidential data. Even though users of recommender systems enjoy with personalized productions, they behave timidly about sharing their private data due to the some privacy concerns about price discrimination, unsolicited marketing, govern-ment surveillance and etc. Thus, preserving confidential data of users while producing accurate predictions is one of the extremely important directions of the researches about recommendation systems. In this paper, we gather the most known studies and recently published ones about producing accurately predictions without endangering privacy in order to guide researchers interested with privacy concerns in recommender systems. Moreover, we give a brief discussion about utilized methods.",
"title": ""
},
{
"docid": "b35849046b0f660453637bd237c4a39b",
"text": "A new type of transmission-line resonator is proposed. It is composed of a finite-long straight nonreciprocal phase-shift composite right/left handed transmission line, and both terminals are open or shorted. On the contrary to conventional transmission-line resonators or traveling-wave resonators, the resonant frequency does not depend on the total size of the resonators, but on the configuration of the unit cells. In addition, field profiles on the resonator are analogous to those of traveling-wave resonators, i.e., uniform magnitude distribution and linearly space-varying phase distribution along the resonator. The spatial gradient of the phase distribution is determined by the nonreciprocal phase constants of the transmission lines. The proposed resonator is specifically designed and fabricated by employing a normally magnetized ferrite microstrip line. The fundamental operations of the proposed resonator are demonstrated.",
"title": ""
},
{
"docid": "626408161aa06de1cb50253094d4d8f8",
"text": "In this communication, a corporate stacked microstrip and substrate integrated waveguide (SIW) feeding structure is reported to be used to broaden the impedance bandwidth of a <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> patch array antenna. The proposed array antenna is based on a multilayer printed circuit board structure containing two dielectric substrates and four copper cladding layers. The radiating elements, which consist of slim rectangular patches with surrounding U-shaped parasitic patches, are located on the top layer. Every four radiation elements are grouped together as a <inline-formula> <tex-math notation=\"LaTeX\">$2 \\times 2$ </tex-math></inline-formula> subarray and fed by a microstrip power divider on the next copper layer through metalized blind vias. Four such subarrays are corporate-fed by an SIW feeding network underneath. The design process and analysis of the array antenna are discussed. A prototype of the proposed array antenna is fabricated and measured, showing a good agreement between the simulation and measurement, thus validating the correctness of the design. The measured results indicate that the proposed array antenna exhibits a wide <inline-formula> <tex-math notation=\"LaTeX\">$\\vert \\text {S}_{11}\\vert < -10$ </tex-math></inline-formula> dB bandwidth of 17.7%, i.e., 25.3–30.2 GHz, a peak gain of 16.4 dBi, a high radiation efficiency above 80%, and a good orthogonal polarization discrimination of higher than 30 dB. In addition, the use of low-profile substrate in the SIW feeding network makes this array antenna easier to be integrated directly with millimeter-wave front-end integrated circuits. The demonstrated array antenna can be a good candidate for various <italic>Ka</italic>-band wireless applications, such as 5G, satellite communications and so on.",
"title": ""
},
{
"docid": "2bb21a94c803c74ad6c286c7a04b8c5b",
"text": "Recently, social media, such as Twitter, has been successfully used as a proxy to gauge the impacts of disasters in real time. However, most previous analyses of social media during disaster response focus on the magnitude and location of social media discussion. In this work, we explore the impact that disasters have on the underlying sentiment of social media streams. During disasters, people may assume negative sentiments discussing lives lost and property damage, other people may assume encouraging responses to inspire and spread hope. Our goal is to explore the underlying trends in positive and negative sentiment with respect to disasters and geographically related sentiment. In this paper, we propose a novel visual analytics framework for sentiment visualization of geo-located Twitter data. The proposed framework consists of two components, sentiment modeling and geographic visualization. In particular, we provide an entropy-based metric to model sentiment contained in social media data. The extracted sentiment is further integrated into a visualization framework to explore the uncertainty of public opinion. We explored Ebola Twitter dataset to show how visual analytics techniques and sentiment modeling can reveal interesting patterns in disaster scenarios.",
"title": ""
},
{
"docid": "7a9d7be4126069f007bafc4588633fc9",
"text": "Chinese word segmentation and part-ofspeech tagging (S&T) are fundamental steps for more advanced Chinese language processing tasks. Recently, it has attracted more and more research interests to exploit heterogeneous annotation corpora for Chinese S&T. In this paper, we propose a unified model for Chinese S&T with heterogeneous annotation corpora. We first automatically construct a loose and uncertain mapping between two representative heterogeneous corpora, Penn Chinese Treebank (CTB) and PKU’s People’s Daily (PPD). Then we regard the Chinese S&T with heterogeneous corpora as two “related” tasks and train our model on two heterogeneous corpora simultaneously. Experiments show that our method can boost the performances of both of the heterogeneous corpora by using the shared information, and achieves significant improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "ef9ffa8778c02afd75636b7722208134",
"text": "concepts (principles, formulas, rules, etc.) are illustrated with concrete, specific examples (Horton, 2001) All units/modules in the Web course include an overview and a summary (Chua, 2002; Miller, 2002; Weston et al., 1999) Learners are made aware of learning objective for each unit/module of the Web course. (Chua, 2002; Miller, 2002) Media use Graphics and multimedia assist in noticing and learning critical content rather than merely entertaining or possibly distracting learners (Horton, 2001; Nielsen, 2000) Graphics (illustrations, photographs, graphs, diagrams, etc.) are used appropriately; for example, to communicate visual and spatial concept (Keeker, 1997; Nielsen, 2000) Media (text, images, animations, etc.) included have a strong connection to the objectives and design of the courses. (Horton, 2001; Keeker, 1997; Nielsen, 2000) Learning strategies design The Web course provides opportunities and support for learning through interaction with others through discussion or other collaborative activities (Quinn, 1996) It is clear to the learner what is to be accomplished and what will be gained from its use (Quinn, 1996) The Web course is designed with activities that are both individual and group based (Johnson & Aragon, 2002) The Web course provides the learners opportunities to reflect (Johnson & Aragon, 2002) (Continued) D o w n l o a d e d B y : [ Z a h a r i a s , P a n a g i o t i s ] A t : 1 5 : 1 0 1 4 J a n u a r y 2 0 0 9 98 Zaharias and Poylymenakou Appendix (Continued) Usability items used in questionnaire Literature Instructional feedback The Web course provides learners with opportunities to access extended feedback from instructors, experts, peers, or others through E-mail or other Internet communications (Reeves et al., 2002) Feedback given at any specific time is tailored to the content being studied, problem being solved, or task being completed by the learner (Reeves et al., 2002) Instructional assessment The Web course provides opportunities for selfassessments that advance learner achievement (Reeves et al., 2002) Wherever appropriate, higher order assessments are (analysis, synthesis, and evaluation) provided rather than lower order assessments (recall and recognition) (Reeves et al., 2002) Posttests and other assessments adequately measure accomplishment of the learning objectives (Horton, 2000; Quinn, 1996) Learner guidance and support The online help or documentation is written clearly (Reeves et al., 2002) The online help is screen or context specific (Albion, 1999; Reeves et al., 2002) The Web course offers tools (taking notes, job aids, recourses, glossary, etc.) that support learning (Chua, 2002) The Web course provides support for learner activities to allow working within existing competence while encountering meaningful chunks of knowledge (Quinn, 1996) D o w n l o a d e d B y : [ Z a h a r i a s , P a n a g i o t i s ] A t : 1 5 : 1 0 1 4 J a n u a r y 2 0 0 9",
"title": ""
},
{
"docid": "aff504d1c2149d13718595fd3e745eb0",
"text": "Figure 1 illustrates a typical example of a prediction problem: given some noisy observations of a dependent variable at certain values of the independent variable , what is our best estimate of the dependent variable at a new value, ? If we expect the underlying function to be linear, and can make some assumptions about the input data, we might use a least-squares method to fit a straight line (linear regression). Moreover, if we suspect may also be quadratic, cubic, or even nonpolynomial, we can use the principles of model selection to choose among the various possibilities. Gaussian process regression (GPR) is an even finer approach than this. Rather than claiming relates to some specific models (e.g. ), a Gaussian process can represent obliquely, but rigorously, by letting the data ‘speak’ more clearly for themselves. GPR is still a form of supervised learning, but the training data are harnessed in a subtler way. As such, GPR is a less ‘parametric’ tool. However, it’s not completely free-form, and if we’re unwilling to make even basic assumptions about , then more general techniques should be considered, including those underpinned by the principle of maximum entropy; Chapter 6 of Sivia and Skilling (2006) offers an introduction.",
"title": ""
},
{
"docid": "3b17e2f76f3a3b287423a2d6f4e47125",
"text": "Computer and video games are a maturing medium and industry and have caught the attention of scholars across a variety of disciplines. By and large, computer and video games have been ignored by educators. When educators have discussed games, they have focused on the social consequences of game play, ignoring important educational potentials of gaming. This paper examines the history of games in educational research, and argues that the cognitive potential of games have been largely ignored by educators. Contemporary developments in gaming, particularly interactive stories, digital authoring tools, and collaborative worlds, suggest powerful new opportunities for educational media. VIDEO GAMES IN AMERICAN CULTURE Now just over thirty years old, video games have quickly become one of the most pervasive, profitable, and influential forms of entertainment in the United States and across the world. In 2001, computer and console game software and hardware exceeded $6.35 billion in the United States, and an estimated $19 billion worldwide (IDSA 2002). To contextualize these figures, in October 23, 2001, the Sony PlayStation system debuted in the US, netting well over $150 million in twenty-four hours, over six times the opening day revenues of Star Wars: The Phantom Menace, which netted $25 million. Twenty-five million Americans or, one out of every four households, owns a Sony Playstation (Sony Corporate website 2000). Not only are video games a powerful force not only in the entertainment and economic sector, but in the American cultural landscape, as well. 1 There may be distinctions between the technical features and cultural significance of computer and video games that are worth exploring when discussing games in education, but for the purposes of this paper, they will both be treated as “video games” to simplify matters. Nintendo’s Pokemon, which, like Pac-Man and The Mario Brothers, before it, has evolved from a video game into a cultural phenomena. In the past few years, Pokemon has spun off a television show, a full feature film, a line of toys, and a series of trading cards, making these little creatures giants in youth culture. Given the pervasive influence of video games on American culture, many educators have taken an interest in what the effects these games have on players, and how some of the motivating aspects of video games might be harnessed to facilitate learning. Other educators fear that video games might foster violence, aggression, negative imagery of women, or social isolation (Provenzo 1991). Other educators see video games as powerfully motivating digital environments and study video games in order to determine how motivational components of popular video games might be integrated into instructional design (Bowman 1982; Bracey 1992; Driskell & Dwyer 1984). Conducted during the age of Nintendo, these studies are few in number and somewhat outdated, given recent advancements in game theory and game design. These studies also tend to focus on deriving principles from traditional action (or “twitch”) games, missing important design knowledge embodied in adventure, sports, strategy, puzzle, or role-playing games (RPGs), as well as hybrid games which combine multiple genres (Appleman & Goldsworthy 1999; Saltzman 1999). Likewise, they fail to consider the social contexts of gaming and more recent developments in gaming, such as the Internet. In this paper, I argue that video games are such a popular and influential medium for a combination of many factors. 
Primarily, however, video games elicit powerful emotional reactions in their players, such as fear, power, aggression, wonder, or joy. Video game designers create these emotions by a balancing a number of game components, such as character traits, game rewards, obstacles, game narrative, competition with other humans, and opportunities for collaboration with other players. Understanding the dynamics behind these design considerations might be useful for instructional technologists who design interactive digital learning environments. Further, video game playing occurs in rich socio-cultural contexts, bringing friends and family together, serving as an outlet for adolescents, and providing the “raw material” for youth culture. Finally, video game research reveals many patterns in how humans interact with technology that become increasingly important to instructional technologists as they become designers of digital environments. Through studying video games, instructional technologists can better understand the impact of technology on individuals and communities, how to support digital environments by situating them in rich social contexts. LEARNERS AS “PAC-MAN” PLAYERS: USING VIDEO GAMES TO UNDERSTAND ENGAGEMENT Since the widespread popularity of PacMan in the early 1980s, some educators have wondered if “the magic of ‘Pac-Man‘cannot be bottled and unleashed in the classroom to enhance student involvement, enjoyment, and commitment” (Bowman 1982, p. 14). A few educators have undertaken this project, defining elements of game design that might be used to make learning environments more engaging (Bowman 1982; Bracey 1992; Driskell & Dwyer 1984; Malone 1981). Through a series of observations, surveys, and interviews, Malone (1981) generated three main elements that “Make video games fun”: Challenge, fantasy, and curiosity. Malone uses these concepts to outline several guidelines for creating enjoyable education programs. Malone (1981) argues that educational programs should have: • clear goals that students find meaningful, • multiple goal structures and scoring to give students feedback on their progress, • multiple difficulty levels to adjust the game difficulty to learner skill, • random elements of surprise, • an emotionally appealing fantasy and metaphor that is related to game skills. In a case study of Super Mario Brothers 2, Provenzo (1991) finds this framework very powerful in explaining why Super Mario Brothers 2 has become one of the most successful video games of all time. Bowman’s checklist provides educators an excellent starting point for understanding game design and analyzing educational games, but at best, it only suggests an underlying theoretical model of why",
"title": ""
},
{
"docid": "3267c5a5f4ab9602d6f69c3d9d137c96",
"text": "This paper briefly discusses the measurement on soil moisture distribution using Electrical Capacitance Tomography (ECT) technique. ECT sensor with 12 electrodes was used for visualization measurement of permittivity distribution. ECT sensor was calibrated using low and high permittivity material i.e. dry sand and saturated soils (sand and clay) respectively. The measurements obtained were recorded and further analyzed by using Linear Back Projection (LBP) image reconstruction. Preliminary result shows that there is a positive correlation with increasing water volume.",
"title": ""
},
{
"docid": "202ea9f6556678c7239f00b11e02d9f0",
"text": "With the popularity of social network-based services, the unprecedented growth of mobile date traffic has brought a heavy burden on the traditional cellular networks. Device-to-device (D2D) communication, as a promising solution to overcome wireless spectrum crisis, can enable fast content delivery based on user activities in social networks. In this paper, we address the content delivery problem related to optimization of peer discovery and resource allocation by combining both the social and physical layer information in D2D underlay networks. The social relationship, which is modeled as the probability of selecting similar contents and estimated by using the Bayesian nonparametric models, is used as a weight to characterize the impact of social features on D2D pair formation and content sharing. Next, we propose a 3-D iterative matching algorithm to maximize the sum rate of D2D pairs weighted by the intensity of social relationships while guaranteeing the quality of service requirements of both cellular and D2D links simultaneously. Moreover, we prove that the proposed algorithm converges to a stable matching and is weak Pareto optimal, and also provide the theoretical complexity. Simulation results show that the algorithm is able to achieve more than 90% of the optimum performance with a computation complexity 1000 times lower than the exhaustive matching algorithm. It is also demonstrated that the satisfaction performance of D2D receivers can be increased significantly by incorporating social relationships into the resource allocation design.",
"title": ""
},
{
"docid": "f85b08a0e3f38c1471b3c7f05e8a17ba",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. A state tracking module is primarily meant to act as support for a dialog policy but it can also be used as support for dialog corpus summarization and other kinds of information extraction from transcription of dialogs. From a probabilistic view, this is achieved by maintaining a posterior distribution over hidden dialog states composed, in the simplest case, of a set of context dependent variables. Once a dialog policy is defined, deterministic or learnt, it is in charge of selecting an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset that has been converted for the occasion in order to fit the relaxed assumption of a machine reading formulation where the true state is only provided at the very end of each dialog instead of providing the state updates at the utterance level. We show that the proposed tracker gives encouraging results. Finally, we propose to extend the DSTC-2 dataset with specific reasoning capabilities requirement like counting, list maintenance, yes-no question answering and indefinite knowledge management.",
"title": ""
},
{
"docid": "c628b3e194d311cfc1dede2471819242",
"text": "Background. Children with Down syndrome (DS) demonstrate vestibular, sensory, motor and perceptual impairments which manifests as decreased levels of balance, strength, and motor coordination. Together these issues may decrease functional ability leading to more sedentary lifestyles. Use of vestibular stimulation therapy has been attempted to assist in improving motor control and balance in this population. Objective. The objective of this study was to determine the effect of a vestibular stimulation exercise program on balance, coordination and agility in children with DS. Methods. Seventeen children with DS were recruited from two summer enrichment programs and were divided into two groups based on age (group 1: 9.9 yrs ±2.8; group 2: 18.4 yrs. ±1.7). Assessments were completed using BOT2 subtests for balance, bilateral and upper limb coordination, and agility prior to and after six weeks of twice weekly vestibular stimulation exercises. Results. Both groups showed improvement in upper limb coordination and agility, while group 2 demonstrated improvement in one of the balance subtests. Conclusion. These results suggest a vestibular stimulation exercise program could increase balance and agility in children with DS and possibly assist in increasing their functional ability.",
"title": ""
}
] |
scidocsrr
|
c1b701b7dc6934c59d5f2b46ec4acc8c
|
Detection of Unauthorized IoT Devices Using Machine Learning Techniques
|
[
{
"docid": "efe74721de3eda130957ce26435375a3",
"text": "Internet of Things (IoT) has been given a lot of emphasis since the 90s when it was first proposed as an idea of interconnecting different electronic devices through a variety of technologies. However, during the past decade IoT has rapidly been developed without appropriate consideration of the profound security goals and challenges involved. This study explores the security aims and goals of IoT and then provides a new classification of different types of attacks and countermeasures on security and privacy. It then discusses future security directions and challenges that need to be addressed to improve security concerns over such networks and aid in the wider adoption of IoT by masses.",
"title": ""
},
{
"docid": "4f7fbc3f313e68456e57a2d6d3c90cd0",
"text": "This survey paper describes a focused literature survey of machine learning (ML) and data mining (DM) methods for cyber analytics in support of intrusion detection. Short tutorial descriptions of each ML/DM method are provided. Based on the number of citations or the relevance of an emerging method, papers representing each method were identified, read, and summarized. Because data are so important in ML/DM approaches, some well-known cyber data sets used in ML/DM are described. The complexity of ML/DM algorithms is addressed, discussion of challenges for using ML/DM for cyber security is presented, and some recommendations on when to use a given method are provided.",
"title": ""
}
] |
[
{
"docid": "3ab61b4f5e59b15e98fd2979a84ba43c",
"text": "MIL-STD-1760, a Department of Defense Interface Standard, was developed to reduce the proliferation of interfaces between aircraft and their stores, and instead to promote interoperability between weapons and aircraft platforms. The original version of the MIL-STD-1760 defined a standardized electrical interface and connector that included both digital and analog databuses, a standardized message protocol (MIL-STD-1553), power, and discrete signals. In 2007, the latest version of MIL-STD-1760 was released. MIL-STD-1760E, employed the previously unused High Bandwidth 2 and High Bandwidth 4 pins of the standardized MIL-STD-1760 connector. These pins were utilized to carry a Fibre Channel based high speed digital databus, FC-AE-1553 which is an adaptation of MIL-STD-1553 for Fibre Channel. This paper provides a brief technical overview of the HS1760 databus and related standardized protocols and also provides an overview of the key technical considerations that must be accounted for when designing test and simulation equipment used to test various aircraft systems and stores which employ the MIL-STD-1760E interface.",
"title": ""
},
{
"docid": "7b6e811ea3f227c33755049355949eaf",
"text": "We revisit the task of learning a Euclidean metric from data. We approach this problem from first principles and formulate it as a surprisingly simple optimization problem. Indeed, our formulation even admits a closed form solution. This solution possesses several very attractive propertie s: (i) an innate geometric appeal through the Riemannian geometry of positive definite matrices; (ii) ease of interpretability; and (iii) computational speed several orders of magnitude faster tha n the widely used LMNN and ITML methods. Furthermore, on standard benchmark datasets, our closed-form solution consist ently attains higher classification accuracy.",
"title": ""
},
{
"docid": "4ea7482524661175e8268c15eb22a6ae",
"text": "We present a fully unsupervised, extractive text summarization system that leverages a submodularity framework introduced by past research. The framework allows summaries to be generated in a greedy way while preserving near-optimal performance guarantees. Our main contribution is the novel coverage reward term of the objective function optimized by the greedy algorithm. This component builds on the graph-of-words representation of text and the k-core decomposition algorithm to assign meaningful scores to words. We evaluate our approach on the AMI and ICSI meeting speech corpora, and on the DUC2001 news corpus. We reach state-of-the-art performance on all datasets. Results indicate that our method is particularly well-suited to the meeting domain.",
"title": ""
},
{
"docid": "7a005d66591330d6fdea5ffa8cb9020a",
"text": "First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information affect these impressions. In this work, we propose an approach to predict the first impressions people will have for a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions, as well as ambient information. After video modeling, visual features that represent facial expression and scene are combined and fed to Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the classification target is the ”Big Five” personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36% points below the top system in the competition.",
"title": ""
},
{
"docid": "3bcd7aaa3a3c8d19ba4d6edb6554dd85",
"text": "In order to achieve up to 1 Gb/s peak data rate in future IMT-Advanced mobile systems, carrier aggregation technology is introduced by the 3GPP to support very-high-data-rate transmissions over wide frequency bandwidths (e.g., up to 100 MHz) in its new LTE-Advanced standards. This article first gives a brief review of continuous and non-continuous CA techniques, followed by two data aggregation schemes in physical and medium access control layers. Some technical challenges for implementing CA technique in LTE-Advanced systems, with the requirements of backward compatibility to LTE systems, are highlighted and discussed. Possible technical solutions for the asymmetric CA problem, control signaling design, handover control, and guard band setting are reviewed. Simulation results show Doppler frequency shift has only limited impact on data transmission performance over wide frequency bands in a high-speed mobile environment when the component carriers are time synchronized. The frequency aliasing will generate much more interference between adjacent component carriers and therefore greatly degrades the bit error rate performance of downlink data transmissions.",
"title": ""
},
{
"docid": "b753eb752d4f87dbff82d77e8417f389",
"text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:",
"title": ""
},
{
"docid": "28efe3b5fe479a1e95029f122f5b62f3",
"text": "Most of the current metric learning methods are proposed for point-to-point distance (PPD) based classification. In many computer vision tasks, however, we need to measure the point-to-set distance (PSD) and even set-to-set distance (SSD) for classification. In this paper, we extend the PPD based Mahalanobis distance metric learning to PSD and SSD based ones, namely point-to-set distance metric learning (PSDML) and set-to-set distance metric learning (SSDML), and solve them under a unified optimization framework. First, we generate positive and negative sample pairs by computing the PSD and SSD between training samples. Then, we characterize each sample pair by its covariance matrix, and propose a covariance kernel based discriminative function. Finally, we tackle the PSDML and SSDML problems by using standard support vector machine solvers, making the metric learning very efficient for multiclass visual classification tasks. Experiments on gender classification, digit recognition, object categorization and face recognition show that the proposed metric learning methods can effectively enhance the performance of PSD and SSD based classification.",
"title": ""
},
{
"docid": "6e527b021720cc006ec18a996abf36b5",
"text": "Flow cytometry is a sophisticated instrument measuring multiple physical characteristics of a single cell such as size and granularity simultaneously as the cell flows in suspension through a measuring device. Its working depends on the light scattering features of the cells under investigation, which may be derived from dyes or monoclonal antibodies targeting either extracellular molecules located on the surface or intracellular molecules inside the cell. This approach makes flow cytometry a powerful tool for detailed analysis of complex populations in a short period of time. This review covers the general principles and selected applications of flow cytometry such as immunophenotyping of peripheral blood cells, analysis of apoptosis and detection of cytokines. Additionally, this report provides a basic understanding of flow cytometry technology essential for all users as well as the methods used to analyze and interpret the data. Moreover, recent progresses in flow cytometry have been discussed in order to give an opinion about the future importance of this technology.",
"title": ""
},
{
"docid": "1802e14988d1c5c1469859616b6441a2",
"text": "Twitter is a microblogging platform in which users can post status messages, called “tweets,” to their friends. It has provided an enormous dataset of the so-called sentiments, whose classification can take place through supervised learning. To build supervised learning models, classification algorithms require a set of representative labeled data. However, labeled data are usually difficult and expensive to obtain, which motivates the interest in semi-supervised learning. This type of learning uses unlabeled data to complement the information provided by the labeled data in the training process; therefore, it is particularly useful in applications including tweet sentiment analysis, where a huge quantity of unlabeled data is accessible. Semi-supervised learning for tweet sentiment analysis, although appealing, is relatively new. We provide a comprehensive survey of semi-supervised approaches applied to tweet classification. Such approaches consist of graph-based, wrapper-based, and topic-based methods. A comparative study of algorithms based on self-training, co-training, topic modeling, and distant supervision highlights their biases and sheds light on aspects that the practitioner should consider in real-world applications.",
"title": ""
},
{
"docid": "6096eaa19f2bffae6d6944a52259d47f",
"text": "Advances in social networking and communication technologies have witnessed an increasing number of applications where data is not only characterized by rich content information, but also connected with complex relationships representing social roles and dependencies between individuals. To enable knowledge discovery from such networked data, network representation learning (NRL) aims to learn vector representations for network nodes, such that off-the-shelf machine learning algorithms can be directly applied. To date, existing NRL methods either primarily focus on network structure or simply combine node content and topology for learning. We argue that in information networks, information is mainly originated from three sources: (1) homophily, (2) topology structure, and (3) node content. Homophily states social phenomenon where individuals sharing similar attributes (content) tend to be directly connected through local relational ties, while topology structure emphasizes more on global connections. To ensure effective network representation learning, we propose to augment three information sources into one learning objective function, so that the interplay roles between three parties are enforced by requiring the learned network representations (1) being consistent with node content and topology structure, and also (2) following the social homophily constraints in the learned space. Experiments on multi-class node classification demonstrate that the representations learned by the proposed method consistently outperform state-of-the-art NRL methods, especially for very sparsely labeled networks.",
"title": ""
},
{
"docid": "815c78ad6478885fca1c1a2fe0804597",
"text": "Purpose – The electronic social media such as Twitter, Facebook, MySpace, etc. have become a major form of communication, and the expression of attitudes and opinions, for the general public. Recently, they have also become a source of data for market researchers. This paper aims to provide a critical look at the advantages and limitations of such an approach to understanding brand perceptions and attitudes in the market place. Although the social media provide a wealth of data for automated content analyses, this review questions the validity and reliability of this research approach, and concludes that social media monitoring (SMM) is a poor substitute for in-depth qualitative research which has many advantages and benefits. Design/methodology/approach – The paper presents a detailed, systematic comparison of various research approaches. These include well-established methods and recent inventions which are in use to explore and understand consumer behaviour and attitudes. Particular attention is given to the analysis of spontaneous consumer attitudes as expressed through the social media and also in qualitative research interviews. Findings – This analysis concludes that there are three critical features which differentiate qualitative research (as practised in IDIs and group discussions) from SMM. These are: the direct, interactive dialogue or conversation between consumers and researchers; the facility to “listen” and attend to the (sometimes unspoken) underlying narrative which connects consumers’ needs and aspirations, personal goals and driving forces to behaviour and brand choice; and the dynamic, interactive characteristics of the interview that achieve a meeting of minds to produce a shared understanding. Philosophically, it is this “conversation” that gives qualitative research its validity and authenticity which makes it superior to SMM. Originality/value – This review questions the validity and reliability of the SMM, and concludes that it is a poor substitute for in-depth qualitative research which has many advantages and benefits.",
"title": ""
},
{
"docid": "1f5be98d48b3129492eb99ead40574d3",
"text": "As more features have been added to mobile devices, it has become necessary to integrate more DC-DC converters into the power-management IC. Consequently, there is a growing need for an area-efficient and simple controller design for DC-DC converters. A simple hysteretic control without any additional component for compensation is a very attractive solution because it is not only cost-effective, but also immediately responds to load change. However, a conventional current-mode hysteretic controller with low-ESR output capacitor, shown in Fig. 12.1.1, has an inherent trade-off between transient response and RC network time constant for emulating inductor current [1]. For instance, at a switching frequency less than several MHz, which is widely used in industry, the RC network occupies a relatively large die area because the capacitance CSEN of the network can be up to 100pF at a switching frequency of 1 MHz. Another issue with the hysteretic converter is its variable switching frequency, which leads to difficulty in designing an EMI filter [2]. To overcome these limitations, several state-of-the-art hysteretic converters have been reported that provide fast transient response and fixed switching frequency [2-5]. However, they suffer from noise due to the differentiator for amplifying ripple voltage [2], or need a large external or internal capacitor [3, 4], or generate high switching loss in the converter [5]. In this paper, we present an area-efficient quasi-current-mode hysteretic buck converter with fixed switching frequency. By employing a quasi inductor current emulator (QICE) with reset operation, the controller only uses a total internal capacitance of 3pF and provides fast transient response.",
"title": ""
},
{
"docid": "811454b2fae8bb4720d703f2dc1b1fe0",
"text": "Cybersecurity risks and malware threats are becoming increasingly dangerous and common. Despite the severity of the problem, there has been few NLP efforts focused on tackling cybersecurity. In this paper, we discuss the construction of a new database for annotated malware texts. An annotation framework is introduced based around the MAEC vocabulary for defining malware characteristics, along with a database consisting of 39 annotated APT reports with a total of 6,819 sentences. We also use the database to construct models that can potentially help cybersecurity researchers in their data collection and analytics efforts.",
"title": ""
},
{
"docid": "682b3d97bdadd988b0a21d5dd6774fbc",
"text": "WTF (\"Who to Follow\") is Twitter's user recommendation service, which is responsible for creating millions of connections daily between users based on shared interests, common connections, and other related factors. This paper provides an architectural overview and shares lessons we learned in building and running the service over the past few years. Particularly noteworthy was our design decision to process the entire Twitter graph in memory on a single server, which significantly reduced architectural complexity and allowed us to develop and deploy the service in only a few months. At the core of our architecture is Cassovary, an open-source in-memory graph processing engine we built from scratch for WTF. Besides powering Twitter's user recommendations, Cassovary is also used for search, discovery, promoted products, and other services as well. We describe and evaluate a few graph recommendation algorithms implemented in Cassovary, including a novel approach based on a combination of random walks and SALSA. Looking into the future, we revisit the design of our architecture and comment on its limitations, which are presently being addressed in a second-generation system under development.",
"title": ""
},
{
"docid": "45719c2127204b4eb169fccd2af0bf82",
"text": "A face hallucination algorithm is proposed to generate high-resolution images from JPEG compressed low-resolution inputs by decomposing a deblocked face image into structural regions such as facial components and non-structural regions like the background. For structural regions, landmarks are used to retrieve adequate high-resolution component exemplars in a large dataset based on the estimated head pose and illumination condition. For non-structural regions, an efficient generic super resolution algorithm is applied to generate high-resolution counterparts. Two sets of gradient maps extracted from these two regions are combined to guide an optimization process of generating the hallucination image. Numerous experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art hallucination methods on JPEG compressed face images with different poses, expressions, and illumination conditions.",
"title": ""
},
{
"docid": "a5214112059506a67f031d98a4e6f04f",
"text": "Accurate segmentation of cervical cells in Pap smear images is an important task for automatic identification of pre-cancerous changes in the uterine cervix. One of the major segmentation challenges is the overlapping of cytoplasm, which was less addressed by previous studies. In this paper, we propose a learning-based method to tackle the overlapping issue with robust shape priors by segmenting individual cell in Pap smear images. Specifically, we first define the problem as a discrete labeling task for multiple cells with a suitable cost function. We then use the coarse labeling result to initialize our dynamic multiple-template deformation model for further boundary refinement on each cell. Multiple-scale deep convolutional networks are adopted to learn the diverse cell appearance features. Also, we incorporate high level shape information to guide segmentation where the cells boundary is noisy or lost due to touching and overlapping cells. We evaluate the proposed algorithm on two different datasets, and our comparative experiments demonstrate the promising performance of the proposed method in terms of segmentation accuracy.",
"title": ""
},
{
"docid": "1638f42ee75131459f659ece60f46874",
"text": "Cloud computing is a rapidly evolving information technology (IT) phenomenon. Rather than procure, deploy and manage a physical IT infrastructure to host their software applications, organizations are increasingly deploying their infrastructure into remote, virtualized environments, often hosted and managed by third parties. This development has significant implications for digital forensic investigators, equipment vendors, law enforcement, as well as corporate compliance and audit departments (among others). Much of digital forensic practice assumes careful control and management of IT assets (particularly data storage) during the conduct of an investigation. This paper summarises the key aspects of cloud computing and analyses how established digital forensic procedures will be invalidated in this new environment. Several new research challenges addressing this changing context are also identified and discussed.",
"title": ""
},
{
"docid": "d735d6fb4fb274cc4531b6a04f3b6f9c",
"text": "Classification of electrocardiogram (ECG) signals plays an important role in clinical diagnosis of heart disease. This paper proposes the design of an efficient system for classification of the normal beat (N), ventricular ectopic beat (V), supraventricular ectopic beat (S), fusion beat (F), and unknown beat (Q) using a mixture of features. In this paper, two different feature extraction methods are proposed for classification of ECG beats: (i) S-transform based features along with temporal features and (ii) mixture of ST and WT based features along with temporal features. The extracted feature set is independently classified using multilayer perceptron neural network (MLPNN). The performances are evaluated on several normal and abnormal ECG signals from 44 recordings of the MIT-BIH arrhythmia database. In this work, the performances of three feature extraction techniques with MLP-NN classifier are compared using five classes of ECG beat recommended by AAMI (Association for the Advancement of Medical Instrumentation) standards. The average sensitivity performances of the proposed feature extraction technique for N, S, F, V, and Q are 95.70%, 78.05%, 49.60%, 89.68%, and 33.89%, respectively. The experimental results demonstrate that the proposed feature extraction techniques show better performances compared to other existing features extraction techniques.",
"title": ""
},
{
"docid": "d49260a42c4d800963ca8779cf50f1ee",
"text": "Autoencoders learn data representations (codes) in such a way that the input is reproduced at the output of the network. However, it is not always clear what kind of properties of the input data need to be captured by the codes. Kernel machines have experienced great success by operating via inner-products in a theoretically well-defined reproducing kernel Hilbert space, hence capturing topological properties of input data. In this paper, we enhance the autoencoder’s ability to learn effective data representations by aligning inner products between codes with respect to a kernel matrix. By doing so, the proposed kernelized autoencoder allows learning similarity-preserving embeddings of input data, where the notion of similarity is explicitly controlled by the user and encoded in a positive semi-definite kernel matrix. Experiments are performed for evaluating both reconstruction and kernel alignment performance in classification tasks and visualization of high-dimensional data. Additionally, we show that our method is capable to emulate kernel principal component analysis on a denoising task, obtaining competitive results at a much lower computational cost.",
"title": ""
},
{
"docid": "8b2f1dc92084548108dc349f5d8f7ff1",
"text": "Although the amygdala's role in processing facial expressions of fear has been well established, its role in the processing of other emotions is unclear. In particular, evidence for the amygdala's involvement in processing expressions of happiness and sadness remains controversial. To clarify this issue, we constructed a series of morphed stimuli whose emotional expression varied gradually from very faint to more pronounced. Five morphs each of sadness and happiness, as well as neutral faces, were shown to 27 subjects with unilateral amygdala damage and 5 with complete bilateral amygdala damage, whose data were compared to those from 12 braindamaged and 26 normal controls. Subjects were asked to rate the intensity and to label the stimuli. Subjects with unilateral amygdala damage performed very comparably to controls. By contrast, subjects with bilateral amygdala damage showed a specific impairment in rating sad faces, but performed normally in rating happy faces. Furthermore, subjects with right unilateral amygdala damage performed somewhat worse than subjects with left unilateral amygdala damage. The findings suggest that the amygdala's role in processing of emotional facial expressions encompasses multiple negatively valenced emotions, including fear and sadness.",
"title": ""
}
] |
scidocsrr
|
965f4f79141cd11b1677ec7035b3794d
|
In vitro antibacterial activity of some plant essential oils
|
[
{
"docid": "7dcba854d1f138ab157a1b24176c2245",
"text": "Essential oils distilled from members of the genus Lavandula have been used both cosmetically and therapeutically for centuries with the most commonly used species being L. angustifolia, L. latifolia, L. stoechas and L. x intermedia. Although there is considerable anecdotal information about the biological activity of these oils much of this has not been substantiated by scientific or clinical evidence. Among the claims made for lavender oil are that is it antibacterial, antifungal, carminative (smooth muscle relaxing), sedative, antidepressive and effective for burns and insect bites. In this review we detail the current state of knowledge about the effect of lavender oils on psychological and physiological parameters and its use as an antimicrobial agent. Although the data are still inconclusive and often controversial, there does seem to be both scientific and clinical data that support the traditional uses of lavender. However, methodological and oil identification problems have severely hampered the evaluation of the therapeutic significance of much of the research on Lavandula spp. These issues need to be resolved before we have a true picture of the biological activities of lavender essential oil.",
"title": ""
}
] |
[
{
"docid": "2ec0db3840965993e857b75bd87a43b7",
"text": "Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited understanding of this trade-off theoretically or numerically. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution.\n In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor—in particular, real pixels have nonuniform angular sensitivity, responding more to light along the optical axis rather than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows to compare them in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.",
"title": ""
},
{
"docid": "ac1ad9c7d3812560a1bb9bd63e248031",
"text": "Neuroarchitecture uses neuroscientific tools to better understand architectural design and its impact on human perception and subjective experience. The form or shape of the built environment is fundamental to architectural design, but not many studies have shown the impact of different forms on the inhabitants' emotions. This study investigated the neurophysiological correlates of different interior forms on the perceivers' affective state and the accompanying brain activity. To understand the impact of naturalistic three-dimensional (3D) architectural forms, it is essential to perceive forms from different perspectives. We computed clusters of form features extracted from pictures of residential interiors and constructed exemplary 3D room models based on and representing different formal clusters. To investigate human brain activity during 3D perception of architectural spaces, we used a mobile brain/body imaging (MoBI) approach recording the electroencephalogram (EEG) of participants while they naturally walk through different interior forms in virtual reality (VR). The results revealed a strong impact of curvature geometries on activity in the anterior cingulate cortex (ACC). Theta band activity in ACC correlated with specific feature types (rs (14) = 0.525, p = 0.037) and geometry (rs (14) = -0.579, p = 0.019), providing evidence for a role of this structure in processing architectural features beyond their emotional impact. The posterior cingulate cortex and the occipital lobe were involved in the perception of different room perspectives during the stroll through the rooms. This study sheds new light on the use of mobile EEG and VR in architectural studies and provides the opportunity to study human brain dynamics in participants that actively explore and realistically experience architectural spaces.",
"title": ""
},
{
"docid": "60697a4e8dd7d13147482a0992ee1862",
"text": "Static analysis of JavaScript has proven useful for a variety of purposes, including optimization, error checking, security auditing, program refactoring, and more. We propose a technique called type refinement that can improve the precision of such static analyses for JavaScript without any discernible performance impact. Refinement is a known technique that uses the conditions in branch guards to refine the analysis information propagated along each branch path. The key insight of this paper is to recognize that JavaScript semantics include many implicit conditional checks on types, and that performing type refinement on these implicit checks provides significant benefit for analysis precision.\n To demonstrate the effectiveness of type refinement, we implement a static analysis tool for reporting potential type-errors in JavaScript programs. We provide an extensive empirical evaluation of type refinement using a benchmark suite containing a variety of JavaScript application domains, ranging from the standard performance benchmark suites (Sunspider and Octane), to open-source JavaScript applications, to machine-generated JavaScript via Emscripten. We show that type refinement can significantly improve analysis precision by up to 86% without affecting the performance of the analysis.",
"title": ""
},
{
"docid": "653a2299cd8bc5cfb48e660390632911",
"text": "Recent studies indicate that several Toll-like receptors (TLRs) are implicated in recognizing viral structures and instigating immune responses against viral infections. The aim of this study is to examine the expression of TLRs and proinflammatory cytokines in viral skin diseases such as verruca vulgaris (VV) and molluscum contagiosum (MC). Reverse transcription-polymerase chain reaction and immunostaining of skin samples were performed to determine the expression of specific antiviral and proinflammatory cytokines as well as 5 TLRs (TLR2, 3, 4, 7, and 9). In normal human skin, TLR2, 4, and 7 mRNA was constitutively expressed, whereas little TLR3 and 9 mRNA was detected. Compared to normal skin (NS), TLR3 and 9 mRNA was clearly expressed in VV and MC specimens. Likewise, immunohistochemistry indicated that keratinocytes in NS constitutively expressed TLR2, 4, and 7; however, TLR3 was rarely detected and TLR9 was only weakly expressed, whereas 5 TLRs were all strongly expressed on the epidermal keratinocytes of VV and MC lesions. In addition, the mRNA expression of IFN-beta and TNF-alpha was upregulated in the VV and MC samples. Immunohistochemistry indicated that IFN-beta and TNF-alpha were predominantly localized in the granular layer in the VV lesions and adjacent to the MC bodies. Our results indicated that VV and MC skin lesions expressed TLR3 and 9 in addition to IFN-beta and TNF-alpha. These viral-induced proinflammatory cytokines may play a pivotal role in cutaneous innate immune responses.",
"title": ""
},
{
"docid": "1783f837b61013391f3ff4f03ac6742e",
"text": "Nowadays, many methods have been applied for data transmission of MWD system. Magnetic induction is one of the alternative technique. In this paper, detailed discussion on magnetic induction communication system is provided. The optimal coil configuration is obtained by theoretical analysis and software simulations. Based on this coil arrangement, communication characteristics of path loss and bit error rate are derived.",
"title": ""
},
{
"docid": "d6039a3f998b33c08b07696dfb1c2ca9",
"text": "In this paper, we propose a platform surveillance monitoring system using image processing technology for passenger safety in railway station. The proposed system monitors almost entire length of the track line in the platform by using multiple cameras, and determines in real-time whether a human or dangerous obstacle is in the preset monitoring area by using image processing technology. According to the experimental results, we verity system performance in real condition. Detection of train state and object is conducted robustly by using proposed image processing algorithm. Moreover, to deal with the accident immediately, the system provides local station, central control room and train with the video information and alarm message.",
"title": ""
},
{
"docid": "626c274978a575cd06831370a6590722",
"text": "The honeypot has emerged as an effective tool to provide insights into new attacks and exploitation trends. However, a single honeypot or multiple independently operated honeypots only provide limited local views of network attacks. Coordinated deployment of honeypots in different network domains not only provides broader views, but also create opportunities of early network anomaly detection, attack correlation, and global network status inference. Unfortunately, coordinated honeypot operation require close collaboration and uniform security expertise across participating network domains. The conflict between distributed presence and uniform management poses a major challenge in honeypot deployment and operation. To address this challenge, we present Collapsar, a virtual machine-based architecture for network attack capture and detention. A Collapsar center hosts and manages a large number of high-interaction virtual honeypots in a local dedicated network. To attackers, these honeypots appear as real systems in their respective production networks. Decentralized logical presence of honeypots provides a wide diverse view of network attacks, while the centralized operation enables dedicated administration and convenient event correlation, eliminating the need for honeypot expertise in every production network domain. Collapsar realizes the traditional honeyfarm vision as well as our new reverse honeyfarm vision, where honeypots act as vulnerable clients exploited by real-world malicious servers. We present the design, implementation, and evaluation of a Collapsar prototype. Our experiments with a number of real-world attacks demonstrate the effectiveness and practicality of Collapsar. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "7aef73379f97b69e6559d9db3955637d",
"text": "The emergence and proliferation of electronic health record (EHR) systems has incrementally resulted in large volumes of clinical free text documents available across healthcare networks. The huge amount of data inspires research and development focused on novel clinical natural language processing (NLP) solutions to optimize clinical care and improve patient outcomes. In recent years, deep learning techniques have demonstrated superior performance over traditional machine learning (ML) techniques for various general-domain NLP tasks e.g. language modeling, parts-of-speech (POS) tagging, named entity recognition, paraphrase identification, sentiment analysis etc. Clinical documents pose unique challenges compared to general-domain text due to widespread use of acronyms and non-standard clinical jargons by healthcare providers, inconsistent document structure and organization, and requirement for rigorous de-identification and anonymization to ensure patient data privacy. This tutorial chapter will present an overview of how deep learning techniques can be applied to solve NLP tasks in general, followed by a literature survey of existing deep learning algorithms applied to clinical NLP problems. Finally, we include a description of various deep learning-driven clinical NLP applications developed at the Artificial Intelligence (AI) lab in Philips Research in recent years such as diagnostic inferencing from unstructured clinical narratives, relevant biomedical article retrieval based on clinical case scenarios, clinical paraphrase generation, adverse drug event (ADE) detection from social media, and medical image caption generation. Sadid A. Hasan Artificial Intelligence Lab, Philips Research North America, Cambridge, MA, USA. e-mail: sadid.hasan@philips.com Oladimeji Farri Artificial Intelligence Lab, Philips Research North America, Cambridge, MA, USA. e-mail: dimeji.farri@philips.com",
"title": ""
},
{
"docid": "060e518af9a250c1e6a3abf49555754f",
"text": "The deep learning community has proposed optimizations spanning hardware, software, and learning theory to improve the computational performance of deep learning workloads. While some of these optimizations perform the same operations faster (e.g., switching from a NVIDIA K80 to P100), many modify the semantics of the training procedure (e.g., large minibatch training, reduced precision), which can impact a model’s generalization ability. Due to a lack of standard evaluation criteria that considers these trade-offs, it has become increasingly difficult to compare these different advances. To address this shortcoming, DAWNBENCH and the upcoming MLPERF benchmarks use time-to-accuracy as the primary metric for evaluation, with the accuracy threshold set close to state-of-the-art and measured on a held-out dataset not used in training; the goal is to train to this accuracy threshold as fast as possible. In DAWNBENCH, the winning entries improved time-to-accuracy on ImageNet by two orders of magnitude over the seed entries. Despite this progress, it is unclear how sensitive time-to-accuracy is to the chosen threshold as well as the variance between independent training runs, and how well models optimized for time-to-accuracy generalize. In this paper, we provide evidence to suggest that time-to-accuracy has a low coefficient of variance and that the models tuned for it generalize nearly as well as pre-trained models. We additionally analyze the winning entries to understand the source of these speedups, and give recommendations for future benchmarking efforts.",
"title": ""
},
{
"docid": "47866c8eb518f962213e3a2d8c3ab8d3",
"text": "With the increasing fears of the impacts of the high penetration rates of Photovoltaic (PV) systems, a technical study about their effects on the power quality metrics of the utility grid is required. Since such study requires a complete modeling of the PV system in an electromagnetic transient software environment, PSCAD was chosen. This paper investigates a grid-tied PV system that is prepared in PSCAD. The model consists of PV array, DC link capacitor, DC-DC buck converter, three phase six-pulse inverter, AC inductive filter, transformer and a utility grid equivalent model. The paper starts with investigating the tasks of the different blocks of the grid-tied PV system model. It also investigates the effect of variable atmospheric conditions (irradiation and temperature) on the performance of the different components in the model. DC-DC converter and inverter in this model use PWM and SPWM switching techniques, respectively. Finally, total harmonic distortion (THD) analysis on the inverter output current at PCC will be applied and the obtained THD values will be compared with the limits specified by the regulating standards such as IEEE Std 519-1992.",
"title": ""
},
{
"docid": "ee96b4c7d15008f4b8831ecf2d337b1d",
"text": "This paper proposes the identification of regions of interest in biospeckle patterns using unsupervised neural networks of the type Self-Organizing Maps. Segmented images are obtained from the acquisition and processing of laser speckle sequences. The dynamic speckle is a phenomenon that occurs when a beam of coherent light illuminates a sample in which there is some type of activity, not visible, which results in a variable pattern over time. In this particular case the method is applied to the evaluation of bacterial chemotaxis. Image stacks provided by a set of experiments are processed to extract features of the intensity dynamics. A Self-Organizing Map is trained and its cells are colored according to a criterion of similarity. During the recall stage the features of patterns belonging to a new biospeckle sample impact on the map, generating a new image using the color of the map cells impacted by the sample patterns. It is considered that this method has shown better performance to identify regions of interest than those that use a single descriptor. To test the method a chemotaxis assay experiment was performed, where regions were differentiated according to the bacterial motility within the sample.",
"title": ""
},
{
"docid": "cbd52c9d8473a81b92fcdd740326613f",
"text": "Optimizing decisions has become a vital factor for companies. In order to be able to evaluate beforehand the impact of a decision, managers need reliable previsional systems. Though data warehouses enable analysis of past data, they are not capable of giving anticipations of future trends. What-if analysis fills this gap by enabling users to simulate and inspect the behavior of a complex system under some given hypotheses. A crucial issue in the design of what-if applications is to find an adequate formalism to conceptually express the underlying simulation model. In this paper the authors report on how, within the framework of a comprehensive design methodology, this can be accomplished by extending UML 2 with a set of stereotypes. Their proposal is centered on the use of activity diagrams enriched with object flows, aimed at expressing functional, dynamic, and static aspects in an integrated fashion. The paper is completed by examples taken from a real case study in the commercial area. DOI: 10.4018/jdwm.2009080702 IGI PUBLISHING This paper appears in the publication, International Journal of Data Warehousing and Mining, Volume 5, Issue 4 edited by David Taniar © 2009, IGI Global 701 E. Chocolate Avenue, Hershey PA 17033-1240, USA Tel: 717/533-8845; Fax 717/533-8661; URL-http://www.igi-global.com ITJ 5290 International Journal of Data Warehousing and Mining, 5(4), 24-43, October-December 2009 25 Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. The BI pyramid demonstrates that data warehouses, that have been playing a lead role within BI platforms in supporting the decision process over the last decade, are no more than the starting point for the application of more advanced techniques that aim at building a bridge to the real decision-making process. This is because data warehouses are aimed at enabling analysis of past data, but they are not capable of giving anticipations of future trends. Indeed, in order to be able to evaluate beforehand the impact of a strategic or tactical move, decision makers need reliable previsional systems. So, almost at the top of the BI pyramid, what-if analysis comes into play. What-if analysis is a data-intensive simulation whose goal is to inspect the behavior of a complex system (i.e., the enterprise business or a part of it) under some given hypotheses called scenarios. More pragmatically, what-if analysis measures how changes in a set of independent variables impact on a set of dependent variables with reference to a simulation model offering a simplified representation of the business, designed to display significant features of the business and tuned according to the historical enterprise data (Kellern et al., 1999). Example 1: A simple example of what-if query in the marketing domain is: How would my profits change if I run a 3×2 (pay 2, take 3) promotion for one week on all audio products on sale? Answering this query requires a simulation model to be built. This model, that must be capable of expressing the complex relationships between the business variables that determine the impact of promotions on product sales, is then run against the historical sale data in order to determine a reliable forecast for future sales. 
Among the killer applications for what-if analysis, it is worth mentioning profitability analysis in commerce, hazard analysis in finance, promotion analysis in marketing, and effectiveness analysis in production planning (Rizzi, 2009b). Less traditional, yet interesting applications described in the literature are urban and regional planning supported by spatial databases, index selection in relational databases, and ETL maintenance in data warehousing systems. Surprisingly, though a few commercial tools are already capable of performing forecasting and what-if analysis, very few attempts have been made so far outside the simulation community to address methodological and modeling issues in this field (Golfarelli et al., 2006). On the other hand, facing a what-if project without the support of a design methodology is very time-consuming, and does not adequately protect designers and customers against the risk of failure. Figure 1. The business intelligence pyramid 18 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/simulation-modeling-businessintelligence/37403?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Library Science, Information Studies, and Education, InfoSci-Select, InfoSci-Knowledge Discovery, Information Management, and Storage eJournal Collection, InfoSci-Surveillance, Security, and Defense eJournal Collection, InfoSci-Journal Disciplines Engineering, Natural, and Physical Science, InfoSci-Journal Disciplines Computer Science, Security, and Information Technology, InfoSciSelect. Recommend this product to your librarian: www.igi-global.com/e-resources/libraryrecommendation/?id=2",
"title": ""
},
{
"docid": "e59619d9faff43ac14d6c895e9318e08",
"text": "A dysmorphic girl (Fig. 1) was referred at the age of 19 months (late April 2008) because of slow growth (height, 75 cm; weight, 8.5 kg) and developmental delay. She had the classical phenotypic manifestation of type II recessive cutis laxa (Debre type) with redundant loose skin present from birth and especially evident on the limbs. She was small at birth. She was born as the first child of healthy parents of Arabian origin, who were second cousins. The girl was hypotonic and had an opened fontanel (1.5 cm in diameter); this allowed cranial ultrasonography, which revealed normal findings. Facially, the bridge of the nose was broad, the nose was prominent, and the eyes were wide set with down-slanting palpebral fissures. The palate was high and the right hip was dislocated. She could not stand or walk. She could sit, but could not crawl. She was a “bottom shuffler.” Her language was poorly developed and she said no meaningful words. Table 1 summarizes the presenting features.",
"title": ""
},
{
"docid": "e7b9c3ef571770788cd557f8c4843bcf",
"text": "Different efforts have been done to address the problem of information overload on the Internet. Recommender systems aim at directing users through this information space, toward the resources that best meet their needs and interests by extracting knowledge from the previous users’ interactions. In this paper, we propose an algorithm to solve the web page recommendation problem. In our algorithm, we use distributed learning automata to learn the behavior of previous users’ and recommend pages to the current user based on learned pattern. Our experiments on real data set show that the proposed algorithm performs better than the other algorithms that we compared to and, at the same time, it is less complex than other algorithms with respect to memory usage and computational cost too.",
"title": ""
},
{
"docid": "5b9baa6587bc70c17da2b0512545c268",
"text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed to significantly improving the accuracy of the credit scoring mode. In this paper, genetic programming (GP) is used to build credit scoring models. Two numerical examples will be employed here to compare the error rate to other credit scoring models including the ANN, decision trees, rough sets, and logistic regression. On the basis of the results, we can conclude that GP can provide better performance than other models. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a2f062482157efb491ca841cc68b7fd3",
"text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.",
"title": ""
},
{
"docid": "83f367455b24167207bf66d9a87e4ea4",
"text": "H would you design an exercise program to promote synthesis and healing of collagen if it were the primary tissue in lesion? Are isometric contractions the most appropriate type of exercise for post-surgical rehabilitation or guarded musculature? What dosage is ideal for mobilization exercises, and how can you localize your forces? These questions and more will be addressed. Exercise is not always emphasized in manual therapy schools of thought. Classically, the focus has been on passive treatments such as articulation, manipulation, and soft tissue massage, all of which aim to increase range of motion, inhibit pain and guarding, and normalize somato-visceral reflexes. Dosed therapeutic exercise is as valuable an approach to achieve these same goals as well as produce numerous additional benefits for your patient. This one-day course will review the history of medical exercise therapy, S.T.E.P principles, exercise physiology, histology, dosage concepts, and a variety of exercise progression examples. This will be an introductory course with several video examples and a few lab activities.",
"title": ""
},
{
"docid": "0b0043590ee170957353141ef8ca42b7",
"text": "The OWL Reasoner Evaluation competition is an annual competition (with an associated workshop) that pits OWL 2 compliant reasoners against each other on various standard reasoning tasks over naturally occurring problems. The 2015 competition was the third of its sort and had 14 reasoners competing in six tracks comprising three tasks (consistency, classification, and realisation) over two profiles (OWL 2 DL and EL). In this paper, we discuss the design, execution and results of the 2015 competition with particular attention to lessons learned for benchmarking, comparative experiments, and future competitions.",
"title": ""
},
{
"docid": "fd531eeed23d5cdde6d6751b37569474",
"text": "Paraphrases play an important role in the variety and complexity of natural language documents. However they adds to the difficulty of natural language processing. Here we describe a procedure for obtaining paraphrases from news article. A set of paraphrases can be useful for various kinds of applications. Articles derived from different newspapers can contain paraphrases if they report the same event of the same day. We exploit this feature by using Named Entity recognition. Our basic approach is based on the assumption that Named Entities are preserved across paraphrases. We applied our method to articles of two domains and obtained notable examples. Although this is our initial attempt to automatically extracting paraphrases from a corpus, the results are promising.",
"title": ""
},
{
"docid": "cc7b9d8bc0036b842f3c1f492998abc7",
"text": "This paper presents a new approach called Hierarchical Support Vector Machines (HSVM), to address multiclass problems. The method solves a series of maxcut problems to hierarchically and recursively partition the set of classes into two-subsets, till pure leaf nodes that have only one class label, are obtained. The SVM is applied at each internal node to construct the discriminant function for a binary metaclass classifier. Because maxcut unsupervised decomposition uses distance measures to investigate the natural class groupings. HSVM has a fast and intuitive SVM training process that requires little tuning and yields both high accuracy levels and good generalization. The HSVM method was applied to Hyperion hyperspectral data collected over the Okavango Delta of Botswana. Classification accuracies and generalization capability are compared to those achieved by the Best Basis Binary Hierarchical Classifier, a Random Forest CART binary decision tree classifier and Binary Hierarchical Support Vector Machines.",
"title": ""
}
] |
scidocsrr
|
a97663387de9207117c9791d7c92d191
|
Virtual CPU validation
|
[
{
"docid": "6c018b35bf2172f239b2620abab8fd2f",
"text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with cur- rent commodity hardware, NoHype is a significant advance in the security of cloud computing.",
"title": ""
}
] |
[
{
"docid": "31154ba893dbb2d7ae790b9d9d4aef0b",
"text": "In this paper, we consider the problem of “evil twin” attacks in wireless local area networks (WLANs). An evil twin is essentially a phishing (rogue) Wi-Fi access point (AP) that looks like a legitimate one (with the same SSID name). It is set up by an adversary, who can eavesdrop on wireless communications of users' Internet access. Existing evil twin detection solutions are mostly for wireless network administrators to verify whether a given AP is in an authorized list or not, instead of for a wireless client to detect whether a given AP is authentic or evil. Such administrator-side solutions are limited, expensive, and not available for many scenarios. For example, for traveling users who use wireless networks at airports, hotels, or cafes, they need to protect themselves from evil twin attacks (instead of relying on those wireless network providers, which typically may not provide strong security monitoring/management service). Thus, a lightweight and effective solution for these users is highly desired. In this work, we propose a novel user-side evil twin detection technique that outperforms traditional administrator-side detection methods in several aspects. Unlike previous approaches, our technique does not need a known authorized AP/host list, thus it is suitable for users to identify and avoid evil twins. Our technique does not strictly rely on training data of target wireless networks, nor depend on the types of wireless networks. We propose to exploit fundamental communication structures and properties of such evil twin attacks in wireless networks and to design new active, statistical and anomaly detection algorithms. Our preliminary evaluation in real-world widely deployed 802.11b and 802.11g wireless networks shows very promising results. We can identify evil twins with a very high detection rate while keeping a very low false positive rate.",
"title": ""
},
{
"docid": "1d03d6f7cd7ff9490dec240a36bf5f65",
"text": "Responses generated by neural conversational models tend to lack informativeness and diversity. We present a novel adversarial learning method, called Adversarial Information Maximization (AIM) model, to address these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, we explicitly optimize a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.",
"title": ""
},
{
"docid": "75e9253b7c6333db1aa3cef2ab364f99",
"text": "We used single-pulse transcranial magnetic stimulation of the left primary hand motor cortex and motor evoked potentials of the contralateral right abductor pollicis brevis to probe motor cortex excitability during a standard mental rotation task. Based on previous findings we tested the following hypotheses. (i) Is the hand motor cortex activated more strongly during mental rotation than during reading aloud or reading silently? The latter tasks have been shown to increase motor cortex excitability substantially in recent studies. (ii) Is the recruitment of the motor cortex for mental rotation specific for the judgement of rotated but not for nonrotated Shepard & Metzler figures? Surprisingly, motor cortex activation was higher during mental rotation than during verbal tasks. Moreover, we found strong motor cortex excitability during the mental rotation task but significantly weaker excitability during judgements of nonrotated figures. Hence, this study shows that the primary hand motor area is generally involved in mental rotation processes. These findings are discussed in the context of current theories of mental rotation, and a likely mechanism for the global excitability increase in the primary motor cortex during mental rotation is proposed.",
"title": ""
},
{
"docid": "1a78e17056cca09250c7cc5f81fb271b",
"text": "This paper presents a lightweight stereo vision-based driving lane detection and classification system to achieve the ego-car’s lateral positioning and forward collision warning to aid advanced driver assistance systems (ADAS). For lane detection, we design a self-adaptive traffic lanes model in Hough Space with a maximum likelihood angle and dynamic pole detection region of interests (ROIs), which is robust to road bumpiness, lane structure changing while the ego-car’s driving and interferential markings on the ground. What’s more, this model can be improved with geographic information system or electronic map to achieve more accurate results. Besides, the 3-D information acquired by stereo matching is used to generate an obstacle mask to reduce irrelevant objects’ interfere and detect forward collision distance. For lane classification, a convolutional neural network is trained by using manually labeled ROI from KITTI data set to classify the left/right-side line of host lane so that we can provide significant information for lane changing strategy making in ADAS. Quantitative experimental evaluation shows good true positive rate on lane detection and classification with a real-time (15Hz) working speed. Experimental results also demonstrate a certain level of system robustness on variation of the environment.",
"title": ""
},
{
"docid": "c4b5f77f7cce22bca020fe1aca8df8b4",
"text": "In the field of law there is an absolute need for summarizing the texts of court decisions in order to make the content of the cases easily accessible for legal professionals. During the SALOMON and MOSAIC projects we investigated the summarization and retrieval of legal cases. This article presents some of the main findings while integrating the research results of experiments on legal document summarization by other research groups. In addition, we propose novel avenues of research for automatic text summarization, which we currently exploit when summarizing court decisions in the ACILA project. Techniques for automated concept learning and argument recognition are here the most challenging. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2a404b0be685e069083596b4f7a2dd80",
"text": "Sexual relations with intercourse (ASR-I) and high prevalence of teen pregnancies (19.2%, in 2002) among adolescents in Puerto Rico constitute a serious biopsychosocial problem. Studying the consequences and correlates of ASR-I in community and mental health samples of adolescents is important in designing and implementing sexual health programs. Randomized representative cross-sectional samples of male and female adolescents from 11-18 years old (N = 994 from the general community, N = 550 receiving mental health services) who had engaged in ASR-I were the subjects of this study. Demographic, family, and sexual data and the DISC-IV were collected from individual interviews. Logistic regression models, bivariate odds ratios, Chi-squares, and t tests were used in the statistical analysis. The mental health sample showed higher rates of ASR-I, lifetime reports of pregnancy and lower age of ASR-I onset for females. No gender difference in the prevalence of ASR-I was observed in both samples. Older adolescents from the community sample meeting psychiatric diagnosis criteria, and with lower parental monitoring, were more likely to engage in ASR-I, whereas in the mental health sample, adolescents with lower parental monitoring and parental involvement reported significantly more ASR-I. Prevalence of ASR-I and Risky Sexual Behavior (RSB) were almost identical. Adolescents with mental health disorders initiate and engage in ASR-I earlier and more frequently regardless of gender. Older adolescents are more likely to engage in ASR-I and parent-child relationships emerged as a highly relevant predictor of adolescent sexual behavior. The high correspondence between ASR-I and RSB has important clinical implications.",
"title": ""
},
{
"docid": "02f09c60a5d6aaad43831e933b967aeb",
"text": "The problem of plagiarism in programming assignments by students in computer science courses has caused considerable concern among both faculty and students. There are a number of methods which instructors use in an effort to control the plagiarism problem. In this paper we describe a plagiarism detection system which was recently implemented in our department. This system is being used to detect similarities in student programs.",
"title": ""
},
{
"docid": "4d7616ce77bd32bcb6bc140279aefea8",
"text": "We argue that living systems process information such that functionality emerges in them on a continuous basis. We then provide a framework that can explain and model the normativity of biological functionality. In addition we offer an explanation of the anticipatory nature of functionality within our overall approach. We adopt a Peircean approach to Biosemiotics, and a dynamical approach to Digital-Analog relations and to the interplay between different levels of functionality in autonomous systems, taking an integrative approach. We then apply the underlying biosemiotic logic to a particular biological system, giving a model of the B-Cell Receptor signaling system, in order to demonstrate how biosemiotic concepts can be used to build an account of biological information and functionality. Next we show how this framework can be used to explain and model more complex aspects of biological normativity, for example, how cross-talk between different signaling pathways can be avoided. Overall, we describe an integrated theoretical framework for the emergence of normative functions and, consequently, for the way information is transduced across several interconnected organizational levels in an autonomous system, and we demonstrate how this can be applied in real biological phenomena. Our aim is to open the way towards realistic tools for the modeling of information and normativity in autonomous biological agents.",
"title": ""
},
{
"docid": "bc85e28da375e2a38e06f0332a18aef0",
"text": "Background: Statistical reviews of the theories of reasoned action (TRA) and planned behavior (TPB) applied to exercise are limited by methodological issues including insufficient sample size and data to examine some moderator associations. Methods: We conducted a meta-analytic review of 111 TRA/TPB and exercise studies and examined the influences of five moderator variables. Results: We found that: a) exercise was most strongly associated with intention and perceived behavioral control; b) intention was most strongly associated with attitude; and c) intention predicted exercise behavior, and attitude and perceived behavioral control predicted intention. Also, the time interval between intention to behavior; scale correspondence; subject age; operationalization of subjective norm, intention, and perceived behavioral control; and publication status moderated the size of the effect. Conclusions: The TRA/TPB effectively explained exercise intention and behavior and moderators of this relationship. Researchers and practitioners are more equipped to design effective interventions by understanding the TRA/TPB constructs.",
"title": ""
},
{
"docid": "21f6ca062098c0dcf04fe8fadfc67285",
"text": "The Key study in this paper is to begin the investigation process with the initial forensic analysis in the segments of the storage media which would definitely contain the digital forensic evidences. These Storage media Locations is referred as the Windows registry. Identifying the forensic evidence from windows registry may take less time than required in the case of all locations of a storage media. Our main focus in this research will be to study the registry structure of Windows 7 and identify the useful information within the registry keys of windows 7 that may be extremely useful to carry out any task of digital forensic analysis. The main aim is to describe the importance of the study on computer & digital forensics. The Idea behind the research is to implement a forensic tool which will be very useful in extracting the digital evidences and present them in usable form to a forensic investigator. The work includes identifying various events registry keys value such as machine last shut down time along with machine name, List of all the wireless networks that the computer has connected to; List of the most recently used files or applications, List of all the USB devices that have been attached to the computer and many more. This work aims to point out the importance of windows forensic analysis to extract and identify the hidden information which shall act as an evidence tool to track and gather the user activities pattern. All Research was conducted in a Windows 7 Environment. Keywords—Windows Registry, Windows 7 Forensic Analysis, Windows Registry Structure, Analysing Registry Key, Digital Forensic Identification, Forensic data Collection, Examination of Windows Registry, Decoding of Windows Registry Keys, Discovering User Activities Patterns, Computer Forensic Investigation Tool.",
"title": ""
},
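The registry locations mentioned in the preceding abstract (USB device history, wireless networks, and so on) can be read programmatically. Below is a minimal sketch using Python's standard winreg module (Windows-only) to list USB mass-storage devices recorded under the commonly documented USBSTOR key in the SYSTEM hive; the paper's own tool and its full set of examined keys are not reproduced here.

```python
import winreg  # Windows-only standard-library module

def list_usbstor_devices():
    """List USB mass-storage device entries recorded in the SYSTEM hive.

    Each subkey name under USBSTOR encodes the vendor/product of a USB
    storage device that has been attached to the machine at some point.
    """
    path = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        num_subkeys = winreg.QueryInfoKey(key)[0]  # (subkeys, values, mtime)
        return [winreg.EnumKey(key, i) for i in range(num_subkeys)]

if __name__ == "__main__":
    for device in list_usbstor_devices():
        print(device)
```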
{
"docid": "0d1c9d83977850217fa2462cc00dd977",
"text": "AIM\nTo quantify the dose reduction and ensure that the use of a split-bolus protocol provided sufficient vascular enhancement.\n\n\nMATERIALS AND METHODS\nBetween 1 January 2014 and 31 May 2014, both split bolus and traditional two-phase scans were performed on a single CT scanner (SOMATOM Definition AS+, Siemens Healthcare) using a two-pump injector (Medrad Stellant). Both protocols used Siemens' proprietary tube current and tube voltage modulation techniques (CARE dose and CARE kV). The protocols were compared retrospectively to assess the dose-length product (DLP), aortic radiodensity at the level of the coeliac axis and radiodensity of the portal vein.\n\n\nRESULTS\nThere were 151 trauma CT examinations during this period. Seventy-eight used the split-bolus protocol. Seventy-one had traditional two-phase imaging. One patient was excluded as they were under the age of 18 years. The radiodensity measurements for the portal vein were significantly higher (p<0.001) in the split-bolus protocol. The mean aortic enhancement in both protocols exceeded 250 HU, although the traditional two-phase protocol gave greater arterial enhancement (p<0.001) than the split-bolus protocol. The split-bolus protocol had a significantly lower (p<0.001) DLP with 43.5% reduction in the mean DLP compared to the traditional protocol.\n\n\nCONCLUSION\nSplit-bolus CT imaging offers significant dose reduction for this relatively young population while retaining both arterial and venous enhancement.",
"title": ""
},
{
"docid": "c4b037a8818cd2c335cd88daa07f70c9",
"text": "This paper presents the findings of an outdoor thermal comfort study conducted in Hong Kong using longitudinal experiments--an alternative approach to conventional transverse surveys. In a longitudinal experiment, the thermal sensations of a relatively small number of subjects over different environmental conditions are followed and evaluated. This allows an exploration of the effects of changing climatic conditions on thermal sensation, and thus can provide information that is not possible to acquire through the conventional transverse survey. The paper addresses the effects of changing wind and solar radiation conditions on thermal sensation. It examines the use of predicted mean vote (PMV) in the outdoor context and illustrates the use of an alternative thermal index--physiological equivalent temperature (PET). The paper supports the conventional assumption that thermal neutrality corresponds to thermal comfort. Finally, predictive formulas for estimating outdoor thermal sensation are presented as functions of air temperature, wind speed, solar radiation intensity and absolute humidity. According to the formulas, for a person in light clothing sitting under shade on a typical summer day in Hong Kong where the air temperature is about 28°C and relative humidity about 80%, a wind speed of about 1.6 m/s is needed to achieve neutral thermal sensation.",
"title": ""
},
{
"docid": "d27f8744df1dbf7603d3079631832a47",
"text": "We propose a new technique for edge-suppressing operations on images. We introduce cross projection tensors to achieve affine transformations of gradient fields. We use these tensors, for example, to remove edges in one image based on the edge-information in a second image. Traditionally, edge suppression is acieved by setting image gradients to zero based on thresholds. A common application is in the Retinex problem, where the illumination map is recovered by suppressing the reflectance edges, assuming it is slowly varying. We present a class of problems where edge-suppression can be a useful tool. These problems involve analyzing images of the same scene under variable illumination. Instead of resetting gradients, the key idea in our approach is to derive local tensors using one image and to transform the gradient field of another image using them. Reconstructed image from the modified gradient field shows suppressed edges or textures at the corresponding locations. All operations are local and our approach does not require any global analysis. We demonstrate the algorithm in the context of several applications such as (a) recovering the foreground layer undervarying illumination, (b) estimating intrinsic images in non-Lambertian scenes, (c) removing shadows from color images and obtaining the illumination map, and (d) removing glass relections. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2006 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
},
{
"docid": "1aada401a1a86fa42bed323e8ef2889c",
"text": "KEY POINTS\nThree weeks of intensified training and mild energy deficit in elite race walkers increases peak aerobic capacity independent of dietary support. Adaptation to a ketogenic low carbohydrate, high fat (LCHF) diet markedly increases rates of whole-body fat oxidation during exercise in race walkers over a range of exercise intensities. The increased rates of fat oxidation result in reduced economy (increased oxygen demand for a given speed) at velocities that translate to real-life race performance in elite race walkers. In contrast to training with diets providing chronic or periodised high carbohydrate availability, adaptation to an LCHF diet impairs performance in elite endurance athletes despite a significant improvement in peak aerobic capacity.\n\n\nABSTRACT\nWe investigated the effects of adaptation to a ketogenic low carbohydrate (CHO), high fat diet (LCHF) during 3 weeks of intensified training on metabolism and performance of world-class endurance athletes. We controlled three isoenergetic diets in elite race walkers: high CHO availability (g kg-1 day-1 : 8.6 CHO, 2.1 protein, 1.2 fat) consumed before, during and after training (HCHO, n = 9); identical macronutrient intake, periodised within or between days to alternate between low and high CHO availability (PCHO, n = 10); LCHF (< 50 g day-1 CHO; 78% energy as fat; 2.1 g kg-1 day-1 protein; LCHF, n = 10). Post-intervention, V̇O2 peak during race walking increased in all groups (P < 0.001, 90% CI: 2.55, 5.20%). LCHF was associated with markedly increased rates of whole-body fat oxidation, attaining peak rates of 1.57 ± 0.32 g min-1 during 2 h of walking at ∼80% V̇O2 peak . However, LCHF also increased the oxygen (O2 ) cost of race walking at velocities relevant to real-life race performance: O2 uptake (expressed as a percentage of new V̇O2 peak ) at a speed approximating 20 km race pace was reduced in HCHO and PCHO (90% CI: -7.047, -2.55 and -5.18, -0.86, respectively), but was maintained at pre-intervention levels in LCHF. HCHO and PCHO groups improved times for 10 km race walk: 6.6% (90% CI: 4.1, 9.1%) and 5.3% (3.4, 7.2%), with no improvement (-1.6% (-8.5, 5.3%)) for the LCHF group. In contrast to training with diets providing chronic or periodised high-CHO availability, and despite a significant improvement in V̇O2 peak , adaptation to the topical LCHF diet negated performance benefits in elite endurance athletes, in part due to reduced exercise economy.",
"title": ""
},
{
"docid": "17a336217e717dfedd7fa9f96a28da80",
"text": "Context: Competitions for self-driving cars facilitated the development and research in the domain of autonomous vehicles towards potential solutions for the future mobility. Objective: Miniature vehicles can bridge the gap between simulation-based evaluations of algorithms relying on simplified models, and those time-consuming vehicle tests on real-scale proving grounds. Method: This article combines findings from a systematic literature review, an in-depth analysis of results and technical concepts from contestants in a competition for self-driving miniature cars, and experiences of participating in the 2013 competition for self-driving cars. Results: A simulation-based development platform for real-scale vehicles has been adapted to support the development of a self-driving miniature car. Furthermore, a standardized platform was designed and realized to enable research and experiments in the context of future mobility solutions. Conclusion: A clear separation between algorithm conceptualization and validation in a model-based simulation environment enabled efficient and riskless experiments and validation. The design of a reusable, low-cost, and energy-efficient hardware architecture utilizing a standardized software/hardware interface enables experiments, which would otherwise require resources like a large real-scale",
"title": ""
},
{
"docid": "4bba56323edd0d2bc1baca07c1cee14e",
"text": "In this paper, we propose Personalized Markov Embedding (PME), a next-song recommendation strategy for online karaoke users. By modeling the sequential singing behavior, we first embed songs and users into a Euclidean space in which distances between songs and users reflect the strength of their relationships. Then, given each user's last song, we can generate personalized recommendations by ranking the candidate songs according to the embedding. Moreover, PME can be trained without any requirement of content information. Finally, we perform an experimental evaluation on a real world data set provided by ihou.com which is an online karaoke website launched by iFLYTEK, and the results clearly demonstrate the effectiveness of PME.",
"title": ""
},
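The ranking step implied by the preceding abstract, recommending the candidate songs closest to the user and to the last song in the learned embedding space, can be sketched as follows. This is an illustrative simplification with made-up variable names, not the trained PME model; in particular, summing the two Euclidean distances is an assumption about how the scores are combined.

```python
import numpy as np

def recommend_next_songs(user_vec, last_song_vec, song_vecs, top_k=5):
    """Rank candidate songs by distance to the user and to the last song.

    user_vec, last_song_vec: (d,) embeddings; song_vecs: dict song_id -> (d,).
    Smaller combined Euclidean distance means a stronger recommendation.
    """
    def score(vec):
        return np.linalg.norm(vec - user_vec) + np.linalg.norm(vec - last_song_vec)

    ranked = sorted(song_vecs.items(), key=lambda item: score(item[1]))
    return [song_id for song_id, _ in ranked[:top_k]]

# Toy usage with random 8-dimensional embeddings for three candidate songs
rng = np.random.default_rng(0)
songs = {name: rng.normal(size=8) for name in ("song_a", "song_b", "song_c")}
print(recommend_next_songs(rng.normal(size=8), rng.normal(size=8), songs, top_k=2))
```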
{
"docid": "dbbea89ac8120ee84b3174207bddcdb7",
"text": "Recently, due to the huge growth of web pages, social media and modern applications, text clustering technique has emerged as a significant task to deal with a huge amount of text documents. Some web pages are easily browsed and tidily presented via applying the clustering technique in order to partition the documents into a subset of homogeneous clusters. In this paper, two novel text clustering algorithms based on krill herd (KH) algorithm are proposed to improve the web text documents clustering. In the first method, the basic KH algorithm with all its operators is utilized while in the second method, the genetic operators in the basic KH algorithm are neglected. The performance of the proposed KH algorithms is analyzed and compared with the k-mean algorithm. The experiments were conducted using four standard benchmark text datasets. The results showed that the proposed KH algorithms outperformed the k-mean algorithm in term of clusters quality that is evaluated using two common clustering measures, namely, Purity and Entropy.",
"title": ""
},
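The Purity and Entropy measures used for evaluation in the preceding abstract are standard external clustering criteria; a small sketch of how they are commonly computed is given below (this follows the usual textbook definitions, not code from the paper).

```python
import numpy as np
from collections import Counter

def purity_and_entropy(clusters, labels):
    """Compute (purity, entropy) of a clustering against true class labels.

    clusters, labels: equal-length sequences of cluster ids and class ids.
    Higher purity and lower entropy indicate better agreement with the classes.
    """
    clusters = np.asarray(clusters)
    labels = np.asarray(labels)
    n = len(labels)
    purity, entropy = 0.0, 0.0
    for c in np.unique(clusters):
        members = labels[clusters == c]
        counts = np.array(list(Counter(members).values()), dtype=float)
        p = counts / counts.sum()
        purity += counts.max() / n                       # majority class share
        entropy += (counts.sum() / n) * (-(p * np.log2(p)).sum())
    return purity, entropy

# Toy example: 6 documents in 2 clusters with classes "a" and "b"
print(purity_and_entropy([0, 0, 0, 1, 1, 1], ["a", "a", "b", "b", "b", "a"]))
```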
{
"docid": "27c0c6c43012139fc3e4ee64ae043c0b",
"text": "This paper presents a method for measuring signal backscattering from RFID tags, and for calculating a tag's radar cross section (RCS). We derive a theoretical formula for the RCS of an RFID tag with a minimum-scattering antenna. We describe an experimental measurement technique, which involves using a network analyzer connected to an anechoic chamber with and without the tag. The return loss measured in this way allows us to calculate the backscattered power and to find the tag's RCS. Measurements were performed using an RFID tag operating in the UHF band. To determine whether the tag was turned on, we used an RFID tag tester. The tag's RCS was also calculated theoretically, using electromagnetic simulation software. The theoretical results were found to be in good agreement with experimental data",
"title": ""
},
{
"docid": "9c68b87f99450e85f3c0c6093429937d",
"text": "We present a method for activity recognition that first estimates the activity performer's location and uses it with input data for activity recognition. Existing approaches directly take video frames or entire video for feature extraction and recognition, and treat the classifier as a black box. Our method first locates the activities in each input video frame by generating an activity mask using a conditional generative adversarial network (cGAN). The generated mask is appended to color channels of input images and fed into a VGG-LSTM network for activity recognition. To test our system, we produced two datasets with manually created masks, one containing Olympic sports activities and the other containing trauma resuscitation activities. Our system makes activity prediction for each video frame and achieves performance comparable to the state-of-the-art systems while simultaneously outlining the location of the activity. We show how the generated masks facilitate the learning of features that are representative of the activity rather than accidental surrounding information.",
"title": ""
},
{
"docid": "119ca30e07356ba6bb06ec2fd9b95811",
"text": "Bioactive compounds from vegetal sources are a potential source of natural antifungic. An ethanol extraction was used to obtain bioactive compounds from Carica papaya L. cv. Maradol leaves and seeds of discarded ripe and unripe fruit. Both, extraction time and the papaya tissue flour:organic solvent ratio significantly affected yield, with the longest time and highest flour:solvent ratio producing the highest yield. The effect of time on extraction efficiency was confirmed by qualitative identification of the compounds present in the lowest and highest yield extracts. Analysis of the leaf extract with phytochemical tests showed the presence of alkaloids, flavonoids and terpenes. Antifungal effectiveness was determined by challenging the extracts (LE, SRE, SUE) from the best extraction treatment against three phytopathogenic fungi: Rhizopus stolonifer, Fusarium spp. and Colletotrichum gloeosporioides. The leaf extract exhibited the broadest action spectrum. The MIC50 for the leaf extract was 0.625 mg ml−1 for Fusarium spp. and >10 mg ml−1 for C. gloeosporioides, both equal to approximately 20% mycelial growth inhibition. Ethanolic extracts from Carica papaya L. cv. Maradol leaves are a potential source of secondary metabolites with antifungal properties.",
"title": ""
}
] |
scidocsrr
|
6328f56b17390e02fe91c826c6aaab43
|
Long-term memory, sleep, and the spacing effect.
|
[
{
"docid": "3ade96c73db1f06d7e0c1f48a0b33387",
"text": "To achieve enduring retention, people must usually study information on multiple occasions. How does the timing of study events affect retention? Prior research has examined this issue only in a spotty fashion, usually with very short time intervals. In a study aimed at characterizing spacing effects over significant durations, more than 1,350 individuals were taught a set of facts and--after a gap of up to 3.5 months--given a review. A final test was administered at a further delay of up to 1 year. At any given test delay, an increase in the interstudy gap at first increased, and then gradually reduced, final test performance. The optimal gap increased as test delay increased. However, when measured as a proportion of test delay, the optimal gap declined from about 20 to 40% of a 1-week test delay to about 5 to 10% of a 1-year test delay. The interaction of gap and test delay implies that many educational practices are highly inefficient.",
"title": ""
}
] |
[
{
"docid": "35dd432f881acb83d6f6a362d565b7aa",
"text": "Multi-tenant database is a new cloud computing paradigm that has recently attracted attention to deliver database functionalities for multiple tenants to create, store, and access their databases over the internet. This multi-tenant database should be highly configurable and secure to meet tenants' expectations and their different business requirements. In this paper, we propose an architecture design to build an intermediate database layer to be used between software applications and Relational Database Management Systems (RDBMS) to store and access multiple tenants' data in the Elastic Extension Table (EET) multi-tenant database schema. This database layer combines multi-tenant relational tables and virtual relational tables and makes them work together to act as one database for each tenant. This architecture design is suitable for multi-tenant database environment that can run any business domain database by using a combination of a database schema, which contains shared physical structured tables and virtual structured tenant's tables. Further, this multi-tenant database architecture design can be used as a base to build software applications in general and Software as a Service (SaaS) applications in particular.",
"title": ""
},
{
"docid": "859685b70f3440366c417a3a2e7854f4",
"text": "Q is an unmanned ground vehicle designed to compete in the Autonomous and Navigation Challenges of the AUVSI Intelligent Ground Vehicle Competition (IGVC). Built on a base platform of a modified PerMobil Trax off-road wheel chair frame, and running off a Dell Inspiron D820 laptop with an Intel t7400 Core 2 Duo Processor, Q gathers information from a SICK laser range finder (LRF), video cameras, differential GPS, and digital compass to localize its behavior and map out its navigational path. This behavior is handled by intelligent closed loop speed control and robust sensor data processing algorithms. In the Autonomous challenge, data taken from two IEEE 1394 cameras and the LRF are integrated and plotted on a custom-defined occupancy grid and converted into a histogram which is analyzed for openings between obstacles. The image processing algorithm consists of a series of steps involving plane extraction, normalizing of the image histogram for an effective dynamic thresholding, texture and morphological analysis and particle filtering to allow optimum operation at varying ambient conditions. In the Navigation Challenge, a modified Vector Field Histogram (VFH) algorithm is combined with an auto-regressive path planning model for obstacle avoidance and better localization. Also, Q features the Joint Architecture for Unmanned Systems (JAUS) Level 3 compliance. All algorithms are developed and implemented using National Instruments (NI) hardware and LabVIEW software. The paper will focus on explaining the various algorithms that make up Q’s intelligence and the different ways and modes of their implementation.",
"title": ""
},
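The Vector Field Histogram (VFH) idea referenced in the preceding abstract, building a polar obstacle-density histogram from range data and steering toward low-density sectors, can be illustrated with the simplified sketch below. The sector count, weighting, and threshold are illustrative assumptions and do not correspond to Q's actual tuning or its modified VFH.

```python
import numpy as np

def open_sectors(angles, distances, num_sectors=36, max_range=3.0, threshold=0.5):
    """Return indices of polar sectors that look clear enough to steer into.

    angles: (N,) obstacle bearings in radians; distances: (N,) ranges in metres.
    Nearer obstacles contribute more weight to their sector's obstacle density.
    """
    sector = (np.mod(angles, 2 * np.pi) / (2 * np.pi) * num_sectors).astype(int)
    weight = np.clip(max_range - np.asarray(distances), 0.0, None)
    density = np.zeros(num_sectors)
    np.add.at(density, sector % num_sectors, weight)
    return np.flatnonzero(density < threshold)

# Toy scan: one obstacle cluster straight ahead (bearing ~0 rad), open elsewhere
angles = np.array([-0.05, 0.0, 0.05])
distances = np.array([1.0, 0.8, 1.1])
print(open_sectors(angles, distances))
```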
{
"docid": "b5215ddc7768f75fe72cdaaad9e3cdb8",
"text": "Visual saliency analysis detects salient regions/objects that attract human attention in natural scenes. It has attracted intensive research in different fields such as computer vision, computer graphics, and multimedia. While many such computational models exist, the focused study of what and how applications can be beneficial is still lacking. In this article, our ultimate goal is thus to provide a comprehensive review of the applications using saliency cues, the so-called attentive systems. We would like to provide a broad vision about saliency applications and what visual saliency can do. We categorize the vast amount of applications into different areas such as computer vision, computer graphics, and multimedia. Intensively covering 200+ publications we survey (1) key application trends, (2) the role of visual saliency, and (3) the usability of saliency into different tasks.",
"title": ""
},
{
"docid": "b34af4da147779c6d1505ff12cacd5aa",
"text": "Crowd-enabled place-centric systems gather and reason over large mobile sensor datasets and target everyday user locations (such as stores, workplaces, and restaurants). Such systems are transforming various consumer services (for example, local search) and data-driven organizations (city planning). As the demand for these systems increases, our understanding of how to design and deploy successful crowdsensing systems must improve. In this paper, we present a systematic study of the coverage and scaling properties of place-centric crowdsensing. During a two-month deployment, we collected smartphone sensor data from 85 participants using a representative crowdsensing system that captures 48,000 different place visits. Our analysis of this dataset examines issues of core interest to place-centric crowdsensing, including place-temporal coverage, the relationship between the user population and coverage, privacy concerns, and the characterization of the collected data. Collectively, our findings provide valuable insights to guide the building of future place-centric crowdsensing systems and applications.",
"title": ""
},
{
"docid": "36fe867115d423f39366b9a42cb89fe3",
"text": "Malware continues to be one of the major threats to Internet security. In the battle against cybercriminals, accurately identifying the underlying malicious server infrastructure (e.g., C&C servers for botnet command and control) is of vital importance. Most existing passive monitoring approaches cannot keep up with the highly dynamic, ever-evolving malware server infrastructure. As an effective complementary technique, active probing has recently attracted attention due to its high accuracy, efficiency, and scalability (even to the Internet level). In this paper, we propose Autoprobe, a novel system to automatically generate effective and efficient fingerprints of remote malicious servers. Autoprobe addresses two fundamental limitations of existing active probing approaches: it supports pull-based C&C protocols, used by the majority of malware, and it generates fingerprints even in the common case when C&C servers are not alive during fingerprint generation. Using real-world malware samples we show that Autoprobe can successfully generate accurate C&C server fingerprints through novel applications of dynamic binary analysis techniques. By conducting Internet-scale active probing, we show that Autoprobe can successfully uncover hundreds of malicious servers on the Internet, many of them unknown to existing blacklists. We believe Autoprobe is a great complement to existing defenses, and can play a unique role in the battle against cybercriminals.",
"title": ""
},
{
"docid": "c70d8ae9aeb8a36d1f68ba0067c74696",
"text": "Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on simple link structure between a finite set of entities, ignoring the variety of data types that are often used in knowledge bases, such as text, images, and numerical values. In this paper, we propose multimodal knowledge base embeddings (MKBE) that use different neural encoders for this variety of observed data, and combine them with existing relational models to learn embeddings of the entities and multimodal data. Further, using these learned embedings and different neural decoders, we introduce a novel multimodal imputation model to generate missing multimodal values, like text and images, from information in the knowledge base. We enrich existing relational datasets to create two novel benchmarks that contain additional information such as textual descriptions and images of the original entities. We demonstrate that our models utilize this additional information effectively to provide more accurate link prediction, achieving state-of-the-art results with a considerable gap of 5-7% over existing methods. Further, we evaluate the quality of our generated multimodal values via a user study. We have release the datasets and the opensource implementation of our models at https: //github.com/pouyapez/mkbe.",
"title": ""
},
{
"docid": "6f9be23e33910d44551b5befa219e557",
"text": "The Lecture Notes are used for the a short course on the theory and applications of the lattice Boltzmann methods for computational uid dynamics taugh by the author at Institut f ur Computeranwendungen im Bauingenieurwesen (CAB), Technischen Universitat Braunschweig, during August 7 { 12, 2003. The lectures cover the basic theory of the lattice Boltzmann equation and its applications to hydrodynamics. Lecture One brie y reviews the history of the lattice gas automata and the lattice Boltzmann equation and their connections. Lecture Two provides an a priori derivation of the lattice Boltzmann equation, which connects the lattice Boltzmann equation to the continuous Boltzmann equation and demonstrates that the lattice Boltzmann equation is indeed a special nite di erence form of the Boltzmann equation. Lecture Two also includes the derivation of the lattice Boltzmann model for nonideal gases from the Enskog equation for dense gases. Lecture Three studies the generalized lattice Boltzmann equation with multiple relaxation times. A summary is provided at the end of each Lecture. Lecture Four discusses the uid-solid boundary conditions in the lattice Boltzmann methods. Applications of the lattice Boltzmann mehod to particulate suspensions, turbulence ows, and other ows are also shown. An Epilogue on the rationale of the lattice Boltzmann method is given. Some key references in the literature is also provided.",
"title": ""
},
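As a reminder of the core update these lecture notes revolve around (this is the standard single-relaxation-time BGK form, given here for context rather than quoted from the notes), the lattice Boltzmann equation evolves discrete distribution functions f_i along lattice velocities e_i with relaxation toward a local equilibrium:

```latex
f_i(\mathbf{x} + \mathbf{e}_i\,\Delta t,\; t + \Delta t) - f_i(\mathbf{x}, t)
  = -\frac{1}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right]
```

Macroscopic density and momentum are recovered as rho = sum_i f_i and rho*u = sum_i f_i e_i; the multiple-relaxation-time generalization discussed in Lecture Three replaces the single relaxation time tau with a relaxation matrix.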
{
"docid": "09b35c40a65a0c2c0f58deb49555000d",
"text": "There are a wide range of forensic and analysis tools to examine digital evidence in existence today. Traditional tool design examines each source of digital evidence as a BLOB (binary large object) and it is up to the examiner to identify the relevant items from evidence. In the face of rapid technological advancements we are increasingly confronted with a diverse set of digital evidence and being able to identify a particular tool for conducting a specific analysis is an essential task. In this paper, we present a systematic study of contemporary forensic and analysis tools using a hypothesis based review to identify the different functionalities supported by these tools. We highlight the limitations of the forensic tools in regards to evidence corroboration and develop a case for building evidence correlation functionalities into these tools.",
"title": ""
},
{
"docid": "8ab9f1be0a8ed182137c9a8a9c9e71d0",
"text": "PURPOSE OF REVIEW\nTo document recent evidence regarding the role of nutrition as an intervention for sarcopenia.\n\n\nRECENT FINDINGS\nA review of seven randomized controlled trials (RCTs) on beta-hydroxy-beta-methylbutyrate (HMB) alone on muscle loss in 147 adults showed greater muscle mass gain in the intervention group, but no benefit in muscle strength and physical performance measures. Three other review articles examined nutrition and exercise as combined intervention, and suggest enhancement of benefits of exercise by nutrition supplements (energy, protein, vitamin D). Four trials reported on nutrition alone as intervention, mainly consisting of whey protein, leucine, HMB and vitamin D, with variable results on muscle mass and function. Four trials examined the combined effects of nutrition combined with exercise, showing improvements in muscle mass and function.\n\n\nSUMMARY\nTo date, evidence suggests that nutrition intervention alone does have benefit, and certainly enhances the impact of exercise. Nutrients include high-quality protein, leucine, HMB and vitamin D. Long-lasting impact may depend on baseline nutritional status, baseline severity of sarcopenia, and long-lasting adherence to the intervention regime. Future large-scale multicentered RCTs using standardized protocols may provide evidence for formulating guidelines on nutritional intervention for sarcopenia. There is a paucity of data for nursing home populations.",
"title": ""
},
{
"docid": "f75b11bc21dc711b76a7a375c2a198d3",
"text": "In many application areas like e-science and data-warehousing detailed information about the origin of data is required. This kind of information is often referred to as data provenance or data lineage. The provenance of a data item includes information about the processes and source data items that lead to its creation and current representation. The diversity of data representation models and application domains has lead to a number of more or less formal definitions of provenance. Most of them are limited to a special application domain, data representation model or data processing facility. Not surprisingly, the associated implementations are also restricted to some application domain and depend on a special data model. In this paper we give a survey of data provenance models and prototypes, present a general categorization scheme for provenance models and use this categorization scheme to study the properties of the existing approaches. This categorization enables us to distinguish between different kinds of provenance information and could lead to a better understanding of provenance in general. Besides the categorization of provenance types, it is important to include the storage, transformation and query requirements for the different kinds of provenance information and application domains in our considerations. The analysis of existing approaches will assist us in revealing open research problems in the area of data provenance.",
"title": ""
},
{
"docid": "5db336088113fbfdf93be6e057f97748",
"text": "Unmanned Aerial Vehicles (UAVs) are an exciting new remote sensing tool capable of acquiring high resolution spatial data. Remote sensing with UAVs has the potential to provide imagery at an unprecedented spatial and temporal resolution. The small footprint of UAV imagery, however, makes it necessary to develop automated techniques to geometrically rectify and mosaic the imagery such that larger areas can be monitored. In this paper, we present a technique for geometric correction and mosaicking of UAV photography using feature matching and Structure from Motion (SfM) photogrammetric techniques. Images are processed to create three dimensional point clouds, initially in an arbitrary model space. The point clouds are transformed into a real-world coordinate system using either a direct georeferencing technique that uses estimated camera positions or via a Ground Control Point (GCP) technique that uses automatically identified GCPs within the point cloud. The point cloud is then used to generate a Digital Terrain Model (DTM) required for rectification of the images. Subsequent georeferenced images are then joined together to form a mosaic of the study area. The absolute spatial accuracy of the direct technique was found to be 65–120 cm whilst the GCP technique achieves an accuracy of approximately 10–15 cm.",
"title": ""
},
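The step in the preceding abstract where the arbitrary model-space point cloud is transformed into a real-world coordinate system via GCPs amounts to estimating a 3-D similarity transform from corresponding points. A minimal sketch using the SVD-based Umeyama method is shown below; GCP identification itself and any datum or map-projection handling are outside this sketch, and the function names are illustrative.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst ≈ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding points, e.g. model-space GCP
    positions and their surveyed real-world coordinates (Umeyama method).
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t

# Toy check: recover a known scale and translation from four points
rng = np.random.default_rng(1)
src = rng.normal(size=(4, 3))
dst = 2.0 * src + np.array([10.0, -5.0, 3.0])
print(similarity_transform(src, dst))
```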
{
"docid": "8e5f2b976dfe8883e419fdc49bf53c78",
"text": "This paper studies the object transfiguration problem in wild images. The generative network in classical GANs for object transfiguration often undertakes a dual responsibility: to detect the objects of interests and to convert the object from source domain to target domain. In contrast, we decompose the generative network into two separat networks, each of which is only dedicated to one particular sub-task. The attention network predicts spatial attention maps of images, and the transformation network focuses on translating objects. Attention maps produced by attention network are encouraged to be sparse, so that major attention can be paid to objects of interests. No matter before or after object transfiguration, attention maps should remain constant. In addition, learning attention network can receive more instructions, given the available segmentation annotations of images. Experimental results demonstrate the necessity of investigating attention in object transfiguration, and that the proposed algorithm can learn accurate attention to improve quality of generated images.",
"title": ""
},
{
"docid": "722e8a04db2e6fa48623a68ccf93d2af",
"text": "This study exhibits the application of the concept of matrices, probability and optimization in making an electronic Tic-Tac-Toe game using logic gates and exhibiting the concept of Boolean algebra. For a finite number of moves in every single game of Tic-Tac-Toe, the moves are recorded in a 3×3 matrix and the subsequent solution, or a winning combination, is presented from the data obtained by playing the electronic game. The solution is also displayed electronically using an LED. The circuit has been designed in a way to apply Boolean logic to analyze player's moves and thus, give a corresponding output from the electronic game and use it in matrices. The electronic Tic-Tac-Toe game is played randomly between 20 pairs of players. The impact of different opening moves is observed. Also, effect of different strategies, aggressive or defensive, on the outcome of the game is explored. The concept of Boolean algebra, logic gates, matrices and probability is applied in this game to make the foundation of the logic for this game. The most productive position for placing an `X' or `O' is found out using probabilities. Also the most effective blocking move is found out through which a player placing `O' can block `X' from winning. The skills help in understanding what strategy can be implemented to be on the winning side. The study is developed with an attempt to realistically model a tic-tac-toe game, and help in reflecting major tendencies. This knowledge helps in understanding what strategy to implement to be on the winning side.",
"title": ""
},
{
"docid": "53562dbb7087c83c6c84875e5e784b1b",
"text": "ALIZE is an open-source platform for speaker recognition. The ALIZE library implements a low-level statistical engine based on the well-known Gaussian mixture modelling. The toolkit includes a set of high level tools dedicated to speaker recognition based on the latest developments in speaker recognition such as Joint Factor Analysis, Support Vector Machine, i-vector modelling and Probabilistic Linear Discriminant Analysis. Since 2005, the performance of ALIZE has been demonstrated in series of Speaker Recognition Evaluations (SREs) conducted by NIST and has been used by many participants in the last NISTSRE 2012. This paper presents the latest version of the corpus and performance on the NIST-SRE 2010 extended task.",
"title": ""
},
{
"docid": "c9a78279a2dfb2b8ed7ab2424aa41c34",
"text": "It is widely recognized that people sometimes use theory-of-mind judgments in moral cognition. A series of recent studies shows that the connection can also work in the opposite direction: moral judgments can sometimes be used in theory-of-mind cognition. Thus, there appear to be cases in which people's moral judgments actually serve as input to the process underlying their application of theory-of-mind concepts.",
"title": ""
},
{
"docid": "11c7ceb4d63be002154cf162f635687c",
"text": "Inter-network interference is a significant source of difficulty for wireless body area networks. Movement, proximity and the lack of central coordination all contribute to this problem. We compare the interference power of multiple Body Area Network (BAN) devices when a group of people move randomly within an office area. We find that the path loss trend is dominated by local variations in the signal, and not free-space path loss exponent.",
"title": ""
},
{
"docid": "46b5e1898dba479b7158ce5c9c0b94a8",
"text": "Finding a parking place in a busy city centre is often a frustrating task for many drivers; time and fuel are wasted in the quest for a vacant spot and traffic in the area increases due to the slow moving vehicles circling around. In this paper, we present the results of a survey on the needs of drivers from parking infrastructures from a smart services perspective. As smart parking systems are becoming a necessity in today's urban areas, we discuss the latest trends in parking availability monitoring, parking reservation and dynamic pricing schemes. We also examine how these schemes can be integrated forming technologically advanced parking infrastructures whose aim is to benefit both the drivers and the parking operators alike.",
"title": ""
},
{
"docid": "68c5689ddea935ec880c77af654431af",
"text": "The Kuka light weight robot offers unique features to researchers. Besides its 7 Degrees of Freedom (DOF), also torque sensing in every joint and a variety of compliance modes make the robot a good choice for robotic research. Unfortunately the interface to control the robot externally has its restrictions. In this paper, we present an open source solution (OpenKC) that will allow the control of the robot externally using a simple set of routines that can easily be integrated in existing software. All features and modes of the Kuka light weight robot can be used and triggered externally. Simultaneous control of several robots is explicitly supported. The software has proven its use in several applications.",
"title": ""
},
{
"docid": "073b17e195cec320c20533f154d4ab7f",
"text": "Automatic segmentation of cell nuclei is an essential step in image cytometry and histometry. Despite substantial progress, there is a need to improve accuracy, speed, level of automation, and adaptability to new applications. This paper presents a robust and accurate novel method for segmenting cell nuclei using a combination of ideas. The image foreground is extracted automatically using a graph-cuts-based binarization. Next, nuclear seed points are detected by a novel method combining multiscale Laplacian-of-Gaussian filtering constrained by distance-map-based adaptive scale selection. These points are used to perform an initial segmentation that is refined using a second graph-cuts-based algorithm incorporating the method of alpha expansions and graph coloring to reduce computational complexity. Nuclear segmentation results were manually validated over 25 representative images (15 in vitro images and 10 in vivo images, containing more than 7400 nuclei) drawn from diverse cancer histopathology studies, and four types of segmentation errors were investigated. The overall accuracy of the proposed segmentation algorithm exceeded 86%. The accuracy was found to exceed 94% when only over- and undersegmentation errors were considered. The confounding image characteristics that led to most detection/segmentation errors were high cell density, high degree of clustering, poor image contrast and noisy background, damaged/irregular nuclei, and poor edge information. We present an efficient semiautomated approach to editing automated segmentation results that requires two mouse clicks per operation.",
"title": ""
},
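The multiscale Laplacian-of-Gaussian (LoG) seed detection step described in the preceding abstract can be sketched roughly as below. This is a generic scale-normalized LoG blob detector with assumed scales and thresholds; the paper's distance-map-based adaptive scale selection and the two graph-cuts stages are not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi

def log_seed_points(image, sigmas=(4, 6, 8, 10), threshold=0.05):
    """Detect candidate nuclear seed points as local maxima of the
    scale-normalized Laplacian-of-Gaussian response (bright blobs).

    image: 2-D float array, roughly in [0, 1]; sigmas: candidate blob scales
    in pixels. Returns a (K, 2) array of (row, col) seed coordinates.
    """
    # Scale-normalized LoG; negate so bright blobs give positive peaks.
    stack = np.stack([-(s ** 2) * ndi.gaussian_laplace(image, s) for s in sigmas])
    response = stack.max(axis=0)                  # best response over scales
    maxima = response == ndi.maximum_filter(response, size=7)
    return np.argwhere(maxima & (response > threshold))

# Toy usage: two bright Gaussian blobs on a dark background
yy, xx = np.mgrid[0:128, 0:128]
img = np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / (2 * 6.0 ** 2)) \
    + np.exp(-((yy - 90) ** 2 + (xx - 85) ** 2) / (2 * 8.0 ** 2))
print(log_seed_points(img))
```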
{
"docid": "fd80e9a3f22dc9c7dc1ffb1b48e98bdb",
"text": "This paper presents a new technique for detecting sharp features on point-sampled geometry. Sharp features of different nature and possessing angles varying from obtuse to acute can be identified without any user interaction. The algorithm works directly on the point cloud, no surface reconstruction is needed. Given an unstructured point cloud, our method first computes a Gauss map clustering on local neighborhoods in order to discard all points which are unlikely to belong to a sharp feature. As usual, a global sensitivity parameter is used in this stage. In a second stage, the remaining feature candidates undergo a more precise iterative selection process. Central to our method is the automatic computation of an adaptive sensitivity parameter, increasing significantly the reliability and making the identification more robust in the presence of obtuse and acute angles. The algorithm is fast and does not depend on the sampling resolution, since it is based on a local neighbor graph computation.",
"title": ""
}
] |
scidocsrr
|
84fae1310741962c7f3128be637205a4
|
Introducing Robustness in Multi-Objective Optimization
|
[
{
"docid": "f3212c4240be8c0fe6147717cfb6b25f",
"text": "Multiobjective evolutionary algorithms (EAs) that use nondominated sorting and sharing have been criticized mainly for their: 1) ( ) computational complexity (where is the number of objectives and is the population size); 2) nonelitism approach; and 3) the need for specifying a sharing parameter. In this paper, we suggest a nondominated sorting-based multiobjective EA (MOEA), called nondominated sorting genetic algorithm II (NSGA-II), which alleviates all the above three difficulties. Specifically, a fast nondominated sorting approach with ( ) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best (with respect to fitness and spread) solutions. Simulation results on difficult test problems show that the proposed NSGA-II, in most problems, is able to find much better spread of solutions and better convergence near the true Pareto-optimal front compared to Pareto-archived evolution strategy and strength-Pareto EA—two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multiobjective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective seven-constraint nonlinear problem, are compared with another constrained multiobjective optimizer and much better performance of NSGA-II is observed.",
"title": ""
}
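A minimal sketch of the fast nondominated sorting idea summarized in the preceding abstract is given below, assuming a minimization problem and plain NumPy arrays. It uses the generic O(MN^2) bookkeeping scheme (domination counts plus dominated-solution lists) and is not the authors' reference implementation; the crowding-distance and selection operators are omitted.

```python
import numpy as np

def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def fast_nondominated_sort(objs):
    """Partition a population into Pareto fronts.

    objs: (N, M) array of objective values for N solutions and M objectives.
    Returns a list of fronts, each a list of solution indices (front 0 first).
    """
    n = len(objs)
    dominated_by = [[] for _ in range(n)]       # solutions that i dominates
    dom_count = np.zeros(n, dtype=int)          # how many solutions dominate i
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(objs[p], objs[q]):
                dominated_by[p].append(q)
            elif dominates(objs[q], objs[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front

# Example: five random solutions with two objectives
print(fast_nondominated_sort(np.random.rand(5, 2)))
```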
] |
[
{
"docid": "d5b1d375a343543ff85eb3e506fb9f8b",
"text": "Wireless Underground Sensor Networks (WUSNs) present a variety of new research challenges. For WUSNs, the goal is to establish an efficient wireless communication in the underground medium. A magnetic induction (MI) based transmission technique was proposed to overcome the very harsh conditions of the soil environment. In this paper, we investigate the potential of the MI-WUSNs if, in contrast to some previous proposals, no relays are used. Our main focus is on the throughput of the bottleneck link of the network, which corresponds to the overall network capacity. In order to reduce the number of relevant interferers and maximize the network throughput, we exploit the polarization of the used magnetic antennas (coils) by optimizing their orientation. Additional optimization of the system parameters improves the channel capacity of the bottleneck link. In addition, we consider a special case of the network deployment in mines and tunnels and propose a frequency switching scheme for better propagation conditions.",
"title": ""
},
{
"docid": "6af7bb1d2a7d8d44321a5b162c9781a2",
"text": "In this paper, we propose a deep metric learning (DML) approach for robust visual tracking under the particle filter framework. Unlike most existing appearance-based visual trackers, which use hand-crafted similarity metrics, our DML tracker learns a nonlinear distance metric to classify the target object and background regions using a feed-forward neural network architecture. Since there are usually large variations in visual objects caused by varying deformations, illuminations, occlusions, motions, rotations, scales, and cluttered backgrounds, conventional linear similarity metrics cannot work well in such scenarios. To address this, our proposed DML tracker first learns a set of hierarchical nonlinear transformations in the feed-forward neural network to project both the template and particles into the same feature space where the intra-class variations of positive training pairs are minimized and the interclass variations of negative training pairs are maximized simultaneously. Then, the candidate that is most similar to the template in the learned deep network is identified as the true target. Experiments on the benchmark data set including 51 challenging videos show that our DML tracker achieves a very competitive performance with the state-of-the-art trackers.",
"title": ""
},
{
"docid": "b6ec41747ed22d73f4cf2dfc3b20ec80",
"text": "This paper proposes the high-voltage high-frequency power supply for an ozone generator using a phase-shifted pulse width modulation (PWM) full bridge inverter. The circuit operation is fully described. The high-frequency transformer and ozone generator mathematical models are also included for preliminarily calculating of instantaneous voltages and currents. The proposed system simulation using the MATLAB /SIMULINK software package is given. In order to ensure that zero voltage switching (ZVS) mode always operates over a certain range of a frequency variation, a series-compensated resonant inductor is included. The advantage of the proposed system is a capability of varying ozone gas production quantity by varying the frequency and phase shift angle of the converter whilst the applied voltage to the electrodes is kept constant. The correctness of the proposed technique is verified by both simulation and experimental results.",
"title": ""
},
{
"docid": "95fb51b0b6d8a3a88edfc96157233b10",
"text": "Various types of video can be captured with fisheye lenses; their wide field of view is particularly suited to surveillance video. However, fisheye lenses introduce distortion, and this changes as objects in the scene move, making fisheye video difficult to interpret. Current still fisheye image correction methods are either limited to small angles of view, or are strongly content dependent, and therefore unsuitable for processing video streams. We present an efficient and robust scheme for fisheye video correction, which minimizes time-varying distortion and preserves salient content in a coherent manner. Our optimization process is controlled by user annotation, and takes into account a wide set of measures addressing different aspects of natural scene appearance. Each is represented as a quadratic term in an energy minimization problem, leading to a closed-form solution via a sparse linear system. We illustrate our method with a range of examples, demonstrating coherent natural-looking video output. The visual quality of individual frames is comparable to those produced by state-of-the-art methods for fisheye still photograph correction.",
"title": ""
},
{
"docid": "e20f6ef6524a422c80544eaf590e326d",
"text": "Computing the semantic similarity/relatedness between terms is an important research area for several disciplines, including artificial intelligence, cognitive science, linguistics, psychology, biomedicine and information retrieval. These measures exploit knowledge bases to express the semantics of concepts. Some approaches, such as the information theoretical approaches, rely on knowledge structure, while others, such as the gloss-based approaches, use knowledge content. Firstly, based on structure, we propose a new intrinsic Information Content (IC) computing method which is based on the quantification of the subgraph formed by the ancestors of the target concept. Taxonomic measures including the IC-based ones consume the topological parameters that must be extracted from taxonomies considered as Directed Acyclic Graphs (DAGs). Accordingly, we propose a routine of graph algorithms that are able to provide some basic parameters, such as depth, ancestors, descendents, Lowest Common Subsumer (LCS). The IC-computing method is assessed using several knowledge structures which are: the noun and verb WordNet “is a” taxonomies, Wikipedia Category Graph (WCG), and MeSH taxonomy. We also propose an aggregation schema that exploits the WordNet “is a” taxonomy and WCG in a complementary way through the IC-based measures to improve coverage capacity. Secondly, taking content into consideration, we propose a gloss-based semantic similarity measure that operates based on the noun weighting mechanism using our IC-computing method, as well as on the WordNet, Wiktionary and Wikipedia resources. Further evaluation is performed on various items, including nouns, verbs, multiword expressions and biomedical datasets, using well-recognized benchmarks. The results indicate an improvement in terms of similarity and relatedness assessment accuracy.",
"title": ""
},
{
"docid": "c49b4bf87335ad6620de2c59761f240c",
"text": "Due to the continually increasing levels of penetration of distributed generation the correct operation of Loss-Of-Mains protection is of prime importance. Many UK utilities report persistent problems relating to incorrect operation of the ROCOF and Vector Shift methods which are currently the most commonly applied methods for Loss-Of-Mains (LOM) detection. The main focus of this paper is to demonstrate the problems associated with these methods through detailed dynamic modelling of existing available relays. The ability to investigate the transient response of the LOM protection to various system events highlights the main weaknesses of the existing methods, and more importantly, provides the means of quantitative analysis and better understanding of these weaknesses. Consequently, the dynamic analysis of the protective algorithms supports the identification of best compromise settings and gives insight to the future areas requiring improvement.",
"title": ""
},
{
"docid": "6b37baf34546ac4a630aa435af4a2284",
"text": "The adoption of smartphones, devices transforming from simple communication devices to ‘smart’ and multipurpose devices, is constantly increasing. Amongst the main reasons are their small size, their enhanced functionality and their ability to host many useful and attractive applications. However, this vast use of mobile platforms makes them an attractive target for conducting privacy and security attacks. This scenario increases the risk introduced by these attacks for personal mobile devices, given that the use of smartphones as business tools may extend the perimeter of an organization's IT infrastructure. Furthermore, smartphone platforms provide application developers with rich capabilities, which can be used to compromise the security and privacy of the device holder and her environment (private and/or organizational). This paper examines the feasibility of malware development in smartphone platforms by average programmers that have access to the official tools and programming libraries provided by smartphone platforms. Towards this direction in this paper we initially propose specific evaluation criteria assessing the security level of the well-known smartphone platforms (i.e. Android, BlackBerry, Apple iOS, Symbian, Windows Mobile), in terms of the development of malware. In the sequel, we provide a comparative analysis, based on a proof of concept study, in which the implementation and distribution of a location tracking malware is attempted. Our study has proven that, under circumstances, all smartphone platforms could be used by average developers as privacy attack vectors, harvesting data from the device without the users knowledge and consent.",
"title": ""
},
{
"docid": "fb6377f3e1d0c9a98017c507eb703365",
"text": "Classification methods from statistical pattern recognition, neural nets, and machine learning were applied to four real-world data sets. Each of these data sets has been previously analyzed and reported in the statistical, medical, or machine learning literature. The data sets are characterized by statisucal uncertainty; there is no completely accurate solution to these problems. Training and testing or resampling techniques are used to estimate the true error rates of the classification methods. Detailed attention is given to the analysis of performance of the neural nets using back propagation. For these problems, which have relatively few hypotheses and features, the machine learning procedures for rule induction or tree induction clearly performed best.",
"title": ""
},
{
"docid": "4d16c9c38837adc8f3b36031871f1048",
"text": "We present a frequency modulated continuous wave (FMCW) multiple input multiple output (MIMO) radar demonstrator system operating in the W-band at frequencies around 100 GHz. It consists of a two dimensional sparse array together with hardware for signal generation and image reconstruction that we will describe in more detail. The geometry of the sparse array was designed with the help of simulations to the aim of imaging at distances of just a few up to more than 150 meters. The FMCW principle is used to extract range information. To obtain information in both cross-range directions a back-propagation algorithm is used and further explained in this paper. Finally, we will present first measurements and explain the calibration process.",
"title": ""
},
{
"docid": "1beba2c797cb5a4b72b54fd71265a25f",
"text": "Modularity is widely used to effectively measure the strength of the community structure found by community detection algorithms. However, modularity maximization suffers from two opposite yet coexisting problems: in some cases, it tends to favor small communities over large ones while in others, large communities over small ones. The latter tendency is known in the literature as the resolution limit problem. To address them, we propose to modify modularity by subtracting from it the fraction of edges connecting nodes of different communities and by including community density into modularity. We refer to the modified metric as Modularity Density and we demonstrate that it indeed resolves both problems mentioned above. We describe the motivation for introducing this metric by using intuitively clear and simple examples. We also prove that this new metric solves the resolution limit problem. Finally, we discuss the results of applying this metric, modularity, and several other popular community quality metrics to two real dynamic networks. The results imply that Modularity Density is consistent with all the community quality measurements but not modularity, which suggests that Modularity Density is an improved measurement of the community quality compared to modularity.",
"title": ""
},
{
"docid": "b3fc899c49ceb699f62b43bb0808a1b2",
"text": "Social network users publicly share a wide variety of information with their followers and the general public ranging from their opinions, sentiments and personal life activities. There has already been significant advance in analyzing the shared information from both micro (individual user) and macro (community level) perspectives, giving access to actionable insight about user and community behaviors. The identification of personal life events from user’s profiles is a challenging yet important task, which if done appropriately, would facilitate more accurate identification of users’ preferences, interests and attitudes. For instance, a user who has just broken his phone, is likely to be upset and also be looking to purchase a new phone. While there is work that identifies tweets that include mentions of personal life events, our work in this paper goes beyond the state of the art by predicting a future personal life event that a user will be posting about on Twitter solely based on the past tweets. We propose two architectures based on recurrent neural networks, namely the classification and generation architectures, that determine the future personal life event of a user. We evaluate our work based on a gold standard Twitter life event dataset and compare our work with the state of the art baseline technique for life event detection. While presenting performance measures, we also discuss the limitations of our work in this paper.",
"title": ""
},
{
"docid": "2472a20493c3319cdc87057cc3d70278",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
{
"docid": "81b6059f24c827c271247b07f38f86d5",
"text": "We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter, respectively. The transmitter architecture takes advantage of the wideband frequency modulation capability of the all-digital phase-locked loop with built-in automatic compensation to ensure modulation accuracy. The receiver employs a discrete-time architecture in which the RF signal is directly sampled and processed using analog and digital signal processing techniques. The complete chip also integrates power management functions and a digital baseband processor. Application of the presented ideas has resulted in significant area and power savings while producing structures that are amenable to migration to more advanced deep-submicron processes, as they become available. The entire IC occupies 10 mm/sup 2/ and consumes 28 mA during transmit and 41 mA during receive at 1.5-V supply.",
"title": ""
},
{
"docid": "f7252ab3871dfae3860f575515867db6",
"text": "This review paper deals with IoT that can be used to improve cultivation of food crops, as lots of research work is going on to monitor the effective food crop cycle, since from the start to till harvesting the famers are facing very difficult for better yielding of food crops. Although few initiatives have also been taken by the Indian Government for providing online and mobile messaging services to farmers related to agricultural queries and agro vendor’s information to farmers even such information’s are not enough for farmer so still lot of research work need to be carried out on current agricultural approaches so that continuous sensing and monitoring of crops by convergence of sensors with IoT and making farmers to aware about crops growth, harvest time periodically and in turn making high productivity of crops and also ensuring correct delivery of products to end consumers at right place and right time.",
"title": ""
},
{
"docid": "5da2747dd2c3fe5263d8bfba6e23de1f",
"text": "We propose to transfer the content of a text written in a certain style to an alternative text written in a different style, while maintaining as much as possible of the original meaning. Our work is inspired by recent progress of applying style transfer to images, as well as attempts to replicate the results to text. Our model is a deep neural network based on Generative Adversarial Networks (GAN). Our novelty is replacing the discrete next-word prediction with prediction in the embedding space, which provides two benefits (1) train the GAN without using gradient approximations and (2) provide semantically related results even for failure cases.",
"title": ""
},
{
"docid": "a496f2683f49573132e5b57f7e3accf0",
"text": "Automatically generated databases of English paraphrases have the drawback that they return a single list of paraphrases for an input word or phrase. This means that all senses of polysemous words are grouped together, unlike WordNet which partitions different senses into separate synsets. We present a new method for clustering paraphrases by word sense, and apply it to the Paraphrase Database (PPDB). We investigate the performance of hierarchical and spectral clustering algorithms, and systematically explore different ways of defining the similarity matrix that they use as input. Our method produces sense clusters that are qualitatively and quantitatively good, and that represent a substantial improvement to the PPDB resource.",
"title": ""
},
{
"docid": "b96b422be2b358d92347659d96a68da7",
"text": "The bipedal spring-loaded inverted pendulum (SLIP) model captures characteristic properties of human locomotion, and it is therefore often used to study human-like walking. The extended variable spring-loaded inverted pendulum (V-SLIP) model provides a control input for gait stabilization and shows robust and energy-efficient walking patterns. This work presents a control strategy that maps the conceptual V-SLIP model on a realistic model of a bipedal robot. This walker implements the variable leg compliance by means of variable stiffness actuators in the knees. The proposed controller consists of multiple levels, each level controlling the robot at a different level of abstraction. This allows the controller to control a simple dynamic structure at the top level and control the specific degrees of freedom of the robot at a lower level. The proposed controller is validated by both numeric simulations and preliminary experimental tests.",
"title": ""
},
{
"docid": "182c83e136dcc7f41c2d7a7a30321440",
"text": "Behavioral logs are traces of human behavior seen through the lenses of sensors that capture and record user activity. They include behavior ranging from low-level keystrokes to rich audio and video recordings. Traces of behavior have been gathered in psychology studies since the 1930s (Skinner, 1938 ), and with the advent of computerbased applications it became common practice to capture a variety of interaction behaviors and save them to log fi les for later analysis. In recent years, the rise of centralized, web-based computing has made it possible to capture human interactions with web services on a scale previously unimaginable. Largescale log data has enabled HCI researchers to observe how information diffuses through social networks in near real-time during crisis situations (Starbird & Palen, 2010 ), characterize how people revisit web pages over time (Adar, Teevan, & Dumais, 2008 ), and compare how different interfaces for supporting email organization infl uence initial uptake and sustained use (Dumais, Cutrell, Cadiz, Jancke, Sarin, & Robbins, 2003 ; Rodden & Leggett, 2010 ). In this chapter we provide an overview of behavioral log use in HCI. We highlight what can be learned from logs that capture people’s interactions with existing computer systems and from experiments that compare new, alternative systems. We describe how to design and analyze web experiments, and how to collect, clean and use log data responsibly. The goal of this chapter is to enable the reader to design log studies and to understand results from log studies that they read about. Understanding User Behavior Through Log Data and Analysis",
"title": ""
}
] |
scidocsrr
|
2c3983e257127d75d452640b47eaeb3e
|
A simple gate drive for SiC MOSFET with switching transient improvement
|
[
{
"docid": "01b35a491b36f9c90f37237ef3975e33",
"text": "Wide bandgap semiconductors show superior material properties enabling potential power device operation at higher temperatures, voltages, and switching speeds than current Si technology. As a result, a new generation of power devices is being developed for power converter applications in which traditional Si power devices show limited operation. The use of these new power semiconductor devices will allow both an important improvement in the performance of existing power converters and the development of new power converters, accounting for an increase in the efficiency of the electric energy transformations and a more rational use of the electric energy. At present, SiC and GaN are the more promising semiconductor materials for these new power devices as a consequence of their outstanding properties, commercial availability of starting material, and maturity of their technological processes. This paper presents a review of recent progresses in the development of SiC- and GaN-based power semiconductor devices together with an overall view of the state of the art of this new device generation.",
"title": ""
},
{
"docid": "643d75042a38c24b0e4130cb246fc543",
"text": "Silicon carbide (SiC) switching power devices (MOSFETs, JFETs) of 1200 V rating are now commercially available, and in conjunction with SiC diodes, they offer substantially reduced switching losses relative to silicon (Si) insulated gate bipolar transistors (IGBTs) paired with fast-recovery diodes. Low-voltage industrial variable-speed drives are a key application for 1200 V devices, and there is great interest in the replacement of the Si IGBTs and diodes that presently dominate in this application with SiC-based devices. However, much of the performance benefit of SiC-based devices is due to their increased switching speeds ( di/dt, dv/ dt), which raises the issues of increased electromagnetic interference (EMI) generation and detrimental effects on the reliability of inverter-fed electrical machines. In this paper, the tradeoff between switching losses and the high-frequency spectral amplitude of the device switching waveforms is quantified experimentally for all-Si, Si-SiC, and all-SiC device combinations. While exploiting the full switching-speed capability of SiC-based devices results in significantly increased EMI generation, the all-SiC combination provides a 70% reduction in switching losses relative to all-Si when operated at comparable dv/dt. It is also shown that the loss-EMI tradeoff obtained with the Si-SiC device combination can be significantly improved by driving the IGBT with a modified gate voltage profile.",
"title": ""
},
{
"docid": "9238182fa1a61a264558a23ad1a798b1",
"text": "Silicon carbide (SiC) power transistors have started gaining significant importance in various application areas of power electronics. During the last decade, SiC power transistors were counted not only as a potential, but also more importantly as an alternative to silicon counterparts in applications where high efficiency, high switching frequencies, and operation at elevated temperatures are targeted. Various SiC device designs have been proposed and excessive investigations in terms of simulation and experimental studies have shown their advantageous performance compared to silicon technology. On a system-level, however, the design of gate and base drivers for SiC power transistors is very challenging. In particular, a sophisticated driver design is not only associated with properly switching the transistor and decreasing the switching power losses, but also it must incorporate protection features, as well as comply with the electromagnetic compatibility. This paper shows an overview of the gate and base drivers for SiC power transistors which have been proposed by several highly qualified scientists. In particular, the basic operating principle of each driver along with their applicability and drawbacks are presented. For this overview, the three most successful SiC power transistors are considered: junction-field-effect transistors, bipolar-junction transistors, and metal-oxide-semiconductor field-effect transistors. Last but not least, future challenges on gate and base drivers design are also presented.",
"title": ""
}
] |
[
{
"docid": "c0ddc4b83145a1ee7b252d65066b8969",
"text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interactive nature between embedding learning and logical inference. And they focused only on hard rules, which always hold with no exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are going to be predicted iteratively, and 3) soft rules with various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and integrates such newly labeled triples to update the embedding model. Through this iterative procedure, knowledge embodied in logic rules may be better transferred into the learned embeddings. We evaluate RUGE in link prediction on Freebase and YAGO. Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainties, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from https://github.com/iieir-km/RUGE.",
"title": ""
},
{
"docid": "7113e007073184671d0bf5c9bdda1f5c",
"text": "It is widely accepted that mineral flotation is a very challenging control problem due to chaotic nature of process. This paper introduces a novel approach of combining multi-camera system and expert controllers to improve flotation performance. The system has been installed into the zinc circuit of Pyhäsalmi Mine (Finland). Long-term data analysis in fact shows that the new approach has improved considerably the recovery of the zinc circuit, resulting in a substantial increase in the mill’s annual profit. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "60922247ab6ec494528d3a03c0909231",
"text": "This paper proposes a new \"zone controlled induction heating\" (ZCIH) system. The ZCIH system consists of two or more sets of a high-frequency inverter and a split work coil, which adjusts the coil current amplitude in each zone independently. The ZCIH system has capability of controlling the exothermic distribution on the work piece to avoid the strain caused by a thermal expansion. As a result, the ZCIH system enables a rapid heating performance as well as an temperature uniformity. This paper proposes current phase control making the coil current in phase with each other, to adjust the coil current amplitude even when a mutual inductance exists between the coils. This paper presents operating principle, theoretical analysis, and experimental results obtained from a laboratory setup and a six-zone prototype for a semiconductor processing.",
"title": ""
},
{
"docid": "2c39430076bf63a05cde06fe57a61ff4",
"text": "With the advent of IoT based technologies; the overall industrial sector is amenable to undergo a fundamental and essential change alike to the industrial revolution. Online Monitoring solutions of environmental polluting parameter using Internet Of Things (IoT) techniques help us to gather the parameter values such as pH, temperature, humidity and concentration of carbon monoxide gas, etc. Using sensors and enables to have a keen control on the environmental pollution caused by the industries. This paper introduces a LabVIEW based online pollution monitoring of industries for the control over pollution caused by untreated disposal of waste. This paper proposes the use of an AT-mega 2560 Arduino board which collects the temperature and humidity parameter from the DHT-11 sensor, carbon dioxide concentration using MG-811 and update it into the online database using MYSQL. For monitoring and controlling, a website is designed and hosted which will give a real essence of IoT. To increase the reliability and flexibility an android application is also developed.",
"title": ""
},
{
"docid": "a38105bda456a970b75422df194ecd68",
"text": "Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. 16 observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s(2) peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.",
"title": ""
},
{
"docid": "64cee7715639e354e3fb0a367e2c57fc",
"text": "Cloud computing offers applications and infrastructure at low prices and opens the possibility of criminal cases. The increasing criminal cases in the cloud environment have made investigators to use latest investigative methods for forensic process. Similarly, the attackers discover new ways to hide the sources of evidence. This may hinder the investigation process and is called anti-forensics. Anti-forensic attack compromises the trust and availability of evidence. To defend such kind of attacks against forensic tools, anti-forensic techniques in cloud environment have to be researched exhaustively. This paper explores the anti-forensic techniques in the cloud environment and proposes a framework for detecting the anti-forensic attack against cloud forensic process. The framework provides an effective model for forensic investigation of anti-forensic attacks in cloud.",
"title": ""
},
{
"docid": "c66069fc52e1d6a9ab38f699b6a482c6",
"text": "An understanding of the age of the Acheulian and the transition to the Middle Stone Age in southern Africa has been hampered by a lack of reliable dates for key sequences in the region. A number of researchers have hypothesised that the Acheulian first occurred simultaneously in southern and eastern Africa at around 1.7-1.6 Ma. A chronological evaluation of the southern African sites suggests that there is currently little firm evidence for the Acheulian occurring before 1.4 Ma in southern Africa. Many researchers have also suggested the occurrence of a transitional industry, the Fauresmith, covering the transition from the Early to Middle Stone Age, but again, the Fauresmith has been poorly defined, documented, and dated. Despite the occurrence of large cutting tools in these Fauresmith assemblages, they appear to include all the technological components characteristic of the MSA. New data from stratified Fauresmith bearing sites in southern Africa suggest this transitional industry maybe as old as 511-435 ka and should represent the beginning of the MSA as a broad entity rather than the terminal phase of the Acheulian. The MSA in this form is a technology associated with archaic H. sapiens and early modern humans in Africa with a trend of greater complexity through time.",
"title": ""
},
{
"docid": "679759d8f8e4c4ef5a2bb1356a61d7f5",
"text": "This paper describes a method of implementing two factor authentication using mobile phones. The proposed method guarantees that authenticating to services, such as online banking or ATM machines, is done in a very secure manner. The proposed system involves using a mobile phone as a software token for One Time Password generation. The generated One Time Password is valid for only a short user-defined period of time and is generated by factors that are unique to both, the user and the mobile device itself. Additionally, an SMS-based mechanism is implemented as both a backup mechanism for retrieving the password and as a possible mean of synchronization. The proposed method has been implemented and tested. Initial results show the success of the proposed method.",
"title": ""
},
{
"docid": "d5142a032ebff4b256beb566273cc41a",
"text": "To understand the structural dynamics of a large-scale social, biological or technological network, it may be useful to discover behavioral roles representing the main connectivity patterns present over time. In this paper, we propose a scalable non-parametric approach to automatically learn the structural dynamics of the network and individual nodes. Roles may represent structural or behavioral patterns such as the center of a star, peripheral nodes, or bridge nodes that connect different communities. Our novel approach learns the appropriate structural role dynamics for any arbitrary network and tracks the changes over time. In particular, we uncover the specific global network dynamics and the local node dynamics of a technological, communication, and social network. We identify interesting node and network patterns such as stationary and non-stationary roles, spikes/steps in role-memberships (perhaps indicating anomalies), increasing/decreasing role trends, among many others. Our results indicate that the nodes in each of these networks have distinct connectivity patterns that are non-stationary and evolve considerably over time. Overall, the experiments demonstrate the effectiveness of our approach for fast mining and tracking of the dynamics in large networks. Furthermore, the dynamic structural representation provides a basis for building more sophisticated models and tools that are fast for exploring large dynamic networks.",
"title": ""
},
{
"docid": "e444a7a0570d96d589e4238dd4458d7a",
"text": "Flood disaster is considered a norm for Malaysians since Malaysia is located near the Equator. Flood disaster usually happens due to improper irrigation method in a housing area or the sudden increase of water volume in a river. Flood disaster often causes lost of property, damages and life. Since this disaster is considered dangerous to human life, an efficient countermeasure or alert system must be implemented in order to notify people in the early stage so that safety precautions can be taken to avoid any mishaps. This paper presents a remote water level alarm system developed by applying liquid sensors and GSM technology. System focuses on monitoring water level remotely and utilizes Global System of Mobile Connections (GSM) and Short Message Service (SMS) to convey data from sensors to the respective users through their mobile phone. The hardware of the system includes Micro Controller Unit (MCU) PIC18F452, three (3) liquid sensors, Inverter and Easygate GSM Module. Software used for the system is C compiler thru (ATtention) AT commands. It is hoped that this project would be beneficial to the community and would act as a precautionary measure in case of flood disaster at any flood prone area. By having early detection, users could take swift action such as evacuation so that cases of loss of lives could be minimized.",
"title": ""
},
{
"docid": "59b7afc5c2af7de75248c90fdf5c9cd3",
"text": "Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.",
"title": ""
},
{
"docid": "9246700eca378427ea2ea3c20a4377b3",
"text": "This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number iterations which depends only poly-logarithmically on dimension (i.e., it is almost “dimension-free”). The convergence rate of this procedure matches the wellknown convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.",
"title": ""
},
{
"docid": "be7cc41f9e8d3c9e08c5c5ff1ea79f59",
"text": "A person’s emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: “The face is the portrait of the mind; the eyes, its informers.”. This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and",
"title": ""
},
{
"docid": "de86da441c52644d255836040e1aedf0",
"text": "This paper outlines the design of a wide flare angle axially corrugated conical horn for a classical offset dual-reflector antenna system. The design minimizes the input reflection coefficient of the horn and maximizes the antenna efficiency of the antenna system by simultaneously limiting the sidelobe and cross-polarization levels to the system specifications. The effects of the number of corrugations in the horn and the number of parameters used in the optimization are also investigated.",
"title": ""
},
{
"docid": "b823d427f74963372fc7015a047cb90e",
"text": "Most of the previous sparse coding (SC) based super resolution (SR) methods partition the image into overlapped patches, and process each patch separately. These methods, however, ignore the consistency of pixels in overlapped patches, which is a strong constraint for image reconstruction. In this paper, we propose a convolutional sparse coding (CSC) based SR (CSC-SR) method to address the consistency issue. Our CSC-SR involves three groups of parameters to be learned: (i) a set of filters to decompose the low resolution (LR) image into LR sparse feature maps, (ii) a mapping function to predict the high resolution (HR) feature maps from the LR ones, and (iii) a set of filters to reconstruct the HR images from the predicted HR feature maps via simple convolution operations. By working directly on the whole image, the proposed CSC-SR algorithm does not need to divide the image into overlapped patches, and can exploit the image global correlation to produce more robust reconstruction of image local structures. Experimental results clearly validate the advantages of CSC over patch based SC in SR application. Compared with state-of-the-art SR methods, the proposed CSC-SR method achieves highly competitive PSNR results, while demonstrating better edge and texture preservation performance.",
"title": ""
},
{
"docid": "876d1aa1b8de9aa147c8d3a4df68dc07",
"text": "In this paper, we formulate the stereo matching problem as a Markov network and solve it using Bayesian belief propagation. The stereo Markov network consists of three coupled Markov random fields that model the following: a smooth field for depth/disparity, a line process for depth discontinuity, and a binary process for occlusion. After eliminating the line process and the binary process by introducing two robust functions, we apply the belief propagation algorithm to obtain the maximum a posteriori (MAP) estimation in the Markov network. Other low-level visual cues (e.g., image segmentation) can also be easily incorporated in our stereo model to obtain better stereo results. Experiments demonstrate that our methods are comparable to the state-of-the-art stereo algorithms for many test cases.",
"title": ""
},
{
"docid": "4583555a91527244488b9658288f4dc2",
"text": "The use of space-division multiple access (SDMA) in the downlink of a multiuser multiple-input, multiple-output (MIMO) wireless communications network can provide a substantial gain in system throughput. The challenge in such multiuser systems is designing transmit vectors while considering the co-channel interference of other users. Typical optimization problems of interest include the capacity problem - maximizing the sum information rate subject to a power constraint-or the power control problem-minimizing transmitted power such that a certain quality-of-service metric for each user is met. Neither of these problems possess closed-form solutions for the general multiuser MIMO channel, but the imposition of certain constraints can lead to closed-form solutions. This paper presents two such constrained solutions. The first, referred to as \"block-diagonalization,\" is a generalization of channel inversion when there are multiple antennas at each receiver. It is easily adapted to optimize for either maximum transmission rate or minimum power and approaches the optimal solution at high SNR. The second, known as \"successive optimization,\" is an alternative method for solving the power minimization problem one user at a time, and it yields superior results in some (e.g., low SNR) situations. Both of these algorithms are limited to cases where the transmitter has more antennas than all receive antennas combined. In order to accommodate more general scenarios, we also propose a framework for coordinated transmitter-receiver processing that generalizes the two algorithms to cases involving more receive than transmit antennas. While the proposed algorithms are suboptimal, they lead to simpler transmitter and receiver structures and allow for a reasonable tradeoff between performance and complexity.",
"title": ""
},
{
"docid": "b68336c869207720d6ab1880744b70be",
"text": "Particle Swarm Optimization (PSO) algorithms represent a new approach for optimization. In this paper image enhancement is considered as an optimization problem and PSO is used to solve it. Image enhancement is mainly done by maximizing the information content of the enhanced image with intensity transformation function. In the present work a parameterized transformation function is used, which uses local and global information of the image. Here an objective criterion for measuring image enhancement is used which considers entropy and edge information of the image. We tried to achieve the best enhanced image according to the objective criterion by optimizing the parameters used in the transformation function with the help of PSO. Results are compared with other enhancement techniques, viz. histogram equalization, contrast stretching and genetic algorithm based image enhancement.",
"title": ""
},
{
"docid": "5ae1191a27958704ab5f33749c6b30b5",
"text": "Much of Bluetooth’s data remains confidential in practice due to the difficulty of eavesdropping it. We present mechanisms for doing so, therefore eliminating the data confidentiality properties of the protocol. As an additional security measure, devices often operate in “undiscoverable mode” in order to hide their identity and provide access control. We show how the full MAC address of such master devices can be obtained, therefore bypassing the access control of this feature. Our work results in the first open-source Bluetooth sniffer.",
"title": ""
},
{
"docid": "4912a90f30127d2e70a2bbcb3733d524",
"text": "To better understand procrastination, researchers have sought to identify cognitive personality factors associated with it. The study reported here attempts to extend previous research by exploring the application of explanatory style to academic procrastination. Findings of the study are discussed from the perspective of employers of this new generation.",
"title": ""
}
] |
scidocsrr
|
c4dafe14765111a056a4755e0bbfb01f
|
Budget Constrained Bidding by Model-free Reinforcement Learning in Display Advertising
|
[
{
"docid": "88804f285f4d608b81a1cd741dbf2b7e",
"text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"title": ""
},
{
"docid": "26cfea93f837197e3244f771526d2fe7",
"text": "The payof matrix of the numberdistance game is as folow. We know that each player is invariant to the diferent actions in her support. First we guessed that al of the actions are in supports for both players. Let x,y,z be the probability that the first players plays 1,0,2 respectively and Let p,q,r be the probability that the second players plays 1,0,2 respectively. For the first player we have: 0*p+1*q+3*r = 1*p+0*q+2*r = 3*p+2*q+0*r p+q+r=1",
"title": ""
},
{
"docid": "2438479795a9673c36138212b61c6d88",
"text": "Motivated by the emergence of auction-based marketplaces for display ads such as the Right Media Exchange, we study the design of a bidding agent that implements a display advertising campaign by bidding in such a marketplace. The bidding agent must acquire a given number of impressions with a given target spend, when the highest external bid in the marketplace is drawn from an unknown distribution P. The quantity and spend constraints arise from the fact that display ads are usually sold on a CPM basis. We consider both the full information setting, where the winning price in each auction is announced publicly, and the partially observable setting where only the winner obtains information about the distribution; these differ in the penalty incurred by the agent while attempting to learn the distribution. We provide algorithms for both settings, and prove performance guarantees using bounds on uniform closeness from statistics, and techniques from online learning. We experimentally evaluate these algorithms: both algorithms perform very well with respect to both target quantity and spend; further, our algorithm for the partially observable case performs nearly as well as that for the fully observable setting despite the higher penalty incurred during learning.",
"title": ""
},
{
"docid": "bf445955186e2f69f4ef182850090ffc",
"text": "The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods.",
"title": ""
}
] |
[
{
"docid": "4dd6de0fbc55b369bd0b1d069e41fdca",
"text": "A typical pipeline for Zero-Shot Learning (ZSL) is to integrate the visual features and the class semantic descriptors into a multimodal framework with a linear or bilinear model. However, the visual features and the class semantic descriptors locate in different structural spaces, a linear or bilinear model can not capture the semantic interactions between different modalities well. In this letter, we propose a nonlinear approach to impose ZSL as a multi-class classification problem via a Semantic Softmax Loss by embedding the class semantic descriptors into the softmax layer of multi-class classification network. To narrow the structural differences between the visual features and semantic descriptors, we further use an L2 normalization constraint to the differences between the visual features and visual prototypes reconstructed with the semantic descriptors. The results on three benchmark datasets, i.e., AwA, CUB and SUN demonstrate the proposed approach can boost the performances steadily and achieve the state-of-the-art performance for both zero-shot classification and zero-shot retrieval.",
"title": ""
},
{
"docid": "a0b9e873d406894eb1b411e808f0c3e6",
"text": "Pushing accuracy and reliability of radar systems to higher levels is a requirement to realize autonomous driving. To maximize its performance, the millimeter-wave radar has to be designed in consideration of its surroundings such as emblems, bumpers and so on, because the electric-field distortion will degrade the performance. We propose electro-optic (EO) measurement system to visualize amplitude and phase distribution of millimeter waves, aiming at the evaluation of the disturbance of car components with the radar module equipped inside a vehicle. Visualization of 76-GHz millimeter waves passing through plastic plates is presented to demonstrate our system's capability of diagnosing a local cause of the field disturbance.",
"title": ""
},
{
"docid": "0ee4a8771a7ab51a4e58afd249e0fbe4",
"text": "Teeth exhibit limited repair in response to damage, and dental pulp stem cells probably provide a source of cells to replace those damaged and to facilitate repair. Stem cells in other parts of the tooth, such as the periodontal ligament and growing roots, play more dynamic roles in tooth function and development. Dental stem cells can be obtained with ease, making them an attractive source of autologous stem cells for use in restoring vital pulp tissue removed because of infection, in regeneration of periodontal ligament lost in periodontal disease, and for generation of complete or partial tooth structures to form biological implants. As dental stem cells share properties with mesenchymal stem cells, there is also considerable interest in their wider potential to treat disorders involving mesenchymal (or indeed non-mesenchymal) cell derivatives, such as in Parkinson's disease.",
"title": ""
},
{
"docid": "f491b77b8edb138ef3dbe96d74fbb34a",
"text": "This paper presents two approaches to modelling of mobile robot dynamics. First approach is based on physical modelling and second approach is based on experimental identification of mobile robot dynamics features. Model of mobile robot dynamics can then be used to improve the navigational system, especially path planing and localization modules. Localization module estimates mobile robot pose using its kinematic odometry model for pose prediction and additional sensor measurements for pose correction. Kinematic odometry models are simple, valid if mobile robot is travelling with low velocity, low acceleration and light load. Disadvantage is that they don’t take any dynamic constraints into account. This leads to errors in pose prediction, especially when significant control signal (translational and rotational velocity reference) changes occur. Problem lies in the fact that mobile robot can’t immediately change its current velocity to the desired value and mostly there exists a communication delay between the navigation computer and mobile robot micro-controller. Errors in predicted pose cause additional computations in path planning and localization modules. In order to reduce such pose prediction errors and considering that mobile robots are designed to travel at higher velocities and perform heavy duty work, mobile robot drive dynamics can be modelled and included as part of the navigational system. Proposed two modelling approaches are described and first results using a Pioneer 3DX mobile robot are presented. They are also compared regarding to complexity, accuracy and suitability of implementation as part of the mobile robot navigational system.",
"title": ""
},
{
"docid": "7591f47d69c91c4da90fc04949ec21c7",
"text": "This project uses a non-invasive method for measuring the blood glucose concentration levels. By implementing two infrared light with different wavelength; 940nm and 950nm based on the use of light emitting diodes and measure transmittance through solution of distilled water and d-glucose of concentration from 0mg/dL to 200mg/dL by using a 1000nm photodiode. It is observed that the output voltage from the photodiode increased proportionally to the increased of concentration levels. The relation observed was linear. Nine subjects with the same age but different body weight have been used to observe the glucose level during fasting and non-fasting. During fasting the voltage is about 0.13096V to 0.236V and during non-fasting the voltage range is about 0.12V to 0.256V. This method of measuring blood glucose level may become a preferably choice for diabetics because of the non-invasive and may extend to the general public. For having a large majority people able to monitor their blood glucose levels, it may prevent hypoglycemia, hyperglycemia and perhaps the onset of diabetes.",
"title": ""
},
{
"docid": "63de624a33f7c9362b477aabd9faac51",
"text": "24 GHz circularly polarized Doppler front-end with a single antenna is developed. The radar system is composed of 24 GHz circularly polarized Doppler radar module, signal conditioning block, DAQ unit, and signal processing program. 24 GHz Doppler radar receiver front-end IC which is comprised of 3-stage LNA, single-ended mixer, and Lange coupler is fabricated with commercial InGaP/GaAs HBT technology. To reduce the chip size and suppress self-mixing, single-ended mixer which uses Tx leakage as a LO signal of the mixer is used. The operation of the developed radar front-end is demonstrated by measuring human vital signal. Compact size and high sensitivity can be achieved at the same time with the circularly polarized Doppler radar with a single antenna.",
"title": ""
},
{
"docid": "1527c70d0b78a3d2aa6886282425c744",
"text": "Spatial and temporal contextual information plays a key role for analyzing user behaviors, and is helpful for predicting where he or she will go next. With the growing ability of collecting information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. Recurrent Neural Networks (RNN) model shows promising performance comparing with PFMC and TF, but all these methods have problem in modeling continuous time interval and geographical distance. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., Global Terrorism Database (GTD) and Gowalla dataset.",
"title": ""
},
{
"docid": "4eb1e28d62af4a47a2e8dc795b89cc09",
"text": "This paper describes a new computational finance approach. This approach combines pattern recognition techniques with an evolutionary computation kernel applied to financial markets time series in order to optimize trading strategies. Moreover, for pattern matching a template-based approach is used in order to describe the desired trading patterns. The parameters for the pattern templates, as well as, for the decision making rules are optimized using a genetic algorithm kernel. The approach was tested considering actual data series and presents a robust profitable trading strategy which clearly beats the market, S&P 500 index, reducing the investment risk significantly.",
"title": ""
},
{
"docid": "4ebf3bd40a878c4df1871ee2ff1d55d3",
"text": "Conversations are the lifeblood of collaborative communities. Social media like microblogging tool Twitter have great potential for supporting these conversations. However, just studying the role of these media from a tool perspective is not sufficient. To fully unlock their power, they need to examined from a socio-technical perspective. We introduce a socio-technical context framework which can be used to analyze the role of systems of tools supporting goal-oriented conversations. Central to this framework is the communicative workflow loop, which is grounded in the Language/Action Perspective. We show how socio-technical conversation contexts can be used to match the communicative requirements of collaborative communities with enabling tool functionalities. This social media systems design process is illustrated with a case on Twitter.",
"title": ""
},
{
"docid": "ee278469ad2af2d9e299046cc2901a9a",
"text": "Alterations of the gut microbiota have been associated with stress-related disorders including depression and anxiety and irritable bowel syndrome (IBS). More recently, researchers have started investigating the implication of perturbation of the microbiota composition in neurodevelopmental disorders including autism spectrum disorders and Attention-Deficit Hypersensitivity Disorder (ADHD). In this review we will discuss how the microbiota is established and its functions in maintaining health. We also summarize both pre and post-natal factors that shape the developing neonatal microbiota and how they may impact on health outcomes with relevance to disorders of the central nervous system. Finally, we discuss potential therapeutic approaches based on the manipulation of the gut bacterial composition.",
"title": ""
},
{
"docid": "14d2f63cb324b3013c5fbf138a7f9dff",
"text": "THISARTICLE WILL EXPLORE THE ROLE OF THE LIBRARIAN arid of the service perspective in the digital library environment. The focus of the article will be limited to the topic of librarian/user collaboration where the librarian and user are not co-located. The role of the librarian will be explored as outlined in the literature on digital libraries, some studies will be examined that attempt to put the service perspective in the digital library, survey existing initiatives in providing library services electronically, and outline potential service perspectives for the digital library. INTRODUCTION The digital library offers users the prospect of access to electronic resources at their convenience temporally and spatially. Users do not have to be concerned with the physical library’s hours of operation, and users do not have to go physically to the library to access resources. Much has been written about the digital library. The focus of most studies, papers, and articles has been on the technology or on the types of resources offered. Human interaction in the digital library is discussed far less frequently. One would almost get the impression that the service tradition of the physical library will be unnecessary and redundant in the digital library environment. Bernie Sloan, Office for Planning and Budget, Room 338, 506 S. Wright Street, University of Illinois, Urbana, IL 61801 LIBRARY TRENDS, Vol. 47, No. 1, Summer 1998,pp. 117-143 01998 The Board of’Trustees, University of Illinois 118 I.IBRARY TRENDS/SUMMER 1998 DEFINING LIBRARY-WHERE SERVICE FITI N ? THE DIGITA DOES Defining the digital library is an interesting, but somewhat daunting, task. There is no shortage of proposed definitions. One would think that there would be some commonly accepted and fairly straightforward standard definition, but there does not appear to be. Rather, there are many. And one common thread among all these definitions is a heavy emphasis on rrsourcesand an apparent lack of emphasis on librarians and the services they provide. The Association of Research Libraries (ARL) notes: “There are many definitions of a ‘digital library’. . . .Terms such as ‘electronic library’ and ‘virtual library’ are often used synonymously” (Association of Research Libraries, 1995). The AlU relies on Karen Drabenstott’s (1994) Analytical Reuiai~ojthe Library ofthe Future for its inspiration. In defining the digital library, Drabenstott offers fourteen definitions published between 1987 and 1993. The commonalties of these different definitions are summarized as follows: The digital library is not a single entity. The digital library requires technology to link the resources of many libraries and information services. Transparent to end-users are the linkages between the many digital libraries and information services. Universal access to digital libraries and information services is a goal. Digital libraries are not limited to document surrogates; they extend to digital artifacts that cannot be represented or distributed in printed formats. (p.9) One interesting aspect of Drabenstott’s summary definition is that, while there is a user-orientation stated, as well as references to technology and information resources, there is no reference to the role of the librarian in the digital library. Another report by Saffady (1995) cites thirty definitions of the digital library published between 1991 and 1994. Among the terms Saffady uses in describing these various definitions are: “repositories of.. 
.information assets,” “large information repositories,” “various online databases and.. .information products,” “computer storage devices on which information repositories reside,” “computerized, networked library systems,” accessible through the Internet,” “CD-ROM information products,” “database servers,” “libraries with online catalogs,” and “collections of computer-processible information” (p. 2 2 3 ) . Saffady summarizes these definitions by stating: “Broadly defined, a digital library is a collection of computer-processible information or a repository for such information” (p. 223). He then narrows the definition by noting that “a digital library is a library that maintains all, or a substantial part, of its collection in computer-processible form as an alternative, supplement, or complement to the conventional printed and microform materials that currently domiSLOAN/SERVICE PERSPECTIVES FOR THE DIGITAL LIBRARY I 19 nate library collections” (p. 224). Without exception, each of the definitions Saffady cites focuses on collections, repositories, or information resources. In another paper, Nurnberg, Furata, Leggett, Marshall, and Shipman (1995) ask “Why is a digital library called a library at all?” They state that the traditional physical library can provide a basis for discussing the digital library and arrive at this definition: the traditional library “deals with physical data” while the digital library works “primarily with digital data.” Once again, a definition that is striking in its neglect of service perspectives. In a paper presented at the Digital Libraries ’94 conference, Miksa and Doty (1994) again discuss the digital library as a “collection” or a series of collections. In another paper, Schatz and Chen (1996) state that digital libraries are “network information systems,” accessing resources “from and across large collections.” What do all these definitions of the “digital library” have in common? An emphasis on technology and information resources and a very noticeable lack of discussion of the service aspects of the digital library. Why is it important to take a look at how the digital library is defined? As more definitions of the digital library are published, with an absence of the service perspective and little treatment of the importance of librarian/ user collaboration, we perhaps draw closer to the Redundancy Theory (Hathorn, 1997) in which “the rise of digitized information threatens to make librarians practically obsolete.” People may well begin to believe that, as physical barriers to access to information are reduced through technological means, the services of the librarian are no longer as necessary. HUMAN OF THE DIGITAL ASPECTS IBRARY While considering the future, it sometimes is helpful to examine the past. As such, it might be useful to reflect on Jesse Shera’s oft-quoted definition of a library: “To bring together human beings and recorded knowledge in as fruitful a relationship as is humanly possible” (in Dysart &Jones, 1995, p. 16). Digital library proponents must consider the role of people (i.e., as users and service providers) if the digital library is to be truly beneficial. Technology and information resources on their own cannot make up an effective digital library. While a good deal of the literature on digital libraries emphasizes technology and resources at the expense of the service perspective, a number of authors and researchers have considered human interaction in the digital library environment. 
A number of studies at Lancaster University (Twidale, 1995, 1996; Twidale, Nichols, & Paice, 1996; Crabtree, Twidale, O’Brien, & Nichols, 1997; Nichols, Twidale, & Paice, 1997) have considered the importance of human interaction in the digital library. These studies focus on the social interactions of library users with librarians, librarians with librarians, and users with other users. By studying these collaborations in physical library settings, the authors have drawn some general conclusions that might be applied to digital library design: Collaboration between users, and between users and system personnel, is a significant element of searching in current information systems. The development of electronic libraries threatens existing forms of collaboration but also offers opportunities for new forms of collaboration. The sharing of both the search product and the search process are important for collaborative activities (including the education of searchers). There exist$ great potential for improving search effectiveness through the re-use of previous searches; this is one mechanism for adding value to existing databases. Browsing is not restricted to browsing for inanimate objects; browsing for people is also possible and could be a valuable source ofinformation. Searchers of databases need externalized help to reduce their cognitive load during the search process. This can be provided both by traditional paper-based technology and through computerized systems (Twidale et al., 1996). In a paper presented at the Digital Libraries ’94Conference, Ackerman (1994) stresses that, while the concept of the digital library “includes solving many of the technical and logistical issues in current libraries and information seeking,” it would be a mistake to consider solely the mechanical aspects of the library while ignoring the “useful social interactions in information seeking.” Ackerman outlines four ways in which social interaction can be helpful in the information-seeking process: 1. One may need to consult another person in order to know what to know (help in selecting information). 2. One may need to consult a person to obtain information that is transitory in nature and as such is unindexed (seeking informal information). 3. One may need to consult others for assistance in obtaining/understanding information that is highly contextual in nature rather than merely obtaining the information in a textual format (information seekers often have highly specific needs and interests). 4. Libraries serve important social functions, e.g., students and/or faculty meeting each other in hallways, study areas, etc. (socializing function). Ackerman notes that these points “all argue for the inclusion of some form of social interaction within the digital library. Such interaction should include not only librarians (or some human helper), but other users as well.” In a paper for the Digital Libraries ’96 Conference, Brewer, Ding, Hahn, ",
"title": ""
},
{
"docid": "4d3ba5824551b06c861fc51a6cae41a5",
"text": "This paper shows a gate driver design for 1.7 kV SiC MOSFET module as well a Rogowski-coil based current sensor for effective short circuit protection. The design begins with the power architecture selection for better common-mode noise immunity as the driver is subjected to high dv/dt due to the very high switching speed of the SiC MOSFET modules. The selection of the most appropriate gate driver IC is made to ensure the best performance and full functionalities of the driver, followed by the circuitry designs of paralleled external current booster, Soft Turn-Off, and Miller Clamp. In addition to desaturation, a high bandwidth PCB-based Rogowski current sensor is proposed to serve as a more effective method for the short circuit protection for the high-cost SiC MOSFET modules.",
"title": ""
},
{
"docid": "6e222e4af537a1099fd51a758d5e97b8",
"text": "In this chapter, decision, decision support, decision-making and planning are defined. We describe what planning means, why planning is needed and what are the aims of planning as a process. We describe the phases decision situations typically involve. We describe the different views for studying decision-making, i.e. the descriptive view, which studies decisions as people make them, and normative view, which studies the ways that may help in making better decisions. We present the different dimensions of decision situations (under certainty/under uncertainty, single goal/multiple goals, discrete/continuous, single decision-maker/multiple decision-makers or stakeholders). Finally, we briefly present classes of methods potentially useful for decision support for these situations such as mathematical optimisation, heuristics, multi-criteria decision-making and group decision-making. 1.1 What Is Planning? Decision means choosing from at least two distinct alternatives. Most decisions we face every day may be easy, like picking a meal from a restaurant menu. In most of the decisions, we need to consider several viewpoints (criteria). For instance, in the decision about the meal, it can be considered from the points of view of price, taste, energy content and healthiness. Usually in simple decisions, we balance the different criteria without giving it a conscious thought. Everyday decision-making is a mix of careful judgement and intuitive selection, in other words the slow and fast processes of reasoning (Kahneman 2011). Sometimes the problems that people face in their lives (both professional and private) are so complex or otherwise hard to tackle, however, that some kind of decision support is needed. Decision support means that we formally model the decision. It means that we explicitly account for the multiple criteria in important decisions and systematically explore the effects of different choices on these criteria. It means that the subjective values of decision-makers are made explicit and the evaluation of the possible choices is made transparent (Belton and Stewart 2002). Decision-making, on the other hand, can be defined to include the whole decision process from problem identification to choosing the best alternative © Springer International Publishing Switzerland 2015 A. Kangas et al., Decision Support for Forest Management, Managing Forest Ecosystems 30, DOI 10.1007/978-3-319-23522-6_1 3 (e.g. Kangas 1992). The decision process starts from the discovery of a decision problem, meaning the decision-maker (DM) either realises that he/she has alternative options available and needs to choose the best or realises that the current situation is not satisfactory and some action needs to be taken. In the first case, the DM needs to think of the values based on which to make the choice, and in the latter, the DM needs to define suitable actions based on his/her values. The actual choice is just the last phase of the decision process. Planning, on the other hand, can be defined as the process of thinking about and organising the activities required in order to achieve a desired goal. Planning involves the creation and maintenance of a plan. Thus, planning is very closely related to the decision process, and the plan can be seen as the guide to the final decision. The choice of the best action is not the only decision type. Also ranking or sorting type of problems can be seen as decision problems (Ishizaka and Nemery 2013). 
The sorting problem can be, for instance, such that the alternative options are sorted to groups of “acceptable” and “not acceptable”, for instance, when selecting a candidate to vote for parliament or choosing research projects to be funded. The ranking problem means that the candidates are ranked from best to worst, for instance, when selecting the best candidate for a post in an organisation. Another type of a decision problem is a design problem (Keeney 1992). It means that a creative new alternative option is sought for. In economic theory, it is usually assumed that people act rationally. A rational decision-maker chooses an alternative which in his/her opinion maximises the utility (Etzioni 1986, von Winterfeldt and Edwards 1986). For this, one has to have a perfect knowledge of the consequences of different decision alternatives, the goals and objectives of the decision-maker and the preferences of the decisionmaker among them. There is evidence that people are not necessarily rational (e.g. Simon 1957, Kahneman 2011). Or, even if they would want to make the best choice, they do not necessarily have enough information for being able to behave fully rationally. Therefore, decision-making can be considered from at least two points of view: one can analyse how the decisions should be made in order to obtain best results (prescriptive approach) or one can analyse how people actually do decisions without help (descriptive approach) (e.g. von Winterfeldt and Edwards 1986). The first approach is normative; it aims at designing and promoting methods that can be used to aid people in their decisions. The second approach represents behavioural research, usually based on experiments on how people actually make decisions under different conditions. In the behavioural studies, people have been found to have many different biases (see, e.g., Kahneman et al. 1991). For instance, people tend to remember the results of their past decisions as better than they actually are or to pay attention only to the information that confirms their pre-assumptions, to name but a few. People have also been found to behave more business-like and unethically in decisions when they have been prompted with money just before the decision (e.g. Kouchaki et al. 2013). 4",
"title": ""
},
{
"docid": "aebf00f667b9e0aa23bf8484fc9e2cfd",
"text": "Patients' medical conditions often evolve in complex and seemingly unpredictable ways. Even within a relatively narrow and well-defined episode of care, variations between patients in both their progression and eventual outcome can be dramatic. Understanding the patterns of events observed within a population that most correlate with differences in outcome is therefore an important task in many types of studies using retrospective electronic health data. In this paper, we present a method for interactive pattern mining and analysis that supports ad hoc visual exploration of patterns mined from retrospective clinical patient data. Our approach combines (1) visual query capabilities to interactively specify episode definitions, (2) pattern mining techniques to help discover important intermediate events within an episode, and (3) interactive visualization techniques that help uncover event patterns that most impact outcome and how those associations change over time. In addition to presenting our methodology, we describe a prototype implementation and present use cases highlighting the types of insights or hypotheses that our approach can help uncover.",
"title": ""
},
{
"docid": "9666ac68ee1aeb8ce18ccd2615cdabb2",
"text": "As the bring your own device (BYOD) to work trend grows, so do the network security risks. This fast-growing trend has huge benefits for both employees and employers. With malware, spyware and other malicious downloads, tricking their way onto personal devices, organizations need to consider their information security policies. Malicious programs can download onto a personal device without a user even knowing. This can have disastrous results for both an organization and the personal device. When this happens, it risks BYODs making unauthorized changes to policies and leaking sensitive information into the public domain. A privacy breach can cause a domino effect with huge financial and legal implications, and loss of productivity for organizations. This is a difficult challenge. Organizations need to consider user privacy and rights together with protecting networks from attacks. This paper evaluates a new architectural framework to control the risks that challenge organizations and the use of BYODs. After analysis of large volumes of research, the previous studies addressed single issues. We integrated parts of these single solutions into a new framework to develop a complete solution for access control. With too many organizations failing to implement and enforce adequate security policies, the process needs to be simpler. This framework reduces system restrictions while enforcing access control policies for BYOD and cloud environments using an independent platform. Primary results of the study are positive with the framework reducing access control issues. Keywords—Bring your own device; access control; policy; security",
"title": ""
},
{
"docid": "a8cad81570a7391175acdcf82bc9040b",
"text": "A model of Convolutional Fuzzy Neural Network for real world objects and scenes images classification is proposed. The Convolutional Fuzzy Neural Network consists of convolutional, pooling and fully-connected layers and a Fuzzy Self Organization Layer. The model combines the power of convolutional neural networks and fuzzy logic and is capable of handling uncertainty and impreciseness in the input pattern representation. The Training of The Convolutional Fuzzy Neural Network consists of three independent steps for three components of the net.",
"title": ""
},
{
"docid": "0757280353e6e1bd73b3d1cd11f6b031",
"text": "OBJECTIVE\nTo investigate seasonal patterns in mood and behavior and estimate the prevalence of seasonal affective disorder (SAD) and subsyndromal seasonal affective disorder (S-SAD) in the Icelandic population.\n\n\nPARTICIPANTS AND SETTING\nA random sample generated from the Icelandic National Register, consisting of 1000 men and women aged 17 to 67 years from all parts of Iceland. It represents 6.4 per million of the Icelandic population in this age group.\n\n\nDESIGN\nThe Seasonal Pattern Assessment Questionnaire, an instrument for investigating mood and behavioral changes with the seasons, was mailed to a random sample of the Icelandic population. The data were compared with results obtained with similar methods in populations in the United States.\n\n\nMAIN OUTCOME MEASURES\nSeasonality score and prevalence rates of seasonal affective disorder and subsyndromal seasonal affective disorder.\n\n\nRESULTS\nThe prevalence of SAD and S-SAD were estimated at 3.8% and 7.5%, respectively, which is significantly lower than prevalence rates obtained with the same method on the east coast of the United States (chi 2 = 9.29 and 7.3; P < .01). The standardized rate ratios for Iceland compared with the United States were 0.49 and 0.63 for SAD and S-SAD, respectively. No case of summer SAD was found.\n\n\nCONCLUSIONS\nSeasonal affective disorder and S-SAD are more common in younger individuals and among women. The weight gained by patients during the winter does not seem to result in chronic obesity. The prevalence of SAD and S-SAD was lower in Iceland than on the East Coast of the United States, in spite of Iceland's more northern latitude. These results are unexpected since the prevalence of these disorders has been found to increase in more northern latitudes. The Icelandic population has remained remarkably isolated during the past 1000 years. It is conceivable that persons with a predisposition to SAD have been at a disadvantage and that there may have been a population selection toward increased tolerance of winter darkness.",
"title": ""
},
{
"docid": "a7e9decff31b66fb800fd3f75db249dc",
"text": "Article history: Received 5 January 2010 Received in revised form 6 October 2010 Accepted 13 October 2010 Available online xxxx",
"title": ""
},
{
"docid": "33a8ae0ba8ed7cbd202a9c764943ac91",
"text": "A microcontroller based capacitance meter using 89C52 microcontroller for the measurement of capacitance has been design and developed. It is based on the principle of charging and discharging of the capacitor. Atmel’s AT89C52 microcontroller is used in the present study. Further, an LCD module is interfaced with the microcontroller in 4-bit mode, which reduces the hardware complexity. Software is developed in C using Kiel’s C-cross compiler. The instrument system covers a range 1pF to 1000μF. The paper deals with the hardware and software details. This instrument has auto range selection and auto calibration facility i.e., auto reset facility. This instrument is provided with 400-value storage capacity. The system is quite successful in the measurement of capacitance with an accuracy of ±1 % and capacitance of whole construction is around 10 pF. The power consumption is less than 1W.",
"title": ""
},
{
"docid": "247c8cd5e076809a208849abe4dce3e5",
"text": "This paper deals with the application of a novel neural network technique, support vector machine (SVM), in !nancial time series forecasting. The objective of this paper is to examine the feasibility of SVM in !nancial time series forecasting by comparing it with a multi-layer back-propagation (BP) neural network. Five real futures contracts that are collated from the Chicago Mercantile Market are used as the data sets. The experiment shows that SVM outperforms the BP neural network based on the criteria of normalized mean square error (NMSE), mean absolute error (MAE), directional symmetry (DS) and weighted directional symmetry (WDS). Since there is no structured way to choose the free parameters of SVMs, the variability in performance with respect to the free parameters is investigated in this study. Analysis of the experimental results proved that it is advantageous to apply SVMs to forecast !nancial time series. ? 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
7e5ea69f14f8c52b08c0e12a40c03e72
|
Modern programming assignment verification, testing and plagiarism detection approaches
|
[
{
"docid": "f9c744cf2d95b12857c2c4c6fb7d8874",
"text": "The code comparison technology plays a very important part in the work of plagiarism detection and software evaluation. Software plagiarism mainly appears as copy-and-paste or with a little modification after this, which will not change the function of the code, such as replacing the name of methods or variables, reordering the sequence of the statements etc. This paper introduces a plagiarism detection tool named CCS (Code Comparison System) which is based on the Abstract Syntax Tree (AST). According to the syntax tree's characteristics, CCS calculates their hash values, transforms their storage forms, and then compares them node by node. As a result, the efficiency improves. Moreover, CCS preprocesses a large amount of source code in its database for potential use, which also accelerate the course of plagiarism detection. CCS also takes special measurement to reduce mistakes when calculating the hash values of the operations like subtraction and division. It performs well in the code comparison field, and is able to help with the copyright protecting of the source code.",
"title": ""
}
] |
[
{
"docid": "46a1dd05e29e206b9744bf15d48f5a5e",
"text": "In this paper, we propose a distributed version of the Hungarian method to solve the well-known assignment problem. In the context of multirobot applications, all robots cooperatively compute a common assignment that optimizes a given global criterion (e.g., the total distance traveled) within a finite set of local computations and communications over a peer-to-peer network. As a motivating application, we consider a class of multirobot routing problems with “spatiotemporal” constraints, i.e., spatial targets that require servicing at particular time instants. As a means of demonstrating the theory developed in this paper, the robots cooperatively find online suboptimal routes by applying an iterative version of the proposed algorithm in a distributed and dynamic setting. As a concrete experimental test bed, we provide an interactive “multirobot orchestral” framework, in which a team of robots cooperatively plays a piece of music on a so-called orchestral floor.",
"title": ""
},
{
"docid": "4b496b32df5d8697eb31d96878a1edcb",
"text": "Intelligent Speech Analysis (ISA) plays an essential role in smart conversational agent systems that aim to enable natural, intuitive, and friendly human computer interaction. It includes not only the long-term developed Automatic Speech Recognition (ASR), but also the young field of Computational Paralinguistics, which has attracted increasing attention in recent years. In real-world applications, however, several challenging issues surrounding data quantity and quality arise. For example, predefined databases for most paralinguistic tasks are normally quite small and few in number, which are insufficient for building a robust model. A distributed structure could be useful for data collection, but original feature sets are always too large to meet the physical transmission requirements, for example, bandwidth limitation. Furthermore, in a hands-free application scenario, reverberation severely distorts speech signals, which results in performance degradation of recognisers. To address these issues, this thesis proposes and analyses semi-autonomous data enrichment and optimisation approaches. More precisely, for the representative paralinguistic task of speech emotion recognition, both labelled and unlabelled data from heterogeneous resources are exploited by methods of data pooling, data selection, confidence-based semi-supervised learning, active learning, as well as cooperative learning. As a result, the manual work for data annotation is greatly reduced. With the advance of networks and information technologies, this thesis extends the traditional ISA system into a modern distributed paradigm, in which Split Vector Quantisation is employed for feature compression. Moreover, for distant-talk ASR, Long Short-Term Memory (LSTM) recurrent neural networks, which are known to be well-suited to context-sensitive pattern recognition, are evaluated to mitigate reverberation. The experimental results demonstrate that the proposed LSTM-based feature enhancement frameworks prevail over the current state-of-the-art methods.",
"title": ""
},
{
"docid": "881da6fd2d6c77d9f31ba6237c3d2526",
"text": "Pakistan is a developing country with more than half of its population located in rural areas. These areas neither have sufficient health care facilities nor a strong infrastructure that can address the health needs of the people. The expansion of Information and Communication Technology (ICT) around the globe has set up an unprecedented opportunity for delivery of healthcare facilities and infrastructure in these rural areas of Pakistan as well as in other developing countries. Mobile Health (mHealth)—the provision of health care services through mobile telephony—will revolutionize the way health care is delivered. From messaging campaigns to remote monitoring, mobile technology will impact every aspect of health systems. This paper highlights the growth of ICT sector and status of health care facilities in the developing countries, and explores prospects of mHealth as a transformer for health systems and service delivery especially in the remote rural areas.",
"title": ""
},
{
"docid": "46df05f01a027359f23d4de2396e2586",
"text": "Dialog act identification plays an important role in understanding conversations. It has been widely applied in many fields such as dialogue systems, automatic machine translation, automatic speech recognition, and especially useful in systems with human-computer natural language dialogue interfaces such as virtual assistants and chatbots. The first step of identifying dialog act is identifying the boundary of the dialog act in utterances. In this paper, we focus on segmenting the utterance according to the dialog act boundaries, i.e. functional segments identification, for Vietnamese utterances. We investigate carefully functional segment identification in two approaches: (1) machine learning approach using maximum entropy (ME) and conditional random fields (CRFs); (2) deep learning approach using bidirectional Long Short-Term Memory (LSTM) with a CRF layer (Bi-LSTM-CRF) on two different conversational datasets: (1) Facebook messages (Message data); (2) transcription from phone conversations (Phone data). To the best of our knowledge, this is the first work that applies deep learning based approach to dialog act segmentation. As the results show, deep learning approach performs appreciably better as to compare with traditional machine learning approaches. Moreover, it is also the first study that tackles dialog act and functional segment identification for Vietnamese.",
"title": ""
},
{
"docid": "d2bea5e928167f295e05412962d44b99",
"text": "The development of e-commerce has increased the popularity of online shopping worldwide. In Malaysia, it was reported that online shopping market size was RM1.8 billion in 2013 and it is estimated to reach RM5 billion by 2015. However, online shopping was rated 11 th out of 15 purposes of using internet in 2012. Consumers’ perceived risks of online shopping becomes a hot topic to research as it will directly influence users’ attitude towards online purchasing, and their attitude will have significant impact to the online purchasing behaviour. The conceptualization of consumers’ perceived risk, attitude and online shopping behaviour of this study provides empirical evidence in the study of consumer online behaviour. Four types of risks product risk, financial, convenience and non-delivery risks were examined in term of their effect on consumers’ online attitude. A web-based survey was employed, and a total of 300 online shoppers of a Malaysia largest online marketplace participated in this study. The findings indicated that product risk, financial and non-delivery risks are hazardous and negatively affect the attitude of online shoppers. Convenience risk was found to have positive effect on consumers’ attitude, denoting that online buyers of this site trusted the online seller and they encountered less troublesome with the site. It also implies that consumers did not really concern on non-convenience aspect of online shopping, such as handling of returned products and examine the quality of products featured in the online seller website. The online buyers’ attitude was significantly and positively affects their online purchasing behaviour. The findings provide useful model for measuring and managing consumers’ perceived risk in internet-based transaction to increase their involvement in online shopping and to reduce their cognitive dissonance in the e-commerce setting.",
"title": ""
},
{
"docid": "e7808c1fa1c5e02119a3c9da855f7499",
"text": "Cloud computing provides users with great flexibility when provisioning resources, with cloud providers offering a choice of reservation and on-demand purchasing options. Reservation plans offer cheaper prices, but must be chosen in advance, and therefore must be appropriate to users' requirements. If demand is uncertain, the reservation plan may not be sufficient and on-demand resources have to be provisioned. Previous work focused on optimally placing virtual machines with cloud providers to minimize total cost. However, many applications require large amounts of network bandwidth. Therefore, considering only virtual machines offers an incomplete view of the system. Exploiting recent developments in software defined networking (SDN), we propose a unified approach that integrates virtual machine and network bandwidth provisioning. We solve a stochastic integer programming problem to obtain an optimal provisioning of both virtual machines and network bandwidth, when demand is uncertain. Numerical results clearly show that our proposed solution minimizes users' costs and provides superior performance to alternative methods. We believe that this integrated approach is the way forward for cloud computing to support network intensive applications.",
"title": ""
},
{
"docid": "cd731b69088f5f429ba4b55b4e70dad2",
"text": "Object detection in images withstanding significant clutter and occlusion is still a challenging task whenever the object surface is characterized by poor informative content. We propose to tackle this problem by a compact and distinctive representation of groups of neighboring line segments aggregated over limited spatial supports and invariant to rotation, translation and scale changes. Peculiarly, our proposal allows for leveraging on the inherent strengths of descriptor-based approaches, i.e. robustness to occlusion and clutter and scalability with respect to the size of the model library, also when dealing with scarcely textured objects.",
"title": ""
},
{
"docid": "14b7c4f8a3fa7089247f1d4a26186c5d",
"text": "System Dynamics is often used for dealing with dynamically complex issues that are also uncertain. This paper reviews how uncertainty is dealt with in System Dynamics modeling, where uncertainties are located in models, which types of uncertainties are dealt with, and which levels of uncertainty could be handled. Shortcomings of System Dynamics and its practice in dealing with uncertainty are distilled from this review and reframed as opportunities. Potential opportunities for dealing with uncertainty in System Dynamics that are discussed here include (i) dealing explicitly with difficult sorts of uncertainties, (ii) using multi-model approaches for dealing with alternative assumptions and multiple perspectives, (iii) clearly distinguishing sensitivity analysis from uncertainty analysis and using them for different purposes, (iv) moving beyond invariant model boundaries, (v) using multi-method approaches, advanced techniques and new tools, and (vi) further developing and using System Dynamics strands for dealing with deep uncertainty.",
"title": ""
},
{
"docid": "e71860d5882f9b7b7f9ca1e209d4ac9d",
"text": "In-wheel motors for electric vehicles (EVs) have a high outer diameter (D) to axial length (L) ratio. In such applications, axial flux machines are preferred over radial flux machines due to high power density. Moreover, permanent magnet (PM)-less machines are gaining interest due to increase in cost of rare-earth PM materials. In view of this, axial flux switched reluctance motor (AFSRM) is considered as a possible option for EV propulsion. Two topologies namely, toothed and segmented rotor AFSRM are designed and compared for the same overall volume. These topologies have a three-phase, 12/16 pole single-stator, dual outer-rotor configuration along with non-overlapping winding arrangement. Analytical expressions for phase inductance and average torque are derived. To verify the performance of both the topologies a finite element method (FEM) based simulation study is carried out and its results are verified with the analytical values. It is observed from simulation that the average torque is 16.2% higher and torque ripple is 17.9% lower for segmented rotor AFSRM as compared to toothed rotor AFSRM.",
"title": ""
},
{
"docid": "36347412c7d30ae6fde3742bbc4f21b9",
"text": "iii",
"title": ""
},
{
"docid": "1b3efa626d1e2221051477c587572230",
"text": "In diesem Bericht wird die neue Implementation von Threads unter Linux behandelt. Die bis jetzt noch eingesetzte Implementation ist veraltet und basiert auf nicht mehr aktuellen Voraussetzungen. Es ist wichtig zuerst die fundamentalen Kenntnisse über ThreadImplementationen zu erhalten und die Probleme der aktuellen Implementation zu erkennen, um die nötigen Änderungen zu sehen. Florian Dürrbaum 14.12.2003 2 FH Aargau Enterprise Computing",
"title": ""
},
{
"docid": "e42a1faf3d983bac59c0bfdd79212093",
"text": "L eadership matters, according to prominent leadership scholars (see also Bennis, 2007). But what is leadership? That turns out to be a challenging question to answer. Leadership is a complex and diverse topic, and trying to make sense of leadership research can be an intimidating endeavor. One comprehensive handbook of leadership (Bass, 2008), covering more than a century of scientific study, comprises more than 1,200 pages of text and more than 200 additional pages of references! There is clearly a substantial scholarly body of leadership theory and research that continues to grow each year. Given the sheer volume of leadership scholarship that is available, our purpose is not to try to review it all. That is why our focus is on the nature or essence of leadership as we and our chapter authors see it. But to fully understand and appreciate the nature of leadership, it is essential that readers have some background knowledge of the history of leadership research, the various theoretical streams that have evolved over the years, and emerging issues that are pushing the boundaries of the leadership frontier. Further complicating our task is that more than one hundred years of leadership research have led to several paradigm shifts and a voluminous body of knowledge. On several occasions, scholars of leadership became quite frustrated by the large amount of false starts, incremental theoretical advances, and contradictory findings. As stated more than five decades ago by Warren Bennis (1959, pp. 259–260), “Of all the hazy and confounding areas in social psychology, leadership theory undoubtedly contends for Leadership: Past, Present, and Future",
"title": ""
},
{
"docid": "ca4e2cff91621bca4018ce1eca5450e2",
"text": "Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high-dimensional constrained problems, as the projection step becomes computationally prohibitive. To address this problem, this paper adopts a projection-free optimization approach, a.k.a. the Frank–Wolfe (FW) or conditional gradient algorithm. We first develop a decentralized FW (DeFW) algorithm from the classical FW algorithm. The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an <italic>inexact </italic> FW algorithm. Using a diminishing step size rule and letting <inline-formula><tex-math notation=\"LaTeX\">$t$ </tex-math></inline-formula> be the iteration number, we show that the DeFW algorithm's convergence rate is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t)$</tex-math></inline-formula> for convex objectives; is <inline-formula><tex-math notation=\"LaTeX\">${\\mathcal O}(1/t^2)$</tex-math></inline-formula> for strongly convex objectives with the optimal solution in the interior of the constraint set; and is <inline-formula> <tex-math notation=\"LaTeX\">${\\mathcal O}(1/\\sqrt{t})$</tex-math></inline-formula> toward a stationary point for smooth but nonconvex objectives. We then show that a consensus-based DeFW algorithm meets the above guarantees with two communication rounds per iteration. We demonstrate the advantages of the proposed DeFW algorithm on low-complexity robust matrix completion and communication efficient sparse learning. Numerical results on synthetic and real data are presented to support our findings.",
"title": ""
},
{
"docid": "25bd1930de4141a4e80441d7a1ae5b89",
"text": "Since the release of Bitcoins as crypto currency, Bitcoin has played a prominent part in the media. However, not Bitcoin but the underlying technology blockchain offers the possibility to innovatively change industries. The decentralized structure of the blockchain is particularly suitable for implementing control and business processes in microgrids, using smart contracts and decentralized applications. This paper provides a state of the art survey overview of current blockchain technology based projects with the potential to revolutionize microgrids and provides a first attempt to technically characterize different start-up approaches. The most promising use case from the microgrid perspective is peer-to-peer trading, where energy is exchanged and traded locally between consumers and prosumers. An application concept for distributed PV generation is provided in this promising area.",
"title": ""
},
{
"docid": "935fb5a196358764fda82ac50b87cf1b",
"text": "Linear dimensionality reduction methods, such as LDA, are often used in object recognition for feature extraction, but do not address the problem of how to use these features for recognition. In this paper, we propose Probabilistic LDA, a generative probability model with which we can both extract the features and combine them for recognition. The latent variables of PLDA represent both the class of the object and the view of the object within a class. By making examples of the same class share the class variable, we show how to train PLDA and use it for recognition on previously unseen classes. The usual LDA features are derived as a result of training PLDA, but in addition have a probability model attached to them, which automatically gives more weight to the more discriminative features. With PLDA, we can build a model of a previously unseen class from a single example, and can combine multiple examples for a better representation of the class. We show applications to classification, hypothesis testing, class inference, and clustering, on classes not observed during training.",
"title": ""
},
{
"docid": "05992953358e27c40ff8a83697b9c9f8",
"text": "Canonical correlation analysis (CCA) is a classical multivariate method concerned with describing linear dependencies between sets of variables. After a short exposition of the linear sample CCA problem and its analytical solution, the article proceeds with a detailed characterization of its geometry. Projection operators are used to illustrate the relations between canonical vectors and variates. The article then addresses the problem of CCA between spaces spanned by objects mapped into kernel feature spaces. An exact solution for this kernel canonical correlation (KCCA) problem is derived from a geometric point of view. It shows that the expansion coefficients of the canonical vectors in their respective feature space can be found by linear CCA in the basis induced by kernel principal component analysis. The effect of mappings into higher dimensional feature spaces is considered critically since it simplifies the CCA problem in general. Then two regularized variants of KCCA are discussed. Relations to other methods are illustrated, e.g., multicategory kernel Fisher discriminant analysis, kernel principal component regression and possible applications thereof in blind source separation.",
"title": ""
},
{
"docid": "3043eb8fbe54b5ce5f2767934a6e689e",
"text": "A 21-year-old man presented with an enlarged giant hemangioma on glans penis which also causes an erectile dysfunction (ED) that partially responded to the intracavernous injection stimulation test. Although the findings in magnetic resonance imaging (MRI) indicated a glandular hemangioma, penile colored Doppler ultrasound revealed an invaded cavernausal hemangioma to the glans. Surgical excision was avoided according to the broad extension of the gland lesion. Holmium laser coagulation was applied to the lesion due to the cosmetically concerns. However, the cosmetic results after holmium laser application was not impressive as expected without an improvement in intracavernous injection stimulation test. In conclusion, holmium laser application should not be used to the hemangiomas of glans penis related to the corpus cavernosum, but further studies are needed to reveal the effects of holmium laser application in small hemangiomas restricted to the glans penis.",
"title": ""
},
{
"docid": "e50253a714afe5ad36439ab821604ce8",
"text": "INTRODUCTION\nAn approach to building a hybrid simulation of patient flow is introduced with a combination of data-driven methods for automation of model identification. The approach is described with a conceptual framework and basic methods for combination of different techniques. The implementation of the proposed approach for simulation of the acute coronary syndrome (ACS) was developed and used in an experimental study.\n\n\nMETHODS\nA combination of data, text, process mining techniques, and machine learning approaches for the analysis of electronic health records (EHRs) with discrete-event simulation (DES) and queueing theory for the simulation of patient flow was proposed. The performed analysis of EHRs for ACS patients enabled identification of several classes of clinical pathways (CPs) which were used to implement a more realistic simulation of the patient flow. The developed solution was implemented using Python libraries (SimPy, SciPy, and others).\n\n\nRESULTS\nThe proposed approach enables more a realistic and detailed simulation of the patient flow within a group of related departments. An experimental study shows an improved simulation of patient length of stay for ACS patient flow obtained from EHRs in Almazov National Medical Research Centre in Saint Petersburg, Russia.\n\n\nCONCLUSION\nThe proposed approach, methods, and solutions provide a conceptual, methodological, and programming framework for the implementation of a simulation of complex and diverse scenarios within a flow of patients for different purposes: decision making, training, management optimization, and others.",
"title": ""
},
{
"docid": "1986b84084202aaf3b6aee4df9fea8e2",
"text": "Electronic marketplaces (EMs) are an important empirical phenomenon, because they are theoretically linked to significant economic and business effects. Different types of EMs have been identified; further, some researchers link different EM types with different impacts. Because the effects of EMs may vary with types, classifying and identifying the characteristics of EM types are fundamental to sound research. Some prior approaches to EM classification have been based on empirical observations, others have been theoretically motivated; each has strengths and limitations. This paper presents a third approach: surfacing strategic archetypes. The strategic archetypes approach has the empirical fidelity associated with the large numbers of attributes considered in the empirical classification approach, but the parsimony of types and the theoretical linkages associated with the theoretical classification approach. The strategic archetypes approach seeks a manageable number of EM configuration types in which the attributes are theoretically linked to each other and to hypothesized outcomes like performance and impacts. The strategic archetypes approach has the potential to inform future theoretical and empirical investigations of electronic marketplaces and to translate research findings into successful recommendations for practice.",
"title": ""
},
{
"docid": "0d7fabd5479ec2b4db3dab46fba561a1",
"text": "Purpose – This paper seeks to provide business process redesign (BPR) practitioners and academics with insight into the most popular heuristics to derive improved process designs. Design/methodology/approach – An online survey was carried out in the years 2003-2004 among a wide range of experienced BPR practitioners in the UK and The Netherlands. Findings – The survey indicates that this “top ten” of best practices is indeed extensively used in practice. Moreover, indications for their business impact have been collected and classified. Research limitations/implications – The authors’ estimations of best practices effectiveness differed from feedback obtained from respondents, possibly caused by the design of the survey instrument. This is food for further research. Practical implications – The presented framework can be used by practitioners to keep the various aspects of a redesign in perspective. The presented list of BPR best practices is directly applicable to derive new process designs. Originality/value – This paper addresses the subject of process redesign, rather than the more popular subject of process reengineering. As such, it fills in part an existing gap in knowledge.",
"title": ""
}
] |
scidocsrr
|
dcf22db76dd2db8e9f64fce8742bd7c1
|
SDN based Scalable MTD solution in Cloud Network
|
[
{
"docid": "87a735f2f42b1f072385b90c69368482",
"text": "Distributed Denial of Service (DDoS) attacks still pose a significant threat to critical infrastructure and Internet services alike. In this paper, we propose MOTAG, a moving target defense mechanism that secures service access for authenticated clients against flooding DDoS attacks. MOTAG employs a group of dynamic packet indirection proxies to relay data traffic between legitimate clients and the protected servers. Our design can effectively inhibit external attackers' attempts to directly bombard the network infrastructure. As a result, attackers will have to collude with malicious insiders in locating secret proxies and then initiating attacks. However, MOTAG can isolate insider attacks from innocent clients by continuously \"moving\" secret proxies to new network locations while shuffling client-to-proxy assignments. We develop a greedy shuffling algorithm to minimize the number of proxy re- allocations (shuffles) while maximizing attack isolation. Simulations are used to investigate MOTAG's effectiveness on protecting services of different scales against intensified DDoS attacks.",
"title": ""
},
{
"docid": "6fd511ffcdb44c39ecad1a9f71a592cc",
"text": "s Providing Supporting Policy Compositional Operators Functional Composition Network Layered Abstract Topologies Topological Decomposition Packet Extensible Headers Policy & Network Abstractions Pyretic (Contributions)",
"title": ""
}
] |
[
{
"docid": "ae83a2258907f00500792178dc65340d",
"text": "In this paper, a novel method for lung nodule detection, segmentation and recognition using computed tomography (CT) images is presented. Our contribution consists of several steps. First, the lung area is segmented by active contour modeling followed by some masking techniques to transfer non-isolated nodules into isolated ones. Then, nodules are detected by the support vector machine (SVM) classifier using efficient 2D stochastic and 3D anatomical features. Contours of detected nodules are then extracted by active contour modeling. In this step all solid and cavitary nodules are accurately segmented. Finally, lung tissues are classified into four classes: namely lung wall, parenchyma, bronchioles and nodules. This classification helps us to distinguish a nodule connected to the lung wall and/or bronchioles (attached nodule) from the one covered by parenchyma (solitary nodule). At the end, performance of our proposed method is examined and compared with other efficient methods through experiments using clinical CT images and two groups of public datasets from Lung Image Database Consortium (LIDC) and ANODE09. Solid, non-solid and cavitary nodules are detected with an overall detection rate of 89%; the number of false positive is 7.3/scan and the location of all detected nodules are recognized correctly.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "378d371bd6173ea75678b464deb9aa49",
"text": "The self-tuning, low-overhead, scan-resistant adaptive replacement cache algorithm outperforms the least-recently-used algorithm by dynamically responding to changing access patterns and continually balancing between workload recency and frequency features. Caching, a fundamental metaphor in modern computing, finds wide application in storage systems, databases, Web servers, middleware, processors, file systems, disk drives, redundant array of independent disks controllers, operating systems, and other applications such as data compression and list updating. In a two-level memory hierarchy, a cache performs faster than auxiliary storage, but it is more expensive. Cost concerns thus usually limit cache size to a fraction of the auxiliary memory's size.",
"title": ""
},
{
"docid": "088011257e741b8d08a3b44978134830",
"text": "This paper deals with the kinematic and dynamic analyses of the Orthoglide 5-axis, a five-degree-of-freedom manipulator. It is derived from two manipulators: i) the Orthoglide 3-axis; a three dof translational manipulator and ii) the Agile eye; a parallel spherical wrist. First, the kinematic and dynamic models of the Orthoglide 5-axis are developed. The geometric and inertial parameters of the manipulator are determined by means of a CAD software. Then, the required motors performances are evaluated for some test trajectories. Finally, the motors are selected in the catalogue from the previous results.",
"title": ""
},
{
"docid": "871c89728a68926cf33e518e0478e268",
"text": "The MVN package contains functions in the S3 class to assess multivariate normality. This package is the updated version of the MVN package [1]. The data to be analyzed should be given in the \"data.frame\" or \"matrix\" class. In this example, we will work with the famous Iris data set. These data are from a multivariate data set introduced by Fisher (1936) as an application of linear discriminant analysis [2]. It is also called Anderson’s Iris data set because Edgar Anderson collected the data to measure the morphologic variation of Iris flowers of three related species [3]. First of all, the MVN library should be loaded in order to use related functions.",
"title": ""
},
{
"docid": "285595d4c7d2199e58c4a451f24560dd",
"text": "In traditional mobile crowdsensing applications, organizers need participants’ precise locations for optimal task allocation, e.g., minimizing selected workers’ travel distance to task locations. However, the exposure of their locations raises privacy concerns. Especially for those who are not eventually selected for any task, their location privacy is sacrificed in vain. Hence, in this paper, we propose a location privacy-preserving task allocation framework with geoobfuscation to protect users’ locations during task assignments. Specifically, we make participants obfuscate their reported locations under the guarantee of differential privacy, which can provide privacy protection regardless of adversaries’ prior knowledge and without the involvement of any third-part entity. In order to achieve optimal task allocation with such differential geo-obfuscation, we formulate a mixed-integer non-linear programming problem to minimize the expected travel distance of the selected workers under the constraint of differential privacy. Evaluation results on both simulation and real-world user mobility traces show the effectiveness of our proposed framework. Particularly, our framework outperforms Laplace obfuscation, a state-ofthe-art differential geo-obfuscation mechanism, by achieving 45% less average travel distance on the real-world data.",
"title": ""
},
{
"docid": "79fdfee8b42fe72a64df76e64e9358bc",
"text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.",
"title": ""
},
{
"docid": "8f88620de9b4a4d8702eaf3d962e7326",
"text": "To have automatic conversations between human and computer is regarded as one of the most hardcore problems in computer science. Conversational systems are of growing importance due to their promising potentials and commercial values as virtual assistants and chatbots. To build such systems with adequate intelligence is challenging, and requires abundant resources including an acquisition of big conversational data and interdisciplinary techniques, such as content analysis, text mining, and retrieval. The arrival of big data era reveals the feasibility to create a conversational system empowered by data-driven approaches. Now we are able to collect an extremely large number of human-human conversations on Web, and organize them to launch human-computer conversational systems. Given a human issued utterance, i.e., a query, a conversational system will search for appropriate responses, conduct relevance ranking using contexts information, and then output the highly relevant result. In this paper, we propose a novel context modeling framework with end-to-end neural networks for human-computer conversational systems. The proposed model is general and unified. In the experiments, we demonstrate the effectiveness of the proposed model for human-computer conversations using p@1, MAP, nDCG, and MRR metrics.",
"title": ""
},
{
"docid": "25fdc0032236131be6e266c6bdac37d1",
"text": "Shoulder-surfing -- using direct observation techniques, such as looking over someone's shoulder, to get passwords, PINs and other sensitive personal information -- is a problem that has been difficult to overcome. When a user enters information using a keyboard, mouse, touch screen or any traditional input device, a malicious observer may be able to acquire the user's password credentials. We present EyePassword, a system that mitigates the issues of shoulder surfing via a novel approach to user input.\n With EyePassword, a user enters sensitive input (password, PIN, etc.) by selecting from an on-screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical. We present a number of design choices and discuss their effect on usability and security. We conducted user studies to evaluate the speed, accuracy and user acceptance of our approach. Our results demonstrate that gaze-based password entry requires marginal additional time over using a keyboard, error rates are similar to those of using a keyboard and subjects preferred the gaze-based password entry approach over traditional methods.",
"title": ""
},
{
"docid": "f1ce3eb2b8735205fedc3b651b185ce3",
"text": "Road detection is an important problem with application to driver assistance systems and autonomous, self-guided vehicles. The focus of this paper is on the problem of feature extraction and classification for front-view road detection. Specifically, we propose using Support Vector Machines (SVM) for road detection and effective approach for self-supervised online learning. The proposed road detection algorithm is capable of automatically updating the training data for online training which reduces the possibility of misclassifying road and non-road classes and improves the adaptability of the road detection algorithm. The algorithm presented here can also be seen as a novel framework for self-supervised online learning in the application of classification-based road detection algorithm on intelligent vehicle.",
"title": ""
},
{
"docid": "3f85dea7d56f696b30d30dc74676cc48",
"text": "hch@lst.de X F s I s a F I l e s y s t e m t h at w a s d e signed from day one for computer systems with large numbers of CPUs and large disk arrays. It focuses on supporting large files and good streaming I/O performance. It also has some interesting administrative features not supported by other Linux file systems. This article gives some background information on why XFS was created and how it differs from the familiar Linux file systems. You may discover that XFS is just what your project needs instead of making do with the default Linux file system.",
"title": ""
},
{
"docid": "a898f3e513b2c738c476cfb9a519d4dd",
"text": "In addition to training our policy on the goals that were generated in the current iteration, we also save a list (“regularized replay buffer”) of goals that were generated during previous iterations (update replay). These goals are also used to train our policy, so that our policy does not forget how to achieve goals that it has previously learned. When we generate goals for our policy to train on, we sample two thirds of the goals from the Goal GAN and we sample the one third of the goals uniformly from the replay buffer. To prevent the replay buffer from concentrating in a small portion of goal space, we only insert new goals that are further away than from the goals already in the buffer, where we chose the goal-space metric and to be the same as the ones introduced in Section 3.1.",
"title": ""
},
{
"docid": "dcb64355bb122fae6ac390d4a63fae08",
"text": "The initial state of an Unmanned Aerial Vehicle (UAV) system and the relative state of the system, the continuous inputs of each flight unit are piecewise linear by a Control Parameterization and Time Discretization (CPTD) method. The approximation piecewise linearization control inputs are used to substitute for the continuous inputs. In this way, the multi-UAV formation reconfiguration problem can be formulated as an optimal control problem with dynamical and algebraic constraints. With strict constraints and mutual interference, the multi-UAV formation reconfiguration in 3-D space is a complicated problem. The recent boom of bio-inspired algorithms has attracted many researchers to the field of applying such intelligent approaches to complicated optimization problems in multi-UAVs. In this paper, a Hybrid Particle Swarm Optimization and Genetic Algorithm (HPSOGA) is proposed to solve the multi-UAV formation reconfiguration problem, which is modeled as a parameter optimization problem. This new approach combines the advantages of Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), which can find the time-optimal solutions simultaneously. The proposed HPSOGA will also be compared with basic PSO algorithm and the series of experimental results will show that our HPSOGA outperforms PSO in solving multi-UAV formation reconfiguration problem under complicated environments.",
"title": ""
},
{
"docid": "f257378facb3267995a7c6f74ccf2115",
"text": "This paper discusses methods in evaluating fingerprint image quality on a local level. Feature vectors covering directional strength, sinusoidal local ridge/valley pattern, ridge/valley uniformity and core occurrences are first extracted from fingerprint image subblocks. Each subblock is then assigned a quality level through pattern classification. Three different classifiers are employed to compare each of its different effectiveness. Positive results have been obtained based on our database.",
"title": ""
},
{
"docid": "437074a50843a32f7a082abe46b3577e",
"text": "The style of an image plays a significant role in how it is viewed, but style has received little attention in computer vision research. We describe an approach to predicting style of images, and perform a thorough evaluation of different image features for these tasks. We find that features learned in a multi-layer network generally perform best – even when trained with object class (not style) labels. Our large-scale learning methods results in the best published performance on an existing dataset of aesthetic ratings and photographic style annotations. We present two novel datasets: 80K Flickr photographs annotated with 20 curated style labels, and 85K paintings annotated with 25 style/genre labels. Our approach shows excellent classification performance on both datasets. We use the learned classifiers to extend traditional tag-based image search to consider stylistic constraints, and demonstrate cross-dataset understanding of style.",
"title": ""
},
{
"docid": "6cec39463f83cfb230425704fa4549f5",
"text": "Experiments were conducted to study the effect of static magnetic fields on the seeds of soybean (Glycine max (L.) Merr. var: JS-335) by exposing the seeds to different magnetic field strengths from 0 to 300 mT in steps of 50 mT for 30, 60, and 90 min. Treatment with magnetic fields improved germination-related parameters like water uptake, speed of germination, seedling length, fresh weight, dry weight and vigor indices of soybean seeds under laboratory conditions. Improvement over untreated control was 5-42% for speed of germination, 4-73% for seedling length, 9-53% for fresh weight, 5-16% for dry weight, and 3-88% and 4-27% for vigor indices I and II, respectively. Treatment of 200 mT (60 min) and 150 mT (60 min), which were more effective than others in increasing most of the seedling parameters, were further explored for their effect on plant growth, leaf photosynthetic efficiency, and leaf protein content under field conditions. Among different growth parameters, leaf area, and leaf fresh weight showed maximum enhancement (more than twofold) in 1-month-old plants. Polyphasic chlorophyll a fluorescence (OJIP) transients from magnetically treated plants gave a higher fluorescence yield at the J-I-P phase. The total soluble protein map (SDS-polyacrylamide gel) of leaves showed increased intensities of the bands corresponding to a larger subunit (53 KDa) and smaller subunit (14 KDa) of Rubisco in the treated plants. We report here the beneficial effect of pre-sowing magnetic treatment for improving germination parameters and biomass accumulation in soybean.",
"title": ""
},
{
"docid": "960c2ad0a058e526901d23c9d301701c",
"text": "Preliminary notes High-rise buildings are designed and constructed by use of modern materials and integral structural systems which are not usual for typical buildings. The existing seismic regulations act as a limiting factor and cannot cover specific behaviour of these buildings. Considering the increasing trend in their construction worldwide, additional investigations are necessary, particularly for structures in seismically active areas. It is necessary to elaborate official codes which will clearly prescribe methods, procedures and criteria for analysis and design of such type of structures. The main goal of the paper is to present a review of the existing structural systems, design recommendations and guidelines for high-rises worldwide, as well as selected results from seismic performance of 44 stories RC high-rise building which is a unique experience coming from design and construction of the four high-rise buildings in Skopje (Macedonia).",
"title": ""
},
{
"docid": "95a74edfac2336ed113eeec04715a5ea",
"text": "Remote sensing images obtained by remote sensing are a key source of data for studying large-scale geographic areas. From 2013 onwards, a new generation of land remote sensing satellites from USA, China, Brazil, India and Europe will produce in one year as much data as 5 years of the Landsat-7 satellite. Thus, the research community needs new ways to analyze large data sets of remote sensing imagery. To address this need, this paper describes a toolbox for combing land remote sensing image analysis with data mining techniques. Data mining methods are being extensively used for statistical analysis, but up to now have had limited use in remote sensing image interpretation due to the lack of appropriate tools. The toolbox described in this paper is the Geographic Data Mining Analyst (GeoDMA). It has algorithms for segmentation, feature extraction, feature selection, classification, landscape metrics and multi-temporal methods for change detection and analysis. GeoDMA uses decision-tree strategies adapted for spatial data mining. It connects remotely sensed imagery with other geographic data types using access to local or remote databases. GeoDMA has methods to assess the accuracy of simulation models, as well as tools for spatio-temporal analysis, including a visualization of time-series that helps users to find patterns in cyclic events. The software includes a new approach for analyzing spatio-temporal data based on polar coordinates transformation. This method creates a set of descriptive features that improves the classification accuracy of multi-temporal image databases. GeoDMA is tightly integrated with TerraView GIS, so its users have access to all traditional GIS features. To demonstrate GeoDMA, we show two case studies on land use and land cover change.",
"title": ""
},
{
"docid": "13f7df2198bfe474e92e0072a3de2f9b",
"text": "Humans and other primates shift their gaze to allocate processing resources to a subset of the visual input. Understanding and emulating the way that human observers freeview a natural scene has both scientific and economic impact. It has therefore attracted the attention from researchers in a wide range of science and engineering disciplines. With the ever increasing computational power, machine learning has become a popular tool to mine human data in the exploration of how people direct their gaze when inspecting a visual scene. This paper reviews recent advances in learning saliency-based visual attention and discusses several key issues in this topic. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "072d187f56635ebc574f2eedb8a91d14",
"text": "With the development of location-based social networks, an increasing amount of individual mobility data accumulate over time. The more mobility data are collected, the better we can understand the mobility patterns of users. At the same time, we know a great deal about online social relationships between users, providing new opportunities for mobility prediction. This paper introduces a noveltyseeking driven predictive framework for mining location-based social networks that embraces not only a bunch of Markov-based predictors but also a series of location recommendation algorithms. The core of this predictive framework is the cooperation mechanism between these two distinct models, determining the propensity of seeking novel and interesting locations.",
"title": ""
}
] |
scidocsrr
|
172a710dd187d2ada5115bbef76ae4c2
|
Segmental acoustic indexing for zero resource keyword search
|
[
{
"docid": "952651d9d93496e04baa97f03e446b98",
"text": "We present a state-of-the-art system for performing spoken term detection on continuous telephone speech in multiple languages. The system compiles a search index from deep word lattices generated by a large-vocabulary HMM speech recognizer. It estimates word posteriors from the lattices and uses them to compute a detection threshold that minimizes the expected value of a user-specified cost function. The system accommodates search terms outside the vocabulary of the speechto-text engine by using approximate string matching on induced phonetic transcripts. Its search index occupies less than 1Mb per hour of processed speech and it supports sub-second search times for a corpus of hundreds of hours of audio. This system had the highest reported accuracy on the telephone speech portion of the 2006 NIST Spoken Term Detection evaluation, achieving 83% of the maximum possible accuracy score in English.",
"title": ""
}
] |
[
{
"docid": "663068bb3ff4d57e1609b2a337a34d7f",
"text": "Automated optic disk (OD) detection plays an important role in developing a computer aided system for eye diseases. In this paper, we propose an algorithm for the OD detection based on structured learning. A classifier model is trained based on structured learning. Then, we use the model to achieve the edge map of OD. Thresholding is performed on the edge map, thus a binary image of the OD is obtained. Finally, circle Hough transform is carried out to approximate the boundary of OD by a circle. The proposed algorithm has been evaluated on three public datasets and obtained promising results. The results (an area overlap and Dices coefficients of 0.8605 and 0.9181, respectively, an accuracy of 0.9777, and a true positive and false positive fraction of 0.9183 and 0.0102) show that the proposed method is very competitive with the state-of-the-art methods and is a reliable tool for the segmentation of OD.",
"title": ""
},
{
"docid": "a61f2e71e0b68d8f4f79bfa33c989359",
"text": "Model-based testing relies on behavior models for the generation of model traces: input and expected output---test cases---for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.",
"title": ""
},
{
"docid": "eec1f1cdb7b4adfec71f5917b077661a",
"text": "Digital games have become a remarkable cultural phenomenon in the last ten years. The casual games sector especially has been growing rapidly in the last few years. However, there is no clear view on what is \"casual\" in games cultures and the area has not previously been rigorously studied. In the discussions on casual games, \"casual\" is often taken to refer to the player, the game or the playing style, but other factors such as business models and accessibility are also considered as characteristic of \"casual\" in games. Views on casual vary and confusion over different meanings can lead to paradoxical readings, which is especially the case when \"casual gamer\" is taken to mean both \"someone who plays casual games\" and someone who \"plays casually\". In this article we will analyse the ongoing discussion by providing clarification of the different meanings of casual and a framework for an overall understanding of casual in the level of expanded game experience.",
"title": ""
},
{
"docid": "dd8194c7f8e28e55fbc45f0d71336112",
"text": "Followers' identification with the leader and the organizational unit, dependence on the leader, and empowerment by the leader are often attributed to transformational leadership in organizations. However, these hypothesized outcomes have received very little attention in empirical studies. Using a sample of 888 bank employees working under 76 branch manages, the authors tested the relationships between transformational leadership and these outcomes. They found that transformational leadership was positively related to both followers' dependence and their empowerment and that personal identification mediated the relationship between transformational leadership and followers' dependence on the leader, whereas social identification mediated the relationship between transformational leadership and followers' empowerment. The authors discuss the implications of these findings to both theory and practice.",
"title": ""
},
{
"docid": "18959618a153812f6c4f38ce2803084a",
"text": "This decade sees a growing number of applications of Unmanned Aerial Vehicles (UAVs) or drones. UAVs are now being experimented for commercial applications in public areas as well as used in private environments such as in farming. As such, the development of efficient communication protocols for UAVs is of much interest. This paper compares and contrasts recent communication protocols of UAVs with that of Vehicular Ad Hoc Networks (VANETs) using Wireless Access in Vehicular Environments (WAVE) protocol stack as the reference model. The paper also identifies the importance of developing light-weight communication protocols for certain applications of UAVs as they can be both of low processing power and limited battery energy.",
"title": ""
},
{
"docid": "805fe4eea0e9415f8683f1135b135059",
"text": "In machine translation, information on word ambiguities is usually provided by the lexicographers who construct the lexicon. In this paper we propose an automatic method for word sense induction, i.e. for the discovery of a set of sense descriptors to a given ambiguous word. The approach is based on the statistics of the distributional similarity between the words in a corpus. Our algorithm works as follows: The 20 strongest first-order associations to the ambiguous word are considered as sense descriptor candidates. All pairs of these candidates are ranked according to the following two criteria: First, the two words in a pair should be as dissimilar as possible. Second, although being dissimilar their co-occurrence vectors should add up to the co-occurrence vector of the ambiguous word scaled by two. Both conditions together have the effect that preference is given to pairs whose co-occurring words are complementary. For best results, our implementation uses singular value decomposition, entropy-based weights, and second-order similarity metrics.",
"title": ""
},
{
"docid": "793bc67bded2d159296a63d87b9b9eaf",
"text": "An energy regenerative passive snubber for transformer isolated converters is proposed. The snubber is implemented on the transformer's primary and secondary windings. The proposed snubber significantly reduces the voltage spike across the switch caused by the transformer's primary inductance upon switch turn-off and facilitates the fast ramping up of the transformer secondary current. In addition, the proposed snubber provides lossless zero voltage turn off and zero current turn on conditions for the power switch. Experimental example of a flyback converter has shown measured efficiency exceeding 90%. This paper describes the principle of operation and presents approximate theoretical analysis and design guidelines of the proposed snubber. Simulation and experimental results are also reported. The proposed energy regenerating snubber is best suited for flyback and SEPIC converters and can also be adapted to other transformer isolated topologies.",
"title": ""
},
{
"docid": "c3b691cd3671011278ecd30563b27245",
"text": "We formalize weighted dependency parsing as searching for maximum spanning trees (MSTs) in directed graphs. Using this representation, the parsing algorithm of Eisner (1996) is sufficient for searching over all projective trees in O(n3) time. More surprisingly, the representation is extended naturally to non-projective parsing using Chu-Liu-Edmonds (Chu and Liu, 1965; Edmonds, 1967) MST algorithm, yielding anO(n2) parsing algorithm. We evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005) and show that MST parsing increases efficiency and accuracy for languages with non-projective dependencies.",
"title": ""
},
{
"docid": "e0c83197770752c9fdfe5e51edcd3d46",
"text": "In the last decade, it has become obvious that Alzheimer's disease (AD) is closely linked to changes in lipids or lipid metabolism. One of the main pathological hallmarks of AD is amyloid-β (Aβ) deposition. Aβ is derived from sequential proteolytic processing of the amyloid precursor protein (APP). Interestingly, both, the APP and all APP secretases are transmembrane proteins that cleave APP close to and in the lipid bilayer. Moreover, apoE4 has been identified as the most prevalent genetic risk factor for AD. ApoE is the main lipoprotein in the brain, which has an abundant role in the transport of lipids and brain lipid metabolism. Several lipidomic approaches revealed changes in the lipid levels of cerebrospinal fluid or in post mortem AD brains. Here, we review the impact of apoE and lipids in AD, focusing on the major brain lipid classes, sphingomyelin, plasmalogens, gangliosides, sulfatides, DHA, and EPA, as well as on lipid signaling molecules, like ceramide and sphingosine-1-phosphate. As nutritional approaches showed limited beneficial effects in clinical studies, the opportunities of combining different supplements in multi-nutritional approaches are discussed and summarized.",
"title": ""
},
{
"docid": "63163c08e79e3eb35cac5abe21cc6003",
"text": "Neural networks can be compressed to reduce memory and computational requirements, or to increase accuracy by facilitating the use of a larger base architecture. In this paper we focus on pruning individual neurons, which can simultaneously trim model size, FLOPs, and run-time memory. To improve upon the performance of existing compression algorithms we utilize the information bottleneck principle instantiated via a tractable variational bound. Minimization of this information theoretic bound reduces the redundancy between adjacent layers by aggregating useful information into a subset of neurons that can be preserved. In contrast, the activations of disposable neurons are shut off via an attractive form of sparse regularization that emerges naturally from this framework, providing tangible advantages over traditional sparsity penalties without contributing additional tuning parameters to the energy landscape. We demonstrate state-of-theart compression rates across an array of datasets and network architectures.",
"title": ""
},
{
"docid": "843e1f3bbdf76d0fcd90e4a7f906b921",
"text": "This study aimed to elucidate which component of flaxseed, i.e. secoisolariciresinol diglucoside (SDG) lignan or flaxseed oil (FO), makes tamoxifen (TAM) more effective in reducing growth of established estrogen receptor positive breast tumors (MCF-7) at low circulating estrogen levels, and potential mechanisms of action. In a 2 x 2 factorial design, ovariectomized athymic mice with established tumors were treated for 8 wk with TAM together with basal diet (control), or basal diet supplemented with SDG (1 g/kg diet), FO (38.5 g/kg diet), or combined SDG and FO. SDG and FO were at levels in 10% flaxseed diet. Palpable tumors were monitored and after animal sacrifice, analyzed for cell proliferation, apoptosis, ER-mediated (ER-alpha, ER-beta, trefoil factor 1, cyclin D1, progesterone receptor, AIBI), growth factor-mediated (epidermal growth factor receptor, human epidermal growth factor receptor-2, insulin-like growth factor receptor-1, phosphorylated mitogen activated protein kinase, PAKT, BCL2) signaling pathways and angiogenesis (vascular endothelial growth factor). All treatments reduced the growth of TAM-treated tumors by reducing cell proliferation, expression of genes, and proteins involved in the ER- and growth factor-mediated signaling pathways with FO having the greatest effect in increasing apoptosis compared with TAM treatment alone. SDG and FO reduced the growth of TAM-treated tumors but FO was more effective. The mechanisms involve both the ER- and growth factor-signaling pathways.",
"title": ""
},
{
"docid": "cf51f466c72108d5933d070b307e5d6d",
"text": "The study reported here follows the suggestion by Caplan et al. (Justice Q, 2010) that risk terrain modeling (RTM) be developed by doing more work to elaborate, operationalize, and test variables that would provide added value to its application in police operations. Building on the ideas presented by Caplan et al., we address three important issues related to RTM that sets it apart from current approaches to spatial crime analysis. First, we address the selection criteria used in determining which risk layers to include in risk terrain models. Second, we compare the ‘‘best model’’ risk terrain derived from our analysis to the traditional hotspot density mapping technique by considering both the statistical power and overall usefulness of each approach. Third, we test for ‘‘risk clusters’’ in risk terrain maps to determine how they can be used to target police resources in a way that improves upon the current practice of using density maps of past crime in determining future locations of crime occurrence. This paper concludes with an in depth exploration of how one might develop strategies for incorporating risk terrains into police decisionmaking. RTM can be developed to the point where it may be more readily adopted by police crime analysts and enable police to be more effectively proactive and identify areas with the greatest probability of becoming locations for crime in the future. The targeting of police interventions that emerges would be based on a sound understanding of geographic attributes and qualities of space that connect to crime outcomes and would not be the result of identifying individuals from specific groups or characteristics of people as likely candidates for crime, a tactic that has led police agencies to be accused of profiling. In addition, place-based interventions may offer a more efficient method of impacting crime than efforts focused on individuals.",
"title": ""
},
{
"docid": "e60ff761b0acca53dcdad8fbf92f21a2",
"text": "In this paper, we present a new, efficient displacement sensor using core-less planar coils that are magnetically coupled. The sensor consists of two planar stationary coils and one moving coil. The mutual inductance between the stationary coils and the moving coils are measured, and the displacement is computed. The sensor design was validated using numerical computation. Two prototype sensors of different dimensions were fabricated and tested. The first prototype sensor developed has a measurement range of 70 mm and an R.M.S. error of 0.8% and the second sensor has a measurement range of 56 mm and an R.M.S. error in measurement of 0.9%. The signal output from the sensor is made tolerant to errors due to variations in the vertical position of the moving coil. The new sensor is low in cost, easy to manufacture, and can be used in a number of industrial displacement sensing applications.",
"title": ""
},
{
"docid": "d509601659e2192fb4ea8f112c9d75fe",
"text": "Computer vision has advanced significantly that many discriminative approaches such as object recognition are now widely used in real applications. We present another exciting development that utilizes generative models for the mass customization of medical products such as dental crowns. In the dental industry, it takes a technician years of training to design synthetic crowns that restore the function and integrity of missing teeth. Each crown must be customized to individual patients, and it requires human expertise in a time-consuming and laborintensive process, even with computer assisted design software. We develop a fully automatic approach that learns not only from human designs of dental crowns, but also from natural spatial profiles between opposing teeth. The latter is hard to account for by technicians but important for proper biting and chewing functions. Built upon a Generative Adversarial Network architecture (GAN), our deep learning model predicts the customized crown-filled depth scan from the crown-missing depth scan and opposing depth scan. We propose to incorporate additional space constraints and statistical compatibility into learning. Our automatic designs exceed human technicians’ standards for good morphology and functionality, and our algorithm is being tested for production use.",
"title": ""
},
{
"docid": "251ab6744b6517c727121ec11a11e515",
"text": "This paper presents a qualitative-reasoning method for predicting the behavior of mechanisms characterized by continuous, time-varying parameters. The structure of a mechanism is described in terms of a set of parameters and the constraints that hold among them : essentially a 'qualitative differential equation'. The qualitative-behavior description consists of a discrete set of time-points, at which the values of the parameters are described in terms of ordinal relations and directions of change. The behavioral description, or envisionment, is derived by two sets of rules: propagation rules which elaborate the description of the current time-point, and prediction rules which determine what is known about the next qualitatively distinct state of the mechanism. A detailed example shows how the envisionment method can detect a previously unsuspected landmark point at which the system is in stable equilibrium.",
"title": ""
},
{
"docid": "bc1b46f1790bc318e1675c519fba9bc3",
"text": "where A = {j : q(j) > p(j)} is the of items whose prices have increased (in q relative to p). That is, whenever a bidder loses the items of S ∩ A because of being outbid, it still wants to retain the items S \\ A it has, at the original prices. We saw several examples of GS valuations, including k-unit demand valuations, downward-sloping valuations for identical items, and so on. In this lecture, we insist that Definition 1.1 holds for all real-valued prices vectors, including those that with some negative prices. We also won’t need to assume that vi(∅) = 0. We only consider valuations that are monotone (S ⊆ T implies v(S) ≤ v(T )). ∗ c ©2014, Tim Roughgarden. †Department of Computer Science, Stanford University, 462 Gates Building, 353 Serra Mall, Stanford, CA 94305. Email: tim@cs.stanford.edu.",
"title": ""
},
{
"docid": "4cb0d0d6f1823f108a3fc32e0c407605",
"text": "This paper describes a novel method to approximate instantaneous frequency of non-stationary signals through an application of fractional Fourier transform (FRFT). FRFT enables us to build a compact and accurate chirp dictionary for each windowed signal, thus the proposed approach offers improved computational efficiency, and good performance when compared with chirp atom method.",
"title": ""
},
{
"docid": "bcf55ba5534aca41cefddb6f4b0b4d22",
"text": "In a point-to-point wireless fading channel, multiple transmit and receive antennas can be used to improve the reliability of reception (diversity gain) or increase the rate of communication for a fixed reliability level (multiplexing gain). In a multiple-access situation, multiple receive antennas can also be used to spatially separate signals from different users (multiple-access gain). Recent work has characterized the fundamental tradeoff between diversity and multiplexing gains in the point-to-point scenario. In this paper, we extend the results to a multiple-access fading channel. Our results characterize the fundamental tradeoff between the three types of gain and provide insights on the capabilities of multiple antennas in a network context.",
"title": ""
},
{
"docid": "89f85a4a20735222867c5f0b4623f0a1",
"text": "Arabic is one of the major languages in the world. Unfortunately not so much research in Arabic speaker recognition has been done. One main reason for this lack of research is the unavailability of rich Arabic speech databases. In this paper, we present a rich and comprehensive Arabic speech database that we developed for the Arabic speaker / speech recognition research and/or applications. The database is rich in different aspects: (a) it has 752 speakers; (b) the speakers are from different ethnic groups: Saudis, Arabs, and non-Arabs; (c) utterances are both read text and spontaneous; (d) scripts are of different dimensions, such as, isolated words, digits, phonetically rich words, sentences, phonetically balanced sentences, paragraphs, etc.; (e) different sets of microphones with medium and high quality; (f) telephony and non-telephony speech; (g) three different recording environments: office, sound proof room, and cafeteria; (h) three different sessions, where the recording sessions are scheduled at least with 2 weeks interval. Because of the richness of this database, it can be used in many Arabic, and non-Arabic, speech processing researches, such as speaker / speech recognition, speech analysis, accent identification, ethnic groups / nationality recognition, etc. The richness of the database makes it a valuable resource for research in Arabic speech processing in particular and for research in speech processing in general. The database was carefully manually verified. The manual verification was complemented with automatic verification. Validation was performed on a subset of the database where the recognition rate reached 100% for Saudi speakers and 96% for non-Saudi speakers by using a system with 12 Mel frequency Cepstral coefficients, and 32 Gaussian mixtures.",
"title": ""
},
{
"docid": "f37d9a57fd9100323c70876cf7a1d7ad",
"text": "Neural networks encounter serious catastrophic forgetting when information is learned sequentially, which is unacceptable for both a model of human memory and practical engineering applications. In this study, we propose a novel biologically inspired dual-network memory model that can significantly reduce catastrophic forgetting. The proposed model consists of two distinct neural networks: hippocampal and neocortical networks. Information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network. In the hippocampal network, chaotic behavior of neurons in the CA3 region of the hippocampus and neuronal turnover in the dentate gyrus region are introduced. Chaotic recall by CA3 enables retrieval of stored information in the hippocampal network. Thereafter, information retrieved from the hippocampal network is interleaved with previously stored information and consolidated by using pseudopatterns in the neocortical network. The computer simulation results show the effectiveness of the proposed dual-network memory model. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
c19b95fa5cb761e6197011468470f7fe
|
From user requirements to UML class diagram
|
[
{
"docid": "d9aadb86785057ae5445dc894b1ef7a7",
"text": "This paper presents Circe, an environment for the analysis of natural language requirements. Circe is first presented in terms of its architecture, based on a transformational paradigm. Details are then given for the various transformation steps, including (i) a novel technique for parsing natural language requirements, and (ii) an expert system based on modular agents, embodying intensional knowledge about software systems in general. The result of all the transformations is a set of models for the requirements document, for the system described by the requirements, and for the requirements writing process. These models can be inspected, measured, and validated against a given set of criteria. Some of the features of the environment are shown by means of an example. Various stages of requirements analysis are covered, from initial sketches to pseudo-code and UML models.",
"title": ""
}
] |
[
{
"docid": "0574f193736e10b13a22da2d9c0ee39a",
"text": "Preliminary communication In food production industry, forecasting the timing of demands is crucial in planning production scheduling to satisfy customer needs on time. In the literature, several statistical models have been used in demand forecasting in Food and Beverage (F&B) industry and the choice of the most suitable forecasting model remains a central concern. In this context, this article aims to compare the performances between Trend Analysis, Decomposition and Holt-Winters (HW) models for the prediction of a time series formed by a group of jam and sherbet product demands. Data comprised the series of monthly sales from January 2013 to December 2014 obtained from a private company. As performance measures, metric analysis of the Mean Absolute Percentage Error (MAPE) is used. In this study, the HW and Decomposition models obtained better results regarding the performance metrics.",
"title": ""
},
{
"docid": "2466ac1ce3d54436f74b5bb024f89662",
"text": "In this paper we discuss our work on applying media theory to the creation of narrative augmented reality (AR) experiences. We summarize the concepts of remediation and media forms as they relate to our work, argue for their importance to the development of a new medium such as AR, and present two example AR experiences we have designed using these conceptual tools. In particular, we focus on leveraging the interaction between the physical and virtual world, remediating existing media (film, stage and interactive CD-ROM), and building on the cultural expectations of our users.",
"title": ""
},
{
"docid": "6a9738cbe28b53b3a9ef179091f05a4a",
"text": "The study examined the impact of advertising on building brand equity in Zimbabwe’s Tobacco Auction floors. In this study, 100 farmers were selected from 88 244 farmers registered in the four tobacco growing regions of country. A structured questionnaire was used as a tool to collect primary data. A pilot survey with 20 participants was initially conducted to test the reliability of the questionnaire. Results of the pilot study were analysed to test for reliability using SPSS.Results of the study found that advertising affects brand awareness, brand loyalty, brand association and perceived quality. 55% of the respondents agreed that advertising changed their perceived quality on auction floors. A linear regression analysis was performed to predict brand quality as a function of the type of farmer, source of information, competitive average pricing, loyalty, input assistance, service delivery, number of floors, advert mode, customer service, floor reputation and attitude. There was a strong relationship between brand quality and the independent variables as depicted by the regression coefficient of 0.885 and the model fit is perfect at 78.3%. From the ANOVA tables, a good fit was established between advertising and brand equity with p=0.001 which is less than the significance level of 0.05. While previous researches concentrated on the elements of brand equity as suggested by Keller’s brand equity model, this research has managed to extend the body of knowledge on brand equity by exploring the role of advertising. Future research should assess the relationship between advertising and a brand association.",
"title": ""
},
{
"docid": "f275d72eb05ce583c02c48bcb98f176c",
"text": "Building extraction from remote sensing images is of great importance in urban planning. Yet it is a longstanding problem for many complicate factors such as various scales and complex backgrounds. This paper proposes a novel supervised building extraction method via deep deconvolution neural networks (DeconvNet). Our method consists of three steps. First, we preprocess the multi-source remote sensing images provided by the IEEE GRSS Data Fusion Contest. A high-quality Vancouver building dataset is created on pansharpened images whose ground-truth are obtained from the OpenStreetMap project. Then, we pretrain a deep deconvolution network on a public large-scale Massachusetts building dataset, which is further fine-tuned by two band combinations (RGB and NRG) of our dataset, respectively. Moreover, the output saliency maps of the fine-tuned models are fused to produce the final building extraction result. Extensive experiments on our Vancouver building dataset demonstrate the effectiveness and efficiency of the proposed method. To the best of our knowledge, it is the first work to use deconvolution networks for building extraction from remote sensing images.",
"title": ""
},
{
"docid": "e5a1f6546de9683e7dc90af147d73d40",
"text": "Progress in both speech and language processing has spurred efforts to support applications that rely on spoken rather than written language input. A key challenge in moving from text-based documents to such spoken documents is that spoken language lacks explicit punctuation and formatting, which can be crucial for good performance. This article describes different levels of speech segmentation, approaches to automatically recovering segment boundary locations, and experimental results demonstrating impact on several language processing tasks. The results also show a need for optimizing segmentation for the end task rather than independently.",
"title": ""
},
{
"docid": "0f02468a77b2da2eec5e0b9c3cfac486",
"text": "Action segmentation as a milestone towards building automatic systems to understand untrimmed videos has received considerable attention in the recent years. It is typically being modeled as a sequence labeling problem but contains intrinsic and sufficient differences than text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that are able to learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective in terms of video sequence labeling. The experimental results on three public action segmentation datasets have shown that the proposed model achieves superior performance over the state of the art.",
"title": ""
},
{
"docid": "852ff3b52b4bf8509025cb5cb751899f",
"text": "Digital images are ubiquitous in our modern lives, with uses ranging from social media to news, and even scientific papers. For this reason, it is crucial evaluate how accurate people are when performing the task of identify doctored images. In this paper, we performed an extensive user study evaluating subjects capacity to detect fake images. After observing an image, users have been asked if it had been altered or not. If the user answered the image has been altered, he had to provide evidence in the form of a click on the image. We collected 17,208 individual answers from 383 users, using 177 images selected from public forensic databases. Different from other previously studies, our method propose different ways to avoid lucky guess when evaluating users answers. Our results indicate that people show inaccurate skills at differentiating between altered and non-altered images, with an accuracy of 58%, and only identifying the modified images 46.5% of the time. We also track user features such as age, answering time, confidence, providing deep analysis of how such variables influence on the users’ performance.",
"title": ""
},
{
"docid": "43233e45f07b80b8367ac1561356888d",
"text": "Current Zero-Shot Learning (ZSL) approaches are restricted to recognition of a single dominant unseen object category in a test image. We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the ‘recognition’ and ‘localization’ of an unseen category. To address this limitation, we introduce a new ‘Zero-Shot Detection’ (ZSD) problem setting, which aims at simultaneously recognizing and locating object instances belonging to novel categories without any training examples. We also propose a new experimental protocol for ZSD based on the highly challenging ILSVRC dataset, adhering to practical issues, e.g., the rarity of unseen objects. To the best of our knowledge, this is the first end-to-end deep network for ZSD that jointly models the interplay between visual and semantic domain information. To overcome the noise in the automatically derived semantic descriptions, we utilize the concept of meta-classes to design an original loss function that achieves synergy between max-margin class separation and semantic space clustering. Furthermore, we present a baseline approach extended from recognition to detection setting. Our extensive experiments show significant performance boost over the baseline on the imperative yet difficult ZSD problem.",
"title": ""
},
{
"docid": "e077bb23271fbc056290be84b39a9fcc",
"text": "Rovers will continue to play an important role in planetary exploration. Plans include the use of the rocker-bogie rover configuration. Here, models of the mechanics of this configuration are presented. Methods for solving the inverse kinematics of the system and quasi-static force analysis are described. Also described is a simulation based on the models of the rover’s performance. Experimental results confirm the validity of the models.",
"title": ""
},
{
"docid": "3e36f9b6ad8ff66c070dd65306a82333",
"text": "The topic of representation, recovery and manipulation of three-dimensional (3D) scenes from two-dimensional (2D) images thereof, provides a fertile ground for both intellectual theoretically inclined questions related to the algebra and geometry of the problem and to practical applications such as Visual Recognition, Animation and View Synthesis, recovery of scene structure and camera ego-motion, object detection and tracking, multi-sensor alignment, etc. The basic materials have been known since the turn of the century, but the full scope of the problem has been under intensive study since 1992, rst on the algebra of two views and then on the algebra of multiple views leading to a relatively mature understanding of what is known as \\multilinear matching constraints\", and the \\trilinear tensor\" of three or more views. The purpose of this paper is, rst and foremost, to provide a coherent framework for expressing the ideas behind the analysis of multiple views. Secondly, to integrate the various incremental results that have appeared on the subject into one coherent manuscript.",
"title": ""
},
{
"docid": "c60c83c93577377bad43ed1972079603",
"text": "In this contribution, a set of robust GaN MMIC T/R switches and low-noise amplifiers, all based on the same GaN process, is presented. The target operating bandwidths are the X-band and the 2-18 GHz bandwidth. Several robustness tests on the fabricated MMICs demonstrate state-ofthe-art survivability to CW input power levels. The development of high-power amplifiers, robust low-noise amplifiers and T/R switches on the same GaN monolithic process will bring to the next generation of fully-integrated T/R module",
"title": ""
},
{
"docid": "9e45bc3ac789fd1343e4e400b7f0218e",
"text": "Due to its successful application in recommender systems, collaborative filtering (CF) has become a hot research topic in data mining and information retrieval. In traditional CF methods, only the feedback matrix, which contains either explicit feedback (also called ratings) or implicit feedback on the items given by users, is used for training and prediction. Typically, the feedback matrix is sparse, which means that most users interact with few items. Due to this sparsity problem, traditional CF with only feedback information will suffer from unsatisfactory performance. Recently, many researchers have proposed to utilize auxiliary information, such as item content (attributes), to alleviate the data sparsity problem in CF. Collaborative topic regression (CTR) is one of these methods which has achieved promising performance by successfully integrating both feedback information and item content information. In many real applications, besides the feedback and item content information, there may exist relations (also known as networks) among the items which can be helpful for recommendation. In this paper, we develop a novel hierarchical Bayesian model called Relational Collaborative Topic Regression (RCTR), which extends CTR by seamlessly integrating the user-item feedback information, item content information, and network structure among items into the same model. Experiments on real-world datasets show that our model can achieve better prediction accuracy than the state-of-the-art methods with lower empirical training time. Moreover, RCTR can learn good interpretable latent structures which are useful for recommendation.",
"title": ""
},
{
"docid": "9f22a26ee09543761a07e7de99d54cd6",
"text": "Textbook: A First Course in Probability, ninth edition, by Sheldon Ross (Publisher: Prentice Hall). This text will be used to supplement the lectures and provide practice problems. I will try to post my lecture notes online for your reference as well. Additionally, I will assign problems out of the (free, online) text Introduction to Probability, Statistics and Random Processes, available at www.probabilitycourse.com.",
"title": ""
},
{
"docid": "40479536efec6311cd735f2bd34605d7",
"text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.",
"title": ""
},
{
"docid": "48fbfd8185181edda9d7333e377dbd37",
"text": "This paper proposes the novel Pose Guided Person Generation Network (PG) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128×64 re-identification images and 256×256 fashion photos show that our model generates high-quality person images with convincing details.",
"title": ""
},
{
"docid": "1e6c497fe53f8cba76bd8b432c618c1f",
"text": "inputs into digital (down or up), analog (-1.0 to 1.0), and positional (touch and • mouse cursor). By building on a solid main loop you can easily add support for detecting chorded inputs and sequence inputs.",
"title": ""
},
{
"docid": "4129d2906d3d3d96363ff0812c8be692",
"text": "In this paper, we propose a picture recommendation system built on Instagram, which facilitates users to query correlated pictures by keying in hashtags or clicking images. Users can access the value-added information (or pictures) on Instagram through the recommendation platform. In addition to collecting available hashtags using the Instagram API, the system also uses the Free Dictionary to build the relationships between all the hashtags in a knowledge base. Thus, two kinds of correlations can be provided for a query in the system; i.e., user-defined correlation and system-defined correlation. Finally, the experimental results show that users have good satisfaction degrees with both user-defined correlation and system-defined correlation methods.",
"title": ""
},
{
"docid": "87400394fb5528d22b41ac9160645e4b",
"text": "This paper studies reverse Turing tests to distinguish humans and computers, called CAPTCHA. Contrary to classical Turing tests, in this case the judge is not a human but a computer. The main purpose of such tests is securing user logins against the dictionary or brute force password guessing, avoiding automated usage of various services, preventing bots from spamming on forums and many others. Typical approaches to solving text-based CAPTCHA automatically are based on a scheme specific pipeline containing hand-designed pre-processing, denoising, segmentation, post processing and optical character recognition. Only the last part, optical character recognition, is usually based on some machine learning algorithm. We present an approach using neural networks and a simple clustering algorithm that consists of only two steps, character localisation and recognition. We tested our approach on 11 different schemes selected to present very diverse security features. We experimentally show that using convolutional neural networks is superior to multi-layered perceptrons.",
"title": ""
},
{
"docid": "82fa51c143159f2b85f9d2e5b610e30d",
"text": "Strategies are systematic and long-term approaches to problems. Federal, state, and local governments are investing in the development of strategies to further their e-government goals. These strategies are based on their knowledge of the field and the relevant resources available to them. Governments are communicating these strategies to practitioners through the use of practical guides. The guides provide direction to practitioners as they consider, make a case for, and implement IT initiatives. This article presents an analysis of a selected set of resources government practitioners use to guide their e-government efforts. A selected review of current literature on the challenges to information technology initiatives is used to create a framework for the analysis. A gap analysis examines the extent to which IT-related research is reflected in the practical guides. The resulting analysis is used to identify a set of commonalities across the practical guides and a set of recommendations for future development of practitioner guides and future research into e-government initiatives. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a3aad879ca5f7e7683c1377e079c4726",
"text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods including Vector Space Methods (VSMs) such as Latent Semantic Analysis (LSA), generative text models such as topic models, matrix factorization, neural nets, and energy-based models. Many of these use nonlinear operations on co-occurrence statistics, such as computing Pairwise Mutual Information (PMI). Some use hand-tuned hyperparameters and term reweighting. Often a generative model can help provide theoretical insight into such modeling choices, but there appears to be no such model to “explain” the above nonlinear models. For example, we know of no generative model for which the correct solution is the usual (dimension-restricted) PMI model. This paper gives a new generative model, a dynamic version of the loglinear topic model of Mnih and Hinton (2007), as well as a pair of training objectives called RAND-WALK to compute word embeddings. The methodological novelty is to use the prior to compute closed form expressions for word statistics. These provide an explanation for the PMI model and other recent models, as well as hyperparameter choices. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are spatially isotropic. The model also helps explain why linear algebraic structure arises in low-dimensional semantic embeddings. Such structure has been used to solve analogy tasks by Mikolov et al. (2013a) and many subsequent papers. This theoretical explanation is to give an improved analogy solving method that improves success rates on analogy solving by a few percent.",
"title": ""
}
] |
scidocsrr
|
3e379ada0f7ce91d4042d45e6f47194e
|
Parallel implementation of the TestU01 statistical test suite
|
[
{
"docid": "ac5e7e88d965aa695b8ae169edce2426",
"text": "Randomness test suites constitute an essential component within the process of assessing random number generators in view of determining their suitability for a specific application. Evaluating the randomness quality of random numbers sequences produced by a given generator is not an easy task considering that no finite set of statistical tests can assure perfect randomness, instead each test attempts to rule out sequences that show deviation from perfect randomness by means of certain statistical properties. This is the reason why several batteries of statistical tests are applied to increase the confidence in the selected generator. Therefore, in the present context of constantly increasing volumes of random data that need to be tested, special importance has to be given to the performance of the statistical test suites. Our work enrolls in this direction and this paper presents the results on improving the well known NIST Statistical Test Suite (STS) by introducing parallelism and a paradigm shift towards byte processing delivering a design that is more suitable for today's multicore architectures. Experimental results show a very significant speedup of up to 103 times compared to the original version.",
"title": ""
}
] |
[
{
"docid": "d0c5d24a5f68eb5448b45feeca098b87",
"text": "Age estimation has wide applications in video surveillance, social networking, and human-computer interaction. Many of the published approaches simply treat age estimation as an exact age regression problem, and thus do not leverage a distribution's robustness in representing labels with ambiguity such as ages. In this paper, we propose a new loss function, called mean-variance loss, for robust age estimation via distribution learning. Specifically, the mean-variance loss consists of a mean loss, which penalizes difference between the mean of the estimated age distribution and the ground-truth age, and a variance loss, which penalizes the variance of the estimated age distribution to ensure a concentrated distribution. The proposed mean-variance loss and softmax loss are jointly embedded into Convolutional Neural Networks (CNNs) for age estimation. Experimental results on the FG-NET, MORPH Album II, CLAP2016, and AADB databases show that the proposed approach outperforms the state-of-the-art age estimation methods by a large margin, and generalizes well to image aesthetics assessment.",
"title": ""
},
{
"docid": "d7076d77ef8f6cc318fed80e6403948b",
"text": "OBJECTIVES\nThe objective of this study is to develop a Ti fibre knit block without sintering, and to evaluate its deformability and new bone formation in vivo.\n\n\nMATERIAL AND METHODS\nA Ti fibre with a diameter of 150 μm was knitted to fabricate a Ti mesh tube. The mesh tube was compressed in a metal mould to fabricate porous Ti fibre knit blocks with three different porosities of 88%, 69%, and 50%. The elastic modulus and deformability were evaluated using a compression test. The knit block was implanted into bone defects of a rabbit's hind limb, and new bone formation was evaluated using micro computed tomography (micro-CT) analysis and histological analysis.\n\n\nRESULTS\nThe knit blocks with 88% porosity showed excellent deformability, indicating potential appropriateness for bone defect filling. Although the porosities of the knit block were different, they indicated similar elastic modulus smaller than 1 GPa. The elastic modulus after deformation increased linearly as the applied compression stress increased. The micro-CT analysis indicated that in the block with 50% porosity new bone filled nearly all of the pore volume four weeks after implantation. In contrast, in the block with 88% porosity, new bone filled less than half of the pore volume even 12 weeks after implantation. The histological analysis also indicated new bone formation in the block.\n\n\nCONCLUSIONS\nThe titanium fibre knit block with high porosity is potentially appropriate for bone defect filling, indicating good bone ingrowth after porosity reduction with applied compression.",
"title": ""
},
{
"docid": "aa7029c5e29a72a8507cbcb461ef92b0",
"text": "Regenerative endodontics has been defined as \"biologically based procedure designed to replace damaged structures, including dentin and root structures, as well as cells of the pulp-dentin complex.\" This is an exciting and rapidly evolving field of human endodontics for the treatment of immature permanent teeth with infected root canal systems. These procedures have shown to be able not only to resolve pain and apical periodontitis but continued root development, thus increasing the thickness and strength of the previously thin and fracture-prone roots. In the last decade, over 80 case reports, numerous animal studies, and series of regenerative endodontic cases have been published. However, even with multiple successful case reports, there are still some remaining questions regarding terminology, patient selection, and procedural details. Regenerative endodontics provides the hope of converting a nonvital tooth into vital one once again.",
"title": ""
},
{
"docid": "ad8b60be0abf430fa38c22b39f074df2",
"text": "Social media is playing an increasingly vital role in information dissemination. But with dissemination being more distributed, content often makes multiple hops, and consequently has opportunity to change. In this paper we focus on content that should be changing the least, namely quoted text. We find changes to be frequent, with their likelihood depending on the authority of the copied source and the type of site that is copying. We uncover patterns in the rate of appearance of new variants, their length, and popularity, and develop a simple model that is able to capture them. These patterns are distinct from ones produced when all copies are made from the same source, suggesting that information is evolving as it is being processed collectively in online social media.",
"title": ""
},
{
"docid": "595afbb693585eb599a3e4ea8e65807a",
"text": "Hypoglycemia is a major challenge of artificial pancreas systems and a source of concern for potential users and parents of young children with Type 1 diabetes (T1D). Early alarms to warn the potential of hypoglycemia are essential and should provide enough time to take action to avoid hypoglycemia. Many alarm systems proposed in the literature are based on interpretation of recent trends in glucose values. In the present study, subject-specific recursive linear time series models are introduced as a better alternative to capture glucose variations and predict future blood glucose concentrations. These models are then used in hypoglycemia early alarm systems that notify patients to take action to prevent hypoglycemia before it happens. The models developed and the hypoglycemia alarm system are tested retrospectively using T1D subject data. A Savitzky-Golay filter and a Kalman filter are used to reduce noise in patient data. The hypoglycemia alarm algorithm is developed by using predictions of future glucose concentrations from recursive models. The modeling algorithm enables the dynamic adaptation of models to inter-/intra-subject variation and glycemic disturbances and provides satisfactory glucose concentration prediction with relatively small error. The alarm systems demonstrate good performance in prediction of hypoglycemia and ultimately in prevention of its occurrence.",
"title": ""
},
{
"docid": "7b7b0c7ef54255839f9ff9d09669fe11",
"text": "Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista’s news recommender system, and Docear’s research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as the second-best approach, while in another scenario the same content-based filtering approach was the worst performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithms’ user model depended on users’ age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach’s performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.",
"title": ""
},
{
"docid": "82ef80d6257c5787dcf9201183735497",
"text": "Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.",
"title": ""
},
{
"docid": "deed8c5dd6b46d45a32af2a832dd4073",
"text": "Support of workplace learning is increasingly important as change in every form determines today's working world in industry and public administrations alike. Adapt quickly to a new job, a new task or a new team is a major challenge that must be dealt with ever faster. Workplace learning differs significantly from school learning as it should be strictly aligned to business goals. In our approach we support workplace learning by providing recommendations of experts and learning resources in a context-sensitive and personalized manner. We utilize users' workplace environment, we consider their learning preferences and zone of proximal development, and compare required and acquired competencies in order to issue the best suited recommendations. Our approach is part of the European funded project Learn PAd. Applied research method is Design Science Research. Evaluation is done in an iterative process. The recommender system introduced here is evaluated theoretically based on user requirements and practically in an early evaluation process conducted by the Learn PAd application partner.",
"title": ""
},
{
"docid": "f9562a54cbfade2c96420911cd3642c1",
"text": "Handwritten Arabic character recognition systems face several challenges, including the unlimited variation in human handwriting and the unavailability of large public databases of handwritten characters and words. The use of synthetic data for training and testing handwritten character recognition systems is one of the possible solutions to provide several variations for these characters and to overcome the lack of large databases. While this can be using arbitrary distortions, such as image noise and randomized affine transformations, such distortions are not realistic. In this work, we model real distortions in handwriting using real handwritten Arabic character examples and then use these distortion models to synthesize handwritten examples that are more realistic. We show that the use of our proposed approach leads to significant improvements across different machine-learning classification algorithms.",
"title": ""
},
{
"docid": "80a29cdba8ceb5b3cf88942b1d8d4ded",
"text": "This paper proposes a new control strategy of doubly fed induction generators (DFIGs) under unbalanced grid voltage conditions. The proposed controller includes a model predictive direct power control (MPDPC) method and a power compensation scheme. In MPDPC, the appropriate voltage vector is selected according to an optimization cost function, hence the instantaneous active and reactive powers are regulated directly in the stator stationary reference frame without the requirement of coordinate transformation, PI regulators, switching table, or PWM modulators. In addition, the behavior of the DFIG under unbalanced grid voltage is investigated. Next, a power compensation scheme without the need of extracting negative stator current sequence is developed. By combining the proposed MPDPC strategy and the power compensation scheme, distorted currents injected into the power grid by the DFIGs can be eliminated effectively.",
"title": ""
},
{
"docid": "c02d5b0b36cf108a25d93bd8fc5d2ada",
"text": "Architectural threat analysis has become an important cornerstone for organizations concerned with developing secure software. Due to the large number of existing techniques it is becoming more challenging for practitioners to select an appropriate threat analysis technique. Therefore, we conducted a systematic literature review (SLR) of the existing techniques for threat analysis. In our study we compare 26 methodologies for what concerns their applicability, characteristics of the required input for analysis, characteristics of analysis procedure, characteristics of analysis outcomes and ease of adoption. We also provide insight into the obstacles for adopting the existing approaches and discuss the current state of their adoption in software engineering trends (e.g. Agile, DevOps, etc.). As a summary of our findings we have observed that: the analysis procedure is not precisely defined, there is a lack of quality assurance of analysis outcomes and tool support and validation are limited.",
"title": ""
},
{
"docid": "90a3dd2bc75817a49a408e7666660e29",
"text": "RATIONALE\nPulmonary arterial hypertension (PAH) is an orphan disease for which the trend is for management in designated centers with multidisciplinary teams working in a shared-care approach.\n\n\nOBJECTIVE\nTo describe clinical and hemodynamic parameters and to provide estimates for the prevalence of patients diagnosed for PAH according to a standardized definition.\n\n\nMETHODS\nThe registry was initiated in 17 university hospitals following at least five newly diagnosed patients per year. All consecutive adult (> or = 18 yr) patients seen between October 2002 and October 2003 were to be included.\n\n\nMAIN RESULTS\nA total of 674 patients (mean +/- SD age, 50 +/- 15 yr; range, 18-85 yr) were entered in the registry. Idiopathic, familial, anorexigen, connective tissue diseases, congenital heart diseases, portal hypertension, and HIV-associated PAH accounted for 39.2, 3.9, 9.5, 15.3, 11.3, 10.4, and 6.2% of the population, respectively. At diagnosis, 75% of patients were in New York Heart Association functional class III or IV. Six-minute walk test was 329 +/- 109 m. Mean pulmonary artery pressure, cardiac index, and pulmonary vascular resistance index were 55 +/- 15 mm Hg, 2.5 +/- 0.8 L/min/m(2), and 20.5 +/- 10.2 mm Hg/L/min/m(2), respectively. The low estimates of prevalence and incidence of PAH in France were 15.0 cases/million of adult inhabitants and 2.4 cases/million of adult inhabitants/yr. One-year survival was 88% in the incident cohort.\n\n\nCONCLUSIONS\nThis contemporary registry highlights current practice and shows that PAH is detected late in the course of the disease, with a majority of patients displaying severe functional and hemodynamic compromise.",
"title": ""
},
{
"docid": "a74081f7108e62fadb48446255dd246b",
"text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped by an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of feature augmentation approach in building a deep network structure. DEVFNN works in the sample-wise fashion and is compatible for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-ofthe art data stream methods and its shallow counterpart where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of network structure while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.",
"title": ""
},
{
"docid": "4f73815cc6bbdfbacee732d8724a3f74",
"text": "Networks can be considered as approximation schemes. Multilayer networks of the perceptron type can approximate arbitrarily well continuous functions (Cybenko 1988, 1989; Funahashi 1989; Stinchcombe and White 1989). We prove that networks derived from regularization theory and including Radial Basis Functions (Poggio and Girosi 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property ofbest approximation. The main result of this paper is that multilayer perceptron networks, of the type used in backpropagation, do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of best approximation.",
"title": ""
},
{
"docid": "a9f8f3946dd963066006f19a251eef7c",
"text": "Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe Atmosphere and the pedagogical affordances and constraints of the inscription tools, discourse tools, experiential tools, and resource tools of each application. The purpose of this review is to discuss the implications of using each application for educational initiatives by exploring how the various design features of each may support and enhance the design of interactive learning environments.",
"title": ""
},
{
"docid": "c46728b89e6cfca7422f4f0e1036ddab",
"text": "This paper presents our named entity recognition system for Vietnamese text using labeled propagation. In here we propose: (i) a method of choosing noun phrases as the named entity candidates; (ii) a method to measure the word similarity; and (iii) a method of decreasing the effect of high frequency labels in labeled documents. Experimental results show that our labeled propagate method achieves higher accuracy than the old one [12]. In addition, when the number of the labeled data is small, its accuracy is higher than when using conditional random fields.",
"title": ""
},
{
"docid": "165fcc5242321f6fed9c353cc12216ff",
"text": "Fingerprint alteration represents one of the newest challenges in biometric identification. The aim of fingerprint mutilation is to destroy the structure of the papillary ridges so that the identity of the offender cannot be recognized by the biometric system. The problem has received little attention and there is a lack of a real world altered fingerprints database that would allow researchers to develop new algorithms and techniques for altered fingerprints detection. The major contribution of this paper is that it provides a new public database of synthetically altered fingerprints. Starting from the cases described in the literature, three methods for generating simulated altered fingerprints are proposed.",
"title": ""
},
{
"docid": "672be163a987da17aca6ccbdbc4b9145",
"text": "Clothing detection is an important step for retrieving similar clothing items, organizing fashion photos, artificial intelligence powered shopping assistants and automatic labeling of large catalogues. Training a deep learning based clothing detector requires pre-defined categories (dress, pants etc) and a high volume of annotated image data for each category. However, fashion evolves and new categories are constantly introduced in the marketplace. For example, consider the case of jeggings which is a combination of jeans and leggings. Detection of this new category will require adding annotated data specific to jegging class and subsequently relearning the weights for the deep network. In this paper, we propose a novel object detection method that can handle newer categories without the need of obtaining new labeled data and retraining the network. Our approach learns the visual similarities between various clothing categories and predicts a tree of categories. The resulting framework significantly improves the generalization capabilities of the detector to novel clothing products.",
"title": ""
},
{
"docid": "d698d49a82829a2bb772d1c3f6c2efc5",
"text": "The concepts of Data Warehouse, Cloud Computing and Big Data have been proposed during the era of data flood. By reviewing current progresses in data warehouse studies, this paper introduces a framework to achieve better visualization for Big Data. This framework can reduce the cost of building Big Data warehouses by divide data into sub dataset and visualize them respectively. Meanwhile, basing on the powerful visualization tool of D3.js and directed by the principle of Whole-Parts, current data can be presented to users from different dimensions by different rich statistics graphics.",
"title": ""
},
{
"docid": "7a7fedfeaa85536028113c65d5650957",
"text": "In this work we propose a novel framework named Dual-Net aiming at learning more accurate representation for image recognition. Here two parallel neural networks are coordinated to learn complementary features and thus a wider network is constructed. Specifically, we logically divide an end-to-end deep convolutional neural network into two functional parts, i.e., feature extractor and image classifier. The extractors of two subnetworks are placed side by side, which exactly form the feature extractor of DualNet. Then the two-stream features are aggregated to the final classifier for overall classification, while two auxiliary classifiers are appended behind the feature extractor of each subnetwork to make the separately learned features discriminative alone. The complementary constraint is imposed by weighting the three classifiers, which is indeed the key of DualNet. The corresponding training strategy is also proposed, consisting of iterative training and joint finetuning, to make the two subnetworks cooperate well with each other. Finally, DualNet based on the well-known CaffeNet, VGGNet, NIN and ResNet are thoroughly investigated and experimentally evaluated on multiple datasets including CIFAR-100, Stanford Dogs and UEC FOOD-100. The results demonstrate that DualNet can really help learn more accurate image representation, and thus result in higher accuracy for recognition. In particular, the performance on CIFAR-100 is state-of-the-art compared to the recent works.",
"title": ""
}
] |
scidocsrr
|
18218294102f719952f171ee7427de97
|
Design of a Ternary Memory Cell Using CNTFETs
|
[
{
"docid": "5ae61b2cecb61ecc70c2ec2049426841",
"text": "Advances in multiple-valued logic (MVL) have been inspired, in large part, by advances in integrated circuit technology. Multiple-valued logic has matured to the point where four-valued logic is now part of commercially available VLSI IC's. Besides reduction in chip area, MVL offers other benefits such as the potential for circuit test. This paper describes the historical and technical background of MVL, and areas of present and future application. It is intended, as well, to serve as a tutorial for the nonspecialist.",
"title": ""
}
] |
[
{
"docid": "592ccb18cfc7770fcb8b8adeea1b4b92",
"text": "We show the existence of a Locality-Sensitive Hashing (LSH) family for the angular distance that yields an approximate Near Neighbor Search algorithm with the asymptotically optimal running time exponent. Unlike earlier algorithms with this property (e.g., Spherical LSH [1, 2]), our algorithm is also practical, improving upon the well-studied hyperplane LSH [3] in practice. We also introduce a multiprobe version of this algorithm and conduct an experimental evaluation on real and synthetic data sets. We complement the above positive results with a fine-grained lower bound for the quality of any LSH family for angular distance. Our lower bound implies that the above LSH family exhibits a trade-off between evaluation time and quality that is close to optimal for a natural class of LSH functions.",
"title": ""
},
{
"docid": "5bf9ebaecbcd4b713a52d3572e622cbd",
"text": "Essay scoring is a complicated processing requiring analyzing, summarizing and judging expertise. Traditional work on essay scoring focused on automatic handcrafted features, which are expensive yet sparse. Neural models offer a way to learn syntactic and semantic features automatically, which can potentially improve upon discrete features. In this paper, we employ convolutional neural network (CNN) for the effect of automatically learning features, and compare the result with the state-of-art discrete baselines. For in-domain and domain-adaptation essay scoring tasks, our neural model empirically outperforms discrete models.",
"title": ""
},
{
"docid": "33beb7f84ee6e34d7d9c583171f98252",
"text": "In this paper, a novel zero-voltage switching full-bridge converter with trailing edge pulse width modulation and capacitive output filter is presented. The target application for this study is the second stage dc-dc converter in a two stage 1.65 kW on-board charger for a plug-in hybrid electric vehicle. For this application the design objective is to achieve high efficiency and low cost in order to minimize the charger size, charging time, and the amount and the cost of electricity drawn from the utility. A detailed converter operation analysis is presented along with simulation and experimental results. In comparison to a benchmark full-bridge with an LC output filter, the proposed converter reduces the reverse recovery losses in the secondary rectifier diodes, therefore, enabling a converter switching frequency of 100 kHz. Experimental results are presented for a prototype unit converting 400 V from the input dc link to an output voltage range of 200-450 V dc at 1650 W. The prototype achieves a peak efficiency of 95.7%.",
"title": ""
},
{
"docid": "f43ed3feda4e243a1cb77357b435fb52",
"text": "Existing text generation methods tend to produce repeated and “boring” expressions. To tackle this problem, we propose a new text generation model, called Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward for repeatedly generated text and high reward for “novel” and fluent text, encouraging the generator to produce diverse and informative text. Moreover, we propose a novel languagemodel based discriminator, which can better distinguish novel text from repeated text without the saturation problem compared with existing classifier-based discriminators. The experimental results on review generation and dialogue generation tasks demonstrate that our model can generate substantially more diverse and informative text than existing baselines.1",
"title": ""
},
{
"docid": "636076c522ea4ac91afbdc93d58fa287",
"text": "Aspect-based opinion mining has attracted lots of attention today. In this thesis, we address the problem of product aspect rating prediction, where we would like to extract the product aspects, and predict aspect ratings simultaneously. Topic models have been widely adapted to jointly model aspects and sentiments, but existing models may not do the prediction task well due to their weakness in sentiment extraction. The sentiment topics usually do not have clear correspondence to commonly used ratings, and the model may fail to extract certain kinds of sentiments due to skewed data. To tackle this problem, we propose a sentiment-aligned topic model(SATM), where we incorporate two types of external knowledge: product-level overall rating distribution and word-level sentiment lexicon. Experiments on real dataset demonstrate that SATM is effective on product aspect rating prediction, and it achieves better performance compared to the existing approaches.",
"title": ""
},
{
"docid": "8f2c7836509592fda62e52c0d4a62192",
"text": "Because of its high modularity and carry-free addition, a redundant binary (RB) representation can be used when designing high performance multipliers. The conventional RB multiplier needs for an additional RB partial product (RBPP) row, because an error-correcting word (ECW) is created by both the radix-8 and radix-4 Modified Booth encodings (MBE). This becomes subject in an additional RBPP accumulation stage for the MBE multiplier. A new RB modified partial product generator (RBMPPG) was proposed in this paper; it takes off the extra ECW and hence, it rescues one RBPP accumulation stage. Therefore, than a conventional RB MBE multiplier, the proposed RBMPPG produces fewer partial product rows. Simulation results show that the proposed RBMPPG based designs sufficiently make better the area and power consumption when the word length of each operand in the multiplier is at least 32 bits; these decreases over previous NB multiplier designs need in a small delay increase (approximately 5%). The power-delay product can be making smaller by up to 59% using the proposed RB multipliers when estimated with existing RB multipliers.",
"title": ""
},
{
"docid": "aa9b9c05bf09e3c6cceeb664e218a753",
"text": "Software development is an inherently team-based activity, and many software-engineering courses are structured around team projects, in order to provide students with an authentic learning experience. The collaborative-development tools through which student developers define, share and manage their tasks generate a detailed record in the process. Albeit not designed for this purpose, this record can provide the instructor with insights into the students' work, the team's progress over time, and the individual team-member's contributions. In this paper, we describe an analysis and visualization toolkit that enables instructors to interactively explore the trace of the team's collaborative work, to better understand the team dynamics, and the tasks of the individual team developers. We also discuss our grounded-theory analysis of one team's work, based on their email exchanges, questionnaires and interviews. Our analyses suggest that the inferences supported by our toolkit are congruent with the developers' feedback, while there are some discrepancies with the reflections of the team as a whole.",
"title": ""
},
{
"docid": "3548c5a69614ff70a63d4554988e5c19",
"text": "This paper describes the design and implementation of a prototype of a mobile real time heart rate monitoring system, in order to get a preventive or detective control of the status of a person who had been diagnosed a heart disease. The system combines a mobile wearable device that senses cardiac rhythm and identify the geographical location of a person, then transmits by GPRS (General Packet Radio Service) technology to a web service where a web application stores, interprets, presents the data and notifies an unusual behavior.",
"title": ""
},
{
"docid": "cca9b3cb4a0d6fb8a690f2243cf7abce",
"text": "In this paper, we propose to predict immediacy for interacting persons from still images. A complete immediacy set includes interactions, relative distance, body leaning direction and standing orientation. These measures are found to be related to the attitude, social relationship, social interaction, action, nationality, and religion of the communicators. A large-scale dataset with 10,000 images is constructed, in which all the immediacy measures and the human poses are annotated. We propose a rich set of immediacy representations that help to predict immediacy from imperfect 1-person and 2-person pose estimation results. A multi-task deep recurrent neural network is constructed to take the proposed rich immediacy representation as input and learn the complex relationship among immediacy predictions multiple steps of refinement. The effectiveness of the proposed approach is proved through extensive experiments on the large scale dataset.",
"title": ""
},
{
"docid": "0d802fea4e3d9324ba46c35e5a002b6a",
"text": "Hyponatremia is common in both inpatients and outpatients. Medications are often the cause of acute or chronic hyponatremia. Measuring the serum osmolality, urine sodium concentration and urine osmolality will help differentiate among the possible causes. Hyponatremia in the physical states of extracellular fluid (ECF) volume contraction and expansion can be easy to diagnose but often proves difficult to manage. In patients with these states or with normal or near-normal ECF volume, the syndrome of inappropriate secretion of antidiuretic hormone is a diagnosis of exclusion, requiring a thorough search for all other possible causes. Hyponatremia should be corrected at a rate similar to that at which it developed. When symptoms are mild, hyponatremia should be managed conservatively, with therapy aimed at removing the offending cause. When symptoms are severe, therapy should be aimed at more aggressive correction of the serum sodium concentration, typically with intravenous therapy in the inpatient setting.",
"title": ""
},
{
"docid": "25bcbb44c843d71b7422905e9dbe1340",
"text": "INTRODUCTION\nThe purpose of this study was to evaluate the effect of using the transverse analysis developed at Case Western Reserve University (CWRU) in Cleveland, Ohio. The hypotheses were based on the following: (1) Does following CWRU's transverse analysis improve the orthodontic results? (2) Does following CWRU's transverse analysis minimize the active treatment duration?\n\n\nMETHODS\nA retrospective cohort research study was conducted on a randomly selected sample of 100 subjects. The sample had CWRU's analysis performed retrospectively, and the sample was divided according to whether the subjects followed what CWRU's transverse analysis would have suggested. The American Board of Orthodontics discrepancy index was used to assess the pretreatment records, and quality of the result was evaluated using the American Board of Orthodontics cast/radiograph evaluation. The Mann-Whitney test was used for the comparison.\n\n\nRESULTS\nCWRU's transverse analysis significantly improved the total cast/radiograph evaluation scores (P = 0.041), especially the buccolingual inclination component (P = 0.001). However, it did not significantly affect treatment duration (P = 0.106).\n\n\nCONCLUSIONS\nCWRU's transverse analysis significantly improves the orthodontic results but does not have significant effects on treatment duration.",
"title": ""
},
{
"docid": "e0bb1bdcba38bcfbcc7b2da09cd05a3f",
"text": "Reconstructing the 3D surface from a set of provided range images – acquired by active or passive sensors – is an important step to generate faithful virtual models of real objects or environments. Since several approaches for high quality fusion of range images are already known, the runtime efficiency of the respective methods are of increased interest. In this paper we propose a highly efficient method for range image fusion resulting in very accurate 3D models. We employ a variational formulation for the surface reconstruction task. The global optimal solution can be found by gradient descent due to the convexity of the underlying energy functional. Further, the gradient descent procedure can be parallelized, and consequently accelerated by graphics processing units. The quality and runtime performance of the proposed method is demonstrated on wellknown multi-view stereo benchmark datasets.",
"title": ""
},
{
"docid": "496e57bd6a6d06123ae886e0d6753783",
"text": "With the enormous growth of digital content in internet, various types of online reviews such as product and movie reviews present a wealth of subjective information that can be very helpful for potential users. Sentiment analysis aims to use automated tools to detect subjective information from reviews. Up to now as there are few researches conducted on feature selection in sentiment analysis, there are very rare works for Persian sentiment analysis. This paper considers the problem of sentiment classification using different feature selection methods for online customer reviews in Persian language. Three of the challenges of Persian text are using of a wide variety of declensional suffixes, different word spacing and many informal or colloquial words. In this paper we study these challenges by proposing a model for sentiment classification of Persian review documents. The proposed model is based on stemming and feature selection and is employed Naive Bayes algorithm for classification. We evaluate the performance of the model on a collection of cellphone reviews, where the results show the effectiveness of the proposed approaches.",
"title": ""
},
{
"docid": "e71b44837998b3df6a750c21f6a44ce6",
"text": "As the basis of value creation increasingly depends on the leverage of the intangible assets of firms, knowledge management systems (KMS) are emerging as powerful sources of competitive advantage. However, the general recognition of the importance of such systems seems to be accompanied by a technology-induced drive to implement systems with inadequate consideration of the fundamental knowledge problems that the KMS are likely to solve. This paper contributes to the stream of research on knowledge management systems by proposing an inductively developed framework for this important class of information systems, classifying KMS based on the locus of the knowledge and the a priori structuring of contents. This framework provides a means to explore issues related to KMS and unifying dimensions underlying different types of KMS. The contingencies that we discuss—the size and diversity of networks, the maintenance of knowledge flows and the long term effects of the use of KMS—provide a window into work in a number of reference disciplines that would enrich the utility of KMS and also open up fruitful areas for future research.",
"title": ""
},
{
"docid": "eb972bb7d972c28d3d740758b59f49b6",
"text": "An ultra-low-leakage power-rail ESD clamp circuit, composed of the SCR device and new ESD detection circuit, has been proposed with consideration of gate current to reduce the standby leakage current. By controlling the gate current of the devices in the ESD detection circuit under a specified bias condition, the whole power-rail ESD clamp circuit can achieve an ultra-low standby leakage current. The new proposed circuit has been fabricated in a 1 V 65 nm CMOS process for experimental verification. The new proposed power-rail ESD clamp circuit can achieve 7 kV HBM and 325 V MM ESD levels while consuming only a standby leakage current of 96 nA at 1 V bias in room temperature and occupying an active area of only 49 m 21 m.",
"title": ""
},
{
"docid": "b27ac6851bb576cac1c8d2f7e76fc8f1",
"text": "A novel 3-dimensional Dual Control-gate with Surrounding Floating-gate (DC-SF) NAND flash cell has been successfully developed, for the first time. The DC-SF cell consists of a surrounding floating gate with stacked dual control gate. With this structure, high coupling ratio, low voltage cell operation (program: 15V and erase: −11V), and wide P/E window (9.2V) can be obtained. Moreover, negligible FG-FG interference (12mV/V) is achieved due to the control gate shield effect. Then we propose 3D DC-SF NAND flash cell as the most promising candidate for 1Tb and beyond with stacked multi bit FG cell (2 ∼ 4bit/cell).",
"title": ""
},
{
"docid": "d158d2d0b24fe3766b6ddb9bff8e8010",
"text": "We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods.",
"title": ""
},
{
"docid": "c50e7d16cfc2f71c256d952391dfb8ec",
"text": "Fuzzy Cognitive Maps (FCMs) are a flexible modeling technique with the goal of modeling causal relationships. Traditionally FCMs are developed by experts. We need to learn FCMs directly from data when expert knowledge is not available. The FCM learning problem can be described as the minimization of the difference between the desired response of the system and the estimated response of the learned FCM model. Learning FCMs from data can be a difficult task because of the large number of candidate FCMs. A FCM learning algorithm based on Ant Colony Optimization (ACO) is presented in order to learn FCM models from multiple observed response sequences. Experiments on simulated data suggest that the proposed ACO based FCM learning algorithm is capable of learning FCM with at least 40 nodes. The performance of the algorithm was tested on both single response sequence and multiple response sequences. The test results are compared to several algorithms, such as genetic algorithms and nonlinear Hebbian learning rule based algorithms. The performance of the ACO algorithm is better than these algorithms in several different experiment scenarios in terms of model errors, sensitivities and specificities. The effect of number of response sequences and number of nodes is discussed.",
"title": ""
},
{
"docid": "45375c1527fcb46d0d29bbb4fdab4f9c",
"text": "Removing suffixes by automatic means is an operation which is especially useful in the field of information retrieval. In a typical IR environment, one has a collection of documents, each described by the words in the document title and possibly by words in the document abstract. Ignoring the issue of precisely where the words originate, we can say that a document is represented by a vetor of words, or terms. Terms with a common stem will usually have similar meanings, for example:",
"title": ""
},
{
"docid": "0d6165524d748494a5c4d0d2f0675c42",
"text": "In Saudi Arabia, breast cancer is diagnosed at advanced stage compared to Western countries. Nevertheless, the perceived barriers to delayed presentation have been poorly examined. Additionally, available breast cancer awareness data are lacking validated measurement tool. The aim of this study is to evaluate the level of breast cancer awareness and perceived barriers to seeking medical care among Saudi women, using internationally validated tool. A cross-sectional study was conducted among adult Saudi women attending a primary care center in Riyadh during February 2014. Data were collected using self-administered questionnaire based on the Breast Cancer Awareness Measure (CAM-breast). Out of 290 women included, 30 % recognized five or more (out of nine) non-lump symptoms of breast cancer, 31 % correctly identified the risky age of breast cancer (set as 50 or 70 years), 28 % reported frequent (at least once a month) breast checking. Considering the three items of the CAM-breast, only 5 % were completely aware while 41 % were completely unaware of breast cancer. The majority (94 %) reported one or more barriers. The most frequently reported barrier was the difficulty of getting a doctor appointment (39 %) followed by worries about the possibility of being diagnosed with breast cancer (31 %) and being too busy to seek medical help (26 %). We are reporting a major gap in breast cancer awareness and several logistic and emotional barriers to seeking medical care among adult Saudi women. The current findings emphasized the critical need for an effective national breast cancer education program to increase public awareness and early diagnosis.",
"title": ""
}
] |
scidocsrr
|
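The suffix-stripping passage above motivates conflating terms that share a common stem. Below is a minimal illustrative sketch of that idea in Python; it is not the actual Porter algorithm, and the suffix list, minimum stem length, and function names are assumptions made purely for illustration.

```python
# Naive longest-match suffix stripping for IR term conflation (illustrative only).
SUFFIXES = sorted(["ations", "ation", "tion", "ness", "ment", "ing",
                   "ed", "es", "ly", "s"], key=len, reverse=True)

def naive_stem(word, min_stem=3):
    """Strip the longest matching suffix once, keeping at least min_stem characters."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return word[:-len(suf)]
    return word

def group_by_stem(terms):
    """Map each approximate stem to the surface terms that share it."""
    groups = {}
    for term in terms:
        groups.setdefault(naive_stem(term.lower()), []).append(term)
    return groups

# Terms with a common stem collapse onto one key; near-misses such as
# "connection" -> "connec" illustrate why real stemmers (e.g. Porter's)
# use ordered rewrite rules rather than a single suffix list.
print(group_by_stem(["connect", "connected", "connecting", "connection"]))
```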
e9ef985eaa79922c7991301c628754ac
|
Techniques for Maximizing Efficiency of Solar Energy Harvesting Systems ( Invited Paper )
|
[
{
"docid": "2fb3eac8622f512d1acc75874a9e25de",
"text": "DuraCap is a solar-powered energy harvesting system that stores harvested energy in supercapacitors and is voltage-compatible with lithium-ion batteries. The use of supercapacitors instead of batteries enables DuraCap to extend the operational life time from tens of months to tens of years. DuraCap addresses two additional problems with micro-solar systems: inefficient operation of supercapacitors during cold booting, and maximum power point tracking (MPPT) over a variety of solar panels. Our approach is to dedicate a smaller supercapacitor to cold booting before handing over to the array of larger-value supercapacitors. For MPPT, we designed a bound-control circuit for PFM regulator switching and an I-V tracer to enable self-configuring over the panel's aging process and replacement. Experimental results show the DuraCap system to achieve high conversion efficiency and minimal downtime.",
"title": ""
}
] |
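The DuraCap passage above relies on maximum power point tracking (MPPT) over a variety of solar panels, implemented in hardware with a bound-control circuit and an I-V tracer. The sketch below shows the classic perturb-and-observe MPPT strategy in software as a generic illustration of the idea, not DuraCap's actual circuit; the driver hooks read_panel and set_reference_voltage, the step size, and the toy panel model are all assumptions.

```python
def perturb_and_observe(read_panel, set_reference_voltage,
                        v_ref=12.0, step=0.1, iterations=1000):
    """Nudge the operating voltage and keep moving in whichever direction
    increases harvested power; reverse the perturbation after an overshoot."""
    prev_power = 0.0
    for _ in range(iterations):
        set_reference_voltage(v_ref)
        voltage, current = read_panel()   # measured panel voltage and current
        power = voltage * current
        if power < prev_power:            # overshot the maximum power point
            step = -step                  # reverse the perturbation direction
        v_ref += step
        prev_power = power
    return v_ref

if __name__ == "__main__":
    state = {"v": 12.0}
    def set_reference_voltage(v):         # hypothetical driver hook
        state["v"] = v
    def read_panel():                     # toy panel whose current falls off away from ~17 V
        v = state["v"]
        return v, max(0.0, 5.0 * (1.0 - ((v - 17.0) / 10.0) ** 2))
    print(perturb_and_observe(read_panel, set_reference_voltage))
```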
[
{
"docid": "7f553d57ec54b210e86e4d7abba160d7",
"text": "SUMMARY\nBioIE is a rule-based system that extracts informative sentences relating to protein families, their structures, functions and diseases from the biomedical literaturE. Based on manual definition of templates and rules, it aims at precise sentence extraction rather than wide recall. After uploading source text or retrieving abstracts from MEDLINE, users can extract sentences based on predefined or user-defined template categories. BioIE also provides a brief insight into the syntactic and semantic context of the source-text by looking at word, N-gram and MeSH-term distributions. Important Applications of BioIE are in, for example, annotation of microarray data and of protein databases.\n\n\nAVAILABILITY\nhttp://umber.sbs.man.ac.uk/dbbrowser/bioie/",
"title": ""
},
{
"docid": "12f4029308308061f3cbb9e7ac56efd0",
"text": "A low-cost and effective wearable digital stethoscope for healthcare monitoring through wireless transmission module is proposed in this paper. The design integrates heart and lung sound signal acquisition, amplification, wireless transmission and signal processing using National Instruments Labview. The device can be used to record patients' cardiopulmonary sound. It has further applications for real time monitoring over long distance. The accuracy, robust and cost-effectiveness of this design make it a suitable method for real time visualized signal monitoring over long distance, data storage, replay function and auto disease preliminary assessment.",
"title": ""
},
{
"docid": "c2a59c463eb9667198040591ed746bfc",
"text": "In this paper, we present a new portable force feedback device for surgery simulations. Dielectric elastomer spring roll linear actua tors for this device were manufactured, and characterized via pas sive tensile tests and active isometric tests. The actuators exhibited a maximum force of 7.2 N, and a maximum elongation of 31%. Due to the high driving voltage, electrical safety issues were also considered. The results showed that sufficient electrical safety can be provided to the user. Two prototypes were built, which practi cally showed functionalities of the actuator and the proposed force feedback concept with actuators connected between the fingers.",
"title": ""
},
{
"docid": "bf42a82730cfc7fb81866fbb345fef64",
"text": "MicroRNAs (miRNAs) are evolutionarily conserved small non-coding RNAs that have crucial roles in regulating gene expression. Increasing evidence supports a role for miRNAs in many human diseases, including cancer and autoimmune disorders. The function of miRNAs can be efficiently and specifically inhibited by chemically modified antisense oligonucleotides, supporting their potential as targets for the development of novel therapies for several diseases. In this Review we summarize our current knowledge of the design and performance of chemically modified miRNA-targeting antisense oligonucleotides, discuss various in vivo delivery strategies and analyse ongoing challenges to ensure the specificity and efficacy of therapeutic oligonucleotides in vivo. Finally, we review current progress on the clinical development of miRNA-targeting therapeutics.",
"title": ""
},
{
"docid": "6838cf1310f0321cd524bb1120f35057",
"text": "One of the most compelling visions of future robots is that of the robot butler. An entity dedicated to fulfilling your every need. This obviously has its benefits, but there could be a flipside to this vision. To fulfill the needs of its users, it must first be aware of them, and so it could potentially amass a huge amount of personal data regarding its user, data which may or may not be safe from accidental or intentional disclosure to a third party. How may prospective owners of a personal robot feel about the data that might be collected about them? In order to investigate this issue experimentally, we conducted an exploratory study where 12 participants were exposed to an HRI scenario in which disclosure of personal information became an issue. Despite the small sample size interesting results emerged from this study, indicating how future owners of personal robots feel regarding what the robot will know about them, and what safeguards they believe should be in place to protect owners from unwanted disclosure of private information.",
"title": ""
},
{
"docid": "09dfc388fc9eec17c2ec9dd5002af8c3",
"text": "Having effective visualizations of filesystem provenance data is valuable for understanding its complex hierarchical structure. The most common visual representation of provenance data is the node-link diagram. While effective for understanding local activity, the node-link diagram fails to offer a high-level summary of activity and inter-relationships within the data. We present a new tool, InProv, which displays filesystem provenance with an interactive radial-based tree layout. The tool also utilizes a new time-based hierarchical node grouping method for filesystem provenance data we developed to match the user's mental model and make data exploration more intuitive. We compared InProv to a conventional node-link based tool, Orbiter, in a quantitative evaluation with real users of filesystem provenance data including provenance data experts, IT professionals, and computational scientists. We also compared in the evaluation our new node grouping method to a conventional method. The results demonstrate that InProv results in higher accuracy in identifying system activity than Orbiter with large complex data sets. The results also show that our new time-based hierarchical node grouping method improves performance in both tools, and participants found both tools significantly easier to use with the new time-based node grouping method. Subjective measures show that participants found InProv to require less mental activity, less physical activity, less work, and is less stressful to use. Our study also reveals one of the first cases of gender differences in visualization; both genders had comparable performance with InProv, but women had a significantly lower average accuracy (56%) compared to men (70%) with Orbiter.",
"title": ""
},
{
"docid": "2fdf4618c0519bfdee5c83bef9012e0f",
"text": "In most Western countries females have higher rates of suicidal ideation and behavior than males, yet mortality from suicide is typically lower for females than for males. This article explores the gender paradox of suicidal behavior, examines its validity, and critically examines some of the explanations, concluding that the gender paradox of suicidal behavior is a real phenomenon and not a mere artifact of data collection. At the same time, the gender paradox in suicide is a more culture-bound phenomenon than has been traditionally assumed; cultural expectations about gender and suicidal behavior strongly determine its existence. Evidence from the United States and Canada suggests that the gender gap may be more prominent in communities where different suicidal behaviors are expected of females and males. These divergent expectations may affect the scenarios chosen by females and males, once suicide becomes a possibility, as well as the interpretations of those who are charged with determining whether a particular behavior is suicidal (e.g., coroners). The realization that cultural influences play an important role in the gender paradox of suicidal behaviors holds important implications for research and for public policy.",
"title": ""
},
{
"docid": "1f72fad6fd2394011f608f7f80a96d2b",
"text": "Flooding Peer-to-Peer (P2P) networks form the basis of services such as the electronic currency system Bitcoin. The decentralized architecture enables robustness against failure. However, knowledge of the network's topology can allow adversaries to attack specific peers in order to, e.g., isolate certain peers or even partition the network. Knowledge of the topology might be gained by observing the flooding process, which is inherently possible in such networks,, performing a timing analysis on the observations. In this paper we present a timing analysis method that targets flooding P2P networks, show its theoretical, practical feasibility. A validation in the real-world Bitcoin network proves the possibility of inferring network links of actively participating peers with substantial precision, recall (both ~ 40%), potentially enabling attacks on the network. Additionally, we analyze the countermeasure of trickling, quantify the tradeoff between the effectiveness of the countermeasure, the expected performance penalty. The analysis shows that inappropriate parametrization can actually facilitate inference attacks.",
"title": ""
},
{
"docid": "e4cba1a4ebef9fa18c3ee11258160a8b",
"text": "Subocclusive hymenal variants, such as microperforate or septate hymen, impair somatic functions (e.g., vaginal intercourse or menstrual hygiene) and can negatively impact the quality of life of young women. We know little about the prevalence and inheritance of subocclusive hymenal variants. So far, eight cases of familial occurrence of occlusive hymenal anomalies (imperforate hymen) have been reported. In one of these cases, monozygotic twins were affected. We are reporting the first case of subocclusive hymenal variants (microperforate hymen and septate hymen) in 16-year-old white dizygotic twins. In addition, we review and discuss the current evidence. Conclusion: The mode of inheritance of hymenal variants has not been determined so far. Because surgical corrections of hymenal variants should be carried out in asymptomatic patients (before menarche), gynecologists and pediatricians should keep in mind that familial occurrences may occur.",
"title": ""
},
{
"docid": "02564434d1dab0031718a10400a59593",
"text": "The advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in cloud, it is crucial for the search service to allow multi-keyword query and provide result similarity ranking to meet the effective data retrieval need. Related works on searchable encryption focus on single keyword search or Boolean keyword search, and rarely differentiate the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. Among various multi-keyword semantics, we choose the efficient principle of \" coordinate matching \" , i.e., as many matches as possible, to capture the similarity between search query and data documents, and further use \" inner product similarity \" to quantitatively formalize such principle for similarity measurement. We first propose a basic MRSE scheme using secure inner product computation, and then significantly improve it to meet different privacy requirements in two levels of threat models. Thorough analysis investigating privacy and efficiency guarantees of proposed schemes is given, and experiments on the real-world dataset further show proposed schemes indeed introduce low overhead on computation and communication. INTRODUCTION Due to the rapid expansion of data, the data owners tend to store their data into the cloud to release the burden of data storage and maintenance [1]. However, as the cloud customers and the cloud server are not in the same trusted domain, our outsourced data may be under the exposure to the risk. Thus, before sent to the cloud, the sensitive data needs to be encrypted to protect for data privacy and combat unsolicited accesses. Unfortunately, the traditional plaintext search methods cannot be directly applied to the encrypted cloud data any more. The traditional information retrieval (IR) has already provided multi-keyword ranked search for the data user. In the same way, the cloud server needs provide the data user with the similar function, while protecting data and search privacy. It …",
"title": ""
},
{
"docid": "ce53aa803d587301a47166c483ecec34",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "1bfecf5e7aac1af0e8170c5f3dc2cc6f",
"text": "A two-phase four-segment DC-DC converter with novel coupled-inductors output network utilizing phase shedding and phase segmentation is presented for light load efficiency enhancement. The coupled inductor network increases the effective inductance value and reduces inductor current ripple. To improve light load efficiency, resonant gate drivers are employed to reduce driver losses. The DC-DC converter is implemented in 0.18 μm six-metal CMOS technology with 5 V devices, and occupies a total area of 7.77 mm2. The converter achieves a peak efficiency of 77.8% at 6 W output with 5% efficiency improvement at 1 V output due to the use of resonant gate drivers. Furthermore, with phase shedding, the converter maintains peak efficiency as the output current varies from 0.1 A to 1.86 A.",
"title": ""
},
{
"docid": "7346ce53235490f0eaf1ad97c7c23006",
"text": "With the growth in sociality and interaction around online news media, news sites are increasingly becoming places for communities to discuss and address common issues spurred by news articles. The quality of online news comments is of importance to news organizations that want to provide a valuable exchange of community ideas and maintain credibility within the community. In this work we examine the complex interplay between the needs and desires of news commenters with the functioning of different journalistic approaches toward managing comment quality. Drawing primarily on newsroom interviews and reader surveys, we characterize the comment discourse of SacBee.com, discuss the relationship of comment quality to both the consumption and production of news information, and provide a description of both readers' and writers' motivations for usage of news comments. We also examine newsroom strategies for dealing with comment quality as well as explore tensions and opportunities for value-sensitive innovation within such online communities.",
"title": ""
},
{
"docid": "d774759e03329d0cc5611ab9104f8299",
"text": "The flexibility of neural networks is a very powerful property. In many cases, these changes lead to great improvements in accuracy compared to basic models that we discussed in the previous tutorial. In the last part of the tutorial, I will also explain how to parallelize the training of neural networks. This is also an important topic because parallelizing neural networks has played an important role in the current deep learning movement.",
"title": ""
},
{
"docid": "e9199c0f3b08979c03e0c82399ac7160",
"text": "Background: ADHD can have a negative impact on occupational performance of a child, interfering with ADLs, IADLs, education, leisure, and play. However, at this time, a cumulative review of evidence based occupational therapy interventions for children with ADHD do not exist. Purpose: The purpose of this scholarly project was to complete a systematic review of what occupational therapy interventions are effective for school-aged children with ADHD. Methods: An extensive systematic review for level T, II, or II research articles was completed using CINAHL and OT Search. Inclusion, exclusion, subject terms, and words or phrases were determined with assistance from the librarian at the Harley French Library at the University of North Dakota. Results: The systematic review yielded !3 evidence-based articles with interventions related to cognition, motor, sensory, and play. Upon completion of the systematic review, articles were categorized based upon an initial literature search understanding common occupational therapy interventions for children with ADHD. Specifically, level I, II, and III occupational therapy research is available for interventions addressing cognition, motor, sensory, and play. Conclusion: Implications for practice and education include the need for foundational and continuing education opportunities reflecting evidenced-based interventions for ADHD. Further research is needed to solidify best practices for children with ADHD including more rigorous studies across interventions.",
"title": ""
},
{
"docid": "01f3f3b3693940963f5f2c4f71585a2a",
"text": "BACKGROUND\nStress and anxiety are widely considered to be causally related to alcohol craving and consumption, as well as development and maintenance of alcohol use disorder (AUD). However, numerous preclinical and human studies examining effects of stress or anxiety on alcohol use and alcohol-related problems have been equivocal. This study examined relationships between scores on self-report anxiety, anxiety sensitivity, and stress measures and frequency and intensity of recent drinking, alcohol craving during early withdrawal, as well as laboratory measures of alcohol craving and stress reactivity among heavy drinkers with AUD.\n\n\nMETHODS\nMedia-recruited, heavy drinkers with AUD (N = 87) were assessed for recent alcohol consumption. Anxiety and stress levels were characterized using paper-and-pencil measures, including the Beck Anxiety Inventory (BAI), the Anxiety Sensitivity Index-3 (ASI-3), and the Perceived Stress Scale (PSS). Eligible subjects (N = 30) underwent alcohol abstinence on the Clinical Research Unit; twice daily measures of alcohol craving were collected. On day 4, subjects participated in the Trier Social Stress Test; measures of cortisol and alcohol craving were collected.\n\n\nRESULTS\nIn multivariate analyses, higher BAI scores were associated with lower drinking frequency and reduced drinks/drinking day; in contrast, higher ASI-3 scores were associated with higher drinking frequency. BAI anxiety symptom and ASI-3 scores also were positively related to Alcohol Use Disorders Identification Test total scores and AUD symptom and problem subscale measures. Higher BAI and ASI-3 scores but not PSS scores were related to greater self-reported alcohol craving during early alcohol abstinence. Finally, BAI scores were positively related to laboratory stress-induced cortisol and alcohol craving. In contrast, the PSS showed no relationship with most measures of alcohol craving or stress reactivity.\n\n\nCONCLUSIONS\nOverall, clinically oriented measures of anxiety compared with perceived stress were more strongly associated with a variety of alcohol-related measures in current heavy drinkers with AUD.",
"title": ""
},
{
"docid": "2e5a51176d1c0ab5394bb6a2b3034211",
"text": "School transport is used by millions of children worldwide. However, not a substantial effort is done in order to improve the existing school transport systems. This paper presents the development of an IoT based scholar bus monitoring system. The development of new telematics technologies has enabled the development of various Intelligent Transport Systems. However, these are not presented as ITS services to end users. This paper presents the development of an IoT based scholar bus monitoring system that through localization and speed sensors will allow many stakeholders such as parents, the goverment, the school and many other authorities to keep realtime track of the scholar bus behavior, resulting in a better controlled scholar bus.",
"title": ""
},
{
"docid": "6a3bef9e3ca87f13356050f85afbb0ed",
"text": "We introduce the concept of control improvisation, the process of generating a random sequence of control events guided by a reference sequence and satisfying a given specification. We propose a formal definition of the control improvisation problem and an empirical solution applied to the domain of music. More specifically, we consider the scenario of generating a monophonic Jazz melody (solo) on a given song harmonization. The music is encoded symbolically, with the improviser generating a sequence of note symbols comprising pairs of pitches (frequencies) and discrete durations. Our approach can be decomposed roughly into two phases: a generalization phase, that learns from a training sequence (e.g., obtained from a human improviser) an automaton generating similar sequences, and a supervision phase that enforces a specification on the generated sequence, imposing constraints on the music in both the pitch and rhythmic domains. The supervision uses a measure adapted from Normalized Compression Distances (NCD) to estimate the divergence between generated melodies and the training melody and employs strategies to bound this divergence. An empirical evaluation is presented on a sample set of Jazz music.",
"title": ""
},
{
"docid": "1a3c01a10c296ca067452d98847240d6",
"text": "The second edition of Creswell's book has been significantly revised and updated. The author clearly sets out three approaches to research: quantitative, qualitative and mixed methods. As someone who has used mixed methods in my research, it is refreshing to read a textbook that addresses this. The differences between the approaches are clearly identified and a rationale for using each methodological stance provided.",
"title": ""
},
{
"docid": "21c84ab0fb698ad2619e0afc6db44e1a",
"text": "Nanoscale windows in graphene (nanowindows) have the ability to switch between open and closed states, allowing them to become selective, fast, and energy-efficient membranes for molecular separations. These special pores, or nanowindows, are not electrically neutral due to passivation of the carbon edges under ambient conditions, becoming flexible atomic frameworks with functional groups along their rims. Through computer simulations of oxygen, nitrogen, and argon permeation, here we reveal the remarkable nanowindow behavior at the atomic scale: flexible nanowindows have a thousand times higher permeability than conventional membranes and at least twice their selectivity for oxygen/nitrogen separation. Also, weakly interacting functional groups open or close the nanowindow with their thermal vibrations to selectively control permeation. This selective fast permeation of oxygen, nitrogen, and argon in very restricted nanowindows suggests alternatives for future air separation membranes. Graphene with nanowindows can have 1000 times higher permeability and four times the selectivity for air separation than conventional membranes, Vallejos-Burgos et al. reveal by molecular simulation, due to flexibility at the nanoscale and thermal vibrations of the nanowindows' functional groups.",
"title": ""
}
] |
scidocsrr
|
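The control-improvisation passage above bounds the divergence between generated and training melodies with a measure adapted from Normalized Compression Distance (NCD). The snippet below computes the standard NCD with zlib as the compressor; encoding melodies as pitch:duration byte strings is a simplification assumed here for illustration, not the paper's exact measure.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy usage with an assumed symbolic encoding of two melodies.
training  = b"C4:1 E4:1 G4:2 C5:4 G4:2 E4:1 C4:1"
generated = b"C4:1 E4:1 A4:2 C5:4 A4:2 E4:1 C4:1"
print(ncd(training, generated))   # values nearer 0 indicate more similar sequences
```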
992e205432130de5b972ddb36e37e59b
|
Impact of single layer, double layer and four layer windings on the performance of AFPMSMs
|
[
{
"docid": "c6c1ba04c8a2191f2d1b4bd970b93aff",
"text": "In this paper, a complete sensitivity analysis of the optimal parameters for the axial flux permanent magnet synchronous machines working in the field weakening region is implemented. Thanks to the presence of a parameterized accurate analytical model, it is possible to obtain all the required parameters of the machine. The two goals of the ideal design are to maximize the power density: <inline-formula><tex-math notation=\"LaTeX\">$P_{\\text{density}}$ </tex-math></inline-formula> and the ratio of maximal to rated speed: <inline-formula><tex-math notation=\"LaTeX\"> $n_{\\max}/n_r$</tex-math></inline-formula>, which is an inductance related parameter keeping the efficiency at the target speed above 90<inline-formula><tex-math notation=\"LaTeX\">$\\%$</tex-math></inline-formula>. Different slots/poles/phases combinations are studied to reveal the optimum combination for each phase. This paper has studied the effect of the ratio of number of stator slots to number of rotor poles on the <inline-formula> <tex-math notation=\"LaTeX\">$P_{\\text{density}}$</tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$n_{\\max}/n_r$</tex-math></inline-formula>. It is shown that a low value of this parameter results in a better <inline-formula><tex-math notation=\"LaTeX\">$P_{\\text{density}}$</tex-math></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$n_{\\max}/n_r$</tex-math></inline-formula>. The effect of the outer diameter, and the inner to outer diameter ratio are studied with respect to the two design goals. In addition, a comparison between the finite and the theoretical infinite speed designs is implemented. A complete 3D finite element validation has proven the robustness of the analytical model.",
"title": ""
},
{
"docid": "7f29c121af6573a5c81366b8cb2c8a21",
"text": "The objective of this paper is to develop an analytical optimal design tool to determine a megawatt-scale yokelss and segmented armature (YASA) machine design that fulfills the application requirements and constraints. This analytical tool considers both electromagnetic and structural designs. Different designs that provide similar performance will have emerged from this analytical process. A design reference map that graphically shows the relationships and tradeoffs between each objective function is introduced. A multicriteria optimization process is applied to determine a design optimum. In the optimization process, the design objectives considered in this study are to minimize the outer diameter, to minimize the structural mass of the machine, to minimize the copper and iron losses, and to minimize the active materials cost. Three variables considered in calculating the objective functions are the air-gap flux density, the ratio of outer-to-inner machine diameter, and the current loading. The optimization method uses a pseudoweight vector to provide the flexibility to prioritize one or more objective functions, dependant on the specific application requirements.",
"title": ""
}
] |
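The YASA design passage above picks a design optimum with a pseudoweight vector that lets the designer prioritize objectives such as outer diameter, structural mass, losses, and cost. The sketch below shows one common pseudo-weight selection scheme over a Pareto front of candidate designs; the normalization, the preference vector, and the toy numbers are assumptions for illustration and not the paper's exact procedure.

```python
# Pick a compromise design from a Pareto front using pseudo-weights
# (all objectives assumed to be minimized).
def pseudo_weights(front):
    """front: list of objective vectors; returns one pseudo-weight vector per point."""
    n_obj = len(front[0])
    f_min = [min(p[i] for p in front) for i in range(n_obj)]
    f_max = [max(p[i] for p in front) for i in range(n_obj)]
    weights = []
    for p in front:
        raw = [(f_max[i] - p[i]) / (f_max[i] - f_min[i] + 1e-12) for i in range(n_obj)]
        total = sum(raw) or 1.0
        weights.append([r / total for r in raw])
    return weights

def pick_design(front, preference):
    """Return the Pareto point whose pseudo-weights best match the preference vector."""
    dists = [sum((w - p) ** 2 for w, p in zip(wv, preference))
             for wv in pseudo_weights(front)]
    return front[dists.index(min(dists))]

# Toy front over (outer diameter [m], structural mass [kg], losses [kW], cost [k$]).
front = [(1.2, 900, 18, 70), (1.4, 780, 22, 65), (1.6, 700, 26, 60)]
print(pick_design(front, preference=[0.4, 0.3, 0.2, 0.1]))
```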
[
{
"docid": "21384ea8d80efbf2440fb09a61b03be2",
"text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"title": ""
},
{
"docid": "b929cbcaf8de8e845d1cf7f59d3eca63",
"text": "This paper presents 35 GHz single-pole-single-throw (SPST) and single-pole-double-throw (SPDT) CMOS switches using a 0.13 mum BiCMOS process (IBM 8 HP). The CMOS transistors are designed to have a high substrate resistance to minimize the insertion loss and improve power handling capability. The SPST/SPDT switches have a insertion loss of 1.8 dB/2.2 dB, respectively, and an input 1-dB compression point (P1 dB) greater than 22 dBm. The isolation is greater than 30 dB at 35-40 GHz and is achieved using two parallel resonant networks. To our knowledge, this is the first demonstration of low-loss, high-isolation CMOS switches at Ka-band frequencies.",
"title": ""
},
{
"docid": "63d26f3336960c1d92afbd3a61a9168c",
"text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.",
"title": ""
},
{
"docid": "31e0de8b5ca6321ef182b84c66e07ecd",
"text": "Visual sentiment analysis is raising more and more attention with the increasing tendency to express emotions through images. While most existing works assign a single dominant emotion to each image, we address the sentiment ambiguity by label distribution learning (LDL), which is motivated by the fact that image usually evokes multiple emotions. Two new algorithms are developed based on conditional probability neural network (CPNN). First, we propose BCPNN which encodes image label into a binary representation to replace the signless integers used in CPNN, and employ it as a part of input for the neural network. Then, we train our ACPNN model by adding noises to ground truth label and augmenting affective distributions. Since current datasets are mostly annotated for single-label learning, we build two new datasets, one of which is relabeled on the popular Flickr dataset and the other is collected from Twitter. These datasets contain 20,745 images with multiple affective labels, which are over ten times larger than the existing ones. Experimental results show that the proposed methods outperform the state-of-theart works on our large-scale datasets and other publicly available benchmarks. Introduction In recent years, lots of attention has been paid to affective image classification (Jou et al. 2015; Joshi et al. 2011; Chen et al. 2015). Most of these works are conducted by psychological studies (Lang 1979; Lang, Bradley, and Cuthbert 1998), and focus on manual design of features and classifiers (You et al. 2015a). As defined as a singlelabel learning (SLL) problem which assigns a single emotional label to each image, previous works (You et al. 2016; Sun et al. 2016) have performed promising results. However, image sentiment may be the mixture of all components from different regions rather than a single representative emotion. Meanwhile, different people may have different emotional reactions to the same image, which is caused by a variety of elements like the different culture background and various recognitions from unique experiences (Peng et al. 2015). Furthermore, even a single viewer may have multiple reactions to one image. Figure 1 shows examples from a widely used dataset, i.e. Abstract Paintings Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Amusement Awe Contentment Excitement Anger Disgust Fear Sadness",
"title": ""
},
{
"docid": "01472364545392cad69b9c7e1f65f4bb",
"text": "The designing of power transmission network is a difficult task due to the complexity of power system. Due to complexity in the power system there is always a loss of the stability due to the fault. Whenever a fault is intercepted in system, the whole system goes to severe transients. These transients cause oscillation in phase angle which leads poor power quality. The nature of oscillation is increasing instead being sustained, which leads system failure in form of generator damage. To reduce and eliminate the unstable oscillations one needs to use a stabilizer which can generate a perfect compensatory signal in order to minimize the harmonics generated due to instability. This paper presents a Power System stabilizer to reduce oscillations due to small signal disturbance. Additionally, a hybrid approach is proposed using FOPID stabilizer with the PSS connected SMIB. Genetic algorithm (GA), Particle swarm optimization (PSO) and Grey Wolf Optimization (GWO) are used for the parameter tuning of the stabilizer. Reason behind the use of GA, PSO and GWO instead of conventional methods is that it search the parameter heuristically, which leads better results. The efficiency of proposed approach is observed by rotor angle and power angle deviations in the SMIB system.",
"title": ""
},
{
"docid": "6545ea7d281be5528d9217f3b891a5da",
"text": "In this paper, a novel metamaterial absorber working in the C band frequency range has been proposed to reduce the in-band Radar Cross Section (RCS) of a typical planar antenna. The absorber is first designed in the shape of a hexagonal ring structure having dipoles at the corresponding arms of the rings. The various geometrical parameters of the proposed metamaterial structure have first been optimized using the numerical simulator, and the structure is fabricated and tested. In the second step, the metamaterial absorber is loaded on a microstrip patch antenna working in the same frequency band as that of the metamaterial absorber to reduce the in-band Radar Cross Section (RCS) of the antenna. The prototype is simulated, fabricated and tested. The simulated results show the 99% absorption of the absorber at 6.35 GHz which is in accordance with the measured data. A close agreement between the simulated and the measured results shows that the proposed absorber can be used for the RCS reduction of the planar antenna in order to improve its in-band stealth performance.",
"title": ""
},
{
"docid": "3e80b90205de0033a3e22f7914f7fed9",
"text": "-------------------------------------------------------------------ABSTRACT---------------------------------------------------------------------Financial losses due to financial statement frauds (FSF) are increasing day by day in the world. The industry recognizes the problem and is just now starting to act. Although prevention is the best way to reduce frauds, fraudsters are adaptive and will usually find ways to circumvent such measures. Detecting fraud is essential once prevention mechanism has failed. Several data mining algorithms have been developed that allow one to extract relevant knowledge from a large amount of data like fraudulent financial statements to detect FSF. It is an attempt to detect FSF ; We present a generic framework to do our analysis.",
"title": ""
},
{
"docid": "95f81c1063b9965213061238f4cca2f1",
"text": "The poisoned child presents unique considerations in circumstances of exposure, clinical effects, diagnostic approach, and therapeutic interventions. The emergency provider must be aware of the pathophysiologic vulnerabilities of infants and children and substances that are especially toxic. Awareness is essential for situations in which the risk of morbidity and mortality is increased, such as child abuse by poisoning. Considerations in treatment include the need for attentive supportive care, pediatric implications for antidotal therapy, and extracorporeal removal methods such as hemodialysis in children. In this article, each of these issues and emerging poison hazards are discussed.",
"title": ""
},
{
"docid": "76715b342c0b0a475ba6db06a0345c7b",
"text": "Generalized linear mixed models are a widely used tool for modeling longitudinal data. However , their use is typically restricted to few covariates, because the presence of many predictors yields unstable estimates. The presented approach to the fitting of generalized linear mixed models includes an L 1-penalty term that enforces variable selection and shrinkage simultaneously. A gradient ascent algorithm is proposed that allows to maximize the penalized log-likelihood yielding models with reduced complexity. In contrast to common procedures it can be used in high-dimensional settings where a large number of potentially influential explanatory variables is available. The method is investigated in simulation studies and illustrated by use of real data sets.",
"title": ""
},
{
"docid": "04eb4a91188b3098a1316955b49323d6",
"text": "Previous studies have demonstrated the importance of eating behaviour regarding dietary variety and nutrient intake of children. However, the association between picky eating and growth of children is still a topic of debate. This study sought to estimate the prevalence of picky eating and to identify possible associations with the growth of school-age children in China. In this survey, 793 healthy children aged 7-12 years were recruited from nine cities and rural areas in China using a multi-stage cluster sampling method. Data collected included socio-demographic information and parents' perceptions of picky eating using a structured questionnaire, nutrient intake using 24-hour dietary recall, weight and height using body measurements, and intelligence using the Wechsler Intelligence Scale for Children. Blood samples were collected and analysed for minerals. The prevalence of picky eating reported by parents was 59.3% in children. Compared with non-picky eaters, picky eaters had a lower dietary intake of energy, protein, carbohydrates, most vitamins and minerals, and lower levels of magnesium, iron, and copper in the blood (p < 0.05), and also had a 0.184 z-score lower in height for age (95% CI: -0.332, 0.036; p = 0.015), a 0.385 z-score lower in weight for age (95% CI: -0.533, -0.237; p < 0.001), a 0.383 z-score lower in BMI for age (95% CI: -0.563, -0.203; p < 0.001), and scored 2.726 points higher on the intelligence test (95% CI: 0.809, 4.643; p = 0.006) when adjusted for children's birth weight and food allergy, mothers' education, and family income. Picky eating behaviour towards meat, eggs and vegetables showed negative associations with growth. Picky eating behaviour is prevalent in school-age children in China and may have a negative effect on growth.",
"title": ""
},
{
"docid": "be42a930883337c16f8f8bce790e016f",
"text": "Plug-in hybrid electric vehicles (PHEVs) are promising options for future transportation. Having two sources of energy enables them to offer better fuel economy and fewer emissions. Significant research has been done to take advantage of future route information to enhance vehicle performance. In this paper, an ecological adaptive cruise controller (Eco-ACC) is used to improve both fuel economy and safety of the Toyota Prius Plug-in Hybrid. Recently, an emerging trend in the research has been to improve the adaptive cruise controller. However, the majority of research to date has focused on driving safety, and only rare reports in the literature substantiate the applicability of such systems for PHEVs. Here, we demonstrate that using an Eco-ACC system can simultaneously improve total energy costs and vehicle safety. The developed controller is equipped with an onboard sensor that captures upcoming trip data to optimally adjust the speed of PHEVs. The nonlinear model predictive control technique (NMPC) is used to optimally control vehicle speed. To prepare the NMPC controller for real-time applications, a fast and efficient control-oriented model is developed. The authenticity of the model is validated using a high-fidelity Autonomie-based model. To evaluate the designed controller, the global optimum solution for cruise control problem is found using Pontryagin's minimum principle (PMP). To explore the efficacy of the controller, PID and linear MPC controllers are also applied to the same problem. Simulations are conducted for different driving scenarios such as driving over a hill and car following. These simulations demonstrate that NMPC improves the total energy cost up to 19%.",
"title": ""
},
{
"docid": "9a68cbb486205ee013bfd2ac37211aec",
"text": "Minimization of the rank loss or, equivalently, maximization of the AUC in bipartite ranking calls for minimizing the number of disagreements between pairs of instances. Since the complexity of this problem is inherently quadratic in the number of training examples, it is tempting to ask how much is actually lost by minimizing a simple univariate loss function, as done by standard classification methods, as a surrogate. In this paper, we first note that minimization of 0/1 loss is not an option, as it may yield an arbitrarily high rank loss. We show, however, that better results can be achieved by means of a weighted (cost-sensitive) version of 0/1 loss. Yet, the real gain is obtained through marginbased loss functions, for which we are able to derive proper bounds, not only for rank risk but, more importantly, also for rank regret. The paper is completed with an experimental study in which we address specific questions raised by our theoretical analysis.",
"title": ""
},
{
"docid": "712d292b38a262a8c37679c9549a631d",
"text": "Addresses for correspondence: Dr Sara de Freitas, London Knowledge Lab, Birkbeck College, University of London, 23–29 Emerald Street, London WC1N 3QS. UK. Tel: +44(0)20 7763 2117; fax: +44(0)20 7242 2754; email: sara@lkl.ac.uk. Steve Jarvis, Vega Group PLC, 2 Falcon Way, Shire Park, Welwyn Garden City, Herts AL7 1TW, UK. Tel: +44 (0)1707 362602; Fax: +44 (0)1707 393909; email: steve.jarvis@vega.co.uk",
"title": ""
},
{
"docid": "a870a0628c57f56c8162ff4495bec540",
"text": "We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low rank factored versions of matrix multiplications. Compared to standard low rank training, we show that our method leads to good accuracy versus number of parameter trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open sourced kernels optimized for small batch sizes, resulting in 3x to 7x speed ups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.",
"title": ""
},
{
"docid": "8ea9aa5399701dc73533063644108bca",
"text": "The paper presents the design and implementation of an IOT-based health monitoring system for emergency medical services which can demonstrate collection, integration, and interoperation of IoT data flexibly which can provide support to emergency medical services like Intensive Care Units (ICU), using a INTEL GALILEO 2ND generation development board. The proposed model enables users to improve health related risks and reduce healthcare costs by collecting, recording, analyzing and sharing large data streams in real time and efficiently. The idea of this project came so to reduce the headache of patient to visit to doctor every time he need to check his blood pressure, heart beat rate, temperature etc. With the help of this proposal the time of both patients and doctors are saved and doctors can also help in emergency scenario as much as possible. The proposed outcome of the project is to give proper and efficient medical services to patients by connecting and collecting data information through health status monitors which would include patient's heart rate, blood pressure and ECG and sends an emergency alert to patient's doctor with his current status and full medical information.",
"title": ""
},
{
"docid": "f5422fcf0046b189e3d6e78f98b98202",
"text": "Muscle contraction during exercise, whether resistive or endurance in nature, has profound affects on muscle protein turnover that can persist for up to 72 h. It is well established that feeding during the postexercise period is required to bring about a positive net protein balance (muscle protein synthesis - muscle protein breakdown). There is mounting evidence that the timing of ingestion and the protein source during recovery independently regulate the protein synthetic response and influence the extent of muscle hypertrophy. Minor differences in muscle protein turnover appear to exist in young men and women; however, with aging there may be more substantial sex-based differences in response to both feeding and resistance exercise. The recognition of anabolic signaling pathways and molecules are also enhancing our understanding of the regulation of protein turnover following exercise perturbations. In this review we summarize the current understanding of muscle protein turnover in response to exercise and feeding and highlight potential sex-based dimorphisms. Furthermore, we examine the underlying anabolic signaling pathways and molecules that regulate these processes.",
"title": ""
},
{
"docid": "5e946f2a15b5d9c663d85cd12bc3d9fc",
"text": "Individual differences in young children's understanding of others' feelings and in their ability to explain human action in terms of beliefs, and the earlier correlates of these differences, were studied with 50 children observed at home with mother and sibling at 33 months, then tested at 40 months on affective-labeling, perspective-taking, and false-belief tasks. Individual differences in social understanding were marked; a third of the children offered explanations of actions in terms of false belief, though few predicted actions on the basis of beliefs. These differences were associated with participation in family discourse about feelings and causality 7 months earlier, verbal fluency of mother and child, and cooperative interaction with the sibling. Differences in understanding feelings were also associated with the discourse measures, the quality of mother-sibling interaction, SES, and gender, with girls more successful than boys. The results support the view that discourse about the social world may in part mediate the key conceptual advances reflected in the social cognition tasks; interaction between child and sibling and the relationships between other family members are also implicated in the growth of social understanding.",
"title": ""
},
{
"docid": "14b616d5737369e3eecc7da82e97f0e8",
"text": "This paper presents a novel algorithm which uses compact hash bits to greatly improve the efficiency of non-linear kernel SVM in very large scale visual classification problems. Our key idea is to represent each sample with compact hash bits, over which an inner product is defined to serve as the surrogate of the original nonlinear kernels. Then the problem of solving the nonlinear SVM can be transformed into solving a linear SVM over the hash bits. The proposed Hash-SVM enjoys dramatic storage cost reduction owing to the compact binary representation, as well as a (sub-)linear training complexity via linear SVM. As a critical component of Hash-SVM, we propose a novel hashing scheme for arbitrary non-linear kernels via random subspace projection in reproducing kernel Hilbert space. Our comprehensive analysis reveals a well behaved theoretic bound of the deviation between the proposed hashing-based kernel approximation and the original kernel function. We also derive requirements on the hash bits for achieving a satisfactory accuracy level. Several experiments on large-scale visual classification benchmarks are conducted, including one with over 1 million images. The results show that Hash-SVM greatly reduces the computational complexity (more than ten times faster in many cases) while keeping comparable accuracies.",
"title": ""
},
{
"docid": "1bdd37293cd9750e1dd41a825a8aeedb",
"text": "As a widely used marker of health, birthweight has been a persistent racialized disparity with the low birthweight rate of Blacks in Alabama nearly doubling the national average. The purpose of this study was to examine the role of racial identity and acculturation on birthweight in a sample of Black women living in Alabama. Black women (n=72) in West Alabama were surveyed about the birthweight of their first born child. Correlation and multiple linear regression analyses were conducted. Racial identity was the only significant predictor of birthweight. Mothers with a strong racial identity reported having low birthweight babies less often than those who scored lower on racial identity. Further exploration of racial identity revealed self-image as the essential element that predicted birthweight. Birthweight increased 4.2 ounces for each additional degree of self-image. Results also indicated that birthweight decreased as mothers’ age increased, within the widely accepted optimal maternal age range 21 to 35. Results add to the existing body of literature in support of the positive effect racial identity has on health. Findings on age are congruent with the weathering hypothesis, which states that the health of Black women may begin to deteriorate in early adulthood possibly due to the strain of racism.",
"title": ""
},
{
"docid": "7ee4a708d41065c619a5bf9e86f871a3",
"text": "Cyber attack comes in various approach and forms, either internally or externally. Remote access and spyware are forms of cyber attack leaving an organization to be susceptible to vulnerability. This paper investigates illegal activities and potential evidence of cyber attack through studying the registry on the Windows 7 Home Premium (32 bit) Operating System in using the application Virtual Network Computing (VNC) and keylogger application. The aim is to trace the registry artifacts left by the attacker which connected using Virtual Network Computing (VNC) protocol within Windows 7 Operating System (OS). The analysis of the registry focused on detecting unwanted applications or unauthorized access to the machine with regard to the user activity via the VNC connection for the potential evidence of illegal activities by investigating the Registration Entries file and image file using the Forensic Toolkit (FTK) Imager. The outcome of this study is the findings on the artifacts which correlate to the user activity.",
"title": ""
}
] |
scidocsrr
|
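The bipartite-ranking passage above minimizes the number of disagreements between positive/negative instance pairs. The sketch below computes that pairwise rank loss directly (ties counted as half a disagreement), with AUC recovered as one minus the loss; the toy scores are illustrative only, and the quadratic double loop is exactly the cost the passage argues makes univariate surrogate losses attractive.

```python
def rank_loss(pos_scores, neg_scores):
    """Fraction of positive/negative pairs ranked the wrong way (ties count half)."""
    disagreements = 0.0
    for sp in pos_scores:
        for sn in neg_scores:
            if sp < sn:
                disagreements += 1.0
            elif sp == sn:
                disagreements += 0.5
    return disagreements / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.7, 0.6]        # scores assigned to positive examples
neg = [0.8, 0.4, 0.3, 0.2]   # scores assigned to negative examples
loss = rank_loss(pos, neg)
print("rank loss:", loss, "AUC:", 1.0 - loss)
```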
d438250ac472d7d7e2b937383193fbc3
|
Financial Sentiment Analysis for Risk Prediction
|
[
{
"docid": "a178871cd82edaa05a0b0befacb7fc38",
"text": "The main applications and challenges of one of the hottest research areas in computer science.",
"title": ""
},
{
"docid": "216097cf8a9567fd7427d9c653a7c8cd",
"text": "This paper studies sentiment analysis of conditional sentences. The aim is to determine whether opinions expressed on different topics in a conditional sentence are positive, negative or neutral. Conditional sentences are one of the commonly used language constructs in text. In a typical document, there are around 8% of such sentences. Due to the condition clause, sentiments expressed in a conditional sentence can be hard to determine. For example, in the sentence, if your Nokia phone is not good, buy this great Samsung phone, the author is positive about “Samsung phone” but does not express an opinion on “Nokia phone” (although the owner of the “Nokia phone” may be negative about it). However, if the sentence does not have “if”, the first clause is clearly negative. Although “if” commonly signifies a conditional sentence, there are many other words and constructs that can express conditions. This paper first presents a linguistic analysis of such sentences, and then builds some supervised learning models to determine if sentiments expressed on different topics in a conditional sentence are positive, negative or neutral. Experimental results on conditional sentences from 5 diverse domains are given to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "3ee39231fc2fbf3b6295b1b105a33c05",
"text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publiclytraded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply wellknown regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.",
"title": ""
}
] |
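The report-volatility passage above regresses a real-valued risk measure on the text of annual financial reports. The sketch below is a generic stand-in for that setup using TF-IDF features and ridge regression from scikit-learn rather than the paper's exact features or model; the tiny corpus and the log-volatility targets are fabricated purely so the snippet runs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

reports = [
    "uncertainty risk impairment litigation going concern",
    "stable dividends strong cash flow conservative outlook",
    "volatile commodity exposure hedging derivative losses",
    "steady revenue growth low leverage",
]
log_volatility = [-1.2, -2.5, -1.0, -2.3]     # toy targets, not real data

X = TfidfVectorizer().fit_transform(reports)  # bag-of-words features per report
model = Ridge(alpha=1.0).fit(X, log_volatility)
print(model.predict(X[:1]))                   # in-sample sanity check only
```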
[
{
"docid": "c724fdcf7f58121ff6ad886df68e2725",
"text": "The Internet of Things (IoT) is an emerging paradigm where smart objects are seamlessly connected to the overall Internet and can potentially cooperate to achieve common objectives such as supporting innovative home automation services. With reference to such a scenario, this paper presents an Intrusion Detection System (IDS) framework for IoT empowered by IPv6 over low-power personal area network (6LoWPAN) devices. In fact, 6LoWPAN is an interesting protocol supporting the realization of IoT in a resource constrained environment. 6LoWPAN devices are vulnerable to attacks inherited from both the wireless sensor networks and the Internet protocols. The proposed IDS framework which includes a monitoring system and a detection engine has been integrated into the network framework developed within the EU FP7 project `ebbits'. A penetration testing (PenTest) system had been used to evaluate the performance of the implemented IDS framework. Preliminary tests revealed that the proposed framework represents a promising solution for ensuring better security in 6LoWPANs.",
"title": ""
},
{
"docid": "b0840d44b7ec95922eeed4ef71b338f9",
"text": "Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., the high-throughput sequencing rate and relatively low cost of sequencing, the assembly of the reads produced by these sequencers still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, a graph construction process, a graph simplification process, and postprocessing filtering. Here we discuss them as a framework of four stages for data analysis and processing and survey variety of techniques, algorithms, and software tools used during each stage. We also discuss the challenges that face current assemblers in the next-generation environment to determine the current state-of-the-art. We recommend a layered architecture approach for constructing a general assembler that can handle the sequences generated by different sequencing platforms.",
"title": ""
},
{
"docid": "dbafe7db0387b56464ac630404875465",
"text": "Recognition of body posture and motion is an important physiological function that can keep the body in balance. Man-made motion sensors have also been widely applied for a broad array of biomedical applications including diagnosis of balance disorders and evaluation of energy expenditure. This paper reviews the state-of-the-art sensing components utilized for body motion measurement. The anatomy and working principles of a natural body motion sensor, the human vestibular system, are first described. Various man-made inertial sensors are then elaborated based on their distinctive sensing mechanisms. In particular, both the conventional solid-state motion sensors and the emerging non solid-state motion sensors are depicted. With their lower cost and increased intelligence, man-made motion sensors are expected to play an increasingly important role in biomedical systems for basic research as well as clinical diagnostics.",
"title": ""
},
{
"docid": "715eaf7bca0a1b65b9fbd0dd05f9684e",
"text": "The recent proliferation of location-based services (LBSs) has necessitated the development of effective indoor positioning solutions. In such a context, wireless local area network (WLAN) positioning is a particularly viable solution in terms of hardware and installation costs due to the ubiquity of WLAN infrastructures. This paper examines three aspects of the problem of indoor WLAN positioning using received signal strength (RSS). First, we show that, due to the variability of RSS features over space, a spatially localized positioning method leads to improved positioning results. Second, we explore the problem of access point (AP) selection for positioning and demonstrate the need for further research in this area. Third, we present a kernelized distance calculation algorithm for comparing RSS observations to RSS training records. Experimental results indicate that the proposed system leads to a 17 percent (0.56 m) improvement over the widely used K-nearest neighbor and histogram-based methods",
"title": ""
},
{
"docid": "9643afa619093422114a1449b1bf6b76",
"text": "In this paper we describe the adaptation of a supervised classification system that was originally developed to detect sentiment on Twitter texts written in English. The Columbia University team adapted this system to participate in Task 1 of the 4th edition of the experimental evaluation workshop for sentiment analysis focused on the Spanish language (TASS 2015). The task consists of determining the global polarity of a group of messages written in Spanish using the social media platform Twitter.",
"title": ""
},
{
"docid": "5cdb99bf928039bd5377b3eca521d534",
"text": "Thanks to advances in information and communication technologies, there is a prominent increase in the amount of information produced specifically in the form of text documents. In order to, effectively deal with this “information explosion” problem and utilize the huge amount of text databases, efficient and scalable tools and techniques are indispensable. In this study, text clustering which is one of the most important techniques of text mining that aims at extracting useful information by processing data in textual form is addressed. An improved variant of spherical K-Means (SKM) algorithm named multi-cluster SKM is developed for clustering high dimensional document collections with high performance and efficiency. Experiments were performed on several document data sets and it is shown that the new algorithm provides significant increase in clustering quality without causing considerable difference in CPU time usage when compared to SKM algorithm.",
"title": ""
},
{
"docid": "ad076495666725ed3fd871c04d6b6794",
"text": "Elite endurance athletes possess a high capacity for whole-body maximal fat oxidation (MFO). The aim was to investigate the determinants of a high MFO in endurance athletes. The hypotheses were that augmented MFO in endurance athletes is related to concomitantly increments of skeletal muscle mitochondrial volume density (MitoVD ) and mitochondrial fatty acid oxidation (FAOp ), that is, quantitative mitochondrial adaptations as well as intrinsic FAOp per mitochondria, that is, qualitative adaptations. Eight competitive male cross-country skiers and eight untrained controls were compared in the study. A graded exercise test was performed to determine MFO, the intensity where MFO occurs (FatMax ), and V ˙ O 2 Max . Skeletal muscle biopsies were obtained to determine MitoVD (electron microscopy), FAOp , and OXPHOSp (high-resolution respirometry). The following were higher (P < 0.05) in endurance athletes compared to controls: MFO (mean [95% confidence intervals]) (0.60 g/min [0.50-0.70] vs 0.32 [0.24-0.39]), FatMax (46% V ˙ O 2 Max [44-47] vs 35 [34-37]), V ˙ O 2 Max (71 mL/min/kg [69-72] vs 48 [47-49]), MitoVD (7.8% [7.2-8.5] vs 6.0 [5.3-6.8]), FAOp (34 pmol/s/mg muscle ww [27-40] vs 21 [17-25]), and OXPHOSp (108 pmol/s/mg muscle ww [104-112] vs 69 [68-71]). Intrinsic FAOp (4.0 pmol/s/mg muscle w.w/MitoVD [2.7-5.3] vs 3.3 [2.7-3.9]) and OXPHOSp (14 pmol/s/mg muscle ww/MitoVD [13-15] vs 11 [10-13]) were, however, similar in the endurance athletes and untrained controls. MFO and MitoVD correlated (r2 = 0.504, P < 0.05) in the endurance athletes. A strong correlation between MitoVD and MFO suggests that expansion of MitoVD might be rate-limiting for MFO in the endurance athletes. In contrast, intrinsic mitochondrial changes were not associated with augmented MFO.",
"title": ""
},
{
"docid": "bb4001c4cb5fde8d34fd48ee50eb053c",
"text": "We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: Inspired by Occam’s razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using Rényi entropy. Our main result is that, under natural assumptions, if the exogenous variable has lowH0 entropy (cardinality) in the true direction, it must have high H0 entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum H1 entropy (Shannon Entropy) is equivalent to the problem of finding minimum joint entropy given n marginal distributions, also known as minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem, that for n = 2 provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum Shannon entropy. Our greedy entropy-based causal inference algorithm has similar performance to the state of the art additive noise models in real datasets. One advantage of our approach is that we make no use of the values of random variables but only their distributions. Our method can therefore be used for causal inference for both ordinal and also categorical data, unlike additive noise models.",
"title": ""
},
{
"docid": "3dcf5f63798458ed697a23664675f2fe",
"text": "Volatility plays crucial roles in financial markets, such as in derivative pricing, portfolio risk management, and hedging strategies. Therefore, accurate prediction of volatility is critical. We propose a new hybrid long short-term memory (LSTM) model to forecast stock price volatility that combines the LSTM model with various generalized autoregressive conditional heteroscedasticity (GARCH)-type models. We use KOSPI 200 index data to discover proposed hybrid models that combine an LSTM with one to three GARCH-type models. In addition, we compare their performance with existing methodologies by analyzing single models, such as the GARCH, exponential GARCH, exponentially weighted moving average, a deep feedforward neural network (DFN), and the LSTM, as well as the hybrid DFN models combining a DFN with one GARCH-type model. Their performance is compared with that of the proposed hybrid LSTM models. We discover that GEW-LSTM, a proposed hybrid model combining the LSTM model with three GARCH-type models, has the lowest prediction errors in terms of mean absolute error (MAE), mean squared error (MSE), heteroscedasticity adjusted MAE (HMAE), and heteroscedasticity adjusted MSE (HMSE). The MAE of GEW-LSTM is 0.0107, which is 37.2% less than that of the E-DFN (0.017), the model combining EGARCH and DFN and the best model among those existing. In addition, the GEWLSTM has 57.3%, 24.7%, and 48% smaller MSE, HMAE, and HMSE, respectively. The first contribution of this study is its hybrid LSTM model that combines excellent sequential pattern learning with improved prediction performance in stock market volatility. Second, our proposed model markedly enhances prediction performance of the existing literature by combining a neural network model with multiple econometric models rather than only a single econometric model. Finally, the proposed methodology can be extended to various fields as an integrated model combining time-series and neural network models as well as forecasting stock market volatility.",
"title": ""
},
{
"docid": "0277fd19009088f84ce9f94a7e942bc1",
"text": "These study it is necessary to can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning system. This paper proposes a new framework for assessing readiness of an organization to implement the e-learning system project on the basis of McKinsey 7S model using fuzzy logic analysis. The study considers 7 dimensions as approach to assessing the current situation of the organization prior to system implementation to identify weakness areas which may encounter the project with failure. Adopted was focus on Questionnaires and group interviews to specific data collection from three colleges in Mosul University in Iraq. This can be achieved success in building an e-learning system at the University of Mosul by readiness assessment according to the model of multidimensional based on the framework of 7S is selected by 23 factors, and thus can avoid failures or weaknesses facing the implementation process before the start of the project and a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.",
"title": ""
},
{
"docid": "ff6420335374291508063663acb9dbe6",
"text": "Many people are exposed to loss or potentially traumatic events at some point in their lives, and yet they continue to have positive emotional experiences and show only minor and transient disruptions in their ability to function. Unfortunately, because much of psychology's knowledge about how adults cope with loss or trauma has come from individuals who sought treatment or exhibited great distress, loss and trauma theorists have often viewed this type of resilience as either rare or pathological. The author challenges these assumptions by reviewing evidence that resilience represents a distinct trajectory from the process of recovery, that resilience in the face of loss or potential trauma is more common than is often believed, and that there are multiple and sometimes unexpected pathways to resilience.",
"title": ""
},
{
"docid": "dd52742343462b3106c18274c143928b",
"text": "This paper presents a descriptive account of the social practices surrounding the iTunes music sharing of 13 participants in one organizational setting. Specifically, we characterize adoption, critical mass, and privacy; impression management and access control; the musical impressions of others that are created as a result of music sharing; the ways in which participants attempted to make sense of the dynamic system; and implications of the overlaid technical, musical, and corporate topologies. We interleave design implications throughout our results and relate those results to broader themes in a music sharing design space.",
"title": ""
},
{
"docid": "b6ee2327d8e7de5ede72540a378e69a0",
"text": "Heads of Government from Asia and the Pacific have committed to a malaria-free region by 2030. In 2015, the total number of confirmed cases reported to the World Health Organization by 22 Asia Pacific countries was 2,461,025. However, this was likely a gross underestimate due in part to incidence data not being available from the wide variety of known sources. There is a recognized need for an accurate picture of malaria over time and space to support the goal of elimination. A survey was conducted to gain a deeper understanding of the collection of malaria incidence data for surveillance by National Malaria Control Programmes in 22 countries identified by the Asia Pacific Leaders Malaria Alliance. In 2015–2016, a short questionnaire on malaria surveillance was distributed to 22 country National Malaria Control Programmes (NMCP) in the Asia Pacific. It collected country-specific information about the extent of inclusion of the range of possible sources of malaria incidence data and the role of the private sector in malaria treatment. The findings were used to produce recommendations for the regional heads of government on improving malaria surveillance to inform regional efforts towards malaria elimination. A survey response was received from all 22 target countries. Most of the malaria incidence data collected by NMCPs originated from government health facilities, while many did not collect comprehensive data from mobile and migrant populations, the private sector or the military. All data from village health workers were included by 10/20 countries and some by 5/20. Other sources of data included by some countries were plantations, police and other security forces, sentinel surveillance sites, research or academic institutions, private laboratories and other government ministries. Malaria was treated in private health facilities in 19/21 countries, while anti-malarials were available in private pharmacies in 16/21 and private shops in 6/21. Most countries use primarily paper-based reporting. Most collected malaria incidence data in the Asia Pacific is from government health facilities while data from a wide variety of other known sources are often not included in national surveillance databases. In particular, there needs to be a concerted regional effort to support inclusion of data on mobile and migrant populations and the private sector. There should also be an emphasis on electronic reporting and data harmonization across organizations. This will provide a more accurate and up to date picture of the true burden and distribution of malaria and will be of great assistance in helping realize the goal of malaria elimination in the Asia Pacific by 2030.",
"title": ""
},
{
"docid": "b22e590e8de494018fea30b24cacbc71",
"text": "Rendering: Out-of-core Rendering for Information Visualization Joseph A. Cottama and Andrew Lumsdainea and Peter Wangb aCREST/Indiana University, Bloomington, IN, USA; bContinuum Analytics, Austin, TX, USA",
"title": ""
},
{
"docid": "99c088268633c19a8c4789c58c4c9aca",
"text": "Executing agile quadrotor maneuvers with cablesuspended payloads is a challenging problem and complications induced by the dynamics typically require trajectory optimization. State-of-the-art approaches often need significant computation time and complex parameter tuning. We present a novel dynamical model and a fast trajectory optimization algorithm for quadrotors with a cable-suspended payload. Our first contribution is a new formulation of the suspended payload behavior, modeled as a link attached to the quadrotor with a combination of two revolute joints and a prismatic joint, all being passive. Differently from state of the art, we do not require the use of hybrid modes depending on the cable tension. Our second contribution is a fast trajectory optimization technique for the aforementioned system. Our model enables us to pose the trajectory optimization problem as a Mathematical Program with Complementarity Constraints (MPCC). Desired behaviors of the system (e.g., obstacle avoidance) can easily be formulated within this framework. We show that our approach outperforms the state of the art in terms of computation speed and guarantees feasibility of the trajectory with respect to both the system dynamics and control input saturation, while utilizing far fewer tuning parameters. We experimentally validate our approach on a real quadrotor showing that our method generalizes to a variety of tasks, such as flying through desired waypoints while avoiding obstacles, or throwing the payload toward a desired target. To the best of our knowledge, this is the first time that three-dimensional, agile maneuvers exploiting the system dynamics have been achieved on quadrotors with a cable-suspended payload. SUPPLEMENTARY MATERIAL This paper is accompanied by a video showcasing the experiments: https://youtu.be/s9zb5MRXiHA",
"title": ""
},
{
"docid": "8efee8d7c3bf229fa5936209c43a7cff",
"text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.",
"title": ""
},
{
"docid": "25bd1930de4141a4e80441d7a1ae5b89",
"text": "Since the release of Bitcoins as crypto currency, Bitcoin has played a prominent part in the media. However, not Bitcoin but the underlying technology blockchain offers the possibility to innovatively change industries. The decentralized structure of the blockchain is particularly suitable for implementing control and business processes in microgrids, using smart contracts and decentralized applications. This paper provides a state of the art survey overview of current blockchain technology based projects with the potential to revolutionize microgrids and provides a first attempt to technically characterize different start-up approaches. The most promising use case from the microgrid perspective is peer-to-peer trading, where energy is exchanged and traded locally between consumers and prosumers. An application concept for distributed PV generation is provided in this promising area.",
"title": ""
},
{
"docid": "f2239ebff484962c302b00faf24374e4",
"text": "In this paper, a methodology for the automated detection and classification of transient events in electroencephalographic (EEG) recordings is presented. It is based on association rule mining and classifies transient events into four categories: epileptic spikes, muscle activity, eye blinking activity, and sharp alpha activity. The methodology involves four stages: 1) transient event detection; 2) clustering of transient events and feature extraction; 3) feature discretization and feature subset selection; and 4) association rule mining and classification of transient events. The methodology is evaluated using 25 EEG recordings, and the best obtained accuracy was 87.38%. The proposed approach combines high accuracy with the ability to provide interpretation for the decisions made, since it is based on a set of association rules",
"title": ""
},
{
"docid": "c0a51f27931d8314b73a7de969bdfb08",
"text": "Organizations need practical security benchmarking tools in order to plan effective security strategies. This paper explores a number of techniques that can be used to measure security within an organization. It proposes a benchmarking methodology that produces results that are of strategic importance to both decision makers and technology implementers.",
"title": ""
},
{
"docid": "2dd16c00cccd76b2c4265131151c8cb5",
"text": "This paper introduces the freely available WikEd Error Corpus. We describe the data mining process from Wikipedia revision histories, corpus content and format. The corpus consists of more than 12 million sentences with a total of 14 million edits of various types. As one possible application, we show that WikEd can be successfully adapted to improve a strong baseline in a task of grammatical error correction for English-as-a-Second-Language (ESL) learners’ writings by 2.63%. Used together with an ESL error corpus, a composed system gains 1.64% when compared to the ESL-trained system.",
"title": ""
}
] |
scidocsrr
|
34e854ba68affef11b7cd9b9fbdf28c1
|
Emotions Mediated Through Mid-Air Haptics
|
[
{
"docid": "1039532ef4dfbb7e0d04b25ad99682cb",
"text": "Communication of affect across a distance is not well supported by current technology, despite its importance to interpersonal interaction in modern lifestyles. Touch is a powerful conduit for emotional connectedness, and thus mediating haptic (touch) displays have been proposed to address this deficiency; but suitable evaluative methodology has been elusive. In this paper, we offer a first, structured examination of a design space for haptic support of remote affective communication, by analyzing the space and then comparing haptic models designed to manipulate its key dimensions. In our study, dyads (intimate pairs or strangers) are asked to communicate specified emotions using a purely haptic link that consists of virtual models rendered on simple knobs. These models instantiate both interaction metaphors of varying intimacy, and representations of virtual interpersonal distance. Our integrated objective and subjective observations imply that emotion can indeed be communicated through this medium, and confirm that the factors examined influence emotion communication performance as well as preference, comfort and connectedness. The proposed design space and the study results have implications for future efforts to support affective communication using the haptic modality, and the study approach comprises a first model for systematic evaluation of haptically expressed affect. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8564762ca6de73d72236f94bc5fe0a7a",
"text": "The current work examines the phenomenon of Virtual Interpersonal Touch (VIT), people touching one another via force-feedback haptic devices. As collaborative virtual environments become utilized more effectively, it is only natural that interactants will have the ability to touch one another. In the current work, we used relatively basic devices to begin to explore the expression of emotion through VIT. In Experiment 1, participants utilized a 2 DOF force-feedback joystick to express seven emotions. We examined various dimensions of the forces generated and subjective ratings of the difficulty of expressing those emotions. In Experiment 2, a separate group of participants attempted to recognize the recordings of emotions generated in Experiment 1. In Experiment 3, pairs of participants attempted to communicate the seven emotions using physical handshakes. Results indicated that humans were above chance when recognizing emotions via VIT, but not as accurate as people expressing emotions through non-mediated handshakes. We discuss a theoretical framework for understanding emotions expressed through touch as well as the implications of the current findings for the utilization of VIT in human computer interaction. Virtual Interpersonal Touch 3 Virtual Interpersonal Touch: Expressing and Recognizing Emotions through Haptic Devices There are many reasons to support the development of collaborative virtual environments (Lanier, 2001). One major criticism of collaborative virtual environments, however, is that they do not provide emotional warmth and nonverbal intimacy (Mehrabian, 1967; Sproull & Kiesler, 1986). In the current work, we empirically explore the augmentation of collaborative virtual environments with simple networked haptic devices to allow for the transmission of emotion through virtual interpersonal touch (VIT). EMOTION IN SOCIAL INTERACTION Interpersonal communication is largely non-verbal (Argyle, 1988), and one of the primary purposes of nonverbal behavior is to communicate subtleties of emotional states between individuals. Clearly, if social interaction mediated by virtual reality and other digital communication systems is to be successful, it will be necessary to allow for a full range of emotional expressions via a number of communication channels. In face-to-face communication, we express emotion primarily through facial expressions, voice, and through touch. While emotion is also communicated through other nonverbal gestures such as posture and hand signals (Cassell & Thorisson, in press; Collier, 1985), in the current review we focus on emotions transmitted via face, voice and touch. In a review of the emotion literature, Ortony and Turner (1990) discuss the concept of basic emotions. These fundamental emotions (e.g., fear) are the building blocks of other more complex emotions (e.g., jealousy). Furthermore, many people argue that these emotions are innate and universal across cultures (Plutchik, 2001). In terms of defining the set of basic emotions, previous work has provided very disparate sets of such emotions. Virtual Interpersonal Touch 4 For example, Watson (1930) has limited his list to “hardwired” emotions such as fear, love, and rage. On the other hand, Ekman & Friesen (1975) have limited their list to those discernable through facial movements such as anger, disgust, fear, joy, sadness, and surprise. The psychophysiology literature adds to our understanding of emotions by suggesting a fundamental biphasic model (Bradley, 2000). 
In other words, emotions can be thought of as variations on two axes hedonic valence and intensity. Pleasurable emotions have high hedonic valences, while negative emotions have low hedonic valences. This line of research suggests that while emotions may appear complex, much of the variation may nonetheless be mapped onto a two-dimensional scale. This notion also dovetails with research in embodied cognition that has shown that human language is spatially organized (Richardson, Spivey, Edelman, & Naples, 2001). For example, certain words are judged to be more “horizontal” while other words are judged to be more “vertical”. In the current work, we were not concerned predominantly with what constitutes a basic or universal emotion. Instead, we attempted to identify emotions that could be transmitted through virtual touch, and provide an initial framework for classifying and interpreting those digital haptic emotions. To this end, we reviewed theoretical frameworks that have attempted to accomplish this goal with other nonverbal behaviors— most notably facial expressions and paralinguistics. Facial Expressions Research in facial expressions has received much attention from social scientists for the past fifty years. Some researchers argue that the face is a portal to one’s internal mental state (Ekman & Friesen 1978; Izard, 1971). These scholars argue that when an Virtual Interpersonal Touch 5 emotion occurs, a series of biological events follow that produce changes in a person—one of those manifestations is movement in facial muscles. Moreover, these changes in facial expressions are also correlated with other physiological changes such as heart rate or blood pressure (Ekman & Friesen, 1976). Alternatively, other researchers argue that the correspondence of facial expressions to actual emotion is not as high as many think. For example, Fridland (1994) believes that people use facial expressions as a tool to strategically elicit behaviors from others or to accomplish social goals in interaction. Similarly, other researchers argue that not all emotions have corresponding facial expressions (Cacioppo et al., 1997). Nonetheless, most scholars would agree that there is some value to examining facial expressions of another if one’s goal is to gain an understanding of that person’s current mental state. Ekman’s groundbreaking work on emotions has provided tools to begin forming dimensions on which to classify his set of six basic emotions (Ekman & Friesen, 1975). Figure 1 provides a framework for the facial classifications developed by those scholars.",
"title": ""
}
] |
[
{
"docid": "b645a2c65b82dbb3ad474b047ec858c3",
"text": "The traffic in urban areas is mainly regularized by traffic lights, which may contribute to the unnecessary long waiting times for vehicles if not efficiently configured. This inefficient configuration is unfortunately still the case in a lot of urban areas, where most of the traffic lights are based on a 'fixed cycle' protocol. To improve the traffic light configuration, this paper proposed monitoring system to be as an additional component (or additional subsystem) to the intelligent traffic light system, this component will be able to determine three street cases (empty street case, normal street case and crowded street case) by using small associative memory. The proposed monitoring system is working in two phases: training phase and recognition phase. The experiments presented promising results when the proposed approach was applied by using a program to monitor one intersection in Penang Island in Malaysia. The program could determine all street cases with different weather conditions depending on the stream of images, which are extracted from the streets video cameras. In addition, the observations which are pointed out to the proposed approach show a high flexibility to learn all the street cases using a few training images, thus the adaptation to any intersection can be done quickly.",
"title": ""
},
{
"docid": "5afbdf2bb2229d09f57d9ba78f10ff26",
"text": "Efficacy of Vitamin D supplements in depression is controversial, awaiting further literature analysis. Biological flaws in primary studies is a possible reason meta-analyses of Vitamin D have failed to demonstrate efficacy. This systematic review and meta-analysis of Vitamin D and depression compared studies with and without biological flaws. The systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The literature search was undertaken through four databases for randomized controlled trials (RCTs). Studies were critically appraised for methodological quality and biological flaws, in relation to the hypothesis and study design. Meta-analyses were performed for studies according to the presence of biological flaws. The 15 RCTs identified provide a more comprehensive evidence-base than previous systematic reviews; methodological quality of studies was generally good and methodology was diverse. A meta-analysis of all studies without flaws demonstrated a statistically significant improvement in depression with Vitamin D supplements (+0.78 CI +0.24, +1.27). Studies with biological flaws were mainly inconclusive, with the meta-analysis demonstrating a statistically significant worsening in depression by taking Vitamin D supplements (-1.1 CI -0.7, -1.5). Vitamin D supplementation (≥800 I.U. daily) was somewhat favorable in the management of depression in studies that demonstrate a change in vitamin levels, and the effect size was comparable to that of anti-depressant medication.",
"title": ""
},
{
"docid": "a7be4f9177e6790756b7ede4a2d9ca79",
"text": "Metabolomics, or the comprehensive profiling of small molecule metabolites in cells, tissues, or whole organisms, has undergone a rapid technological evolution in the past two decades. These advances have led to the application of metabolomics to defining predictive biomarkers for incident cardiometabolic diseases and, increasingly, as a blueprint for understanding those diseases' pathophysiologic mechanisms. Progress in this area and challenges for the future are reviewed here.",
"title": ""
},
{
"docid": "2788ad279b96e830ba957106374e2537",
"text": "We present a new lock-free parallel algorithm for computing betweenness centrality of massive complex networks that achieves better spatial locality compared with previous approaches. Betweenness centrality is a key kernel in analyzing the importance of vertices (or edges) in applications ranging from social networks, to power grids, to the influence of jazz musicians, and is also incorporated into the DARPA HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph analytics. We design an optimized implementation of betweenness centrality for the massively multithreaded Cray XMT system with the Thread-storm processor. For a small-world network of 268 million vertices and 2.147 billion edges, the 16-processor XMT system achieves a TEPS rate (an algorithmic performance count for the number of edges traversed per second) of 160 million per second, which corresponds to more than a 2× performance improvement over the previous parallel implementation. We demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for the large IMDb movie-actor network.",
"title": ""
},
{
"docid": "840555a134e7606f1f3caa24786c6550",
"text": "Psychological research results have confirmed that people can have different emotional reactions to different visual stimuli. Several papers have been published on the problem of visual emotion analysis. In particular, attempts have been made to analyze and predict people’s emotional reaction towards images. To this end, different kinds of hand-tuned features are proposed. The results reported on several carefully selected and labeled small image data sets have confirmed the promise of such features. While the recent successes of many computer vision related tasks are due to the adoption of Convolutional Neural Networks (CNNs), visual emotion analysis has not achieved the same level of success. This may be primarily due to the unavailability of confidently labeled and relatively large image data sets for visual emotion analysis. In this work, we introduce a new data set, which started from 3+ million weakly labeled images of different emotions and ended up 30 times as large as the current largest publicly available visual emotion data set. We hope that this data set encourages further research on visual emotion analysis. We also perform extensive benchmarking analyses on this large data set using the state of the art methods including CNNs.",
"title": ""
},
{
"docid": "d434ef675b4d8242340f4d501fdbbae3",
"text": "We study the problem of selecting a subset of k random variables to observe that will yield the best linear prediction of another variable of interest, given the pairwise correlations between the observation variables and the predictor variable. Under approximation preserving reductions, this problem is equivalent to the \"sparse approximation\" problem of approximating signals concisely. The subset selection problem is NP-hard in general; in this paper, we propose and analyze exact and approximation algorithms for several special cases of practical interest. Specifically, we give an FPTAS when the covariance matrix has constant bandwidth, and exact algorithms when the associated covariance graph, consisting of edges for pairs of variables with non-zero correlation, forms a tree or has a large (known) independent set. Furthermore, we give an exact algorithm when the variables can be embedded into a line such that the covariance decreases exponentially in the distance, and a constant-factor approximation when the variables have no \"conditional suppressor variables\". Much of our reasoning is based on perturbation results for the R2 multiple correlation measure, which is frequently used as a natural measure for \"goodness-of-fit statistics\". It lies at the core of our FPTAS, and also allows us to extend our exact algorithms to approximation algorithms when the matrix \"nearly\" falls into one of the above classes. We also use our perturbation analysis to prove approximation guarantees for the widely used \"Forward Regression\" heuristic under the assumption that the observation variables are nearly independent.",
"title": ""
},
{
"docid": "62f455d95a65eb2454753414f01d8435",
"text": "Metabolic glycoengineering is a technique introduced in the early 90s of the last century by Reutter et al.. It utilises the ability of cells to metabolically convert sugar derivatives with bioorthogonal side chains like azides or alkynes and by that incorporation into several glyco structures. Afterwards, the carbohydrates can be labelled to study their distribution, dynamics and roles in different biological processes. So far many studies were performed on mammal cell lines as well as in small animals. Very recently, bacterial glyco-structures were targeted by glycoengineering, showing promising results in infection prevention by reducing pathogen adhesion towards human epithelial cells. Introduction Bacteria were among the first life forms to appear on earth, and are present in most habitats on the planet, e. g., they live in symbiosis with plants and animals. Compared to human cells there are ten times as many bacterial cells in our body. Most of them are harmless or even beneficial. But some species are pathogenic and cause infectious diseases with more than 1.2 million deaths each year [1]. Those infections include cholera, syphilis, anthrax, leprosy, and bubonic plague as well as respiratory infections like tuberculosis. 1 This article is part of the Proceedings of the Beilstein Glyco-Bioinformatics Symposium 2013. www.proceedings.beilstein-symposia.org Discovering the Subtleties of Sugars June 10 – 14, 2013, Potsdam, Germany",
"title": ""
},
{
"docid": "2c266af949495f7cd32b8abdf1a04803",
"text": "Humans rely on eye gaze and hand manipulations extensively in their everyday activities. Most often, users gaze at an object to perceive it and then use their hands to manipulate it. We propose applying a multimodal, gaze plus free-space gesture approach to enable rapid, precise and expressive touch-free interactions. We show the input methods are highly complementary, mitigating issues of imprecision and limited expressivity in gaze-alone systems, and issues of targeting speed in gesture-alone systems. We extend an existing interaction taxonomy that naturally divides the gaze+gesture interaction space, which we then populate with a series of example interaction techniques to illustrate the character and utility of each method. We contextualize these interaction techniques in three example scenarios. In our user study, we pit our approach against five contemporary approaches; results show that gaze+gesture can outperform systems using gaze or gesture alone, and in general, approach the performance of \"gold standard\" input systems, such as the mouse and trackpad.",
"title": ""
},
{
"docid": "41f0cea988e24716be77d84ea7bd5c45",
"text": "Over the last 25 years a lot of work has been undertaken on constructing continuum models for segregation of particles of different sizes. We focus on one model that is designed to predict segregation and remixing of two differently sized particle species. This model contains two dimensionless parameters, which in general depend on both the flow and particle properties. One of the weaknesses of the model is that these dependencies are not predicted; these have to be determined by either experiments or simulations. We present steady-state simulations using the discrete particle method (DPM) for bi-disperse systems with different size ratios. The aim is to determine one parameter in the continuum model, i.e., the segregation Péclet number (ratio of the segregation velocity to diffusion) as a function of the particle size ratio. Reasonable agreement is found; but, also measurable discrepancies are reported;",
"title": ""
},
{
"docid": "5087353b4888832c2c801f06c94d3c67",
"text": "Many Automatic Question Generation (AQG) approaches have been proposed focusing on reading comprehension support; however, none of them addressed academic writing. We conducted a large-scale case study with 25 supervisors and 36 research students enroled in an Engineering Research Method course. We investigated trigger questions, as a form of feedback, produced by supervisors, and how they support these students’ literature review writing. In this paper, we identified the most frequent question types according to Graesser and Person’s Question Taxonomy and discussed how the human experts generate such questions from the source text. Finally, we proposed a more practical Automatic Question Generation Framework for supporting academic writing in engineering education.",
"title": ""
},
{
"docid": "c6029c95b8a6b2c6dfb688ac049427dc",
"text": "This paper presents development of a two-fingered robotic device for amputees whose hands are partially impaired. In this research, we focused on developing a compact and lightweight robotic finger system, so the target amputee would be able to execute simple activities in daily living (ADL), such as grasping a bottle or a cup for a long time. The robotic finger module was designed by considering the impaired shape and physical specifications of the target patient's hand. The proposed prosthetic finger was designed using a linkage mechanism which was able to create underactuated finger motion. This underactuated mechanism contributes to minimizing the number of required actuators for finger motion. In addition, the robotic finger was not driven by an electro-magnetic rotary motor, but a shape-memory alloy (SMA) actuator. Having a driving method using SMA wire contributed to reducing the total weight of the prosthetic robot finger as it has higher energy density than that offered by the method using the electrical DC motor. In this paper, we confirmed the performance of the proposed robotic finger by fundamental driving tests and the characterization of the SMA actuator.",
"title": ""
},
{
"docid": "04c367bfe113af139c30e167f393acec",
"text": "A novel planar magic-T using an E-plane substrate integrate waveguide (SIW) power divider and a SIW-slotline transition is proposed in this letter. Due to the metal ground between the two input/output ports, the E-plane SIW power divider has a 180° reverse phase characteristic. A SIW-slotline transition is utilized to realize the H-plane input/output port of the magic-T. Good agreement between the measured and simulated results indicate that the planar magic-T has a fractional bandwidth (FBW) of 18% (13.2-15.8 GHz), and the amplitude and phase imbalances are less than 0.24 dB and 1.5°, respectively.",
"title": ""
},
{
"docid": "5fcda05ef200cd326ecb9c2412cf50b3",
"text": "OBJECTIVE\nPalpable lymph nodes are common due to the reactive hyperplasia of lymphatic tissue mainly connected with local inflammatory process. Differential diagnosis of persistent nodular change on the neck is different in children, due to higher incidence of congenital abnormalities and infectious diseases and relative rarity of malignancies in that age group. The aim of our study was to analyse the most common causes of childhood cervical lymphadenopathy and determine of management guidelines on the basis of clinical examination and ultrasonographic evaluation.\n\n\nMATERIAL AND METHODS\nThe research covered 87 children with cervical lymphadenopathy. Age, gender and accompanying diseases of the patients were assessed. All the patients were diagnosed radiologically on the basis of ultrasonographic evaluation.\n\n\nRESULTS\nReactive inflammatory changes of bacterial origin were observed in 50 children (57.5%). Fever was the most common general symptom accompanying lymphadenopathy and was observed in 21 cases (24.1%). The ultrasonographic evaluation revealed oval-shaped lymph nodes with the domination of long axis in 78 patients (89.66%). The proper width of hilus and their proper vascularization were observed in 75 children (86.2%). Some additional clinical and laboratory tests were needed in the patients with abnormal sonographic image.\n\n\nCONCLUSIONS\nUltrasonographic imaging is extremely helpful in diagnostics, differentiation and following the treatment of childhood lymphadenopathy. Failure of regression after 4-6 weeks might be an indication for a diagnostic biopsy.",
"title": ""
},
{
"docid": "e8a51d5b917d300154d5c3524c61c702",
"text": "There has been several research on car license plate recognition (LPR). However, the number of research on Thai LPR is limited, besides, most of which are published in Thai. The existing work on Thai LPR have faced problems caused by low- quality license plates and a great number of similar characters that exist in Thai alphabets. Generally, car license plates in Thailand come in different conditions, ranging from new license plates of excellent quality to low-quality ones with screws on or even some paint already peeled off. Thai characters that appear on Thai license plates are also generally shape-complicated. Area-based methods, such as conventional template matching, are ineffective to low-quality or resembling characters. To cope with these problems, this paper presents a new method, which recognizes the character patterns relying only on essential elements of characters. This method lies on the concept that Thai characters can be distinguished by essential elements that form up different shapes. Similar characters are handled by directly focusing on their obvious differences among them. The overall success rate of the proposed method, tested on 300 actual license plates of various qualities, is 85.33%.",
"title": ""
},
{
"docid": "dc736509fbed0afcebc967ca31ffc4d5",
"text": "and William K. Wootters IBM Research Division, T. J. Watson Research Center, Yorktown Heights, New York 10598 Norman Bridge Laboratory of Physics 12-33, California Institute of Technology, Pasadena, California 91125 Département d’Informatique et de Recherche Ope ́rationelle, Succursale Centre-Ville, Montre ́al, Canada H3C 3J7 AT&T Shannon Laboratory, 180 Park Avenue, Building 103, Florham Park, New Jersey 07932 Physics Department, Williams College, Williamstown, Massachusetts 01267 ~Received 17 June 1998 !",
"title": ""
},
{
"docid": "0f8f269ef4cf43261981dcf5c8df6b3c",
"text": "Recently, high-power white light-emitting diodes (LEDs) have attracted much attention due to their versatility in applications and to the increasing market demand for them. So great attention has been focused on producing highly reliable LED lighting. How to accurately predict the reliability of LED lighting is emerging as one of the key issues in this field. Physics-of-failure-based prognostics and health management (PoF-based PHM) is an approach that utilizes knowledge of a product's life cycle loading and failure mechanisms to design for and assess reliability. In this paper, after analyzing the materials and geometries for high-power white LED lighting at all levels, i.e., chips, packages and systems, failure modes, mechanisms and effects analysis (FMMEA) was used in the PoF-based PHM approach to identify and rank the potential failures emerging from the design process. The second step in this paper was to establish the appropriate PoF-based damage models for identified failure mechanisms that carry a high risk.",
"title": ""
},
{
"docid": "6e8f02cfdab45ed1277e8649bd73c6cf",
"text": "Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.",
"title": ""
},
{
"docid": "aa387086c40cb9b4269277a52f6748d3",
"text": "We propose a novel approach to the well-known view update problem for the case of tree-structured data: a domain-specific programming language in which all expressions denote bi-directional transformations on trees. In one direction, these transformations---dubbed lenses---map a \"concrete\" tree into a simplified \"abstract view\"; in the other, they map a modified abstract view, together with the original concrete tree, to a correspondingly modified concrete tree. Our design emphasizes both robustness and ease of use, guaranteeing strong well-behavedness and totality properties for well-typed lenses.We identify a natural space of well-behaved bi-directional transformations over arbitrary structures, study definedness and continuity in this setting, and state a precise connection with the classical theory of \"update translation under a constant complement\" from databases. We then instantiate this semantic framework in the form of a collection of lens combinators that can be assembled to describe transformations on trees. These combinators include familiar constructs from functional programming (composition, mapping, projection, conditionals, recursion) together with some novel primitives for manipulating trees (splitting, pruning, copying, merging, etc.). We illustrate the expressiveness of these combinators by developing a number of bi-directional list-processing transformations as derived forms.",
"title": ""
},
{
"docid": "79ffa92008c29778d19b9faa05460c75",
"text": "In this work, we focus on the development and application of two Lyapunov-based model predictive control (LMPC) schemes to a large-scale nonlinear chemical process network used in the production of vinyl acetate. The nonlinear dynamic model of the process consists of 179 state variables and 13 control (manipulated) inputs and features a cooled plug-flow reactor, an eight-stage gas−liquid absorber, and both gas and liquid recycle streams. The two control schemes considered are an LMPC scheme which is formulated with a convectional quadratic cost function and a Lyapunov-based economic model predictive control (LEMPC) scheme which is formulated with an economic (nonquadratic) cost measure. The economic cost measure for the entire process network accounts for the reaction selectivity and the product separation quality. In the LMPC and LEMPC control schemes, five inputs, directly affecting the economic cost, are regulated with LMPC/LEMPC and the remaining eight inputs are computed by proportional−integral controllers. Simulations are carried out to study the economic performance of the closed-loop system under LMPC and under LEMPC formulated with the proposed economic measure. A thorough comparison of the two control schemes is provided. ■ INTRODUCTION Vinyl acetate is mostly used in manufacturing polyvinyl acetate and other vinyl acetate copolymers. Polyvinyl acetate is the fundamental ingredient for polyvinyl alcohol and polyvinyl acetate resins. Three raw materials, ethylene (C2H4), oxygen (O2), acetic acid (HAc), react to form the desired product vinyl acetate (VAc) as well as two byproducts: carbon dioxide (CO2) and water (H2O). An inert component, ethane (C2H6), enters the process with the ethylene feed stream. The exothermic and irreversible gas phase chemical reactions are",
"title": ""
},
{
"docid": "6c31a285d3548bfb6cbe9ea72f0d5192",
"text": "PURPOSE\nTo compare the effects of a 10-week training program with two different exercises -- traditional hamstring curl (HC) and Nordic hamstrings (NH), a partner exercise focusing the eccentric phase -- on muscle strength among male soccer players.\n\n\nMETHODS\nSubjects were 21 well-trained players who were randomized to NH training (n = 11) or HC training (n = 10). The programs were similar, with a gradual increase in the number of repetitions from two sets of six reps to three sets of eight to 12 reps over 4 weeks, and then increasing load during the final 6 weeks of training. Strength was measured as maximal torque on a Cybex dynamometer before and after the training period.\n\n\nRESULTS\nIn the NH group, there was an 11% increase in eccentric hamstring torque measured at 60 degrees s(-1), as well as a 7% increase in isometric hamstring strength at 90 degrees, 60 degrees and 30 degrees of knee flexion. Since there was no effect on concentric quadriceps strength, there was a significant increase in the hamstrings:quadriceps ratio from 0.89 +/- 0.12 to 0.98 +/- 0.17 (11%) in the NH group. No changes were observed in the HC group.\n\n\nCONCLUSION\nNH training for 10 weeks more effectively develops maximal eccentric hamstring strength in well-trained soccer players than a comparable program based on traditional HC.",
"title": ""
}
] |
scidocsrr
|
372b01ba2bc9d540df4011b8ab1a07cb
|
Intelligent Irrigation Control System Using Wireless Sensors and Android Application
|
[
{
"docid": "8d1797caf78004e6ba548ace7d5a1161",
"text": "An automated irrigation system was developed to optimize water use for agricultural crops. The system has a distributed wireless network of soil-moisture and temperature sensors placed in the root zone of the plants. In addition, a gateway unit handles sensor information, triggers actuators, and transmits data to a web application. An algorithm was developed with threshold values of temperature and soil moisture that was programmed into a microcontroller-based gateway to control water quantity. The system was powered by photovoltaic panels and had a duplex communication link based on a cellular-Internet interface that allowed for data inspection and irrigation scheduling to be programmed through a web page. The automated system was tested in a sage crop field for 136 days and water savings of up to 90% compared with traditional irrigation practices of the agricultural zone were achieved. Three replicas of the automated system have been used successfully in other places for 18 months. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
}
] |
[
{
"docid": "02156199912027e9230b3c000bcbe87b",
"text": "Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters. Although conventional VC can be built from non-parallel data, it is difficult to convert speaker individuality such as phonetic property and speaking rate contained in the posterior probabilities because the source posterior probabilities are directly used for predicting target speech parameters. In this work, we assume that the training data partly include parallel speech data and propose sequence-to-sequence learning between the source and target posterior probabilities. The conversion models perform non-linear and variable-length transformation from the source probability sequence to the target one. Further, we propose a joint training algorithm for the modules. In contrast to conventional VC, which separately trains the speech recognition that estimates posterior probabilities and the speech synthesis that predicts target speech parameters, our proposed method jointly trains these modules along with the proposed probability conversion modules. Experimental results demonstrate that our approach outperforms the conventional VC.",
"title": ""
},
{
"docid": "853d3d6584a32fff4f4e7c483bf0972d",
"text": "Nutritional modulation remains central to the management of metabolic syndrome. Intervention with cinnamon in individuals with metabolic syndrome remains sparsely researched. We investigated the effect of oral cinnamon consumption on body composition and metabolic parameters of Asian Indians with metabolic syndrome. In this 16-week double blind randomized control trial, 116 individuals with metabolic syndrome were randomized to two dietary intervention groups, cinnamon [6 capsules (3 g) daily] or wheat flour [6 capsules (2.5 g) daily]. Body composition, blood pressure and metabolic parameters were assessed. Significantly greater decrease [difference between means, (95% CI)] in fasting blood glucose (mmol/L) [0.3 (0.2, 0.5) p = 0.001], glycosylated haemoglobin (mmol/mol) [2.6 (0.4, 4.9) p = 0.023], waist circumference (cm) [4.8 (1.9, 7.7) p = 0.002] and body mass index (kg/m2 ) [1.3 (0.9, 1.5) p = 0.001] was observed in the cinnamon group compared to placebo group. Other parameters which showed significantly greater improvement were: waist-hip ratio, blood pressure, serum total cholesterol, low-density lipoprotein cholesterol, serum triglycerides, and high-density lipoprotein cholesterol. Prevalence of defined metabolic syndrome was significantly reduced in the intervention group (34.5%) vs. the placebo group (5.2%). A single supplement intervention with 3 g cinnamon for 16 weeks resulted in significant improvements in all components of metabolic syndrome in a sample of Asian Indians in north India. The clinical trial was retrospectively registered (after the recruitment of the participants) in ClinicalTrial.gov under the identification number: NCT02455778 on 25th May 2015.",
"title": ""
},
{
"docid": "debcc046323ffbd9a093c8e07d37960e",
"text": "This review discusses the theory and practical application of independent component analysis (ICA) to multi-channel EEG data. We use examples from an audiovisual attention-shifting task performed by young and old subjects to illustrate the power of ICA to resolve subtle differences between evoked responses in the two age groups. Preliminary analysis of these data using ICA suggests a loss of task specificity in independent component (IC) processes in frontal and somatomotor cortex during post-response periods in older as compared to younger subjects, trends not detected during examination of scalp-channel event-related potential (ERP) averages. We discuss possible approaches to component clustering across subjects and new ways to visualize mean and trial-by-trial variations in the data, including ERP-image plots of dynamics within and across trials as well as plots of event-related spectral perturbations in component power, phase locking, and coherence. We believe that widespread application of these and related analysis methods should bring EEG once again to the forefront of brain imaging, merging its high time and frequency resolution with enhanced cm-scale spatial resolution of its cortical sources.",
"title": ""
},
{
"docid": "0ccfe04a4426e07dcbd0260d9af3a578",
"text": "We present an efficient algorithm to perform approximate offsetting operations on geometric models using GPUs. Our approach approximates the boundary of an object with point samples and computes the offset by merging the balls centered at these points. The underlying approach uses Layered Depth Images (LDI) to organize the samples into structured points and performs parallel computations using multiple cores. We use spatial hashing to accelerate intersection queries and balance the workload among various cores. Furthermore, the problem of offsetting with a large distance is decomposed into successive offsetting using smaller distances. We derive bounds on the accuracy of offset computation as a function of the sampling rate of LDI and offset distance. In practice, our GPU-based algorithm can accurately compute offsets of models represented using hundreds of thousands of points in a few seconds on GeForce GTX 580 GPU. We observe more than 100 times speedup over prior serial CPU-based approximate offset computation algorithms.",
"title": ""
},
{
"docid": "2653554c6dec7e9cfa0f5a4080d251e2",
"text": "Clustering is a key technique within the KDD process, with k-means, and the more general k-medoids, being well-known incremental partition-based clustering algorithms. A fundamental issue within this class of algorithms is to find an initial set of medians (or medoids) that improves the efficiency of the algorithms (e.g., accelerating its convergence to a solution), at the same time that it improves its effectiveness (e.g., finding more meaningful clusters). Thus, in this article we aim at providing a technique that, given a set of elements, quickly finds a very small number of elements as medoid candidates for this set, allowing to improve both the efficiency and effectiveness of existing clustering algorithms. We target the class of k-medoids algorithms in general, and propose a technique that selects a well-positioned subset of central elements to serve as the initial set of medoids for the clustering process. Our technique leads to a substantially smaller amount of distance calculations, thus improving the algorithm’s efficiency when compared to existing methods, without sacrificing effectiveness. A salient feature of our proposed technique is that it is not a new k-medoid clustering algorithm per se, rather, it can be used in conjunction with any existing clustering algorithm that is based on the k-medoid paradigm. Experimental results, using both synthetic and real datasets, confirm the efficiency, effectiveness and scalability of the proposed technique.",
"title": ""
},
{
"docid": "4e7f7b1444b253a63d4012b2502f5fa4",
"text": "State-of-the-art techniques for 6D object pose recovery depend on occlusion-free point clouds to accurately register objects in 3D space. To deal with this shortcoming, we introduce a novel architecture called Iterative Hough Forest with Histogram of Control Points that is capable of estimating the 6D pose of an occluded and cluttered object, given a candidate 2D bounding box. Our Iterative Hough Forest (IHF) is learnt using parts extracted only from the positive samples. These parts are represented with Histogram of Control Points (HoCP), a “scale-variant” implicit volumetric description, which we derive from recently introduced Implicit B-Splines (IBS). The rich discriminative information provided by the scale-variant HoCP features is leveraged during inference. An automatic variable size part extraction framework iteratively refines the object’s roughly aligned initial pose due to the extraction of coarsest parts, the ones occupying the largest area in image pixels. The iterative refinement is accomplished based on finer (smaller) parts, which are represented with more discriminative control point descriptors by using our Iterative Hough Forest. Experiments conducted on a publicly available dataset report that our approach shows better registration performance than the state-of-the-art methods. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1f753b8e3c0178cabbc8a9f594c40c8c",
"text": "For easy comprehensibility, rules are preferrable to non-linear kernel functions in the analysis of bio-medical data. In this paper, we describe two rule induction approaches—C4.5 and our PCL classifier—for discovering rules from both traditional clinical data and recent gene expression or proteomic profiling data. C4.5 is a widely used method, but it has two weaknesses, the single coverage constraint and the fragmentation problem, that affect its accuracy. PCL is a new rule-based classifier that overcomes these two weaknesses of decision trees by using many significant rules. We present a thorough comparison to show that our PCL method is much more accurate than C4.5, and it is also superior to Bagging and Boosting in general.",
"title": ""
},
{
"docid": "056a1d216afd6ea3841b9d4f49c896b6",
"text": "The first car was invented in 1870 by Siegfried Marcus (Guarnieri, 2011). Actually it was just a wagon with an engine but without a steering wheel and without brakes. Instead, it was controlled by the legs of the driver. Converting traditional vehicles into autonomous vehicles was not just one step. The first step was just 28 years after the invention of cars that is to say 1898. This step's concept was moving a vehicle by a remote controller (Nikola, 1898). Since this first step and as computers have been becoming advanced and sophisticated, many functions of modern vehicles have been converted to be entirely automatic with no need of even remote controlling. Changing gears was one of the first actions that could be done automatically without an involvement of the driver (Anthony, 1908), so such cars got the title of \"automatic cars\"; however, nowadays there are vehicles that can completely travel by themselves although they are not yet allowed to travel on public roads in most of the world. Such vehicles are called \"autonomous vehicles\" or \"driverless cars\".",
"title": ""
},
{
"docid": "5cc2a5b23d2da7f281270e0ca4a097e1",
"text": "It is widely accepted that the deficiencies in public sector health system can only be overcome by significant reforms. The need for reforms in India s health sector has been emphasized by successive plan documents since the Eighth Five-Year Plan in 1992, by the 2002 national health policy and by international donor agencies. The World Bank (2001:12,14), which has been catalytic in initiating health sector reforms in many states, categorically emphasized: now is the time to carry out radical experiments in India’s health sector, particularly since the status quo is leading to a dead end. . But it is evident that there is no single strategy that would be best option The proposed reforms are not cheap, but the cost of not reforming is even greater”.",
"title": ""
},
{
"docid": "daf311fdbd3d19e09c9eca3ec04702b6",
"text": "1 Since the 1970s, investigative profilers at the FBI's Behavioral Science Unit (now part of the National Center for the Analysis of Violent Crime) have been assisting local, state, and federal agencies in narrowing investigations by providing criminal personality profiles. An attempt is now being made to describe this criminal-profile-generating process. A series of five overlapping stages lead to the sixth stage, or the goal of apprehension of the offender: (1) profiling inputs, (2) decision-process models, (3) crime assessment, (4) the criminal profile, (5) investigation, and (6) apprehension. Two key feedback filters in the process are: (a) achieving congruence with the evidence, with decision models, and with investigation recommendations, and (6) the addition of new evidence. \"You wanted to mock yourself at me!. .. You did not know your Hercule Poirot.\" He thrust out his chest and twirled his moustache. I looked at him and grinned. .. \"All right then,\" I said. \"Give us the answer to the problems-if you know it.\" \"But of course I know it.\" Hardcastle stared at him incredulously…\"Excuse me. Monsieur Poirot, you claim that you know who killed three people. And why?...All you mean is that you have a hunch\" I will not quarrel with you over a word…Come now. Inspector. I know – really know…I perceive you are still sceptic. But first let me say this. To be sure means that when the right solution is reached, everything falls into place. You perceive that in no other way could things have happened. \" The ability of Hercule Poirot to solve a crime by describing the perpetrator is a skill shared by the expert investigative profiler. Evidence speaks its own language of patterns and sequences that can reveal the offender's behavioral characteristics. Like Poirot, the profiler can say. \"I know who he must be.\" This article focuses on the developing technique of criminal profiling. Special agents at the FBI Academy have demonstrated expertise in crime scene analysis of various violent crimes, particularly those involving sexual homicide. This article discusses the history of profiling and the criminal-profile-generating process and provides a case example to illustrate the technique. Criminal profiling has been used successfully by law enforcement in several areas and is a valued means by which to narrow the field of investigation. Profiling does not provide the specific identity of the offender. Rather, it indicates the kind of person most likely to have committed a crime …",
"title": ""
},
{
"docid": "410aa6bb03299e5fda9c28f77e37bc5b",
"text": "Spamming has been a widespread problem for social networks. In recent years there is an increasing interest in the analysis of anti-spamming for microblogs, such as Twitter. In this paper we present a systematic research on the analysis of spamming in Sina Weibo platform, which is currently a dominant microblogging service provider in China. Our research objectives are to understand the specific spamming behaviors in Sina Weibo and find approaches to identify and block spammers in Sina Weibo based on spamming behavior classifiers. To start with the analysis of spamming behaviors we devise several effective methods to collect a large set of spammer samples, including uses of proactive honeypots and crawlers, keywords based searching and buying spammer samples directly from online merchants. We processed the database associated with these spammer samples and interestingly we found three representative spamming behaviors: aggressive advertising, repeated duplicate reposting and aggressive following. We extract various features and compare the behaviors of spammers and legitimate users with regard to these features. It is found that spamming behaviors and normal behaviors have distinct characteristics. Based on these findings we design an automatic online spammer identification system. Through tests with real data it is demonstrated that the system can effectively detect the spamming behaviors and identify spammers in Sina Weibo.",
"title": ""
},
{
"docid": "a7d02aee0ef3730504adefc2d8c05c49",
"text": "We tackle the challenge of reliably and automatically localizing pedestrians in real-life conditions through overhead depth imaging at unprecedented high-density conditions. Leveraging upon a combination of Histogram of Oriented Gradients-like feature descriptors, neural networks, data augmentation and custom data annotation strategies, this work contributes a robust and scalable machine learning-based localization algorithm, which delivers near-human localization performance in real-time, even with local pedestrian density of about 3 ped/m2, a case in which most stateof-the art algorithms degrade significantly in performance.",
"title": ""
},
{
"docid": "fb116c7cd3ab8bd88fb7817284980d4a",
"text": "Sentence-level sentiment classification is important to understand users' fine-grained opinions. Existing methods for sentence-level sentiment classification are mainly based on supervised learning. However, it is difficult to obtain sentiment labels of sentences since manual annotation is expensive and time-consuming. In this paper, we propose an approach for sentence-level sentiment classification without the need of sentence labels. More specifically, we propose a unified framework to incorporate two types of weak supervision, i.e., document-level and word-level sentiment labels, to learn the sentence-level sentiment classifier. In addition, the contextual information of sentences and words extracted from unlabeled sentences is incorporated into our approach to enhance the learning of sentiment classifier. Experiments on benchmark datasets show that our approach can effectively improve the performance of sentence-level sentiment classification.",
"title": ""
},
{
"docid": "349ec567c15dc032e5856e4497677614",
"text": "ABSTUCT Miniature robots enable low-cost planetary surface exploration missions, and new military missions in urban terrain where small robots provide critical assistance to human operations. These space and military missions have many similar technological challenges. Robots can be deployed in environments where it may not be safe or affordable to send humans, or where robots can reduce the risk to humans. Small size is needed in urban terrain to make the robot easy to carry and deploy by military personnel. Technology to sense and perceive the environment, and to autonomously plan and execute navigation maneuvers and other remote tasks, is an important requirement for both planetary and surface robots and for urban terrain robotic assistants. Motivated by common technological needs and by a shared vision about the great technological potential, a strong, collaborative relationship exists between the NASNJPL and DARPA technology development in miniaturized robotics. This paper describes the technologies under development, the applications where these technologies are relevant to both space and military missions, and the status of the most recent technology demonstrations in terrestrial scenarios.",
"title": ""
},
{
"docid": "bd5e127cc3454bbf8a89c3f7d66fd624",
"text": "Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management point, and lack of a clear line of defense. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANET. Building on our prior work on anomaly detection, we investigate how to improve the anomaly detection approach to provide more details on attack types and sources. For several well-known attacks, we can apply a simple rule to identify the attack type when an anomaly is reported. In some cases, these rules can also help identify the attackers. We address the run-time resource constraint problem using a cluster-based detection scheme where periodically a node is elected as the ID agent for a cluster. Compared with the scheme where each node is its own ID agent, this scheme is much more efficient while maintaining the same level of effectiveness. We have conducted extensive experiments using the ns-2 and MobiEmu environments to validate our research.",
"title": ""
},
{
"docid": "01202e09e54a1fc9f5b36d67fbbf3870",
"text": "This paper is intended to investigate the copper-graphene surface plasmon resonance (SPR)-based biosensor by considering the high adsorption efficiency of graphene. Copper (Cu) is used as a plasmonic material whereas graphene is used to prevent Cu from oxidation and enhance the reflectance intensity. Numerical investigation is performed using finite-difference-time-domain (FDTD) method by comparing the sensing performance such as reflectance intensity that explains the sensor sensitivity and the full-width-at-half-maximum (FWHM) of the spectrum for detection accuracy. The measurements were observed with various Cu thin film thicknesses ranging from 20nm to 80nm with 785nm operating wavelength. The proposed sensor shows that the 40nm-thick Cu-graphene (1 layer) SPR-based sensor gave better performance with narrower plasmonic spectrum line width (reflectance intensity of 91.2%) and better FWHM of 3.08°. The measured results also indicate that the Cu-graphene SPR-based sensor is suitable for detecting urea with refractive index of 1.49 in dielectric medium.",
"title": ""
},
{
"docid": "30670fdaa354869da343e4025f819781",
"text": "On-Die features available for validation and test on an integrated circuit play a major role in evaluating the performance of the functionality being realized by the circuit in a post-silicon environment and can considerably reduce time to market of the end-product. In the case of high-speed IO, it is also important to note that the type of on-die hooks required to debug and validate the performance and robustness of the design depend on several factors, of which the type of I/O architecture chosen plays a key role. In order to support high data rates, the serial I/O design in this paper implements a receiver with adaptive equalization engine for the compensation of inter-symbol interference (ISI) and real-time environmental changes (temperature and voltage). This paper describes the debug hooks and their usage models in such a high-speed I/O designed using a 32nm CMOS process. These hooks have been tested in the lab and proven to be very useful. While the main focus of the paper is to describe the hooks, how they are used in the lab for observing the robustness in the dynamic behavior of the adaptive loops and the measurement results; the reader is also provided with a brief insight into the equations describing the loops behavior together with a description of the loops implementation details.",
"title": ""
},
{
"docid": "af89b3636290235e0b241c6cced2a336",
"text": "Assume we were to come up with a family of distributions parameterized by θ in order to approximate the posterior, qθ(ω). Our goal is to set θ such that qθ(ω) is as similar to the true posterior p(ω|D) as possible. For clarity, qθ(ω) is a distribution over stochastic parameters ω that is determined by a set of learnable parameters θ and some source of randomness. The approximation is therefore limited by our choice of parametric function qθ(ω) as well as the randomness.1 Given ω and an input x, an output distribution p(y|x,ω) = p(y|fω(x)) = fω(x,y) is induced by observation noise (the conditionality of which is omitted for brevity).",
"title": ""
},
{
"docid": "eeb31177629a38882fa3664ad0ddfb48",
"text": "Autonomous cars will likely hit the market soon, but trust into such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car’s interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations in a user study, overlaid to a driving scene: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car’s indicators as the baseline. The world in miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust, however, we did not find a significant difference between the chauffeur and the baseline. ACM Classification",
"title": ""
},
{
"docid": "a75d3395a1d4859b465ccbed8647fbfe",
"text": "PURPOSE\nThe influence of a core-strengthening program on low back pain (LBP) occurrence and hip strength differences were studied in NCAA Division I collegiate athletes.\n\n\nMETHODS\nIn 1998, 1999, and 2000, hip strength was measured during preparticipation physical examinations and occurrence of LBP was monitored throughout the year. Following the 1999-2000 preparticipation physicals, all athletes began participation in a structured core-strengthening program, which emphasized abdominal, paraspinal, and hip extensor strengthening. Incidence of LBP and the relationship with hip muscle imbalance were compared between consecutive academic years.\n\n\nRESULTS\nAfter incorporation of core strengthening, there was no statistically significant change in LBP occurrence. Side-to-side extensor strength between athletes participating in both the 1998-1999 and 1999-2000 physicals were no different. After core strengthening, the right hip extensor was, on average, stronger than that of the left hip extensor (P = 0.0001). More specific gender differences were noted after core strengthening. Using logistic regression, female athletes with weaker left hip abductors had a more significant probability of requiring treatment for LBP (P = 0.009)\n\n\nCONCLUSION\nThe impact of core strengthening on collegiate athletes has not been previously examined. These results indicated no significant advantage of core strengthening in reducing LBP occurrence, though this may be more a reflection of the small numbers of subjects who actually required treatment. The core program, however, seems to have had a role in modifying hip extensor strength balance. The association between hip strength and future LBP occurrence, observed only in females, may indicate the need for more gender-specific core programs. The need for a larger scale study to examine the impact of core strengthening in collegiate athletes is demonstrated.",
"title": ""
}
] |
scidocsrr
|
983b33dcc10d2cbc724cb7205ad0a923
|
Dynamic Task Prioritization for Multitask Learning
|
[
{
"docid": "3aeab50cf72d12ee5033f9f9c506acfc",
"text": "The approach of learning multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underlie these tasks. We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide scope of issues in which one may apply a multiple task learning approach. Our notion of task similarity is relevant to a variety of real life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios. We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach.",
"title": ""
},
{
"docid": "d15804e98b58fa5ec0985c44f6bb6033",
"text": "Urrently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback based approach in which the representation is formed in an iterative manner based on a feedback received from previous iterations output. We establish that a feedback based approach has several core advantages over feedforward: it enables making early predictions at the query time, its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy), and it provides a new basis for Curriculum Learning. We observe that feedback develops a considerably different representation compared to feedforward counterparts, in line with the aforementioned advantages. We provide a general feedback based learning architecture, instantiated using existing RNNs, with the endpoint results on par or better than existing feedforward networks and the addition of the above advantages.",
"title": ""
}
] |
[
{
"docid": "06fb91a00e75d7dfd6be198092f41e62",
"text": "INTRODUCTION We describe indications and surgical technique for dividing the stapedius and tensor tympani tendons during middle ear surgery. Middle ear muscle reflexes involve contraction of the tensor tympani muscle (TTM) and stapedius muscle (SM) to increase impedance of the middle ear, and may shield the inner ear from loud, continuous nonimpact noise. Sectioning SM and TTM tendons has been advocated for a number of conditions, including improving access to anterior epitympanic (supratubal) recess, releasing a medialized malleus during tympanoplasty and ossiculoplasty, tensor or stapedial myoclonus, reflexive vertigo, and Meniere disease.",
"title": ""
},
{
"docid": "c8be82cceec30a4aa72cc23b844546df",
"text": "SVM is extensively used in pattern recognition because of its capability to classify future unseen data and its’ good generalization performance. Several algorithms and models have been proposed for pattern recognition that uses SVM for classification. These models proved the efficiency of SVM in pattern recognition. Researchers have compared their results for SVM with other traditional empirical risk minimization techniques, such as Artificial Neural Network, Decision tree, etc. Comparison results show that SVM is superior to these techniques. Also, different variants of SVM are developed for enhancing the performance. In this paper, SVM is briefed and some of the pattern recognition applications of SVM are surveyed and briefly summarized. Keyword Hyperplane, Pattern Recognition, Quadratic Programming Problem, Support Vector Machines.",
"title": ""
},
{
"docid": "094fb0a17d6358cc166e43872bc59b09",
"text": "This paper is a review of the evolutionary history of deep learning models. It covers from the genesis of neural networks when associationism modeling of the brain is studied, to the models that dominate the last decade of research in deep learning like convolutional neural networks, deep belief networks, and recurrent neural networks, and extends to popular recent models like variational autoencoder and generative adversarial nets. In addition to a review of these models, this paper primarily focuses on the precedents of the models above, examining how the initial ideas are assembled to construct the early models and how these preliminary models are developed into their current forms. Many of these evolutionary paths last more than half a century and have a diversity of directions. For example, CNN is built on prior knowledge of biological vision system; DBN is evolved from a trade-off of modeling power and computation complexity of graphical models and many nowadays models are neural counterparts of ancient linear models. This paper reviews these evolutionary paths and offers a concise thought flow of how these models are developed, and aims to provide a thorough background for deep learning. More importantly, along with the path, this paper summarizes the gist behind these milestones and proposes many directions to guide the future research of deep learning. 1 ar X iv :1 70 2. 07 80 0v 2 [ cs .L G ] 1 M ar 2 01 7",
"title": ""
},
{
"docid": "de1529bcfee8a06969ee35318efe3dc3",
"text": "This paper studies the prediction of head pose from still images, and summarizes the outcome of a recently organized competition, where the task was to predict the yaw and pitch angles of an image dataset with 2790 samples with known angles. The competition received 292 entries from 52 participants, the best ones clearly exceeding the state-of-the-art accuracy. In this paper, we present the key methodologies behind selected top methods, summarize their prediction accuracy and compare with the current state of the art.",
"title": ""
},
{
"docid": "a966c2222e88813574319fd0695c16f4",
"text": "Most streaming decision models evolve continuously over time, run in resource-aware environments, and detect and react to changes in the environment generating data. One important issue, not yet convincingly addressed, is the design of experimental work to evaluate and compare decision models that evolve over time. This paper proposes a general framework for assessing predictive stream learning algorithms. We defend the use of prequential error with forgetting mechanisms to provide reliable error estimators. We prove that, in stationary data and for consistent learning algorithms, the holdout estimator, the prequential error and the prequential error estimated over a sliding window or using fading factors, all converge to the Bayes error. The use of prequential error with forgetting mechanisms reveals to be advantageous in assessing performance and in comparing stream learning algorithms. It is also worthwhile to use the proposed methods for hypothesis testing and for change detection. In a set of experiments in drift scenarios, we evaluate the ability of a standard change detection algorithm to detect change using three prequential error estimators. These experiments point out that the use of forgetting mechanisms (sliding windows or fading factors) are required for fast and efficient change detection. In comparison to sliding windows, fading factors are faster and memoryless, both important requirements for streaming applications. Overall, this paper is a contribution to a discussion on best practice for performance assessment when learning is a continuous process, and the decision models are dynamic and evolve over time.",
"title": ""
},
{
"docid": "e797fbf7b53214df32d5694527ce5ba3",
"text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.",
"title": ""
},
{
"docid": "173b2464dd2706ff5ed2953877d392c5",
"text": "The multiagent optimization system (MAOS) is a nature-inspired method, which supports cooperative search by the self-organization of a group of compact agents situated in an environment with certain sharing public knowledge. Moreover, each agent in MAOS is an autonomous entity with personal declarative memory and behavioral components. In this paper, MAOS is refined for solving the traveling salesman problem (TSP), which is a classic hard computational problem. Based on a simplified MAOS version, in which each agent manipulates on extremely limited declarative knowledge, some simple and efficient components for solving TSP, including two improving heuristics based on a generalized edge assembly recombination, are implemented. Compared with metaheuristics in adaptive memory programming, MAOS is particularly suitable for supporting cooperative search. The experimental results on two TSP benchmark data sets show that MAOS is competitive as compared with some state-of-the-art algorithms, including the Lin-Kernighan-Helsgaun, IBGLK, PHGA, etc., although MAOS does not use any explicit local search during the runtime. The contributions of MAOS components are investigated. It indicates that certain clues can be positive for making suitable selections before time-consuming computation. More importantly, it shows that the cooperative search of agents can achieve an overall good performance with a macro rule in the switch mode, which deploys certain alternate search rules with the offline performance in negative correlations. Using simple alternate rules may prevent the high difficulty of seeking an omnipotent rule that is efficient for a large data set.",
"title": ""
},
{
"docid": "ffd3f8fb5c2b603690070735f87ede98",
"text": "References 1. Vlieghe E, Phe T, De Smet B, Veng C, Kham C. Increase in Salmonella enterica serovar Paratyphi A infections in Phnom Penh, Cambodia, January 2011 to August 2013. Euro Surveill. 2013; 18:20592. 2. Tourdjman M, Le Hello S, Gossner C. Unusual increase in reported cases of Paratyphoid A fever among travellers returning from Cambodia, January to September 2013. Euro Surveill. 2013;18:20594. 3. Newton AE, Mintz ED. Infectious diseases related to travel: typhoid & paratyphoid fever. CDC health information for international travel [cited 2014 May 11]. http://wwwnc.cdc.gov/travel/ yellowbook/2014/chapter-3-infectious-diseases-related-to-travel/ typhoid-and-paratyphoid-fever 4. Mahon BE, Newton AE, Mintz ED. Effectiveness of typhoid vaccination in US travelers. Vaccine. 2014;32:3577–9. http://dx.doi.org/10.1016/j.vaccine.2014.04.055 5. Akhtar S, Sarker MR, Jabeen K, Sattar A, Qamar A, Fasih N. Antimicrobial resistance in Salmonella enterica serovar typhi and paratyphi in South Asia—current status, issues and prospects. Crit Rev Microbiol. 2014;7828:1–10. http://dx.doi.org/10.3109/ 1040841X.2014.880662 6. Parry CM, Threlfall EJ. Antimicrobial resistance in typhoidal and nontyphoidal salmonellae. Curr Opin Infect Dis. 2008;21:531–8. http://dx.doi.org/10.1097/QCO.0b013e32830f453a 7. Sahastrabuddhe S, Carbis R. Increasing rates of Salmonella Paratyphi A and the current status of its vaccine development. Expert Rev Vaccines. 2013;12:1021–31. http://dx.doi.org/10.1586/ 14760584.2013.825450",
"title": ""
},
{
"docid": "cbc14c265bc1cc7199e6f76e25e5007a",
"text": "The IoT can become ubiquitous worldwide---if the pursuit of systemic trustworthiness can overcome the potential risks.",
"title": ""
},
{
"docid": "e049ae9af716773574032d9db2635705",
"text": "Existing works on single-image 3D reconstruction mainly focus on shape recovery. In this work, we study a new problem, that is, simultaneously recovering 3D shape and surface color from a single image, namely “colorful 3D reconstruction”. This problem is both challenging and intriguing because the ability to infer textured 3D model from a single image is at the core of visual understanding. Here, we propose an end-to-end trainable framework, Colorful Voxel Network (CVN), to tackle this problem. Conditioned on a single 2D input, CVN learns to decompose shape and surface color information of a 3D object into a 3D shape branch and a surface color branch, respectively. Specifically, for the shape recovery, we generate a shape volume with the state of its voxels indicating occupancy. For the surface color recovery, we combine the strength of appearance hallucination and geometric projection by concurrently learning a regressed color volume and a 2Dto-3D flow volume, which are then fused into a blended color volume. The final textured 3D model is obtained by sampling color from the blended color volume at the positions of occupied voxels in the shape volume. To handle the severe sparse volume representations, a novel loss function, Mean Squared False Cross-Entropy Loss (MSFCEL), is designed. Extensive experiments demonstrate that our approach achieves significant improvement over baselines, and shows great generalization across diverse object categories and arbitrary viewpoints.",
"title": ""
},
{
"docid": "6ecf5cb70cca991fbefafb739a0a44c9",
"text": "Reasoning about objects, relations, and physics is central to human intelligence, and 1 a key goal of artificial intelligence. Here we introduce the interaction network, a 2 model which can reason about how objects in complex systems interact, supporting 3 dynamical predictions, as well as inferences about the abstract properties of the 4 system. Our model takes graphs as input, performs objectand relation-centric 5 reasoning in a way that is analogous to a simulation, and is implemented using 6 deep neural networks. We evaluate its ability to reason about several challenging 7 physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. 8 Our results show it can be trained to accurately simulate the physical trajectories of 9 dozens of objects over thousands of time steps, estimate abstract quantities such 10 as energy, and generalize automatically to systems with different numbers and 11 configurations of objects and relations. Our interaction network implementation 12 is the first general-purpose, learnable physics engine, and a powerful general 13 framework for reasoning about object and relations in a wide variety of complex 14 real-world domains. 15",
"title": ""
},
{
"docid": "a921c4eba2d9590b9b8f4679349c985b",
"text": "Advances in micro-electro-mechanical (MEMS) techniques enable inertial measurements units (IMUs) to be small, cheap, energy efficient, and widely used in smartphones, robots, and drones. Exploiting inertial data for accurate and reliable navigation and localization has attracted significant research and industrial interest, as IMU measurements are completely ego-centric and generally environment agnostic. Recent studies have shown that the notorious issue of drift can be significantly alleviated by using deep neural networks (DNNs) [1]. However, the lack of sufficient labelled data for training and testing various architectures limits the proliferation of adopting DNNs in IMU-based tasks. In this paper, we propose and release the Oxford Inertial Odometry Dataset (OxIOD), a first-of-its-kind data collection for inertial-odometry research, with all sequences having ground-truth labels. Our dataset contains 158 sequences totalling more than 42 km in total distance, much larger than previous inertial datasets. Another notable feature of this dataset lies in its diversity, which can reflect the complex motions of phone-based IMUs in various everyday usage. The measurements were collected with four different attachments (handheld, in the pocket, in the handbag and on the trolley), four motion modes (halting, walking slowly, walking normally, and running), five different users, four types of off-the-shelf consumer phones, and large-scale localization from office buildings. Deep inertial tracking experiments were conducted to show the effectiveness of our dataset in training deep neural network models and evaluate learning-based and model-based algorithms. The OxIOD Dataset is available at: http://deepio.cs.ox.ac.uk",
"title": ""
},
{
"docid": "2d42dfd45c0759cd795896179eea113c",
"text": "We present a neural-network based approach to classifying online hate speech in general, as well as racist and sexist speech in particular. Using pre-trained word embeddings and max/mean pooling from simple, fullyconnected transformations of these embeddings, we are able to predict the occurrence of hate speech on three commonly used publicly available datasets. Our models match or outperform state of the art F1 performance on all three datasets using significantly fewer parameters and minimal feature preprocessing compared to previous methods.",
"title": ""
},
{
"docid": "93a283324fed31e4ecf81d62acae583a",
"text": "The success of the state-of-the-art deblurring methods mainly depends on the restoration of sharp edges in a coarse-to-fine kernel estimation process. In this paper, we propose to learn a deep convolutional neural network for extracting sharp edges from blurred images. Motivated by the success of the existing filtering-based deblurring methods, the proposed model consists of two stages: suppressing extraneous details and enhancing sharp edges. We show that the two-stage model simplifies the learning process and effectively restores sharp edges. Facilitated by the learned sharp edges, the proposed deblurring algorithm does not require any coarse-to-fine strategy or edge selection, thereby significantly simplifying kernel estimation and reducing computation load. Extensive experimental results on challenging blurry images demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of visual quality and run-time.",
"title": ""
},
{
"docid": "3388d2e88fdc2db9967da4ddb452d9f1",
"text": "Entity pair provide essential information for identifying relation type. Aiming at this characteristic, Position Feature is widely used in current relation classification systems to highlight the words close to them. However, semantic knowledge involved in entity pair has not been fully utilized. To overcome this issue, we propose an Entity-pair-based Attention Mechanism, which is specially designed for relation classification. Recently, attention mechanism significantly promotes the development of deep learning in NLP. Inspired by this, for specific instance(entity pair, sentence), the corresponding entity pair information is incorporated as prior knowledge to adaptively compute attention weights for generating sentence representation. Experimental results on SemEval-2010 Task 8 dataset show that our method outperforms most of the state-of-the-art models, without external linguistic features.",
"title": ""
},
{
"docid": "c7106bb2ec2c41979ebacdba7dd55217",
"text": "Till recently, the application of the detailed combustion chemistry approach as a predictive tool for engine modeling has been a sort of a ”taboo” motivated by different reasons, but, mainly, by an exaggerated rigor to the chemistry/turbulence interaction modeling. The situation has drastically changed only recently, when STAR-CD and Reaction Design declared in the Newsletter of Compuatational Dynamics (2000/1) the aim to combine multi-dimensional flow solver with the detailed chemistry analysis based on CHEMKIN and SURFACE CHEMKIN packages. Relying on their future developments, we present here the methodology based on the KIVA code. The basic novelty of the proposed methodology is the coupling of a generalized partially stirred reactor, PaSR, model with a high efficiency numerics based on a sparse matrix algebra technique to treat detailed oxidation kinetics of hydrocarbon fuels assuming that chemical processes proceed in two successive steps: the reaction act follows after the micro-mixing resolved on a sub-grid scale. In a completed form, the technique represents detailed chemistry extension of the classic EDCturbulent combustion model. The model application is illustrated by results of numerical simulation of spray combustion and emission formation in the Volvo D12C DI Diesel engine. The results of the 3-D engine modeling on a sector mesh are in reasonable agreement with video data obtained using an endoscopic technique. INTRODUCTION As pollutant emission regulations are becoming more stringent, it turns increasingly more difficult to reconcile emission requirements with the engine economy and thermal efficiency. Soot formation in DI Diesel engines is the key environmental problem whose solution will define the future of these engines: will they survive or are they doomed to disappear? To achieve the design goals, the understanding of the salient features of spray combustion and emission formation processes is required. Diesel spray combustion is nonstationary, three-dimensional, multi-phase process that proAddress all correspondence to this author. ceeds in a high-pressure and high-temperature environment. Recent attempts to develop a ”conceptual model” of diesel spray combustion, see Dec (1997), represent it as a relatively well organized process in which events take place in a logical sequence as the fuel evolves along the jet, undergoing the various stages: spray atomization, droplet ballistics and evaporation, reactant mixing, (macroand micromixing), and, finally, heat release and emissions formation. This opens new perspectives for the modeling based on realization of idealized patterns well confirmed by optical diagnostics data. The success of engine CFD simulations depends on submodels of the physical processes incorporated into the main solver. The KIVA-3v computer code developed by Amsden (1993, July 1997) has been selected for the reason that the code source is available, thus, representing an ideal platform for modification, validation and evaluation. For Diesel engine applications, the KIVA codes solve the conservation equations for evaporating fuel sprays coupled with the three-dimensional turbulent fluid dynamics of compressible, multicomponent, reactive gases in engine cylinders with arbitrary shaped piston geometries. 
The code treats in different ways ”fast” chemical reactions, which are assumed to be in equilibrium, and ”slow” reactions proceeding kinetically, albeit the general trimolecular processes with different third body efficiencies are not incorporated in the mechanism. The turbulent combustion is realized in the form of Magnussen-Hjertager approach not accounting for chemistry/turbulence interaction. This is why the chemical routines in the original code were replaced with our specialized sub-models. The code fuel library has been also updated using modern property data compiled in Daubert and Danner (1989-1994). The detailed mechanism integrating the n-heptane oxidation chemistry with the kinetics of aromatics (up to four aromatic rings) formation for rich acetylene flames developed by Wang and Frenklach (1997) consisting of 117 species and 602 reactions has been validated in conventional kinetic analysis, and a reduced mechanism (60 species, including soot forming agents and N2O and NOx species, 237 reactions) has been incorporated into the KIVA-3v code. This extends capabilities of the code to predict spray combustion of hydrocarbon fuels with particulate emission.",
"title": ""
},
{
"docid": "29b67e925ec88f9106c400ccde978a86",
"text": "A number of changes are occurring in the field of computer game development: persistent online games, digital distribution platforms and portals, social and mobile games, and the emergence of new business models have pushed game development to put heavier emphasis on the live operation of games. Artificial intelligence has long been an important part of game development practices. The forces of change in the industry present an opportunity for Game AI to have new and profound impact on game production practices. Specifically, Game AI agents should act as “producers” responsible for managing a long-running set of live games, their player communities, and real-world context. We characterize a confluence of four major forces at play in the games industry today, together producing a wealth of data that opens unique research opportunities and challenges for Game AI in game production. We enumerate 12 new research areas spawned by these forces and steps toward how they can be addressed by data-driven Game AI Producers.",
"title": ""
},
{
"docid": "05c6e71b7a5cd5234d6cac50665c621e",
"text": "The problem of optimal motion planing and control is fundamental in robotics. However, this problem is intractable for continuous-time stochastic systems in general and the solution is difficult to approximate if non-instantaneous nonlinear performance indices are present. In this work, we provide an efficient algorithm, PIPC (Probabilistic Inference for Planning and Control), that yields approximately optimal policies with arbitrary higher-order nonlinear performance indices. Using probabilistic inference and a Gaussian process representation of trajectories, PIPC exploits the underlying sparsity of the problem such that its complexity scales linearly in the number of nonlinear factors. We demonstrate the capabilities of our algorithm in a receding horizon setting with multiple systems in simulation.",
"title": ""
},
{
"docid": "2f51d8d289a7c615ddb4dc01803612a7",
"text": "Feedback is an important component of the design process, but gaining access to high-quality critique outside a classroom or firm is challenging. We present CrowdCrit, a web-based system that allows designers to receive design critiques from non-expert crowd workers. We evaluated CrowdCrit in three studies focusing on the designer's experience and benefits of the critiques. In the first study, we compared crowd and expert critiques and found evidence that aggregated crowd critique approaches expert critique. In a second study, we found that designers who got crowd feedback perceived that it improved their design process. The third study showed that designers were enthusiastic about crowd critiques and used them to change their designs. We conclude with implications for the design of crowd feedback services.",
"title": ""
},
{
"docid": "64422f11afbc6ade7443b594ec9bc9ce",
"text": "Computer atural language processing (NLP), originally developed at the beginning of the Cold War1 for the mechanical translation of Soviet physics papers, is one of the first computational grails. These early efforts to analyze and model human language were characterized by a unified operational approach. They worked in batch mode, with programs reading data from files and writing results to other files. But time flew like an arrow, and eventually paper tape and punched cards gave way to teleprocessing, desktop publishing, and simulated virtual environments. As computing systems became more powerful and affordable , they also became \" interactive \"-and interactive NLP soon joined the list of computational grails. (See the \" Historical background' side-bar for more information.) Interactivity has changed computing profoundly. Without it, there would be no such things as graphical user interfaces and the World Wide Web would be a much different experience. Interactivity has also influenced system development and user expectations. It changed NLP prominently and permanently. For example, speech processing (the analysis and modeling of human utterances) barely existed in batch-mode days. Now this inherently interactive domain is one of the hotter topics in the field. The overall goal of interactive NLP is to let humans and computers effectively and quickly communicate by using natural language. But what is it about interactivity that changes the way NLP is done? How do the theoretical and processing assumptions differ when the input stream is \" live, \" thus increasing the emphasis on response time and throughput? What are the trade-offs? What are the mechanisms (symbolic , statistical, connectionist, hybrid)? Finally, what are the strategies that let interactive NLP be effected or finessed? This theme issue of Computer addresses these questions. While earlier NLP systems did not operate in real time, interactive systems which must process input and generate appropriate output within a fraction of a second to a few seconds-have more modern problems and constraints. And while unrestricted NLP is still a very complex problem , many successful interactive systems are available for specific, well-defined domains of discourse.2 These systems include natural language interfaces to interactive computer systems (databases , expert systems, and operating systems); interfaces that integrate speech understanding components for a variety of domains (both speaker-dependent and speaker-independent, discrete-word applications and continuous-speech applications) ; dialogue management and story understanding systems ; and machine translation systems. An NLP system's success depends on its knowledge of the …",
"title": ""
}
] |
scidocsrr
|
0750f7295f71856e72352caabeb59910
|
A Recipe for Soft Fluidic Elastomer Robots
|
[
{
"docid": "e259e255f9acf3fa1e1429082e1bf1de",
"text": "In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.",
"title": ""
},
{
"docid": "a1dec377f2f17a508604d5101a5b0e44",
"text": "The goal of this work is to develop a soft robotic manipulation system that is capable of autonomous, dynamic, and safe interactions with humans and its environment. First, we develop a dynamic model for a multi-body fluidic elastomer manipulator that is composed entirely from soft rubber and subject to the self-loading effects of gravity. Then, we present a strategy for independently identifying all unknown components of the system: the soft manipulator, its distributed fluidic elastomer actuators, as well as drive cylinders that supply fluid energy. Next, using this model and trajectory optimization techniques we find locally optimal open-loop policies that allow the system to perform dynamic maneuvers we call grabs. In 37 experimental trials with a physical prototype, we successfully perform a grab 92% of the time. By studying such an extreme example of a soft robot, we can begin to solve hard problems inhibiting the mainstream use of soft machines.",
"title": ""
}
] |
[
{
"docid": "025f3fa2b4ddc50c0f40f4b3c2429524",
"text": "Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a diagnosis may have life-threatening consequences and could cause distrust. On the other hand, not only may a false diagnosis prompt users to distrust the machine-learning algorithm and even abandon the entire system but also such a false positive classification may cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.",
"title": ""
},
{
"docid": "b010e6982626ffe76da4ade5d5a6800b",
"text": "In this communication, a triangular-shaped dielectric antenna fed by substrate-integrated waveguide (SIW) is proposed and researched. The effect of the extended substrate's length on performance of the proposed antenna is first researched. In order to reduce sidelobe level (SLL) at high frequency as well as increase the adjustability of the proposed antenna while maintaining its planar structure, air vias are periodically perforated into the loaded substrate to modify the permittivity. The variation trend of modifying permittivity with changing performance of the proposed antenna is studied, followed by analyzing function of the transition stage(s) of the extended substrate. Through optimizing the dielectric length and modifying the diameters of air vias to change the permittivity, the proposed antenna with wide operating bandwidth and low SLL can be realized. Measured results indicate that the proposed antenna works from 17.6 to 26.7 GHz, which almost covers the whole K band. Besides, stable end-fire radiation patterns in the whole operating band are obtained. Moreover, at least 8.3-dBi peak gains with low SLL are achieved as well.",
"title": ""
},
{
"docid": "8fe488bcd43d3977dd425c4a182c3973",
"text": "Using public Internet facilities in order to access information and communication technologies (ICT) is the main model of use after the more common models of home use (individual ownership) and access at work or at school/university. Especially in developing countries, public and shared facilities help to create desperately needed access and are a main strategy in several Internet access programs. In the context of public access, cybercafes play an important role as the most common Internet access model, especially in the urban areas of India. It is often argued that cybercafes could help bridge the digital divide, as they provide Internet access to people who cannot afford to have Internet connections at their homes or who need help in order to make use of ICT. The following article will take this assumption as a starting point and will present findings from empirical research on cybercafes in urban India. The research was conducted in order to explore the problems and potential of cybercafes as development tools for different urban communities. In order to examine these relationships, the reach of cybercafes, the users of cybercafes and the usage patterns have been examined. This study is part of a doctoral thesis and the following article presents some of the findings. The article has to be seen as a preliminary report on ongoing research, and it presents some of the data collected to date in order to help build understanding concerning this complex access model and its importance for urban India.",
"title": ""
},
{
"docid": "b2e86367a75fc2f11d5fb37353efa8a4",
"text": "It is a well known fact that user-chosen passwords are somewhat predictable: by using tools such as dictionaries or probabilistic models, attackers and password recovery tools can drastically reduce the number of attempts needed to guess a password. Quite surprisingly, however, existing literature does not provide a satisfying answer to the following question: given a number of guesses, what is the probability that a state-of-the-art attacker will be able to break a password? To answer the former question, we compare and evaluate the effectiveness of currently known attacks using various datasets of known passwords. We find that a \"diminishing returns\" principle applies: in the absence of an enforced password strength policy, weak passwords are common; on the other hand, as the attack goes on, the probability that a guess will succeed decreases by orders of magnitude. Even extremely powerful attackers won't be able to guess a substantial percentage of the passwords. The result of this work will help in evaluating the security of authentication means based on user- chosen passwords, and our methodology for estimating password strength can be used as a basis for creating more effective proactive password checkers for users and security auditing tools for administrators.",
"title": ""
},
{
"docid": "4a1263b1cc76aed4913e258b5a145927",
"text": "Numerous applications in computer vision and machine learning rely on representations of data that are compact, discriminative, and robust while satisfying several desirable invariances. One such recently successful representation is offered by symmetric positive definite (SPD) matrices. However, the modeling power of SPD matrices comes at a price: rather than a flat Euclidean view, SPD matrices are more naturally viewed through curved geometry (Riemannian or otherwise) which often complicates matters. We focus on models and algorithms that rely on the geometry of SPD matrices, and make our discussion concrete by casting it in terms of covariance descriptors for images. We summarize various commonly used distance metrics on SPD matrices, before highlighting formulations and algorithms for solving sparse coding and dictionary learning problems involving SPD data. Through empirical results, we showcase the benefits of mathematical models that exploit the curved geometry of SPD data across a diverse set of computer vision applications.",
"title": ""
},
{
"docid": "89c7754a85459768c7aa53309821c58e",
"text": "Recent developments in cryptography and, in particular in Fully Homomorphic Encryption (FHE), have allowed for the development of new privacy preserving machine learning schemes. In this paper, we show how these schemes can be applied to the automatic assessment of speech affected by medical conditions, allowing for patient privacy in diagnosis and monitoring scenarios. More specifically, we present results for the assessment of the degree of Parkinsons Disease, the detection of a Cold, and both the detection and assessment of the degree of Depression. To this end, we use a neural network in which all operations are performed in an FHE context. This implies replacing the activation functions by linear and second degree polynomials, as only additions and multiplications are viable. Furthermore, to guarantee that the inputs of these activation functions fall within the convergence interval of the approximation, a batch normalization layer is introduced before each activation function. After training the network with unencrypted data, the resulting model is then employed in an encrypted version of the network, to produce encrypted predictions. Our tests show that the use of this framework yields results with little to no performance degradation, in comparison to the baselines produced for the same datasets.",
"title": ""
},
{
"docid": "72cff051b5d2bcd8eaf41b6e9ae9eca9",
"text": "We propose a new method for detecting patterns of anomalies in categorical datasets. We assume that anomalies are generated by some underlying process which affects only a particular subset of the data. Our method consists of two steps: we first use a \"local anomaly detector\" to identify individual records with anomalous attribute values, and then detect patterns where the number of anomalous records is higher than expected. Given the set of anomalies flagged by the local anomaly detector, we search over all subsets of the data defined by any set of fixed values of a subset of the attributes, in order to detect self-similar patterns of anomalies. We wish to detect any such subset of the test data which displays a significant increase in anomalous activity as compared to the normal behavior of the system (as indicated by the training data). We perform significance testing to determine if the number of anomalies in any subset of the test data is significantly higher than expected, and propose an efficient algorithm to perform this test over all such subsets of the data. We show that this algorithm is able to accurately detect anomalous patterns in real-world hospital, container shipping and network intrusion data.",
"title": ""
},
{
"docid": "40f32d675f581230ca70fa2ba9389eb6",
"text": "We depend on exposure to light to guide us, inform us about the outside world, and regulate the biological rhythms in our bodies. We think about turning lights on to improve our lives; however, for some people, exposure to light creates pain and distress that can overwhelm their desire to see. Photophobia is ocular or headache pain caused by normal or dim light. People are symptomatic when irradiance levels inducing pain fall into a range needed for functionality and productivity, making photoallodynia a more accurate term. “Dazzle” is a momentary and normal aversion response to bright lights that subsides within seconds, but photoallodynia only subsides when light exposure is reduced. Milder degrees of sensitivity may manifest as greater perceived comfort in dim illumination. In severe cases, the pain is so debilitating that people are physically and socially isolated into darkness. The suffering and loss of function associated with photoallodynia can be devastating, but it is underappreciated in clinical assessment, treatment, and basic and clinical research. Transient photoallodynia generally improves when the underlying condition resolves, as in association with ocular inflammation, dry eye syndrome and laser-assisted in situ keratomileusis surgery. Migraine-associated light sensitivity can be severe during migraine or mild (and non-clinical) during the interictal period. With so many causes of photoallodynia, a singular underlying mechanism is unlikely, although different etiologies likely have shared and unique components and pathways. Photoallodynia may originate by alteration of a trigeminal nociceptive pathway or possibly through direct retinal projections to higher brain regions involved in pain perception, including but not limited to the periaqueductal gray, the anterior cingulate and somatorsensory cortices, which are collectively termed the “pain matrix.” However, persistent photoallodynia, occurring in a number of ocular and central brain causes, can be remarkably resistant to therapy. The initial light detection that triggers a pain response likely arises through interaction of cone photoreceptors (color and acuity), rod photoreceptors (low light vision), and intrinsically photosensitive retinal ganglion cells (ipRGCs, pupil light reflex and circadian photoentrainment). We can gain clues as to these interactions by examining retinal diseases that cause – or do not cause – photoallodynia.",
"title": ""
},
{
"docid": "350dc562863b8702208bfb41c6ceda6a",
"text": "THE use of formal devices for assessing function is becoming standard in agencies serving the elderly. In the Gerontological Society's recent contract study on functional assessment (Howell, 1968), a large assortment of rating scales, checklists, and other techniques in use in applied settings was easily assembled. The present state of the trade seems to be one in which each investigator or practitioner feels an inner compusion to make his own scale and to cry that other existent scales cannot possibly fit his own setting. The authors join this company in presenting two scales first standardized on their own population (Lawton, 1969). They take some comfort, however, in the fact that one scale, the Physical Self-Maintenance Scale (PSMS), is largely a scale developed and used by other investigators (Lowenthal, 1964), which was adapted for use in our own institution. The second of the scales, the Instrumental Activities of Daily Living Scale (IADL), taps a level of functioning heretofore inadequately represented in attempts to assess everyday functional competence. Both of the scales have been tested further for their usefulness in a variety of types of institutions and other facilities serving community-resident older people. Before describing in detail the behavior measured by these two scales, we shall briefly describe the schema of competence into which these behaviors fit (Lawton, 1969). Human behavior is viewed as varying in the degree of complexity required for functioning in a variety of tasks. The lowest level is called life maintenance, followed by the successively more complex levels of func-",
"title": ""
},
{
"docid": "07bb0aec18894ae389eea9e2756443f8",
"text": "Generative Adversarial Networks (GANs) and their extensions have carved open many exciting ways to tackle well known and challenging medical image analysis problems such as medical image denoising, reconstruction, segmentation, data simulation, detection or classification. Furthermore, their ability to synthesize images at unprecedented levels of realism also gives hope that the chronic scarcity of labeled data in the medical field can be resolved with the help of these generative models. In this review paper, a broad overview of recent literature on GANs for medical applications is given, the shortcomings and opportunities of the proposed methods are thoroughly discussed and potential future work is elaborated. A total of 63 papers published until end of July 2018 are reviewed. For quick access, the papers and important details such as the underlying method, datasets and performance are summarized in tables.",
"title": ""
},
{
"docid": "427ebc0500e91e842873c4690cdacf79",
"text": "Bounding volume hierarchy (BVH) has been widely adopted as the acceleration structure in broad-phase collision detection. Previous state-of-the-art BVH-based collision detection approaches exploited the spatio-temporal coherence of simulations by maintaining a bounding volume test tree (BVTT) front. A major drawback of these algorithms is that large deformations in the scenes decrease culling efficiency and slow down collision queries. Moreover, for front-based methods, the inefficient caching on GPU caused by the arbitrary layout of BVH and BVTT front nodes becomes a critical performance issue. We present a fast and robust BVH-based collision detection scheme on GPU that addresses the above problems by ordering and restructuring BVHs and BVTT fronts. Our techniques are based on the use of histogram sort and an auxiliary structure BVTT front log, through which we analyze the dynamic status of BVTT front and BVH quality. Our approach efficiently handles interand intra-object collisions and performs especially well in simulations where there is considerable spatio-temporal coherence. The benchmark results demonstrate that our approach is significantly faster than the previous BVH-based method, and also outperforms other state-of-the-art spatial subdivision schemes in terms of speed. CCS Concepts •Computing methodologies → Collision detection; Physical simulation;",
"title": ""
},
{
"docid": "823d77bc2d761810467c8eea87c2dd31",
"text": "This paper proposes a novel hybrid method to robustly and accurately localize texts in natural scene images. A text region detector is designed to generate a text confidence map, based on which text components can be segmented by local binarization approach. A Conditional Random Field (CRF) model, considering the unary component property as well as binary neighboring component relationship, is then presented to label components as \"text\" or \"non-text\". Last, text components are grouped into text lines with an energy minimization approach. Experimental results show that the proposed method gives promising performance comparing with the existing methods on ICDAR 2003 competition dataset.",
"title": ""
},
{
"docid": "0b9438fbc4cb25452f35fb1b79c7bb02",
"text": "Over the summer of 2000, the artist Paul Pfeiffer, with his collaborators John Letourneau and Lawrence Chua, videotaped a flock of chickens on a farm in upstate New York. Using three still cameras, they followed the birds lives twentyfour hours a day: beginning with incubated eggs purchased from a local supplier, through hatching at around seventeen days, to the flocks move to its outdoor pen, and ending when the chickens reached adulthood, on the seventy-fifth day.1 From April 15 to June 28, 2001, Paul Pfeiffers Orpheus Descendingthe work that resulted from this footagewas simultaneously shown on two of the information plasma screens and video monitors found throughout the public thoroughfares of the World Trade Center and the World Financial Center complex. The first, a PATHVISION information monitor wedged between a Hudson newsstand and a Quick Card machine [Fig. 1], was located in the mezzanine defined by a bank of nineteen escalators and the New Jersey PATH train turnstiles [Fig. 2]. The second was a plasma screen that placed the video between directional signage and advertisements promoting local businesses and cultural events on the North Bridge, a glass-enclosed pedestrian overpass spanning the World Financial Center and the World Trade Center [Fig. 3].",
"title": ""
},
{
"docid": "693751cc1d963c63498d56012fe3f8b6",
"text": "Automatic license plate recognition (LPR) plays an important role in numerous applications and a number of techniques have been proposed. However, most of them worked under restricted conditions, such as fixed illumination, limited vehicle speed, designated routes, and stationary backgrounds. In this study, as few constraints as possible on the working environment are considered. The proposed LPR technique consists of two main modules: a license plate locating module and a license number identification module. The former characterized by fuzzy disciplines attempts to extract license plates from an input image, while the latter conceptualized in terms of neural subjects aims to identify the number present in a license plate. Experiments have been conducted for the respective modules. In the experiment on locating license plates, 1088 images taken from various scenes and under different conditions were employed. Of which, 23 images have been failed to locate the license plates present in the images; the license plate location rate of success is 97.9%. In the experiment on identifying license number, 1065 images, from which license plates have been successfully located, were used. Of which, 47 images have been failed to identify the numbers of the license plates located in the images; the identification rate of success is 95.6%. Combining the above two rates, the overall rate of success for our LPR algorithm is 93.7%.",
"title": ""
},
{
"docid": "80ff6fcad8465011dec1b2271b5fe5a8",
"text": "In face of high partial and complete disk failure rates and untimely system crashes, the executions of low-priority background tasks become increasingly frequent in large-scale data centers. However, the existing algorithms are all reactive optimizations and only exploit the temporal locality of workloads to reduce the user I/O requests during the low-priority background tasks. To address the problem, this paper proposes Intelligent Data Outsourcing (IDO), a zone-based and proactive data migration optimization, to significantly improve the efficiency of the low-priority background tasks. The main idea of IDO is to proactively identify the hot data zones of RAID-structured storage systems in the normal operational state. By leveraging the prediction tools to identify the upcoming events, IDO proactively migrates the data blocks belonging to the hot data zones on the degraded device to a surrogate RAID set in the large-scale data centers. Upon a disk failure or crash reboot, most user I/O requests addressed to the degraded RAID set can be serviced directly by the surrogate RAID set rather than the much slower degraded RAID set. Consequently, the performance of the background tasks and user I/O performance during the background tasks are improved simultaneously. Our lightweight prototype implementation of IDO and extensive trace-driven experiments on two case studies demonstrate that, compared with the existing state-of-the-art approaches, IDO effectively improves the performance of the low-priority background tasks. Moreover, IDO is portable and can be easily incorporated into any existing algorithms for RAID-structured storage systems.",
"title": ""
},
{
"docid": "8e1e3f2ac677adb783f1b88f4ca93d4e",
"text": "Traditionally, the most commonly used source of bibliometric data is Thomson ISI Web of Knowledge, in particular the (Social) Science Citation Index and the Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). This paper presents an alternative source of data (Google Scholar, GS) as well as three alternatives to the JIF to assess journal impact (the h-index, g-index and the number of citations per paper). Because of its broader range of data sources, the use of GS generally results in more comprehensive citation coverage in the area of Management and International Business. The use of GS particularly benefits academics publishing in sources that are not (well) covered in ISI. Among these are: books, conference papers, non-US journals, and in general journals in the field of Strategy and International Business. The three alternative GS-based metrics showed strong correlations with the traditional JIF. As such, they provide academics and universities committed to JIFs with a good alternative for journals that are not ISI-indexed. However, we argue that these metrics provide additional advantages over the JIF and that the free availability of GS allows for a democratization of citation analysis as it provides every academic access to citation data regardless of their institution’s financial means.",
"title": ""
},
{
"docid": "a63cc19137ead27acf5530c0bdb924f5",
"text": "We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications.",
"title": ""
},
{
"docid": "0ac38422284d164095882a3f3dd74e4f",
"text": "This paper introduces the status of social recommender system research in general and collaborative filtering in particular. For the collaborative filtering, the paper shows the basic principles and formulas of two basic approaches, the user-based collaborative filtering and the item-based collaborative filtering. For the user or item similarity calculation, the paper compares the differences between the cosine-based similarity, the revised cosine-based similarity and the Pearson-based similarity. The paper also analyzes the three main challenges of the collaborative filtering algorithm and shows the related works facing the challenges. To solve the Cold Start problem and reduce the cost of best neighborhood calculation, the paper provides several solutions. At last it discusses the future of the collaborative filtering algorithm in social recommender system.",
"title": ""
},
{
"docid": "0b56a411692b4c0c051ef318d996511f",
"text": "The pathophysiology of perinatal brain injury is multifactorial and involves hypoxia-ischemia (HI) and inflammation. N-methyl-d-aspartate receptors (NMDAR) are present on neurons and glia in immature rodents, and NMDAR antagonists are protective in HI models. To enhance clinical translation of rodent data, we examined protein expression of 6 NMDAR subunits in postmortem human brains without injury from 20 postconceptional weeks through adulthood and in cases of periventricular leukomalacia (PVL). We hypothesized that the developing brain is intrinsically vulnerable to excitotoxicity via maturation-specific NMDAR levels and subunit composition. In normal white matter, NR1 and NR2B levels were highest in the preterm period compared with adult. In gray matter, NR2A and NR3A expression were highest near term. NR2A was significantly elevated in PVL white matter, with reduced NR1 and NR3A in gray matter compared with uninjured controls. These data suggest increased NMDAR-mediated vulnerability during early brain development due to an overall upregulation of individual receptors subunits, in particular, the presence of highly calcium permeable NR2B-containing and magnesium-insensitive NR3A NMDARs. These data improve understanding of molecular diversity and heterogeneity of NMDAR subunit expression in human brain development and supports an intrinsic prenatal vulnerability to glutamate-mediated injury; validating NMDAR subunit-specific targeted therapies for PVL.",
"title": ""
},
{
"docid": "2bb988a1d2b3269e7ebe989a65f44487",
"text": "The future connectivity landscape and, notably, the 5G wireless systems will feature Ultra-Reliable Low Latency Communication (URLLC). The coupling of high reliability and low latency requirements in URLLC use cases makes the wireless access design very challenging, in terms of both the protocol design and of the associated transmission techniques. This paper aims to provide a broad perspective on the fundamental tradeoffs in URLLC as well as the principles used in building access protocols. Two specific technologies are considered in the context of URLLC: massive MIMO and multi-connectivity, also termed interface diversity. The paper also touches upon the important question of the proper statistical methodology for designing and assessing extremely high reliability levels.",
"title": ""
}
] |
scidocsrr
|
d4fcfdf7e7815fe964a2b2aebfdf9733
|
iBump: Smartphone application to detect car accidents
|
[
{
"docid": "945f94bd0022e14c1726cb36dd5deefc",
"text": "This paper introduces a mobile human airbag system designed for fall protection for the elderly. A Micro Inertial Measurement Unit ( muIMU) of 56 mm times 23 mm times 15 mm in size is built. This unit consists of three dimensional MEMS accelerometers, gyroscopes, a Bluetooth module and a Micro Controller Unit (MCU). It records human motion information, and, through the analysis of falls using a high-speed camera, a lateral fall can be determined by gyro threshold. A human motion database that includes falls and other normal motions (walking, running, etc.) is set up. Using a support vector machine (SVM) training process, we can classify falls and other normal motions successfully with a SVM filter. Based on the SVM filter, an embedded digital signal processing (DSP) system is developed for real-time fall detection. In addition, a smart mechanical airbag deployment system is finalized. The response time for the mechanical trigger is 0.133 s, which allows enough time for compressed air to be released before a person falls to the ground. The integrated system is tested and the feasibility of the airbag system for real-time fall protection is demonstrated.",
"title": ""
}
] |
[
{
"docid": "0de1e9759b4c088a15d84a108ba21c33",
"text": "MillWheel is a framework for building low-latency data-processing applications that is widely used at Google. Users specify a directed computation graph and application code for individual nodes, and the system manages persistent state and the continuous flow of records, all within the envelope of the framework’s fault-tolerance guarantees. This paper describes MillWheel’s programming model as well as its implementation. The case study of a continuous anomaly detector in use at Google serves to motivate how many of MillWheel’s features are used. MillWheel’s programming model provides a notion of logical time, making it simple to write time-based aggregations. MillWheel was designed from the outset with fault tolerance and scalability in mind. In practice, we find that MillWheel’s unique combination of scalability, fault tolerance, and a versatile programming model lends itself to a wide variety of problems at Google.",
"title": ""
},
{
"docid": "24b970bdf722a0b036acf9c81a846598",
"text": "We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB).",
"title": ""
},
{
"docid": "f6b974c04dceaea3176a0092304bab72",
"text": "Information-Centric Networking (ICN) has recently emerged as a promising Future Internet architecture that aims to cope with the increasing demand for highly scalable and efficient distribution of content. Moving away from the Internet communication model based in addressable hosts, ICN leverages in-network storage for caching, multi-party communication through replication, and interaction models that decouple senders and receivers. This novel networking approach has the potential to outperform IP in several dimensions, besides just content dissemination. Concretely, the rise of the Internet of Things (IoT), with its rich set of challenges and requirements placed over the current Internet, provide an interesting ground for showcasing the contribution and performance of ICN mechanisms. This work analyses how the in-network caching mechanisms associated to ICN, particularly those implemented in the Content-Centric Networking (CCN) architecture, contribute in IoT environments, particularly in terms of energy consumption and bandwidth usage. A simulation comparing IP and the CCN architecture (an instantiation of ICN) in IoT environments demonstrated that CCN leads to a considerable reduction of the energy consumed by the information producers and to a reduction of bandwidth requirements, as well as highlighted the flexibility for adapting current ICN caching mechanisms to target specific requirements of IoT.",
"title": ""
},
{
"docid": "d4a248a947e753f32f95f6b1328b7213",
"text": "A new paradigm of recommendation is emerging in intelligent personal assistants such as Apple's Siri, Google Now, and Microsoft Cortana, which recommends \"the right information at the right time\" and proactively helps you \"get things done\". This type of recommendation requires precisely tracking users' contemporaneous intent, i.e., what type of information (e.g., weather, stock prices) users currently intend to know, and what tasks (e.g., playing music, getting taxis) they intend to do. Users' intent is closely related to context, which includes both external environments such as time and location, and users' internal activities that can be sensed by personal assistants. The relationship between context and intent exhibits complicated co-occurring and sequential correlation, and contextual signals are also heterogeneous and sparse, which makes modeling the context intent relationship a challenging task. To solve the intent tracking problem, we propose the Kalman filter regularized PARAFAC2 (KP2) nowcasting model, which compactly represents the structure and co-movement of context and intent. The KP2 model utilizes collaborative capabilities among users, and learns for each user a personalized dynamic system that enables efficient nowcasting of users' intent. Extensive experiments using real-world data sets from a commercial personal assistant show that the KP2 model significantly outperforms various methods, and provides inspiring implications for deploying large-scale proactive recommendation systems in personal assistants.",
"title": ""
},
{
"docid": "a84ee8a0f06e07abd53605bf5b542519",
"text": "Abeta peptide accumulation is thought to be the primary event in the pathogenesis of Alzheimer's disease (AD), with downstream neurotoxic effects including the hyperphosphorylation of tau protein. Glycogen synthase kinase-3 (GSK-3) is increasingly implicated as playing a pivotal role in this amyloid cascade. We have developed an adult-onset Drosophila model of AD, using an inducible gene expression system to express Arctic mutant Abeta42 specifically in adult neurons, to avoid developmental effects. Abeta42 accumulated with age in these flies and they displayed increased mortality together with progressive neuronal dysfunction, but in the apparent absence of neuronal loss. This fly model can thus be used to examine the role of events during adulthood and early AD aetiology. Expression of Abeta42 in adult neurons increased GSK-3 activity, and inhibition of GSK-3 (either genetically or pharmacologically by lithium treatment) rescued Abeta42 toxicity. Abeta42 pathogenesis was also reduced by removal of endogenous fly tau; but, within the limits of detection of available methods, tau phosphorylation did not appear to be altered in flies expressing Abeta42. The GSK-3-mediated effects on Abeta42 toxicity appear to be at least in part mediated by tau-independent mechanisms, because the protective effect of lithium alone was greater than that of the removal of tau alone. Finally, Abeta42 levels were reduced upon GSK-3 inhibition, pointing to a direct role of GSK-3 in the regulation of Abeta42 peptide level, in the absence of APP processing. Our study points to the need both to identify the mechanisms by which GSK-3 modulates Abeta42 levels in the fly and to determine if similar mechanisms are present in mammals, and it supports the potential therapeutic use of GSK-3 inhibitors in AD.",
"title": ""
},
{
"docid": "1f677c07ba42617ac590e6e0a5cdfeab",
"text": "Network Functions Virtualization (NFV) is an emerging initiative to overcome increasing operational and capital costs faced by network operators due to the need to physically locate network functions in specific hardware appliances. In NFV, standard IT virtualization evolves to consolidate network functions onto high volume servers, switches and storage that can be located anywhere in the network. Services are built by chaining a set of Virtual Network Functions (VNFs) deployed on commodity hardware. The implementation of NFV leads to the challenge: How several network services (VNF chains) are optimally orchestrated and allocated on the substrate network infrastructure? In this paper, we address this problem and propose CoordVNF, a heuristic method to coordinate the composition of VNF chains and their embedding into the substrate network. CoordVNF aims to minimize bandwidth utilization while computing results within reasonable runtime.",
"title": ""
},
{
"docid": "e48e208c01fb6f8918aec8aa68e2ad86",
"text": "We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts moving object region by a mixture-of-Gaussians-based background subtraction method. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize the text regions from the object ROI, we propose a novel text localization algorithm by learning gradient features of stroke orientations and distributions of edge pixels in an Adaboost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition software. The recognized text codes are output to blind users in speech. Performance of the proposed text localization algorithm is quantitatively evaluated on ICDAR-2003 and ICDAR-2011 Robust Reading Datasets. Experimental results demonstrate that our algorithm achieves the state of the arts. The proof-of-concept prototype is also evaluated on a dataset collected using ten blind persons to evaluate the effectiveness of the system's hardware. We explore user interface issues and assess robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.",
"title": ""
},
{
"docid": "1d6623c3d70030b27c4f1b583c0d9aa3",
"text": "This paper proposes a new image enhancement technique known as Average Intensity Replacement based on Adaptive Histogram Equalization (AIR-AHE) for FLAIR image based on intensities and contrast mapping techniques. The proposed algorithm consists of partial contrast stretching, contrast limiting enhancement, window sliding neighborhood operation and new pixel centroid replacement. The fluid attenuated inversion recovery (FLAIR) sequences of MRI images which are used for segmentation have low contrast. Therefore, contrast stretching is used to improve the quality of the image. After improving the quality of image, the regions of high intensity are determined to represent potential WMH areas. The result shows that the image has a moderate enhancement on the WMH region which is significant to the image contrast enhancement. With complete brightness preservation, the proposed method gives a relatively natural brightness improvement on the WMH of the periventricular region.",
"title": ""
},
{
"docid": "8b7aab188ac4b6e4e777dfd1c670fab3",
"text": "In this paper, we have designed a newly shaped narrowband microstrip antenna operating at nearly 2.45 GHz based on transmission-line model. We have created a reversed `Arrow' shaped slot at the edge of opposite side of microstrip line feed to improve return loss and minimize VSWR, which are required for better impedance matching. After simulating the design, we have got higher return loss (approximately -41 dB) and lower VSWR (approximately 1.02:1) at 2.442 GHz. The radiation pattern of the antenna is unidirectional, which is suitable for both fixed RFID tag and reader. The gain of this antenna is 9.67 dB. The design has been simulated in CST Microwave Studio 2011.",
"title": ""
},
{
"docid": "18851774e598f4cb66dbc770abe4a83f",
"text": "In this paper, we propose a new approach for domain generalization by exploiting the low-rank structure from multiple latent source domains. Motivated by the recent work on exemplar-SVMs, we aim to train a set of exemplar classifiers with each classifier learnt by using only one positive training sample and all negative training samples. While positive samples may come from multiple latent domains, for the positive samples within the same latent domain, their likelihoods from each exemplar classifier are expected to be similar to each other. Based on this assumption, we formulate a new optimization problem by introducing the nuclear-norm based regularizer on the likelihood matrix to the objective function of exemplar-SVMs. We further extend Domain Adaptation Machine (DAM) to learn an optimal target classifier for domain adaptation. The comprehensive experiments for object recognition and action recognition demonstrate the effectiveness of our approach for domain generalization and domain adaptation.",
"title": ""
},
{
"docid": "f31f45176e89163d27b065a52b429973",
"text": "Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining. However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state of the art neural networks never encounter any significant obstacles.",
"title": ""
},
{
"docid": "9ceb5c36d0b790eccc80968fb496b9f0",
"text": "This article discusses different modelling approaches considered to address problems related to ambulances' location and relocation as well as dispatching decisions arising in Emergency Medical Services (EMS) management. Over the past 10 years, a considerable amount of research has been devoted to this field and specifically, to the development of strategies that more explicitly take into account the uncertainty and dynamism inherent to EMS. This article thus reviews the most recent advances related to EMS management from an operational level standpoint. It briefly reviews early works on static ambulances' location problems and presents new modelling and solution strategies proposed to address it. However, it intends to concentrate on relocation strategies and dispatching rules, and discuss the interaction between these two types of decisions. Finally, conclusions and perspectives are presented.",
"title": ""
},
{
"docid": "a355cf6fb2ae39038e5ac29e0a4fbab2",
"text": "Manga (Japanese comics) are popular worldwide. However, current e-manga archives offer very limited search support, i.e., keyword-based search by title or author. To make the manga search experience more intuitive, efficient, and enjoyable, we propose a manga-specific image retrieval system. The proposed system consists of efficient margin labeling, edge orientation histogram feature description with screen tone removal, and approximate nearest-neighbor search using product quantization. For querying, the system provides a sketch-based interface. Based on the interface, two interactive reranking schemes are presented: relevance feedback and query retouch. For evaluation, we built a novel dataset of manga images, Manga109, which consists of 109 comic books of 21,142 pages drawn by professional manga artists. To the best of our knowledge, Manga109 is currently the biggest dataset of manga images available for research. Experimental results showed that the proposed framework is efficient and scalable (70 ms from 21,142 pages using a single computer with 204 MB RAM).",
"title": ""
},
{
"docid": "82d3217331a70ead8ec3064b663de451",
"text": "The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer’s output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.",
"title": ""
},
{
"docid": "173a3ae90795c129016e3b126d719cb8",
"text": "While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal. Likewise, we demonstrate that the common practice of using very few epochs during the main NAS and much larger numbers of epochs during a post-processing step is inefficient due to little correlation in the relative rankings for these two training regimes. To combat both of these problems, we propose to use a recent combination of Bayesian optimization and Hyperband for efficient joint neural architecture and hyperparameter search.",
"title": ""
},
{
"docid": "3b8f2694d8b6f7177efa8716d72b9129",
"text": "Behara, B and Jacobson, BH. Acute effects of deep tissue foam rolling and dynamic stretching on muscular strength, power, and flexibility in Division I linemen. J Strength Cond Res 31(4): 888-892, 2017-A recent strategy to increase sports performance is a self-massage technique called myofascial release using foam rollers. Myofascial restrictions are believed to be brought on by injuries, muscle imbalances, overrecruitment, and/or inflammation, all of which can decrease sports performance. The purpose of this study was to compare the acute effects of a single-bout of lower extremity self-myofascial release using a custom deep tissue roller (DTR) and a dynamic stretch protocol. Subjects consisted of NCAA Division 1 offensive linemen (n = 14) at a Midwestern university. All players were briefed on the objectives of the study and subsequently signed an approved IRB consent document. A randomized crossover design was used to assess each dependent variable (vertical jump [VJ] power and velocity, knee isometric torque, and hip range of motion was assessed before and after: [a] no treatment, [b] deep tissue foam rolling, and [c] dynamic stretching). Results of repeated-measures analysis of variance yielded no pretest to posttest significant differences (p > 0.05) among the groups for VJ peak power (p = 0.45), VJ average power (p = 0.16), VJ peak velocity (p = 0.25), VJ average velocity (p = 0.23), peak knee extension torque (p = 0.63), average knee extension torque (p = 0.11), peak knee flexion torque (p = 0.63), or average knee flexion torque (p = 0.22). However, hip flexibility was statistically significant when tested after both dynamic stretching and foam rolling (p = 0.0001). Although no changes in strength or power was evident, increased flexibility after DTR may be used interchangeably with traditional stretching exercises.",
"title": ""
},
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
},
{
"docid": "c66fc0dbd8774fdb5fea3990985e65d7",
"text": "Since 1985 various evolutionary approaches to multiobjective optimization have been developed, capable of searching for multiple solutions concurrently in a single run. But the few comparative studies of different methods available to date are mostly qualitative and restricted to two approaches. In this paper an extensive, quantitative comparison is presented, applying four multiobjective evolutionary algorithms to an extended ~0/1 knapsack problem. 1 I n t r o d u c t i o n Many real-world problems involve simultaneous optimization of several incommensurable and often competing objectives. Usually, there is no single optimal solution, but rather a set of alternative solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered. They are known as Pareto-optimal solutions. Mathematically, the concept of Pareto-optimality can be defined as follows: Let us consider, without loss of generality, a multiobjective maximization problem with m parameters (decision variables) and n objectives: Maximize y = f (x ) = ( f l (x ) , f 2 ( x ) , . . . , f,~(x)) (1) where x = ( x l , x 2 , . . . , x m ) e X and y = ( y l , y 2 , . . . , y ~ ) E Y are tuple. A decision vector a E X is said to dominate a decision vector b E X (also written as a >-b) iff V i e { 1 , 2 , . . . , n } : l ~ ( a ) > _ f ~ ( b ) A ~ j e { 1 , 2 , . . . , n } : f j ( a ) > f j ( b ) (2) Additionally, in this study we say a covers b iff a ~b or a = b. All decision vectors which are not dominated by any other decision vector are called nondominated or Pareto-optimal. Often, there is a special interest in finding or approximating the Paretooptimal set, mainly to gain deeper insight into the problem and knowledge about alternate solutions, respectively. Evolutionary algorithms (EAs) seem to be especially suited for this task, because they process a set of solutions in parallel, eventually exploiting similarities of solutions by crossover. Some researcher suggest that multiobjective search and optimization might be a problem area where EAs do better than other blind search strategies [1][12]. Since the mid-eighties various multiob]ective EAs have been developed, capable of searching for multiple Pareto-optimal solutions concurrently in a single",
"title": ""
},
{
"docid": "170873ad959b33eea76e9f542c5dbff6",
"text": "This paper reports on a development framework, two prototypes, and a comparative study in the area of multi-tag Near-Field Communication (NFC) interaction. By combining NFC with static and dynamic displays, such as posters and projections, services are made more visible and allow users to interact with them easily by interacting directly with the display with their phone. In this paper, we explore such interactions, in particular, the combination of the phone display and large NFC displays. We also compare static displays and dynamic displays, and present a list of deciding factors for a particular deployment situation. We discuss one prototype for each display type and developed a corresponding framework which can be used to accelerate the development of such prototypes whilst supporting a high level of versatility. The findings of a controlled comparative study indicate, among other things, that all participants preferred the dynamic display, although the static display has advantages, e.g. with respect to privacy and portability.",
"title": ""
},
{
"docid": "90b502cb72488529ec0d389ca99b57b8",
"text": "The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.",
"title": ""
}
] |
scidocsrr
|
98601a74f18d97a27bf7f9688cbbcfc8
|
Few-Shot Object Recognition from Machine-Labeled Web Images
|
[
{
"docid": "c7e3fc9562a02818bba80d250241511d",
"text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.",
"title": ""
},
{
"docid": "4f58d355a60eb61b1c2ee71a457cf5fe",
"text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"title": ""
}
] |
[
{
"docid": "0cfda368edafe21e538f2c1d7ed75056",
"text": "This paper presents high performance speaker identification and verification systems based on Gaussian mixture speaker models: robust, statistically based representations of speaker identity. The identification system is a maximum likelihood classifier and the verification system is a likelihood ratio hypothesis tester using background speaker normalization. The systems are evaluated on four publically available speech databases: TIMIT, NTIMIT, Switchboard and YOHO. The different levels of degradations and variabilities found in these databases allow the examination of system performance for different task domains. Constraints on the speech range from vocabulary-dependent to extemporaneous and speech quality varies from near-ideal, clean speech to noisy, telephone speech. Closed set identification accuracies on the 630 speaker TIMIT and NTIMIT databases were 99.5% and 60.7%, respectively. On a 113 speaker population from the Switchboard database the identification accuracy was 82.8%. Global threshold equal error rates of 0.24%, 7.19%, 5.15% and 0.51% were obtained in verification experiments on the TIMIT, NTIMIT, Switchboard and YOHO databases, respectively.",
"title": ""
},
{
"docid": "07657456a2328be11dfaf706b5728ddc",
"text": "Knowledge of wheelchair kinematics during a match is prerequisite for performance improvement in wheelchair basketball. Unfortunately, no measurement system providing key kinematic outcomes proved to be reliable in competition. In this study, the reliability of estimated wheelchair kinematics based on a three inertial measurement unit (IMU) configuration was assessed in wheelchair basketball match-like conditions. Twenty participants performed a series of tests reflecting different motion aspects of wheelchair basketball. During the tests wheelchair kinematics were simultaneously measured using IMUs on wheels and frame, and a 24-camera optical motion analysis system serving as gold standard. Results showed only small deviations of the IMU method compared to the gold standard, once a newly developed skid correction algorithm was applied. Calculated Root Mean Square Errors (RMSE) showed good estimates for frame displacement (RMSE≤0.05 m) and speed (RMSE≤0.1m/s), except for three truly vigorous tests. Estimates of frame rotation in the horizontal plane (RMSE<3°) and rotational speed (RMSE<7°/s) were very accurate. Differences in calculated Instantaneous Rotation Centres (IRC) were small, but somewhat larger in tests performed at high speed (RMSE up to 0.19 m). Average test outcomes for linear speed (ICCs>0.90), rotational speed (ICC>0.99) and IRC (ICC> 0.90) showed high correlations between IMU data and gold standard. IMU based estimation of wheelchair kinematics provided reliable results, except for brief moments of wheel skidding in truly vigorous tests. The IMU method is believed to enable prospective research in wheelchair basketball match conditions and contribute to individual support of athletes in everyday sports practice.",
"title": ""
},
{
"docid": "d27490a870706634ab418d96cd1ce9d8",
"text": "Interferometric synthetic aperture radar observations provide a means for obtaining high-resolution digital topographic maps from measurements of amplitude and phase of two complex radar images. The phase of the radar echoes may only be measured modulo 2•; however, the whole phase at each point in the image is needed to obtain elevations. We present here our approach to \"unwrapping\" the 2r• ambiguities in the two-dimensional data set. We find that noise and geometrical radar layover corrupt our measurements locally, and these local errors can propagate to form global phase errors that affect the entire image. We show that the local errors, or residues, can be readily identified and avoided in the global phase estimation. We present a rectified digital topographic map derived from our unwrapped phase values.",
"title": ""
},
{
"docid": "52115901d15b2c0d75748ac6f4cf2851",
"text": "This paper presents the development of the CYBERLEGs Alpha-Prototype prosthesis, a new transfemoral prosthesis incorporating a new variable stiffness ankle actuator based on the MACCEPA architecture, a passive knee with two locking mechanisms, and an energy transfer mechanism that harvests negative work from the knee and delivers it to the ankle to assist pushoff. The CYBERLEGs Alpha-Prosthesis is part of the CYBERLEGs FP7-ICT project, which combines a prosthesis system to replace a lost limb in parallel with an exoskeleton to assist the sound leg, and sensory array to control both systems. The prosthesis attempts to produce a natural level ground walking gait that approximates the joint torques and kinematics of a non-amputee while maintaining compliant joints, which has the potential to decrease impulsive losses, and ultimately reduce the end user energy consumption. This first prototype consists of a passive knee and an active ankle which are energetically coupled to reduce the total power consumption of the device. Here we present simulations of the actuation system of the ankle and the passive behavior of the knee module with and without the energy transfer effects, the mechanical design of the prosthesis, and empirical results from testing of the A preliminary version of this paper was presented at the Wearable Robotics Workshop, Neurotechnix 2013. ∗Corresponding author Email addresses: lflynn@vub.ac.be (Louis Flynn), jgeeroms@vub.ac.be (Joost Geeroms), rjimenez@vub.ac.be (Rene Jimenez-Fabian), bram.vanderborght@vub.ac.be (Bram Vanderborght), n.vitiello@sssup.it (Nicola Vitiello), dlefeber@vub.ac.be (Dirk Lefeber) Preprint submitted to Journal of Robotics and Autonomous Systems November 30, 2014 physical device with amputee subjects.",
"title": ""
},
{
"docid": "c2195ae053d1bbf712c96a442a911e31",
"text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaption methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lay on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. We further combine the anchor subspaces to the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets.",
"title": ""
},
{
"docid": "afde659640388181f3acd71fd43cedb1",
"text": "In this paper, the variable universe adaptive fuzzy controller based on variable gain H∞ regulator (VGH∞R) is designed to stabilize a quadruple inverted pendulum. The VGH∞R is a novel robust gain-scheduling approach. By utilizing VGH∞R technique, a more precise real-time feedback gain matrix, which is changing with states, is obtained. Via the variable gain matrix 10 state variables of quadruple inverted pendulum are transformed into a kind of synthesis error (E) and synthesis rate of change of error (EC) at sampling time. Therefore, the dimension of the multivariable system is reduced and the variable universe adaptive fuzzy controller is built. Experiments illustrate the effectiveness of the proposed control scheme.",
"title": ""
},
{
"docid": "0737e99613b83104bc9390a46fbc4aeb",
"text": "Natural language text exhibits hierarchical structure in a variety of respects. Ideally, we could incorporate our prior knowledge of this hierarchical structure into unsupervised learning algorithms that work on text data. Recent work by Nickel and Kiela (2017) proposed using hyperbolic instead of Euclidean embedding spaces to represent hierarchical data and demonstrated encouraging results when embedding graphs. In this work, we extend their method with a re-parameterization technique that allows us to learn hyperbolic embeddings of arbitrarily parameterized objects. We apply this framework to learn word and sentence embeddings in hyperbolic space in an unsupervised manner from text corpora. The resulting embeddings seem to encode certain intuitive notions of hierarchy, such as wordcontext frequency and phrase constituency. However, the implicit continuous hierarchy in the learned hyperbolic space makes interrogating the model’s learned hierarchies more difficult than for models that learn explicit edges between items. The learned hyperbolic embeddings show improvements over Euclidean embeddings in some – but not all – downstream tasks, suggesting that hierarchical organization is more useful for some tasks than others.",
"title": ""
},
{
"docid": "d390b0e5b1892297af37659fb92c03b5",
"text": "Encouraged by recent waves of successful applications of deep learning, some researchers have demonstrated the effectiveness of applying convolutional neural networks (CNN) to time series classification problems. However, CNN and other traditional methods require the input data to be of the same dimension which prevents its direct application on data of various lengths and multi-channel time series with different sampling rates across channels. Long short-term memory (LSTM), another tool in the deep learning arsenal and with its design nature, is more appropriate for problems involving time series such as speech recognition and language translation. In this paper, we propose a novel model incorporating a sequence-to-sequence model that consists two LSTMs, one encoder and one decoder. The encoder LSTM accepts input time series of arbitrary lengths, extracts information from the raw data and based on which the decoder LSTM constructs fixed length sequences that can be regarded as discriminatory features. For better utilization of the raw data, we also introduce the attention mechanism into our model so that the feature generation process can peek at the raw data and focus its attention on the part of the raw data that is most relevant to the feature under construction. We call our model S2SwA, as the short for Sequence-to-Sequence with Attention. We test S2SwA on both uni-channel and multi-channel time series datasets and show that our model is competitive with the state-of-the-art in real world tasks such as human activity recognition.",
"title": ""
},
{
"docid": "1a3f4ce529e2a876421e17f0333df7a6",
"text": "In this response to Pinker and Jackendoff's critique, we extend our previous framework for discussion of language evolution, clarifying certain distinctions and elaborating on a number of points. In the first half of the paper, we reiterate that profitable research into the biology and evolution of language requires fractionation of \"language\" into component mechanisms and interfaces, a non-trivial endeavor whose results are unlikely to map onto traditional disciplinary boundaries. Our terminological distinction between FLN and FLB is intended to help clarify misunderstandings and aid interdisciplinary rapprochement. By blurring this distinction, Pinker and Jackendoff mischaracterize our hypothesis 3 which concerns only FLN, not \"language\" as a whole. Many of their arguments and examples are thus irrelevant to this hypothesis. Their critique of the minimalist program is for the most part equally irrelevant, because very few of the arguments in our original paper were tied to this program; in an online appendix we detail the deep inaccuracies in their characterization of this program. Concerning evolution, we believe that Pinker and Jackendoff's emphasis on the past adaptive history of the language faculty is misplaced. Such questions are unlikely to be resolved empirically due to a lack of relevant data, and invite speculation rather than research. Preoccupation with the issue has retarded progress in the field by diverting research away from empirical questions, many of which can be addressed with comparative data. Moreover, offering an adaptive hypothesis as an alternative to our hypothesis concerning mechanisms is a logical error, as questions of function are independent of those concerning mechanism. The second half of our paper consists of a detailed response to the specific data discussed by Pinker and Jackendoff. Although many of their examples are irrelevant to our original paper and arguments, we find several areas of substantive disagreement that could be resolved by future empirical research. We conclude that progress in understanding the evolution of language will require much more empirical research, grounded in modern comparative biology, more interdisciplinary collaboration, and much less of the adaptive storytelling and phylogenetic speculation that has traditionally characterized the field.",
"title": ""
},
{
"docid": "03cf94634730b9c5847f297b2162ddd9",
"text": "Music tag words that describe music audio by text have different levels of abstraction. Taking this issue into account, we propose a music classification approach that aggregates multilevel and multi-scale features using pre-trained feature extractors. In particular, the feature extractors are trained in sample-level deep convolutional neural networks using raw waveforms. We show that this approach achieves state-of-the-art results on several music classification datasets.",
"title": ""
},
{
"docid": "dbcdaa0413f31407ffc61708d03a693e",
"text": "There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world—petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet, the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to several terabytes, and perform primarily computeintensive operations. Targeting these users fundamentally changes the way we should build analytics systems. This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware’s architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders of magnitude performance improvement over alternative systems.",
"title": ""
},
{
"docid": "885b3b33fad3dee064f47201ec10f3bb",
"text": "Traceability is the only means to ensure that the source code of a system is consistent with its requirements and that all and only the specified requirements have been implemented by developers. During software maintenance and evolution, requirement traceability links become obsolete because developers do not/cannot devote effort to updating them. Yet, recovering these traceability links later is a daunting and costly task for developers. Consequently, the literature has proposed methods, techniques, and tools to recover these traceability links semi-automatically or automatically. Among the proposed techniques, the literature showed that information retrieval (IR) techniques can automatically recover traceability links between free-text requirements and source code. However, IR techniques lack accuracy (precision and recall). In this paper, we show that mining software repositories and combining mined results with IR techniques can improve the accuracy (precision and recall) of IR techniques and we propose Trustrace, a trust--based traceability recovery approach. We apply Trustrace on four medium-size open-source systems to compare the accuracy of its traceability links with those recovered using state-of-the-art IR techniques from the literature, based on the Vector Space Model and Jensen-Shannon model. The results of Trustrace are up to 22.7 percent more precise and have 7.66 percent better recall values than those of the other techniques, on average. We thus show that mining software repositories and combining the mined data with existing results from IR techniques improves the precision and recall of requirement traceability links.",
"title": ""
},
{
"docid": "e27d949155cef2885a4ab93f4fba18b3",
"text": "Because of its richness and availability, micro-blogging has become an ideal platform for conducting psychological research. In this paper, we proposed to predict active users' personality traits through micro-blogging behaviors. 547 Chinese active users of micro-blogging participated in this study. Their personality traits were measured by the Big Five Inventory, and digital records of micro-blogging behaviors were collected via web crawlers. After extracting 839 micro-blogging behavioral features, we first trained classification models utilizing Support Vector Machine (SVM), differentiating participants with high and low scores on each dimension of the Big Five Inventory [corrected]. The classification accuracy ranged from 84% to 92%. We also built regression models utilizing PaceRegression methods, predicting participants' scores on each dimension of the Big Five Inventory. The Pearson correlation coefficients between predicted scores and actual scores ranged from 0.48 to 0.54. Results indicated that active users' personality traits could be predicted by micro-blogging behaviors.",
"title": ""
},
{
"docid": "c6b5558373f3ba0e9ed7c87e1acb91b7",
"text": "Little attention has been paid so far to physiological signals for emotion recognition compared to audio-visual emotion channels, such as facial expressions or speech. In this paper, we discuss the most important stages of a fully implemented emotion recognition system including data analysis and classification. For collecting physiological signals in different affective states, we used a music induction method which elicits natural emotional reactions from the subject. Four-channel biosensors are used to obtain electromyogram, electrocardiogram, skin conductivity and respiration changes. After calculating a sufficient amount of features from the raw signals, several feature selection/reduction methods are tested to extract a new feature set consisting of the most significant features for improving classification performance. Three well-known classifiers, linear discriminant function, k-nearest neighbour and multilayer perceptron, are then used to perform supervised classification",
"title": ""
},
{
"docid": "7735668d4f8407d9514211d9f5492ce6",
"text": "This revision to the EEG Guidelines is an update incorporating current EEG technology and practice. The role of the EEG in making the determination of brain death is discussed as are suggested technical criteria for making the diagnosis of electrocerebral inactivity.",
"title": ""
},
{
"docid": "9858c1f5045c402213c5ce82c6d732f4",
"text": "Bug triage, deciding what to do with an incoming bug report, is taking up increasing amount of developer resources in large open-source projects. In this paper, we propose to apply machine learning techniques to assist in bug triage by using text categorization to predict the developer that should work on the bug based on the bug’s description. We demonstrate our approach on a collection of 15,859 bug reports from a large open-source project. Our evaluation shows that our prototype, using supervised Bayesian learning, can correctly predict 30% of the report assignments to",
"title": ""
},
{
"docid": "a74aef75f5b1d5bc44da2f6d2c9284cf",
"text": "In this paper, we define irregular bipolar fuzzy graphs and its various classifications. Size of regular bipolar fuzzy graphs is derived. The relation between highly and neighbourly irregular bipolar fuzzy graphs are established. Some basic theorems related to the stated graphs have also been presented.",
"title": ""
},
{
"docid": "2e6a47d8ec4b955992ec344d58984297",
"text": "Businesses increasingly attempt to learn more about their customers, suppliers, and operations by using millions of networked sensors integrated, for example, in mobile phones, cashier systems, automobiles, or weather stations. This development raises the question of how companies manage to cope with these ever-increasing amounts of data, referred to as Big Data. Consequently, the aim of this paper is to identify different Big Data strategies a company may implement and provide a set of organizational contingency factors that influence strategy choice. In order to do so, we reviewed existing literature in the fields of Big Data analytics, data warehousing, and business intelligence and synthesized our findings into a contingency matrix that may support practitioners in choosing a suitable Big Data approach. We find that while every strategy can be beneficial under certain corporate circumstances, the hybrid approach - a combination of traditional relational database structures and MapReduce techniques - is the strategy most often valuable for companies pursuing Big Data analytics.",
"title": ""
},
{
"docid": "379361a68388bda81375f7fb543689cc",
"text": "A novel circular aperture antenna is presented. The proposed nonprotruding antenna is composed of a cylindrical cavity embedding an inverted hat antenna whose profile is defined by three elliptical segments. This profile can be used to control and shape the return loss response over a wide frequency band (31% at -15 dB). The present design combines the benefits of a low-profile antenna and the broad bandwidth of the inverted hat monopole. A parametric analysis of the antenna is conducted. Omnidirectional monopole-like radiation pattern and vertical polarization are also verified with measurements and simulations. This antenna constitutes a low drag candidate for a distance measurement equipment (DME) aircraft navigation system.",
"title": ""
},
{
"docid": "873be467576bff16904d7abc6c961394",
"text": "A bunny ear shaped combline element for dual-polarized compact aperture arrays is presented which provides relatively low noise temperature and low level cross polarization over a wide bandwidth and wide scanning angles. The element is corrugated along the outer edges between the elements to control the complex mutual coupling at high scan angles. This produces nearly linear polarized waves in the principle planes and lower than -10 dB cross polarization in the intercardinal plane. To achieve a low noise temperature, only metal conductors are used, which also results in a low cost of manufacture. Dual linear polarization or circular polarization can be realized by adopting two different arrangements of the orthogonal elements. The performances for both mechanical arrangements are investigated. The robustness of the new design over the conventional Vivaldi-type antennas is highlighted.",
"title": ""
}
] |
scidocsrr
|
672a6cc92ccba3eb79ebedbd71d92494
|
CYC: A Large-Scale Investment in Knowledge Infrastructure
|
[
{
"docid": "87bd2fc53cbe92823af786e60e82f250",
"text": "Cyc is a bold attempt to assemble a massive knowledge base (on the order of 108 axioms) spanning human consensus knowledge. This article examines the need for such an undertaking and reviews the authos' efforts over the past five years to begin its construction. The methodology and history of the project are briefly discussed, followed by a more developed treatment of the current state of the representation language used (epistemological level), techniques for efficient inferencing and default reasoning (heuristic level), and the content and organization of the knowledge base.",
"title": ""
}
] |
[
{
"docid": "881d38d8f7ca47ca2f478c1dc1870c7f",
"text": "What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered.",
"title": ""
},
{
"docid": "be692c1251cb1dc73b06951c54037701",
"text": "Can we train the computer to beat experienced traders for financial assert trading? In this paper, we try to address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model is inspired by two biological-related learning concepts of deep learning (DL) and reinforcement learning (RL). In the framework, the DL part automatically senses the dynamic market condition for informative feature learning. Then, the RL module interacts with deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both the deep and recurrent structures. Hence, we propose a task-aware backpropagation through time method to cope with the gradient vanishing issue in deep training. The robustness of the neural system is verified on both the stock and the commodity future markets under broad testing conditions.",
"title": ""
},
{
"docid": "5c85e5965c2c887baa5a658beae2d0c9",
"text": "Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing tasks, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub following participants' gaze to the one with a disjoint attention behavior, rated it as more human-like and as more likeable. Taken together, our findings show a preference for robots who follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.",
"title": ""
},
{
"docid": "92cd9386b9a3915b02bf8cd6fe047765",
"text": "In this paper, we describe our work in the third Emotion Recognition in the Wild (EmotiW 2015) Challenge. For each video clip, we extract MSDF, LBP-TOP, HOG, LPQ-TOP and acoustic features to recognize the emotions of film characters. For the static facial expression recognition based on video frame, we extract MSDF, DCNN and RCNN features. We train linear SVM classifiers for these kinds of features on the AFEW and SFEW dataset, and we propose a novel fusion network to combine all the extracted features at decision level. The final achievement we gained is 51.02% on the AFEW testing set and 51.08% on the SFEW testing set, which are much better than the baseline recognition rate of 39.33% and 39.13%.",
"title": ""
},
{
"docid": "742596f0ab5bddd930eb4081ce8097b3",
"text": "We show how third-party web trackers can deanonymize users of cryptocurrencies. We present two distinct but complementary attacks. On most shopping websites, third party trackers receive information about user purchases for purposes of advertising and analytics. We show that, if the user pays using a cryptocurrency, trackers typically possess enough information about the purchase to uniquely identify the transaction on the blockchain, link it to the user’s cookie, and further to the user’s real identity. Our second attack shows that if the tracker is able to link two purchases of the same user to the blockchain in this manner, it can identify the user’s entire cluster of addresses and transactions on the blockchain, even if the user employs blockchain anonymity techniques such as CoinJoin. The attacks are passive and hence can be retroactively applied to past purchases. We discuss several mitigations, but none are perfect.",
"title": ""
},
{
"docid": "ca6b556eb4de9a8f66aefd5505c20f3d",
"text": "Knowledge is a broad and abstract notion that has defined epistemological debate in western philosophy since the classical Greek era. In the past Richard Watson was the accepting senior editor for this paper. MISQ Review articles survey, conceptualize, and synthesize prior MIS research and set directions for future research. For more details see http://www.misq.org/misreview/announce.html few years, however, there has been a growing interest in treating knowledge as a significant organizational resource. Consistent with the interest in organizational knowledge and knowledge management (KM), IS researchers have begun promoting a class of information systems, referred to as knowledge management systems (KMS). The objective of KMS is to support creation, transfer, and application of knowledge in organizations. Knowledge and knowledge management are complex and multi-faceted concepts. Thus, effective development and implementation of KMS requires a foundation in several rich",
"title": ""
},
{
"docid": "0e0b0b6b0fdab06fa9d3ebf6a8aefd6b",
"text": "Hippocampal place fields have been shown to reflect behaviorally relevant aspects of space. For instance, place fields tend to be skewed along commonly traveled directions, they cluster around rewarded locations, and they are constrained by the geometric structure of the environment. We hypothesize a set of design principles for the hippocampal cognitive map that explain how place fields represent space in a way that facilitates navigation and reinforcement learning. In particular, we suggest that place fields encode not just information about the current location, but also predictions about future locations under the current transition distribution. Under this model, a variety of place field phenomena arise naturally from the structure of rewards, barriers, and directional biases as reflected in the transition policy. Furthermore, we demonstrate that this representation of space can support efficient reinforcement learning. We also propose that grid cells compute the eigendecomposition of place fields in part because is useful for segmenting an enclosure along natural boundaries. When applied recursively, this segmentation can be used to discover a hierarchical decomposition of space. Thus, grid cells might be involved in computing subgoals for hierarchical reinforcement learning.",
"title": ""
},
{
"docid": "814890ae698f83d980faf513f6ed6740",
"text": "In this paper, we propose a method for urban building damage detection from multitemporal high resolution images using spectral and spatial information combined. Given the spectral similarity between damaged and undamaged areas in the images, two spatial features are used in the damage detection, i.e. invariant moments and LISA (local indicator of spatial association) index. These two spatial features were computed for each image object, which is produced by image segmentation. The One-Class Support Vector Machine (OCSVM), a recently developed one-class classifier was used to classify the multitemporal data to obtain building damage information. The uses of spectral data alone and plus obtained spatial features for building damage detection were separately evaluated using bitemporal Quickbird images acquired in Dujiangyan area of China, which was heavily hit by the Wenchuan earthquake. The results show that the combined use of spectral and spatial features significantly improved the damage detection accuracy, compared to that of using spectral information alone.",
"title": ""
},
{
"docid": "b0d959bdb58fbcc5e324a854e9e07b81",
"text": "It is well known that the road signs play’s a vital role in road safety its ignorance results in accidents .This Paper proposes an Idea for road safety by using a RFID based traffic sign recognition system. By using it we can prevent the road risk up to a great extend.",
"title": ""
},
{
"docid": "40fda9cba754c72f1fba17dd3a5759b2",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
{
"docid": "c78e0662b9679a70f1ec4416b3abd2b4",
"text": "This article offers possibly the first peer-reviewed study on the training routines of elite eathletes, with special focus on the subjects’ physical exercise routines. The study is based on a sample of 115 elite e-athletes. According to their responses, e-athletes train approximately 5.28 hours every day around the year on the elite level. Approximately 1.08 hours of that training is physical exercise. More than half (55.6%) of the elite e-athletes believe that integrating physical exercise in their training programs has a positive effect on esport performance; however, no less than 47.0% of the elite e-athletes do their physical exercise chiefly to maintain overall health. Accordingly, the study indicates that elite e-athletes are active athletes as well, those of age 18 and older exercising physically more than three times the daily 21-minute activity recommendation given by World Health Organization.",
"title": ""
},
{
"docid": "1b78650b979b0043eeb3e7478a263846",
"text": "Our solutions was launched using a want to function as a full on-line digital local library that gives use of many PDF guide catalog. You may find many different types of e-guide as well as other literatures from my papers data bank. Specific popular topics that spread out on our catalog are famous books, answer key, assessment test questions and answer, guideline paper, training guideline, quiz test, consumer guide, consumer guidance, service instructions, restoration handbook, and so forth.",
"title": ""
},
{
"docid": "5d5742db6d7a4c95451f071bf7841077",
"text": "Automatic detection of diseases is a growing field of interest, and machine learning in form of deep learning neural networks are frequently explored as a potential tool for the medical video analysis. To both improve the \"black box\"-understanding and assist in the administrative duties of writing an examination report, we release an automated multimedia reporting software dissecting the neural network to learn the intermediate analysis steps, i.e., we are adding a new level of understanding and explainability by looking into the deep learning algorithms decision processes. The presented open-source software can be used for easy retrieval and reuse of data for automatic report generation, comparisons, teaching and research. As an example, we use live colonoscopy as a use case which is the gold standard examination of the large bowel, commonly performed for clinical and screening purposes. The added information has potentially a large value, and reuse of the data for the automatic reporting may potentially save the doctors large amounts of time.",
"title": ""
},
{
"docid": "620652a31904be950376332c7f97304d",
"text": "We combine two of the most popular approaches to automated Grammatical Error Correction (GEC): GEC based on Statistical Machine Translation (SMT) and GEC based on Neural Machine Translation (NMT). The hybrid system achieves new state-of-the-art results on the CoNLL-2014 and JFLEG benchmarks. This GEC system preserves the accuracy of SMT output and, at the same time, generates more fluent sentences as it typical for NMT. Our analysis shows that the created systems are closer to reaching human-level performance than any other GEC system reported so far.",
"title": ""
},
{
"docid": "ce29e17a4fb9c67676fb534e58e2e20d",
"text": "OBJECTIVE\nTo examine the association between frequency of assisting with home meal preparation and fruit and vegetable preference and self-efficacy for making healthier food choices among grade 5 children in Alberta, Canada.\n\n\nDESIGN\nA cross-sectional survey design was used. Children were asked how often they helped prepare food at home and rated their preference for twelve fruits and vegetables on a 3-point Likert-type scale. Self-efficacy was measured with six items on a 4-point Likert-type scale asking children their level of confidence in selecting and eating healthy foods at home and at school.\n\n\nSETTING\nSchools (n =151) located in Alberta, Canada.\n\n\nSUBJECTS\nGrade 5 students (n = 3398).\n\n\nRESULTS\nA large majority (83-93 %) of the study children reported helping in home meal preparation at least once monthly. Higher frequency of helping prepare and cook food at home was associated with higher fruit and vegetable preference and with higher self-efficacy for selecting and eating healthy foods.\n\n\nCONCLUSIONS\nEncouraging children to be more involved in home meal preparation could be an effective health promotion strategy. These findings suggest that the incorporation of activities teaching children how to prepare simple and healthy meals in health promotion programmes could potentially lead to improvement in dietary habits.",
"title": ""
},
{
"docid": "21dd193ec6849fa78ba03333708aebea",
"text": "Since the inception of Bitcoin technology, its underlying data structureâĂŞ-the blockchainâĂŞ-has garnered much attention due to properties such as decentralization, transparency, and immutability. These properties make blockchains suitable for apps that require disintermediation through trustless exchange, consistent and incorruptible transaction records, and operational models beyond cryptocurrency. In particular, blockchain and its programmable smart contracts have the potential to address healthcare interoperability issues, such as enabling effective interactions between users and medical applications, delivering patient data securely to a variety of organizations and devices, and improving the overall efficiency of medical practice workflow. Despite the interest in using blockchain technology for healthcare interoperability, however, little information is available on the concrete architectural styles and recommendations for designing blockchain-based apps targeting healthcare. This paper provides an initial step in filling this gap by showing: (1) the features and implementation challenges in healthcare interoperability, (2) an end-to-end case study of a blockchain-based healthcare app that we are developing, and (3) how designing blockchain-based apps using familiar software patterns can help address healthcare specific challenges.",
"title": ""
},
{
"docid": "a5abd5f11b83afdccbdfc190b8351b07",
"text": "Named Data Networking (NDN) is a recently proposed general- purpose network architecture that leverages the strengths of Internet architecture while aiming to address its weaknesses. NDN names packets rather than end-hosts, and most of NDN's characteristics are a consequence of this fact. In this paper, we focus on the packet forwarding model of NDN. Each packet has a unique name which is used to make forwarding decisions in the network. NDN forwarding differs substantially from that in IP; namely, NDN forwards based on variable-length names and has a read-write data plane. Designing and evaluating a scalable NDN forwarding node architecture is a major effort within the overall NDN research agenda. In this paper, we present the concepts, issues and principles of scalable NDN forwarding plane design. The essential function of NDN forwarding plane is fast name lookup. By studying the performance of the NDN reference implementation, known as CCNx, and simplifying its forwarding structure, we identify three key issues in the design of a scalable NDN forwarding plane: 1) exact string matching with fast updates, 2) longest prefix matching for variable-length and unbounded names and 3) large- scale flow maintenance. We also present five forwarding plane design principles for achieving 1 Gbps throughput in software implementation and 10 Gbps with hardware acceleration.",
"title": ""
},
{
"docid": "2d3b452d7a8cf8f29ac1896f14c43faa",
"text": "Since the amount of information on the internet is growing rapidly, it is not easy for a user to find relevant information for his/her query. To tackle this issue, much attention has been paid to Automatic Document Summarization. The key point in any successful document summarizer is a good document representation. The traditional approaches based on word overlapping mostly fail to produce that kind of representation. Word embedding, distributed representation of words, has shown an excellent performance that allows words to match on semantic level. Naively concatenating word embeddings makes the common word dominant which in turn diminish the representation quality. In this paper, we employ word embeddings to improve the weighting schemes for calculating the input matrix of Latent Semantic Analysis method. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. The new weighting schemes are modified versions of the augment weight and the entropy frequency. The new schemes combine the strength of the traditional weighting schemes and word embedding. The proposed approach is experimentally evaluated on three well-known English datasets, DUC 2002, DUC 2004 and Multilingual 2015 Single-document Summarization for English. The proposed model performs comprehensively better compared to the state-of-the-art methods, by at least 1% ROUGE points, leading to a conclusion that it provides a better document representation and a better document summary as a result.",
"title": ""
},
{
"docid": "60da71841669948e0a57ba4673693791",
"text": "AIMS\nStiffening of the large arteries is a common feature of aging and is exacerbated by a number of disorders such as hypertension, diabetes, and renal disease. Arterial stiffening is recognized as an important and independent risk factor for cardiovascular events. This article will provide a comprehensive review of the recent advance on assessment of arterial stiffness as a translational medicine biomarker for cardiovascular risk.\n\n\nDISCUSSIONS\nThe key topics related to the mechanisms of arterial stiffness, the methodologies commonly used to measure arterial stiffness, and the potential therapeutic strategies are discussed. A number of factors are associated with arterial stiffness and may even contribute to it, including endothelial dysfunction, altered vascular smooth muscle cell (SMC) function, vascular inflammation, and genetic determinants, which overlap in a large degree with atherosclerosis. Arterial stiffness is represented by biomarkers that can be measured noninvasively in large populations. The most commonly used methodologies include pulse wave velocity (PWV), relating change in vessel diameter (or area) to distending pressure, arterial pulse waveform analysis, and ambulatory arterial stiffness index (AASI). The advantages and limitations of these key methodologies for monitoring arterial stiffness are reviewed in this article. In addition, the potential utility of arterial stiffness as a translational medicine surrogate biomarker for evaluation of new potentially vascular protective drugs is evaluated.\n\n\nCONCLUSIONS\nAssessment of arterial stiffness is a sensitive and useful biomarker of cardiovascular risk because of its underlying pathophysiological mechanisms. PWV is an emerging biomarker useful for reflecting risk stratification of patients and for assessing pharmacodynamic effects and efficacy in clinical studies.",
"title": ""
},
{
"docid": "36e42f2e4fd2f848eaf82440c2bcbf62",
"text": "Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is based on an object carrying an RFID reader module, which reads low-cost passive tags installed next to the object path. A positioning system using a Kalman filter is proposed. The inputs of the proposed algorithm are the measurements of the backscattered signal power propagated from nearby RFID tags and a tag-path position database. The proposed algorithm first estimates the location of the reader, neglecting tag-reader angle-path loss. Based on the location estimate, an iterative procedure is implemented, targeting the estimation of the tag-reader angle-path loss, where the latter is iteratively compensated from the received signal strength information measurement. Experimental results are presented, illustrating the high performance of the proposed positioning system.",
"title": ""
}
] |
scidocsrr
|