Dataset fields (per record):

  query_id            stringlengths    32 – 32
  query               stringlengths    5 – 5.38k
  positive_passages   listlengths      1 – 23
  negative_passages   listlengths      4 – 100
  subset              stringclasses    7 values
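The fields above follow the usual passage-retrieval layout: each record carries a query, a list of relevant (positive) passages, a larger list of non-relevant (negative) passages, and the subset it was drawn from. Below is a minimal sketch of how such records could be read and turned into (query, positive, negative) training triples; it assumes the dump is stored as one JSON object per line (JSONL), and the filename and the triple-building strategy are illustrative rather than part of the dataset.

```python
import json
from typing import Iterator, Tuple

def iter_records(path: str) -> Iterator[dict]:
    """Yield one record per line from a JSONL dump with the schema above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def iter_triples(path: str) -> Iterator[Tuple[str, str, str]]:
    """Yield (query, positive_text, negative_text) triples.

    Every positive passage of a record is paired with every negative
    passage of the same record; other pairing strategies are possible.
    """
    for rec in iter_records(path):
        query = rec["query"]
        for pos in rec["positive_passages"]:
            for neg in rec["negative_passages"]:
                yield query, pos["text"], neg["text"]

if __name__ == "__main__":
    # "scidocsrr.jsonl" is a placeholder filename; point it at the actual dump.
    for i, (q, p, n) in enumerate(iter_triples("scidocsrr.jsonl")):
        print(q[:60], "|", p[:60], "|", n[:60])
        if i == 2:
            break
```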
9b73161237bb7d8c36e82c555821c5e3
A Comparative Survey of DBpedia, Freebase, OpenCyc, Wikidata, and YAGO
[ { "docid": "ec5abeb42b63ed1976cd47d3078c35c9", "text": "In semistructured data, the information that is normally associated with a schema is contained within the data, which is sometimes called “self-describing”. In some forms of semistructured data there is no separate schema, in others it exists but only places loose constraints on the data. Semistructured data has recently emerged as an important topic of study for a variety of reasons. First, there are data sources such as the Web, which we would like to treat as databases but which cannot be constrained by a schema. Second, it may be desirable to have an extremely flexible format for data exchange between disparate databases. Third, even when dealing with structured data, it may be helpful to view it. as semistructured for the purposes of browsing. This tutorial will cover a number of issues surrounding such data: finding a concise formulation, building a sufficiently expressive language for querying and transformation, and optimizat,ion problems.", "title": "" } ]
[ { "docid": "3b72c70213ccd3d5f3bda5cc2e2c6945", "text": "Neural language models (NLMs) have recently gained a renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks. However, NLMs are very computationally demanding largely due to the computational cost of the softmax layer over a large vocabulary. We observe that, in decoding of many NLP tasks, only the probabilities of the top-K hypotheses need to be calculated preciously and K is often much smaller than the vocabulary size. This paper proposes a novel softmax layer approximation algorithm, called Fast Graph Decoder (FGD), which quickly identifies, for a given context, a set of K words that are most likely to occur according to a NLM. We demonstrate that FGD reduces the decoding time by an order of magnitude while attaining close to the full softmax baseline accuracy on neural machine translation and language modeling tasks. We also prove the theoretical guarantee on the softmax approximation quality.", "title": "" }, { "docid": "b4d9afd6bca3e1dcb5a3f3c69a4c8848", "text": "Internet that covers a large information gives an opportunity to obtain knowledge from it. Internet contains large unstructured and unorganized data such as text, video, and image. Problems arise on how to organize large amount of data and obtain a useful information from it. This information can be used as knowledge in the intelligent computer system. Ontology as one of knowledge representation covers a large area topic. For constructing domain specified ontology, we use large text dataset on Internet and organize it into specified domain before ontology building process is done. We try to implement naive bayes text classifier using map reduce programming model in our research for organizing our large text dataset. In this experiment, we use animal and plant domain article in Wikipedia online encyclopedia as our dataset. Our proposed method can achieve highest accuracy with score about 98.8%. This experiment shows that our proposed method provides a robust system and good accuracy for classifying document into specified domain in preprocessing phase for domain specified ontology building.", "title": "" }, { "docid": "ca7afb87dae38ee0cf079f91dbd91d43", "text": "Diet is associated with the development of CHD. The incidence of CHD is lower in southern European countries than in northern European countries and it has been proposed that this difference may be a result of diet. The traditional Mediterranean diet emphasises a high intake of fruits, vegetables, bread, other forms of cereals, potatoes, beans, nuts and seeds. It includes olive oil as a major fat source and dairy products, fish and poultry are consumed in low to moderate amounts. Many observational studies have shown that the Mediterranean diet is associated with reduced risk of CHD, and this result has been confirmed by meta-analysis, while a single randomised controlled trial, the Lyon Diet Heart study, has shown a reduction in CHD risk in subjects following the Mediterranean diet in the secondary prevention setting. However, it is uncertain whether the benefits of the Mediterranean diet are transferable to other non-Mediterranean populations and whether the effects of the Mediterranean diet will still be feasible in light of the changes in pharmacological therapy seen in patients with CHD since the Lyon Diet Heart study was conducted. 
Further randomised controlled trials are required and if the risk-reducing effect is confirmed then the best methods to effectively deliver this public health message worldwide need to be considered.", "title": "" }, { "docid": "94b00d09c303d92a44c08fb211c7a8ed", "text": "Pull-Request (PR) is the primary method for code contributions from thousands of developers in GitHub. To maintain the quality of software projects, PR review is an essential part of distributed software development. Assigning new PRs to appropriate reviewers will make the review process more effective which can reduce the time between the submission of a PR and the actual review of it. However, reviewer assignment is now organized manually in GitHub. To reduce this cost, we propose a reviewer recommender to predict highly relevant reviewers of incoming PRs. Combining information retrieval with social network analyzing, our approach takes full advantage of the textual semantic of PRs and the social relations of developers. We implement an online system to show how the reviewer recommender helps project managers to find potential reviewers from crowds. Our approach can reach a precision of 74% for top-1 recommendation, and a recall of 71% for top-10 recommendation.", "title": "" }, { "docid": "43e456ac8007df8324b8a7bb2329a060", "text": "Malware detection is getting more and more attention due to the rapid growth of new malware. As a result, machine learning (ML) has become a popular way to detect malware variants. However, machine learning models can also be cheated. Through reinforcement learning (RL), we can generate new malware samples which can bypass the detection of machine learning. In this paper, a RL model on malware generation named gym-plus is designed. Gym-plus is built based on gym-malware with some improvements. As a result, the probability of evading machine learning based static PE malware detection models is increased by 30%. Based on these newly generated samples, we retrain our detecting model to detect unknown threats. In our test, the detection accuracy of malware increased from 15.75% to 93.5%.", "title": "" }, { "docid": "76812e97df665591052d42e3a01a1cec", "text": "OBJECTIVE\nAbout 10 years ago, Gratz and Roemer (2004) introduced the Difficulties in Emotion Regulation Scale (DERS), a 36-item self-report instrument measuring 6 areas of emotion regulation problems. Recently, Bjureberg et al. (2015) have introduced a new, briefer version of the DERS comprising only 16 of the 36 items included in the original version. Because no studies have yet cross-validated the recently introduced 16-item DERS and the 36-item DERS has never been tested in Brazil, we sought to inspect the psychometric properties of scores from both DERS versions with a nonclinical Brazilian sample.\n\n\nMETHOD\nParticipants were 725 adult volunteers aged 18-70 years (mean = 30.54, standard deviation = 10.59), 82.3% of whom were women. All were administered the DERS along with a number of other self-report and performance-based instruments. Data analyses inspected internal consistency, factor structure, and convergent as well as divergent validity of scores from both DERS versions.\n\n\nRESULTS\nResults show that scores from both DERS versions possess good psychometric properties. Interestingly, both versions correlated, in the expected direction, with psychopathology and showed no significant correlations with cognitive measures. 
Like in other studies, however, the Awareness factor of the 36-item DERS did not produce optimal validity and reliability indexes.\n\n\nCONCLUSION\nTaken together, our findings indicate that the 16-item DERS may be preferred over the 36-item version and provide additional support to the differentiation between emotion regulation and cognitive tasks of emotional perception and abstract and verbal reasoning.", "title": "" }, { "docid": "0b595a06d8d05e9fc706db5fdb2dd661", "text": "In this paper, we argue that the process of developing travel recommender systems (TRS) can be simplified. By studying the application domain of tourism information systems, and examining the algorithms and architectures available for recommender systems today, we discuss the dependencies and present a methodology for developing TRS, which can be applied at very early stages of TRS development. The methodology aims to be insightful without overburdening the project team with the mathematical basis and technical detail of the state of the art in recommender systems and give guidance on design choices to the project team.", "title": "" }, { "docid": "43e5146e4a7723cf391b013979a1da32", "text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several examples.", "title": "" }, { "docid": "7a47dde6f7cc68c092922718000a807a", "text": "In the present study k-Nearest Neighbor classification method, have been studied for economic forecasting. Due to the effects of companies’ financial distress on stakeholders, financial distress prediction models have been one of the most attractive areas in financial research. In recent years, after the global financial crisis, the number of bankrupt companies has risen. Since companies' financial distress is the first stage of bankruptcy, using financial ratios for predicting financial distress have attracted too much attention of the academics as well as economic and financial institutions. Although in recent years studies on predicting companies’ financial distress in Iran have been increased, most efforts have exploited traditional statistical methods; and just a few studies have used nonparametric methods. Recent studies demonstrate this method is more capable than other methods.", "title": "" }, { "docid": "afc96e4003d7d5fbc281aced794e3e43", "text": "The increasing use of imaging necessitates familiarity with a wide variety of pathologic conditions, both common and rare, that affect the fallopian tube. These conditions should be considered in the differential diagnosis for pelvic disease in the nonpregnant patient. The most common condition is pelvic inflammatory disease, which represents a spectrum ranging from salpingitis to pyosalpinx to tubo-ovarian abscess. Isolated tubal torsion is rare but is nevertheless an important diagnosis to consider in the acute setting. 
Hematosalpinx in a nonpregnant patient can be an indicator of tubal endometriosis; however, care should be taken to exclude tubal torsion or malignancy. Current evidence suggests that the prevalence of primary fallopian tube carcinoma (PFTC) is underestimated and that there is a relationship between PFTC and breast cancer. PFTC has characteristic imaging features that can aid in its detection and in differentiating it from other pelvic masses. Familiarity with fallopian tube disease and the imaging appearances of both the normal and abnormal fallopian tube is crucial for optimal diagnosis and management in emergent as well as ambulatory settings.", "title": "" }, { "docid": "38bea301ed3ad1ef99893d0ab84a94d1", "text": "Artificial barriers, such as nest boxes and metal collars, are sometimes used, with variable success, to exclude predators and/or competitors from tree nests of vulnerable bird species. This paper describes the observed response of captive stoats (Mustela erminea) to a nest box design and an aluminium sheet collar used to protect kaka (Nestor meridionalis) nest cavities. The nest box, a prototype for kaka, was manufactured from PVC pipe. Initial trials failed to exclude stoats until an overhanging roof was added. All subsequent trials successfully prevented access by stoats. Trials with a 590 mm wide aluminium collar were less successful, but this was mainly due to restrictions enforced by enclosure design: Stoats gained access above the collar via the enclosure walls and ceiling. In only one of twelve trials was a stoat able to climb past the collar itself. The conservation implications of these trials and directions for future research are discussed. __________________________________________________________________________________________________________________________________", "title": "" }, { "docid": "62bc89c06c044fdaf01f623860750d8e", "text": "PURPOSE\nThe objective of this study was to evaluate the clinical quality of 191 porcelain laminate veneers and to explore the gingival response in a long-term survey.\n\n\nMATERIALS AND METHODS\nThe clinical examination was made by two calibrated examiners following modified California Dental Association/Ryge criteria. In addition, margin index, papillary bleeding index, sulcus probing depth, and increase in gingival recession were recorded. Two age groups were formed to evaluate the influence of wearing time upon the clinical results. The results were statistically evaluated using the Kaplan-Meier survival estimation method, Chi-squared test, and Kruskal-Wallis test.\n\n\nRESULTS\nA failure rate of 4% was found. Six of the total of seven failures were seen when veneers were partially bonded to dentin. Marginal integrity was acceptable in 99% and was rated as excellent in 63%. Superficial marginal discoloration was present in 17%. Slight marginal recession was detected in 31%, and bleeding on probing was found in 25%.\n\n\nCONCLUSION\nPorcelain laminate veneers offer a predictable and successful treatment modality that preserves a maximum of sound tooth structure. An increased risk of failure is present only when veneers are partially bonded to dentin. The estimated survival probability over a period of 10 years is 91%.", "title": "" }, { "docid": "a66a8516c2defbaf3be8b4acbf747e89", "text": "This paper presents a mono-camera based simultaneous obstacle recognition and distance estimation method for power transmission lines(PTLs) inspection robot to avoid obstacles on or around them. 
The proposed robot inspects the PTLs while moving along them between power transmission towers. For autonomous navigation, our robot recognizes obstacles and avoids a collision with them. In addition, it can stop at the ends of the PTLs by recognizing its structures, such as insulators, installed at the ends of them. In order to recognize obstacles or insulators efficiently, initially, a robust PTLs detection method based on vanishing point estimation is proposed since they are installed on or around the PTLs. Then, obstacle models with various scales are built, and multiple regions-of-interest(ROIs) according to the scales of the model are constructed along the detected PTLs. Finally, obstacles are recognized within the multiple ROIs. Because each ROI represents a corresponding scale of the obstacle model to be matched, the proposed approach can efficiently deal with scale changes and also estimate distance between the robot and the obstacle simultaneously with obstacle recognition. Experimental results not only show that the proposed method correctly recognizes obstacles but also that the distance between the robot and the obstacles is estimated efficiently.", "title": "" }, { "docid": "061241a299f805f5fb8c4acab1b668a2", "text": "This paper presents our participation in SemEval-2015 task 12 (Aspect Based Sentiment Analysis). We participated employing only unsupervised or weakly-supervised approaches. Our attempt is based on requiring the minimum annotated or hand-crafted content, and avoids training a model using the provided training set. We use a continuous word representations (Word2Vec) to leverage in-domain semantic similarities of words for many of the involved subtasks.", "title": "" }, { "docid": "5031a14305009f69107d14f7a4612b5b", "text": "Recently, Yuan et al. (IEEE Infocom ’13, pp.2652–2660) proposed an efficient secure nearest neighbor (SNN) search scheme on encrypted cloud database. Their scheme is claimed to be secure against the collusion attack of query clients and cloud server, because the colluding attackers cannot infer the encryption/decryption key. In this letter, we observe that the encrypted dataset in Yuan’s scheme can be broken by the collusion attack without deducing the key, and present a simple but powerful attack to their scheme. Experiment results validate the high efficiency of our attacking approach. Additionally, we also indicate an upper bound of collusion-resistant ability of any accurate SNN query scheme. key words: cloud computing, encrypted query, nearest neighbor, attack", "title": "" }, { "docid": "4899e13d5c85b63a823db9c4340824e7", "text": "With the prevalence of server blades and systems-on-a-chip (SoCs), interconnection networks are becoming an important part of the microprocessor landscape. However, there is limited tool support available for their design. While performance simulators have been built that enable performance estimation while varying network parameters, these cover only one metric of interest in modern designs. System power consumption is increasingly becoming equally, if not more important than performance. It is now critical to get detailed power-performance tradeoff information early in the microarchitectural design cycle. This is especially so as interconnection networks consume a significant fraction of total system power. 
It is exactly this gap that the work presented in this paper aims to fill.We present Orion, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power-performance trade-offs at the architectural-level. This capability is provided within a general framework that builds a simulator starting from a microarchitectural specification of the interconnection network. A key component of this construction is the architectural-level parameterized power models that we have derived as part of this effort. Using component power models and a synthesized efficient power (and performance) simulator, a microarchitect can rapidly explore the design space. As case studies, we demonstrate the use of Orion in determining optimal system parameters, in examining the effect of diverse traffic conditions, as well as evaluating new network microarchitectures. In each of the above, the ability to simultaneously monitor power and performance is key in determining suitable microarchitectures.", "title": "" }, { "docid": "19d8b6ff70581307e0a00c03b059964f", "text": "We propose a novel approach for analysing time series using complex network theory. We identify the recurrence matrix (calculated from time series) with the adjacency matrix of a complex network and apply measures for the characterisation of complex networks to this recurrence matrix. By using the logistic map, we illustrate the potential of these complex network measures for the detection of dynamical transitions. Finally, we apply the proposed approach to a marine palaeo-climate record and identify the subtle changes to the climate regime.", "title": "" }, { "docid": "96d1204b05289190635af23942b8c289", "text": "In this paper a social network is extracted from a literary text. The social network shows, how frequent the characters interact and how similar their social behavior is. Two types of similarity measures are used: the first applies co-occurrence statistics, while the second exploits cosine similarity on different types of word embedding vectors. The results are evaluated by a paid micro-task crowdsourcing survey. The experiments suggest that specific types of word embeddings like word2vec are well-suited for the task at hand and the specific circumstances of literary fiction text.", "title": "" }, { "docid": "6fd9793e9f44b726028f8c879157f1f7", "text": "Modeling, simulation and implementation of Voltage Source Inverter (VSI) fed closed loop control of 3-phase induction motor drive is presented in this paper. A mathematical model of the drive system is developed and is used for the simulation study. Simulation is carried out using Scilab/Scicos, which is free and open source software. The above said drive system is implemented in laboratory using a PC and an add-on card. In this study the air gap flux of the machine is kept constant by maintaining Volt/Hertz (v/f) ratio constant. The experimental transient responses of the drive system obtained for change in speed under no load as well as under load conditions are presented.", "title": "" }, { "docid": "a7b8986dbfde4a7ccc3a4ad6e07319a7", "text": "This article tests expectations generated by the veto players theory with respect to the over time composition of budgets in a multidimensional policy space. 
The theory predicts that countries with many veto players (i.e., coalition governments, bicameral political systems, presidents with veto) will have difficulty altering the budget structures. In addition, countries that tend to make significant shifts in government composition will have commensurate modifications of the budget. Data collected from 19 advanced industrialized countries from 1973 to 1995 confirm these expectations, even when one introduces socioeconomic controls for budget adjustments like unemployment variations, size of retired population and types of government (minimum winning coalitions, minority or oversized governments). The methodological innovation of the article is the use of empirical indicators to operationalize the multidimensional policy spaces underlying the structure of budgets. The results are consistent with other analyses of macroeconomic outcomes like inflation, budget deficits and taxation that are changed at a slower pace by multiparty governments. The purpose of this article is to test empirically the expectations of the veto players theory in a multidimensional setting. The theory defines ‘veto players’ as individuals or institutions whose agreement is required for a change of the status quo. The basic prediction of the theory is that when the number of veto players and their ideological distances increase, policy stability also increases (only small departures from the status quo are possible) (Tsebelis 1995, 1999, 2000, 2002). The theory was designed for the study of unidimensional and multidimensional policy spaces. While no policy domain is strictly unidimensional, existing empirical tests have only focused on analyzing political economy issues in a single dimension. These studies have confirmed the veto players theory’s expectations (see Bawn (1999) on budgets; Hallerberg & Basinger (1998) on taxes; Tsebelis (1999) on labor legislation; Treisman (2000) on inflation; Franzese (1999) on budget deficits). This article is the first attempt to test whether the predictions of the veto players theory hold in multidimensional policy spaces. We will study a phenomenon that cannot be considered unidimensional: the ‘structure’ of budgets – that is, their percentage composition, and the change in this composition over © European Consortium for Political Research 2004 Published by Blackwell Publishing Ltd., 9600 Garsington Road, Oxford, OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA", "title": "" } ]
scidocsrr
158047df67e3e45c32dff618a66ea66e
Quality in use: incorporating human factors into the software engineering lifecycle
[ { "docid": "75efc4354998b5e0ce8cc3b4f5f69703", "text": "It is often assumed that a standard means a precise specification. Such standards have brought benefits in many fields, eg: bolts which screw into nuts, ATMs which can read credit cards, and compilers which can read programming languages. Some HCI standards are also of this type: many design guides provide a detailed specification of the nature of the user interface. Although standard user interfaces provide the benefit of consistency, they become out of date as technology changes, and are usually only appropriate for limited types of users and tasks (Bevan and Holdaway, 1993). Thus most work on international standards for HCI has not been about precise specification, but instead has concentrated on the principles which need to be applied in order to produce an interface which meets user and task needs. These standards broadly fall into two categories. One is a \"top-down\" approach which is concerned with usability as a broad quality objective: the ability to use a product for its intended purpose. The other is a product-oriented \"bottom-up\" view which is concerned with aspects of the interface which make a system easier to use. The broad quality view originates from human factors, and standards of this type are applicable in the broad context of design and quality objectives. The product-oriented view concentrates on the design of specific attributes, and relates more closely to the needs of the interface designer and the role of usability in software engineering (see Bevan, 1995). Section 4 explains how standards can be used to provide a means of meeting the requirements for the operator-computer interface in the European Directive on Display Screen Equipment.", "title": "" } ]
[ { "docid": "15178c76d92c5e59e4d03582bc852a49", "text": "This paper presents a new path planning method which operates in two steps. In the first step the safest areas in the environment are extracted by means of a Voronoi diagram. In the second step fast marching method is applied to the Voronoi extracted areas in order to obtain the shortest path. In this way the trajectory obtained is the shortest between the safe possible ones. This two step method combines speed and reliability, because the map dimensions is reduced to a unidimensional map and this map represents the safest areas in the environment for moving the robot", "title": "" }, { "docid": "26585cd252a919776c8d757f87b03106", "text": "The ability to simultaneously leverage multiple modes of sensor information is critical for perception of an automated vehicle's physical surroundings. Spatio-temporal alignment of registration of the incoming information is often a prerequisite to analyzing the fused data. The persistence and reliability of multi-modal registration is therefore the key to the stability of decision support systems ingesting the fused information. LiDAR-video systems like on those many driverless cars are a common example of where keeping the LiDAR and video channels registered to common physical features is important. We develop a deep learning method that takes multiple channels of heterogeneous data, to detect the misalignment of the LiDAR-video inputs. A number of variations were tested on the Ford LiDAR-video driving test data set and will be discussed. To the best of our knowledge the use of multi-modal deep convolutional neural networks for dynamic real-time LiDAR-video registration has not been presented.", "title": "" }, { "docid": "e003dd850e8ca294a45e2bec122945c3", "text": "In this paper, we address the problem of determining optimal hyper-parameters for support vector machines (SVMs). The standard way for solving the model selection problem is to use grid search. Grid search constitutes an exhaustive search over a pre-defined discretized set of possible parameter values and evaluating the cross-validation error until the best is found. We developed a bi-level optimization approach to solve the model selection problem for linear and kernel SVMs, including the extension to learn several kernel parameters. Using this method, we can overcome the discretization of the parameter space using continuous optimization, and the complexity of the method only increases linearly with the number of parameters (instead of exponentially using grid search). In experiments, we determine optimal hyper-parameters based on different smooth estimates of the cross-validation error and find that only very few iterations of bi-level optimization yield good classification rates.", "title": "" }, { "docid": "41aafcc55ecc1dbc5d1dd71fe5532f39", "text": "This paper was executed in order to design side lobe suppression of Vivaldi antenna using shorting pin structure at S-band frequency. Nowadays, Vivaldi antenna has been used to satellite communications, remote sensing and radio telescope. There are a variety of designs such as tapered slot Vivaldi antenna, antipodal Vivaldi antenna and balanced antipodal Vivaldi antenna. Tapered slot Vivaldi antenna has been selected to design the project. The design of the antenna is start with patch antenna by using basis of transmission line model (TLM) formula to calculate the length and width of the patch. It is shaped into Vivaldi antenna. 
For the final design, the shorting pin structure will connect the patch on top side and the ground plane to absorb the unbalanced current. The shorting pin will reduce the side lobe. The antenna was simulated by using CST Microwave Studio. Based on the simulation results, the Vivaldi antenna shows a good performance as the return loss less than - 10 dB, voltage standing wave ratio, gain, directivity and radiation pattern was obtained the desired result as for a resonant frequency of 3 GHz. The radiation pattern of Vivaldi antenna with shorting pin structure reducing as compared with the design without shorting pin structure. The final antenna design shows that it has a few main lobe magnitudes where the antenna can focus and direct the energy for transmission or receiving of the energy signal.", "title": "" }, { "docid": "450f13659ece54bee1b4fe61cc335eb2", "text": "Though considerable effort has recently been devoted to hardware realization of one-dimensional chaotic systems, the influence of implementation inaccuracies is often underestimated and limited to non-idealities in the non-linear map. Here we investigate the consequences of sample-and-hold errors. Two degrees of freedom in the design space are considered: the choice of the map and the sample-and-hold architecture. Current-mode systems based on Bernoulli Shift, on Tent Map and on Tailed Tent Map are taken into account and coupled with an order-one model of sample-and-hold to ascertain error causes and suggest implementation improvements. key words: chaotic systems, analog circuits, sample-and-hold errors", "title": "" }, { "docid": "46170fe683c78a767cb15c0ac3437e83", "text": "Recently, efforts in the development of speech recognition systems and robots have come to fruition with an overflow of applications in our daily lives. However, we are still far from achieving natural interaction between humans and robots, given that robots do not take into account the emotional state of speakers. The objective of this research is to create an automatic emotion classifier integrated with a robot, such that the robot can understand the emotional state of a human user by analyzing the speech signals from the user. This becomes particularly relevant in the realm of using assistive robotics to tailor therapeutic techniques towards assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children being diagnosed with ASD has been rapidly increasing, yet the clinical and societal support have not been enough to cope with the needs. Therefore, finding alternative, affordable, and accessible means of therapy and assistance has become more of a concern. Improving audio-based emotion prediction for children with ASD will allow for the robotic system to properly assess the engagement level of the child and modify its responses to maximize the quality of interaction between the robot and the child and sustain an interactive learning environment.", "title": "" }, { "docid": "a3f6781adeca64763156ac41dff32c82", "text": "A multilayer bandpass filter (BPF) with harmonic suppression using meander line inductor and interdigital capacitor (MLI-IDC) resonant structure is presented in this letter. The BPF is fabricated with three unit cells and its measured passband center frequency is 2.56 GHz with a bandwidth of 0.38 GHz and an insertion loss of 1.5 dB. The harmonics are suppressed up to 11 GHz. A diplexer using the proposed BPF is also presented. 
The proposed diplexer consists of 4.32 mm sized unit cells to couple 2.5 GHz signal into port 2, and 3.65 mm sized unit cells to couple 3.7 GHz signal into port 3. The notch circuit is placed on the output lines of the diplexer to improve isolation. The proposed diplexer has demonstrated insertion loss of 1.35 dB with 0.45 GHz bandwidth in port 2 and 1.73 dB insertion loss with 0.44 GHz bandwidth in port 3. The isolation is better than 18 dB in the first passband with 38 dB maximum isolation at 2.5 GHz. The isolation in the second passband is better than 26 dB with 45 dB maximum isolation at 3.7 GHz.", "title": "" }, { "docid": "319a24bca0b0849e05ce8cce327c549b", "text": "This paper presents a summary of the Computational Linguistics and Clinical Psychology (CLPsych) 2015 shared and unshared tasks. These tasks aimed to provide apples-to-apples comparisons of various approaches to modeling language relevant to mental health from social media. The data used for these tasks is from Twitter users who state a diagnosis of depression or post traumatic stress disorder (PTSD) and demographically-matched community controls. The unshared task was a hackathon held at Johns Hopkins University in November 2014 to explore the data, and the shared task was conducted remotely, with each participating team submitted scores for a held-back test set of users. The shared task consisted of three binary classification experiments: (1) depression versus control, (2) PTSD versus control, and (3) depression versus PTSD. Classifiers were compared primarily via their average precision, though a number of other metrics are used along with this to allow a more nuanced interpretation of the performance measures.", "title": "" }, { "docid": "047949b0dba35fb11f9f3b716893701d", "text": "Many state-of-the-art segmentation algorithms rely on Markov or Conditional Random Field models designed to enforce spatial and global consistency constraints. This is often accomplished by introducing additional latent variables to the model, which can greatly increase its complexity. As a result, estimating the model parameters or computing the best maximum a posteriori (MAP) assignment becomes a computationally expensive task. In a series of experiments on the PASCAL and the MSRC datasets, we were unable to find evidence of a significant performance increase attributed to the introduction of such constraints. On the contrary, we found that similar levels of performance can be achieved using a much simpler design that essentially ignores these constraints. This more simple approach makes use of the same local and global features to leverage evidence from the image, but instead directly biases the preferences of individual pixels. While our investigation does not prove that spatial and consistency constraints are not useful in principle, it points to the conclusion that they should be validated in a larger context.", "title": "" }, { "docid": "1ed5f0c149eb7d5b9de30c59dff7bcd0", "text": "In this paper we develop, implement and evaluate an approach to quickly reassign resources for a virtualized utility computing platform. The approach provides this platform agility using ghost virtual machines (VMs), which participate in application clusters, but do not handle client requests until needed. 
We show that our approach is applicable to and can benefit different virtualization technologies.\n We tested an implementation of our approach on two virtualization platforms with agility results showing that a sudden increase in application load could be detected and a ghost VM activated handling client load in 18 seconds. In comparison with legacy systems needing to resume VMs in the face of sharply increased demand, our approach exhibits much better performance across a set of metrics. We also found that it demonstrates competitive performance when compared with scripted resource changes based on a known workload. Finally the approach performs well when used with multiple applications exhibiting periodic workload changes.", "title": "" }, { "docid": "6403b543937832f641d98b9212d2428e", "text": "Information edge and 3 millennium predisposed so many of revolutions. Business organization with emphasize on information systems is try to gathering desirable information for decision making. Because of comprehensive change in business background and emerge of computers and internet, the business structure and needed information had change, the competitiveness as a major factor for life of organizations in information edge is preyed of information technology challenges. In this article we have reviewed in the literature of information systems and discussed the concepts of information system as a strategic tool.", "title": "" }, { "docid": "5e1b6813fc03947079ce4aade92b305b", "text": "Jonathon Leipsic MD, FSCCT Co-Chair*, Suhny Abbara MD, FSCCT, Stephan Achenbach MD, FSCCT, Ricardo Cury MD, FSCCT, James P. Earls MD, FSCCT, GB John Mancini MD, Koen Nieman MD, PhD, Gianluca Pontone MD, Gilbert L. Raff MD, FSCCT Co-Chair University of British Columbia, Vancouver, Canada University of Texas Southwestern Medical Center, Dallas, Texas University of Erlangen, Erlangen, Germany Baptist Cardiac and Vascular Institute, Miami, Florida Fairfax Radiological Consultants, PC, Fairfax, Virginia University of British Columbia, Vancouver, Canada Erasmus MC, Rotterdam, Netherlands Centro Cardiologico Monzino, Milan, Italy William Beaumont Hospital, Royal Oak, Michigan", "title": "" }, { "docid": "0680b5b340b528c7d4356b37675727a1", "text": "BCL-2 family proteins, which have either pro- or anti-apoptotic activities, have been studied intensively for the past decade owing to their importance in the regulation of apoptosis, tumorigenesis and cellular responses to anti-cancer therapy. They control the point of no return for clonogenic cell survival and thereby affect tumorigenesis and host–pathogen interactions and regulate animal development. Recent structural, phylogenetic and biological analyses, however, suggest the need for some reconsideration of the accepted organizational principles of the family and how the family members interact with one another during programmed cell death. Although these insights into interactions among BCL-2 family proteins reveal how these proteins are regulated, a unifying hypothesis for the mechanisms they use to activate caspases remains elusive.", "title": "" }, { "docid": "13c366b8a069ca6054e8123d8da24aea", "text": "The DBSCAN [1] algorithm is a popular algorithm in Data Mining field as it has the ability to mine the noiseless arbitrary shape Clusters in an elegant way. As the original DBSCAN algorithm uses the distance measures to compute the distance between objects, it consumes so much processing time and its computation complexity comes as O (N2). 
In this paper we have proposed a new algorithm to improve the performance of DBSCAN algorithm. The existing algorithms A Fast DBSCAN Algorithm[6] and Memory effect in DBSCAN algorithm[7] has been combined in the new solution to speed up the performance as well as improve the quality of the output. As the RegionQuery operation takes long time to process the objects, only few objects are considered for the expansion and the remaining missed border objects are handled differently during the cluster expansion. Eventually the performance analysis and the cluster output show that the proposed solution is better to the existing algorithms.", "title": "" }, { "docid": "b60474e6e2fa0f08241819bac709d6fd", "text": "Patriarchy is the prime obstacle to women’s advancement and development. Despite differences in levels of domination the broad principles remain the same, i.e. men are in control. The nature of this control may differ. So it is necessary to understand the system, which keeps women dominated and subordinate, and to unravel its workings in order to work for women’s development in a systematic way. In the modern world where women go ahead by their merit, patriarchy there creates obstacles for women to go forward in society. Because patriarchal institutions and social relations are responsible for the inferior or secondary status of women. Patriarchal society gives absolute priority to men and to some extent limits women’s human rights also. Patriarchy refers to the male domination both in public and private spheres. In this way, feminists use the term ‘patriarchy’ to describe the power relationship between men and women as well as to find out the root cause of women’s subordination. This article, hence, is an attempt to analyse the concept of patriarchy and women’s subordination in a theoretical perspective.", "title": "" }, { "docid": "a73ab842f63ef5f70beeb971a3ab14cb", "text": "We introduce a model-based image reconstruction framework with a convolution neural network (CNN)-based regularization prior. The proposed formulation provides a systematic approach for deriving deep architectures for inverse problems with the arbitrary structure. Since the forward model is explicitly accounted for, a smaller network with fewer parameters is sufficient to capture the image information compared to direct inversion approaches. Thus, reducing the demand for training data and training time. Since we rely on end-to-end training with weight sharing across iterations, the CNN weights are customized to the forward model, thus offering improved performance over approaches that rely on pre-trained denoisers. Our experiments show that the decoupling of the number of iterations from the network complexity offered by this approach provides benefits, including lower demand for training data, reduced risk of overfitting, and implementations with significantly reduced memory footprint. We propose to enforce data-consistency by using numerical optimization blocks, such as conjugate gradients algorithm within the network. This approach offers faster convergence per iteration, compared to methods that rely on proximal gradients steps to enforce data consistency. 
Our experiments show that the faster convergence translates to improved performance, primarily when the available GPU memory restricts the number of iterations.", "title": "" }, { "docid": "52584b104551495dc0691cf8567f33ef", "text": "A 58-year-old man presented with whitish patches on both great toenails for four weeks prior to visiting our hospital; the patches spread rapidly to other finger- and toe-nails. Prior to presentation, the patient had been diagnosed with idiopathic thrombocytopenic purpura two months ago and Kaposi's sarcoma three weeks ago. The patient was treated with human immunoglobulin for five days, and then received prednisolone 40 mg bid. Serology showed that the patient was negative for HIV and results of other laboratory tests were normal. The KOH slide preparation of the nail scraping showed long septated hyphae and numerous arthrospores. The fungus culture revealed whitish downy colonies on the front side and wine-red reverse pigmentation on Sabouraud's dextrose agar. Trichophyton rubrum was isolated on fungus culture and slide culture. The internal transcribed space (ITS) regions of ribosomal DNA of the cultured fungus were identical to Trichophyton rubrum. Proximal subungual onychomycosis (PSO) is the rarest form of onychomycosis. PSO initially presents as whitish patch(es) on the proximal side of the nail plate(s). Because PSO shows whitish to yellowish patches on the nail plate, the result of KOH examination of nail scrapings from the nail plate is almost always negative. Herein, we report on a case of multiple PSO in a patient with classic Kaposi sarcoma and suggest a method for easy KOH scraping on PSO.", "title": "" }, { "docid": "3a50df4f64df3c65fbac1727ebe7725a", "text": "Modern autonomous mobile robots require a strong understanding of their surroundings in order to safely operate in cluttered and dynamic environments. Monocular depth estimation offers a geometry-independent paradigm to detect free, navigable space with minimum space, and power consumption. These represent highly desirable features, especially for microaerial vehicles. In order to guarantee robust operation in real-world scenarios, the estimator is required to generalize well in diverse environments. Most of the existent depth estimators do not consider generalization, and only benchmark their performance on publicly available datasets after specific fine tuning. Generalization can be achieved by training on several heterogeneous datasets, but their collection and labeling is costly. In this letter, we propose a deep neural network for scene depth estimation that is trained on synthetic datasets, which allow inexpensive generation of ground truth data. We show how this approach is able to generalize well across different scenarios. In addition, we show how the addition of long short-term memory layers in the network helps to alleviate, in sequential image streams, some of the intrinsic limitations of monocular vision, such as global scale estimation, with low computational overhead. 
We demonstrate that the network is able to generalize well with respect to different real-world environments without any fine tuning, achieving comparable performance to state-of-the-art methods on the KITTI dataset.", "title": "" }, { "docid": "cadc9c709b6bd8675d11f23a87d1165b", "text": "A restricted Boltzmann machine (RBM) is often used as a building block for constructing deep neural networks and deep generative models which have gained popularity recently as one way to learn complex and large probabilistic models. In these deep models, it is generally known that the layer-wise pretraining of RBMs facilitates finding a more accurate model for the data. It is, hence, important to have an efficient learning method for RBM. The conventional learning is mostly performed using the stochastic gradients, often, with the approximate method such as contrastive divergence (CD) learning to overcome the computational difficulty. Unfortunately, training RBMs with this approach is known to be difficult, as learning easily diverges after initial convergence. This difficulty has been reported recently by many researchers. This thesis contributes important improvements that address the difficulty of training RBMs. Based on an advanced Markov-Chain Monte-Carlo sampling method called parallel tempering (PT), the thesis proposes a PT learning which can replace CD learning. In terms of both the learning performance and the computational overhead, PT learning is shown to be superior to CD learning through various experiments. The thesis also tackles the problem of choosing the right learning parameter by proposing a new algorithm, the adaptive learning rate, which is able to automatically choose the right learning rate during learning. A closer observation into the update rules suggested that learning by the traditional update rules is easily distracted depending on the representation of data sets. Based on this observation, the thesis proposes a new set of gradient update rules that are more robust to the representation of training data sets and the learning parameters. Extensive experiments on various data sets confirmed that the proposed rules indeed improve learning significantly. Additionally, a Gaussian-Bernoulli RBM (GBRBM) which is a variant of an RBM that can learn continuous real-valued data sets is reviewed, and the proposed improvements are tested upon it. The experiments showed that the improvements could also be made for GBRBMs. It is impossible for me to express my appreciation to Dr. Alexander Ilin and Dr. Tapani Raiko enough for their enormous help. If it were not them, this thesis would not have been made possible. I would like to thank my fellow MACADAMIA students and wish best for them all. Also, my three Korean friends–Byungjin Cho, Sungin Cho, and Eunah Cho– in Finland who happen to share the same last name with me have …", "title": "" } ]
scidocsrr
68cf3f9fcf45ece00ff4385ad0820509
Autofocus survey: a comparison of algorithms
[ { "docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb", "text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.", "title": "" } ]
[ { "docid": "bd3717bd46869b9be3153478cbd19f2a", "text": "The study was conducted to assess the effectiveness of jasmine oil massage on labour pain during first stage of labour among 40 primigravida women. The study design adopted was true experimental approach with pre-test post-test control group design. The demographic Proforma were collected from the women by interview and Visual analogue scale was used to measure the level of labour pain in both the groups. Data obtained in these areas were analysed by descriptive and inferential statistics. A significant difference was found in the experimental group( t 9.869 , p<0.05) . A significant difference was found between experimental group and control group. cal", "title": "" }, { "docid": "646f6456904a6ffe968c0f79a5286f65", "text": "Both ray tracing and point-based representations provide means to efficiently display very complex 3D models. Computational efficiency has been the main focus of previous work on ray tracing point-sampled surfaces. For very complex models efficient storage in the form of compression becomes necessary in order to avoid costly disk access. However, as ray tracing requires neighborhood queries, existing compression schemes cannot be applied because of their sequential nature. This paper introduces a novel acceleration structure called the quantized kd-tree, which offers both efficient traversal and storage. The gist of our new representation lies in quantizing the kd-tree splitting plane coordinates. We show that the quantized kd-tree reduces the memory footprint up to 18 times, not compromising performance. Moreover, the technique can also be employed to provide LOD (level-of-detail) to reduce aliasing problems, with little additional storage cost", "title": "" }, { "docid": "5f9e86aea387a69fdf81fbfe27ea731b", "text": "Antilock brake system (ABS) technology with powerful electronic components has shown superior braking performance to conventional vehicles on test tracks. On real highways, however, the performance of the ABS-equipped car has been disappointing. The poor braking performance with ABS has resulted from the questionable design of the human-machine interface for the brake system. The goal of this study is to design brake systems that provide more intuitive brake control and proper braking-performance information for the driver. In this study automotive brake systems are modeled as a type of master-slave telemanipulator. Human force-displacement interaction at the brake pedal has a strong effect on braking performance. As a preliminary study in brake-system design, the characteristics of human leg motion and its underlying motor-control scheme are studied through experiments and simulations, and a model of braking motion by the driver's leg is developed. This paper proposes novel brake systems based on two new aspects. First, the mechanical impedance characteristics of the leg action of the driver are taken into consideration in designing the brake systems. Second, the brake systems provide the driver with kinesthetic feedback of braking conditions or performance. The effectiveness of the proposed designs in a combined driver-vehicle system is investigated using driving simulation.", "title": "" }, { "docid": "f462cb7fb501c561dea600ca6e815ff2", "text": "This study assessed the role of rape myth acceptance (RMA) and situational factors in the perception of three different rape scenarios (date rape, marital rape, and stranger rape). 
One hundred and eighty-two psychology undergraduates were asked to emit four judgements about each rape situation: victim responsibility, perpetrator responsibility, intensity of trauma, and likelihood to report the crime to the police. It was hypothesized that neither RMA nor situational factors alone can explain how rape is perceived; it is the interaction between these two factors that best account for social reactions to sexual aggression. The results generally supported the authors' hypothesis: Victim blame, estimation of trauma, and the likelihood of reporting the crime to the police were best explained by the interaction between observer characteristics, such as RMA, and situational clues. That is, the less stereotypic the rape situation was, the greater was the influence of attitudes toward rape on attributions.", "title": "" }, { "docid": "114492ca2cef179a39b5ad5edbc80de0", "text": "We review early and recent psychological theories of dehumanization and survey the burgeoning empirical literature, focusing on six fundamental questions. First, we examine how people are dehumanized, exploring the range of ways in which perceptions of lesser humanness have been conceptualized and demonstrated. Second, we review who is dehumanized, examining the social targets that have been shown to be denied humanness and commonalities among them. Third, we investigate who dehumanizes, notably the personality, ideological, and other individual differences that increase the propensity to see others as less than human. Fourth, we explore when people dehumanize, focusing on transient situational and motivational factors that promote dehumanizing perceptions. Fifth, we examine the consequences of dehumanization, emphasizing its implications for prosocial and antisocial behavior and for moral judgment. Finally, we ask what can be done to reduce dehumanization. We conclude with a discussion of limitations of current scholarship and directions for future research.", "title": "" }, { "docid": "af013d9eb2365034f587e407b6824540", "text": "Marching Cubes is the most frequently used method to reconstruct isosurface from a point cloud. However, the point clouds are getting denser and denser, thus the efficiency of Marching cubes method has become an obstacle. This paper presents a novel GPU-based parallel surface reconstruction algorithm. The algorithm firstly creates a GPU-based uniform grid structure to manage point cloud. Then directed distances from vertices of cubes to the point cloud are computed in a newly put forwarded parallel way. Finally, after the generation of triangles, a space indexing scheme is adopted to reconstruct the connectivity of the resulted surface. The results show that our algorithm can run more than 10 times faster compared to the CPU-based implementations.", "title": "" }, { "docid": "bc9f689b59e04502a3a44006be6183e8", "text": "With the recent development and availability of wide bandgap devices in the market, more and more power converters are being designed with such devices. Given their fast commutation, when compared to their equivalent Si-based counterparts, these new devices allow increasing the converter's efficiency and/or power density. However, in order to fully avail these new devices, one should precisely know their switching characteristics and exploit it the best way possible. This paper recalls our own precise method to measure separately turn-on and turn-off energies of wide bandgap devices. 
This method is applied to commercially available SiC and GaN transistors and results show that they present much lower turn-off than turn-on energies. For that reason, we show that a SiC-based buck converter must have high current ripple in the output filter inductor in order to decrease transistor losses. Analysis of these losses as well as experimental results are presented. Finally, the precise design of a 2-kW SiC-based buck converter for aircraft applications is performed for different current ripples and switching frequencies. We show that current ripple higher than 250% of the dc load current significantly decreases the converter's losses, and consequently allows the increase of the switching frequency, which reduces the system volume and weight.", "title": "" }, { "docid": "73be22065cb341d25969c8fc5e833dc7", "text": "The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and provides generally a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences. This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a., detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters and the gap between information-theoretic and computational thresholds. The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, classical and nonbacktracking spectral methods. A few open problems are also discussed.", "title": "" }, { "docid": "fccf3c5ed97b76594e94dd3d9869236b", "text": "For an automatic speech recognition system to produce sensibly formatted, readable output, the spoken-form token sequence produced by the core speech recognizer must be converted to a written-form string. This process is known as inverse text normalization (ITN). Here we present a mostly data-driven ITN system that leverages a set of simple rules and a few handcrafted grammars to cast ITN as a labeling problem. To this labeling problem, we apply a compact bi-directional LSTM. We show that the approach performs well using practical amounts of training data.", "title": "" }, { "docid": "9c43da9473facdecda86442e157736db", "text": "The soaring demand for intelligent mobile applications calls for deploying powerful deep neural networks (DNNs) on mobile devices. However, the outstanding performance of DNNs notoriously relies on increasingly complex models, which in turn is associated with an increase in computational expense far surpassing mobile devices’ capacity. What is worse, app service providers need to collect and utilize a large volume of users’ data, which contain sensitive information, to build the sophisticated DNN models. Directly deploying these models on public mobile devices presents prohibitive privacy risk. To benefit from the on-device deep learning without the capacity and privacy concerns, we design a private model compression framework RONA. 
Following the knowledge distillation paradigm, we jointly use hint learning, distillation learning, and self learning to train a compact and fast neural network. The knowledge distilled from the cumbersome model is adaptively bounded and carefully perturbed to enforce differential privacy. We further propose an elegant query sample selection method to reduce the number of queries and control the privacy loss. A series of empirical evaluations as well as the implementation on an Android mobile device show that RONA can not only compress cumbersome models efficiently but also provide a strong privacy guarantee. For example, on SVHN, when a meaningful (9.83, 10−6)-differential privacy is guaranteed, the compact model trained by RONA can obtain 20× compression ratio and 19× speed-up with merely 0.97% accuracy loss.", "title": "" }, { "docid": "34ab20699d12ad6cca34f67cee198cd9", "text": "Such as relational databases, most graphs databases are OLTP databases (online transaction processing) of generic use and can be used to produce a wide range of solutions. That said, they shine particularly when the solution depends, first, on our understanding of how things are connected. This is more common than one may think. And in many cases it is not only how things are connected but often one wants to know something about the different relationships in our field their names, qualities, weight and so on. Briefly, connectivity is the key. The graphs are the best abstraction one has to model and query the connectivity; databases graphs in turn give developers and the data specialists the ability to apply this abstraction to their specific problems. For this purpose, in this paper one used this approach to simulate the route planner application, capable of querying connected data. Merely having keys and values is not enough; no more having data partially connected through joins semantically poor. We need both the connectivity and contextual richness to operate these solutions. The case study herein simulates a railway network railway stations connected with one another where each connection between two stations may have some properties. And one answers the question: how to find the optimized route (path) and know whether a station is reachable from one station or not and in which depth.", "title": "" }, { "docid": "80bb8f4af70a6c0b6dc5fd149c128154", "text": "The skin care product market is growing due to the threat of ultraviolet (UV) radiation caused by the destruction of the ozone layer, increasing demand for tanning, and the tendency to wear less clothing. Accordingly, there is a potential demand for a personalized UV monitoring device, which can play a fundamental role in skin cancer prevention by providing measurements of UV radiation intensities and corresponding recommendations. This paper highlights the development and initial validation of a wireless and portable embedded device for personalized UV monitoring which is based on a novel software architecture, a high-end UV sensor, and conventional PDA (or a cell phone). In terms of short-term applications, by calculating the UV index, it informs the users about their maximum recommended sun exposure time by taking their skin type and sun protection factor (SPF) of the applied sunscreen into consideration. 
As for long-term applications, given that the damage caused by UV light is accumulated over days, it displays the amount of UV received over a certain course of time, from a single day to a month.", "title": "" }, { "docid": "7ca62c2da424c826744bca7196f07def", "text": "Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction a novel ‘fact-based’ visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep network techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and use a graph convolutional network to ‘reason’ about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art.", "title": "" }, { "docid": "03466b16fdc3829280914c52e20a239a", "text": "With the eruption of online social networks, like Twitter and Facebook, a series of new APIs have appeared to allow access to the data that these new sources of information accumulate. One of most popular online social networks is the micro-blogging site Twitter. Its APIs allow many machines to access the torrent simultaneously to Twitter data, listening to tweets and accessing other useful information such as user profiles. A number of tools have appeared for processing Twitter data with different algorithms and for different purposes. In this paper T-Hoarder is described: a framework that enables tweet crawling, data filtering, and which is also able to display summarized and analytical information about the Twitter activity with respect to a certain topic or event in a web-page. This information is updated on a daily basis. The tool has been validated with real use-cases that allow making a series of analysis on the performance one may expect from this type of infrastructure.", "title": "" }, { "docid": "84e8986eff7cb95808de8df9ac286e37", "text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. 
The code has been made publicly available in the MLOSS repository.", "title": "" }, { "docid": "2b03868a73808a0135547427112dcaf8", "text": "In this article we focus attention on ethnography’s place in CSCW by reflecting on how ethnography in the context of CSCW has contributed to our understanding of the sociality and materiality of work and by exploring how the notion of the ‘field site’ as a construct in ethnography provides new ways of conceptualizing ‘work’ that extends beyond the workplace. We argue that the well known challenges of drawing design implications from ethnographic research have led to useful strategies for tightly coupling ethnography and design. We also offer some thoughts on recent controversies over what constitutes useful and proper ethnographic research in the context of CSCW. Finally, we argue that as the temporal and spatial horizons of inquiry have expanded, along with new domains of collaborative activity, ethnography continues to provide invaluable perspectives.", "title": "" }, { "docid": "9f13ba2860e70e0368584bb4c36d01df", "text": "Network log messages (e.g., syslog) are expected to be valuable and useful information to detect unexpected or anomalous behavior in large scale networks. However, because of the huge amount of system log data collected in daily operation, it is not easy to extract pinpoint system failures or to identify their causes. In this paper, we propose a method for extracting the pinpoint failures and identifying their causes from network syslog data. The methodology proposed in this paper relies on causal inference that reconstructs causality of network events from a set of time series of events. Causal inference can filter out accidentally correlated events, thus it outputs more plausible causal events than traditional cross-correlation-based approaches can. We apply our method to 15 months’ worth of network syslog data obtained from a nationwide academic network in Japan. The proposed method significantly reduces the number of pseudo correlated events compared with the traditional methods. Also, through three case studies and comparison with trouble ticket data, we demonstrate the effectiveness of the proposed method for practical network operation.", "title": "" }, { "docid": "51e2f490072820230d71f648d70babcb", "text": "Classification and regression trees are becoming increasingly popular for partitioning data and identifying local structure in small and large datasets. Classification trees include those models in which the dependent variable (the predicted variable) is categorical. Regression trees include those in which it is continuous. This paper discusses pitfalls in the use of these methods and highlights where they are especially suitable. Paper presented at the 1992 Sun Valley, ID, Sawtooth/SYSTAT Joint Software Conference.", "title": "" }, { "docid": "6b200a2fe32af23d40fd45d340435892", "text": "Otocephaly, characterized by mandibular hypoplasia or agnathia, ventromedial auricular malposition (melotia) and/or auricular fusion (synotia), and microstomia with oroglossal hypoplasia or aglossia, is an extremely rare anomalad, identified in less than 1 in 70,000 births. The malformation spectrum is essentially lethal, because of ventilatory problems, and represents a developmental field defect of blastogenesis primarily affecting thefirst branchial arch derivatives. Holoprosencephaly is the most commonly identified association, but skeletal, genitourinary, and cardiovascular anomalies, and situs inversus have been reported. 
Polyhydramnios may be the presenting feature, but prenatal diagnosis has been uncommon. We present five new cases of otocephaly, the largest published series to date, with comprehensive review of the literature and an update of research in the etiopathogenesis of this malformation complex. One of our cases had situs inversus, and two presented with unexplained polyhydramnios. Otocephaly, while quite rare, should be considered in the differential diagnosis of this gestational complication.", "title": "" }, { "docid": "424f871e0e2eabf8b1e636f73d0b1c7d", "text": "Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.", "title": "" } ]
scidocsrr
657a221698b7b78cc4ded97765ac72ad
FPGA-Based Test-Bench for Resonant Inverter Load Characterization
[ { "docid": "87f0810dde0447cea2cff24149b49e0a", "text": "The design of new power-converter solutions optimized for specific applications requires, at a certain step, the design and implementation of several prototypes in order to verify the converter operation. This is a time-consuming task which also involves a significant economical cost. The aim of this paper is to present a versatile power electronics architecture which provides a tool to make the implementation and evaluation of new power converters straightforward. The adopted platform includes a versatile control architecture and a modular power electronics hardware solution. The control architecture is a field-programmable-gate-array-based system-on-programmable-chip solution which combines the advantages of the processor-firmware versatility and the effectiveness of ad hoc paralleled digital hardware. Moreover, the modular power electronics hardware provides a fast method to reconfigure the power-converter topology. The architecture proposed in this paper has been applied to the development of power converters for domestic induction heating, although it can be extended to other applications with similar requirements. A complete development test bench has been carried out, and some experimental results are shown in order to verify the proper system operation.", "title": "" }, { "docid": "9123ff1c2e6c52bf9a16a6ed4c67f151", "text": "Domestic induction cookers operation is based on a resonant inverter which supplies medium-frequency currents (20-100 kHz) to an inductor, which heats up the pan. The variable load that is inherent to this application requires the use of a reliable and load-adaptive control algorithm. In addition, a wide output power range is required to get a satisfactory user performance. In this paper, a control algorithm to cover the variety of loads and the output power range is proposed. The main design criteria are efficiency, power balance, acoustic noise, flicker emissions, and user performance. As a result of the analysis, frequency limit and power level limit algorithms are proposed based on square wave and pulse density modulations. These have been implemented in a field-programmable gate array, including output power feedback and mains-voltage zero-cross-detection circuitry. An experimental verification has been performed using a commercial induction heating inverter. This provides a convenient experimental test bench to analyze the viability of the proposed algorithm.", "title": "" } ]
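The second passage above describes an induction-heating control scheme that combines square-wave operation with pulse density modulation (PDM), implemented on an FPGA with output-power feedback and mains zero-cross detection. The sketch below is only a rough illustration of the PDM idea, not the authors' frequency-limit or power-level-limit algorithms: a first-order sigma-delta style pulse-density modulator decides, cycle by cycle, whether the resonant inverter is gated on. The names power_ratio and num_cycles are assumptions introduced for this example.

```python
def pulse_density_schedule(power_ratio, num_cycles):
    """Illustrative first-order sigma-delta pulse-density modulator.

    power_ratio : desired fraction of switching cycles in which the
                  inverter delivers a resonant pulse (0.0 to 1.0).
    num_cycles  : number of switching cycles to schedule.
    Returns a list of booleans, True meaning the inverter is gated on.
    """
    schedule = []
    accumulator = 0.0
    for _ in range(num_cycles):
        accumulator += power_ratio
        if accumulator >= 1.0:
            accumulator -= 1.0
            schedule.append(True)    # deliver a pulse this cycle
        else:
            schedule.append(False)   # skip this cycle
    return schedule


# Example: about 30% of 20 cycles are active, spread as evenly as possible,
# which is the property that helps keep acoustic noise and flicker low at
# reduced output power.
print(pulse_density_schedule(0.3, 20))
```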
[ { "docid": "99e604a84b6d56d2f42efe7b0a2ddec8", "text": "This work aims at providing a RLCG modeling ofthe 10 µm fine-pitch microbump type interconnects in the 100 MHz-40 GHz frequency band based on characterization approach. RF measurements are performed on two-port test structures within a short-loop with chip to wafer assembly using the fine pitch 10 µm Cu-pillar on a 10 Ohm.cm substrate resistivity silicon interposer. Accuracy is obtained thanks to a coplanar transmission line using 44 Cu-pillar transitions. To the author knowledge, it is the first time that RLCG modeling of fine-pitch Cu-pillar is extracted from experimental results. Another goal of this work is to get a better understanding of the main physical effects over a wide frequency range, especially concerning the key parameter of fine pitch Cu-pillar, i.e. the resistance. Finally, analysis based on the proposed RLCG modeling are performed to optimize over frequency the resistive interposer-to-chip link thanks to process modifications mitigating high frequency parasitic effects.", "title": "" }, { "docid": "d4cf614c352b3bbef18d7f219a3da2d1", "text": "In recent years there has been growing interest on the occurrence and the fate of pharmaceuticals in the aquatic environment. Nevertheless, few data are available covering the fate of the pharmaceuticals in the water/sediment compartment. In this study, the environmental fate of 10 selected pharmaceuticals and pharmaceutical metabolites was investigated in water/sediment systems including both the analysis of water and sediment. The experiments covered the application of four 14C-labeled pharmaceuticals (diazepam, ibuprofen, iopromide, and paracetamol) for which radio-TLC analysis was used as well as six nonlabeled compounds (carbamazepine, clofibric acid, 10,11-dihydro-10,11-dihydroxycarbamazepine, 2-hydroxyibuprofen, ivermectin, and oxazepam), which were analyzed via LC-tandem MS. Ibuprofen, 2-hydroxyibuprofen, and paracetamol displayed a low persistence with DT50 values in the water/sediment system < or =20 d. The sediment played a key role in the elimination of paracetamol due to the rapid and extensive formation of bound residues. A moderate persistence was found for ivermectin and oxazepam with DT50 values of 15 and 54 d, respectively. Lopromide, for which no corresponding DT50 values could be calculated, also exhibited a moderate persistence and was transformed into at least four transformation products. For diazepam, carbamazepine, 10,11-dihydro-10,11-dihydroxycarbamazepine, and clofibric acid, system DT90 values of >365 d were found, which exhibit their high persistence in the water/sediment system. An elevated level of sorption onto the sediment was observed for ivermectin, diazepam, oxazepam, and carbamazepine. Respective Koc values calculated from the experimental data ranged from 1172 L x kg(-1) for ivermectin down to 83 L x kg(-1) for carbamazepine.", "title": "" }, { "docid": "f69b9816e8f8716d12eaa43e3d1222f4", "text": "BACKGROUND\nIn 1986, the European Organization for Research and Treatment of Cancer (EORTC) initiated a research program to develop an integrated, modular approach for evaluating the quality of life of patients participating in international clinical trials.\n\n\nPURPOSE\nWe report here the results of an international field study of the practicality, reliability, and validity of the EORTC QLQ-C30, the current core questionnaire. 
The QLQ-C30 incorporates nine multi-item scales: five functional scales (physical, role, cognitive, emotional, and social); three symptom scales (fatigue, pain, and nausea and vomiting); and a global health and quality-of-life scale. Several single-item symptom measures are also included.\n\n\nMETHODS\nThe questionnaire was administered before treatment and once during treatment to 305 patients with nonresectable lung cancer from centers in 13 countries. Clinical variables assessed included disease stage, weight loss, performance status, and treatment toxicity.\n\n\nRESULTS\nThe average time required to complete the questionnaire was approximately 11 minutes, and most patients required no assistance. The data supported the hypothesized scale structure of the questionnaire with the exception of role functioning (work and household activities), which was also the only multi-item scale that failed to meet the minimal standards for reliability (Cronbach's alpha coefficient > or = .70) either before or during treatment. Validity was shown by three findings. First, while all interscale correlations were statistically significant, the correlation was moderate, indicating that the scales were assessing distinct components of the quality-of-life construct. Second, most of the functional and symptom measures discriminated clearly between patients differing in clinical status as defined by the Eastern Cooperative Oncology Group performance status scale, weight loss, and treatment toxicity. Third, there were statistically significant changes, in the expected direction, in physical and role functioning, global quality of life, fatigue, and nausea and vomiting, for patients whose performance status had improved or worsened during treatment. The reliability and validity of the questionnaire were highly consistent across the three language-cultural groups studied: patients from English-speaking countries, Northern Europe, and Southern Europe.\n\n\nCONCLUSIONS\nThese results support the EORTC QLQ-C30 as a reliable and valid measure of the quality of life of cancer patients in multicultural clinical research settings. Work is ongoing to examine the performance of the questionnaire among more heterogenous patient samples and in phase II and phase III clinical trials.", "title": "" }, { "docid": "adf0a2cad66a7e48c16f02ef1bc4e9da", "text": "Recently, several techniques have been explored to detect unusual behaviour in surveillance videos. Nevertheless, few studies leverage features from pre-trained CNNs and none of then present a comparison of features generate by different models. Motivated by this gap, we compare features extracted by four state-of-the-art image classification networks as a way of describing patches from security video frames. We carry out experiments on the Ped1 and Ped2 datasets and analyze the usage of different feature normalization techniques. Our results indicate that choosing the appropriate normalization is crucial to improve the anomaly detection performance when working with CNN features. Also, in the Ped2 dataset our approach was able to obtain results comparable to the ones of several state-of-the-art methods. 
Lastly, as our method only considers the appearance of each frame, we believe that it can be combined with approaches that focus on motion patterns to further improve performance.", "title": "" }, { "docid": "1cbd13de915d2a4cedd736345ebb2134", "text": "This paper deals with the design and implementation of a nonlinear control algorithm for the attitude tracking of a four-rotor helicopter known as quadrotor. This algorithm is based on the second order sliding mode technique known as Super-Twisting Algorithm (STA) which is able to ensure robustness with respect to bounded external disturbances. In order to show the effectiveness of the proposed controller, experimental tests were carried out on a real quadrotor. The obtained results show the good performance of the proposed controller in terms of stabilization, tracking and robustness with respect to external disturbances.", "title": "" }, { "docid": "a6d26826ee93b3b5dec8282d0c632f8e", "text": "Superficial Acral Fibromyxoma is a rare tumor of soft tissues. It is a relatively new entity described in 2001 by Fetsch et al. It probably represents a fibrohistiocytic tumor with less than 170 described cases. We bring a new case of SAF on the 5th toe of the right foot, in a 43-year-old woman. After surgical excision with safety margins which included the nail apparatus, it has not recurred (22 months of follow up). We carried out a review of the location of all SAF published up to the present day.", "title": "" }, { "docid": "f3278416976069448fd7e6d0ea797dc6", "text": "Data Type (ADT), 45 abstraction mechanisms, 134 active sever pages, 252 affine transformation, 227 aggregation, 83, 125 anaglyphic stereo, 223 Apache HTTP, 250 ArcView 3D Analyst, 18 association, 84 ATKIS, 73 AutoCad, 2 AVS, 149 backward pass, 173 Bentley, 252 boolean, 199 Borgefors DT, 153 Boundary Representation (BR), 55 Boundary representation (B-rep), 17 CAD, 1, 4, 224 cartesian coordinate, 41 cell Complex, 66 CGI, 247 chamfer 3-4, 154 chamfer 3-4-5, 172 chamfer 5-7-11, 154 Classification, 82, 118, 135 Client-Server, 232 COBRA, 249 computer graphics, 224 conceptual data model, 45 conceptual design, 48 constrained triangulation, 94, 210 Constructive Solid Geometry (CSG), 13, 17, 55 Contouring, 190 contouring algorithm, 190 Cortona, 251 CSG, 34 DB2, 236 dBASE, 109 DBMS, 46, 228 Delaunay triangulation, 164 DEMViewer, 246 dependency diagram, 111 depth sorting algorithm, 224", "title": "" }, { "docid": "1f2f6aab0e3c813392ecab46cdc171b5", "text": "Theory of mind (ToM) refers to the ability to represent one's own and others' cognitive and affective mental states. Recent imaging studies have aimed to disentangle the neural networks involved in cognitive as opposed to affective ToM, based on clinical observations that the two can functionally dissociate. Due to large differences in stimulus material and task complexity findings are, however, inconclusive. Here, we investigated the neural correlates of cognitive and affective ToM in psychologically healthy male participants (n = 39) using functional brain imaging, whereby the same set of stimuli was presented for all conditions (affective, cognitive and control), but associated with different questions prompting either a cognitive or affective ToM inference. Direct contrasts of cognitive versus affective ToM showed that cognitive ToM recruited the precuneus and cuneus, as well as regions in the temporal lobes bilaterally. 
Affective ToM, in contrast, involved a neural network comprising prefrontal cortical structures, as well as smaller regions in the posterior cingulate cortex and the basal ganglia. Notably, these results were complemented by a multivariate pattern analysis (leave one study subject out), yielding a classifier with an accuracy rate of more than 85% in distinguishing between the two ToM-conditions. The regions contributing most to successful classification corresponded to those found in the univariate analyses. The study contributes to the differentiation of neural patterns involved in the representation of cognitive and affective mental states of others.", "title": "" }, { "docid": "b4f47ddd8529fe3859869b9e7c85bb2f", "text": "This paper studies the problem of building text classifiers using positive and unlabeled examples. The key feature of this problem is that there is no negative example for learning. Recently, a few techniques for solving this problem were proposed in the literature. These techniques are based on the same idea, which builds a classifier in two steps. Each existing technique uses a different method for each step. In this paper, we first introduce some new methods for the two steps, and perform a comprehensive evaluation of all possible combinations of methods of the two steps. We then propose a more principled approach to solving the problem based on a biased formulation of SVM, and show experimentally that it is more accurate than the existing techniques.", "title": "" }, { "docid": "74fb6f153fe8d6f8eac0f18c1040a659", "text": "The DAVID Gene Functional Classification Tool http://david.abcc.ncifcrf.gov uses a novel agglomeration algorithm to condense a list of genes or associated biological terms into organized classes of related genes or biology, called biological modules. This organization is accomplished by mining the complex biological co-occurrences found in multiple sources of functional annotation. It is a powerful method to group functionally related genes and terms into a manageable number of biological modules for efficient interpretation of gene lists in a network context.", "title": "" }, { "docid": "b4ae619b0b9cc966622feb2dceda0f2e", "text": "A novel pressure sensing circuit for non-invasive RF/microwave blood glucose sensors is presented in this paper. RF sensors are of interest to researchers for measuring blood glucose levels non-invasively. For the measurements, the finger is a popular site that has a good amount of blood supply. When a finger is placed on top of the RF sensor, the electromagnetic fields radiating from the sensor interact with the blood in the finger and the resulting sensor response depends on the permittivity of the blood. The varying glucose level in the blood results in a permittivity change causing a shift in the sensor's response. Therefore, by observing the sensor's frequency response it may be possible to predict the blood glucose level. However, there are two crucial points in taking and subsequently predicting the blood glucose level. These points are; the position of the finger on the sensor and the pressure applied onto the sensor. A variation in the glucose level causes a very small frequency shift. However, finger positioning and applying inconsistent pressure have more pronounced effect on the sensor response. For this reason, it may not be possible to take a correct reading if these effects are not considered carefully. 
Two novel pressure sensing circuits are proposed and presented in this paper to accurately monitor the pressure applied.", "title": "" }, { "docid": "52318d0743e2a6ec215076efde8cd21c", "text": "We survey the recent wave of extensions to the popular map-reduce systems, including those that have begun to address the implementation of recursive queries using the same computing environment as map-reduce. A central problem is that recursive tasks cannot deliver their output only at the end, which makes recovery from failures much more complicated than in map-reduce and its nonrecursive extensions. We propose several algorithmic ideas for efficient implementation of recursions in the map-reduce environment and discuss several alternatives for supporting recovery from failures without restarting the entire job.", "title": "" }, { "docid": "4520cafacd4794ec942030252652ae7c", "text": "While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. OPEN ACCESS Sensors 2014, 14 18852", "title": "" }, { "docid": "323e37bdf09bb65d232eb7e78360e77a", "text": "Breast cancer is a heterogeneous disease that can be subdivided into clinical, histopathological and molecular subtypes (luminal A-like, luminal B-like/HER2-negative, luminal B-like/HER2-positive, HER2-positive, and triple-negative). The study of new molecular factors is essential to obtain further insights into the mechanisms involved in the tumorigenesis of each tumor subtype. RASSF2 is a gene that is hypermethylated in breast cancer and whose clinical value has not been previously studied. The hypermethylation of RASSF1 and RASSF2 genes was analyzed in 198 breast tumors of different subtypes. The effect of the demethylating agent 5-aza-2'-deoxycytidine in the re-expression of these genes was examined in triple-negative (BT-549), HER2 (SK-BR-3), and luminal cells (T-47D). Different patterns of RASSF2 expression for distinct tumor subtypes were detected by immunohistochemistry. RASSF2 hypermethylation was much more frequent in luminal subtypes than in non-luminal tumors (p = 0.001). 
The re-expression of this gene by lentiviral transduction contributed to the differential cell proliferation and response to antineoplastic drugs observed in luminal compared with triple-negative cell lines. RASSF2 hypermethylation is associated with better prognosis in multivariate statistical analysis (P = 0.039). In conclusion, RASSF2 gene is differently methylated in luminal and non-luminal tumors and is a promising suppressor gene with clinical involvement in breast cancer.", "title": "" }, { "docid": "38808b99d3aa8f08ea9164ee30ed53ca", "text": "This paper presents two novel microstrip-to-slotline baluns. Their design is based on the slotted microstrip cross-junction and its multi-mode equivalent circuit model, i.e., each slotted microstrip supports two modes that have even and odd symmetry. The first balun is a modified version of the conventional 90° via-less microstrip to slotline one with different microstrip and slotline impedances. The 3 dB bandwidth is 2.44 GHz and the minimum insertion loss is 0.5 dB at 2.4 GHz. The second balun is a via-less straight microstrip-to-slotline one that has 3 dB bandwidth of 2.29 GHz and minimum insertion loss of 0.46 dB at 2.4 GHz. Theoretical predictions have been confirmed by EM simulations and measurements.", "title": "" }, { "docid": "54ad1c4a7a6fcb858ad18029fdbbef24", "text": "We can often detect from a person’s utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest—their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.", "title": "" }, { "docid": "a085131dda55d95a52fa0d4653f77410", "text": "Numerous studies show that happy individuals are successful across multiple life domains, including marriage, friendship, income, work performance, and health. The authors suggest a conceptual model to account for these findings, arguing that the happiness-success link exists not only because success makes people happy, but also because positive affect engenders success. Three classes of evidence--crosssectional, longitudinal, and experimental--are documented to test their model. Relevant studies are described and their effect sizes combined meta-analytically. The results reveal that happiness is associated with and precedes numerous successful outcomes, as well as behaviors paralleling success. Furthermore, the evidence suggests that positive affect--the hallmark of well-being--may be the cause of many of the desirable characteristics, resources, and successes correlated with happiness. 
Limitations, empirical issues, and important future research questions are discussed.", "title": "" }, { "docid": "ae1e110d99dee36a37be3e89b4839bd0", "text": "We describe two techniques for rendering isosurfaces in multiresolution volume data such that the uncertainty (error) in the data is shown in the resulting visualization. In general the visualization of uncertainty in data is difficult, but the nature of isosurface rendering makes it amenable to an effective solution. In addition to showing the error in the data used to generate the isosurface, we also show the value of an additional data variate on the isosurface. The results combine multiresolution and uncertainty visualization techniques into a hybrid approach. Our technique is applied to multiresolution examples from the medical domain.", "title": "" } ]
scidocsrr
c05f671a7e22a0a956dff880aac564a1
Proposing the Hedonic Affect Model (HAM) to Explain how Stimuli and Performance Expectations Predict Affect in Individual and Group Hedonic Systems Use
[ { "docid": "ce3d81c74ef3918222ad7d2e2408bdb0", "text": "This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology.\nA key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies.\nSection 3 summarizes ways of applying a coordination perspective in three different domains:(1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.", "title": "" }, { "docid": "705b2a837b51ac5e354e1ec0df64a52a", "text": "BACKGROUND\nGeneralized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in general population and the severe limitations it causes, point out the necessity to find new efficient strategies to treat it. Together with the cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to be learned. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD.\n\n\nMETHODS/DESIGN\nThe trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support, and patients in the WL group will not receive any kind of relaxation training. 
Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables.\n\n\nCONCLUSION\nWe argue that the use of VR for relaxation represents a promising approach in the treatment of GAD since it enhances the quality of the relaxing experience through the elicitation of the sense of presence. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias.\n\n\nTRIAL REGISTRATION\nNCT00602212 (ClinicalTrials.gov).", "title": "" } ]
[ { "docid": "400cf64eb8dd062237f6a63a1781083f", "text": "A practical step-by-step guide to wavelet analysis is given, with examples taken from time series of the El Niño– Southern Oscillation (ENSO). The guide includes a comparison to the windowed Fourier transform, the choice of an appropriate wavelet basis function, edge effects due to finite-length time series, and the relationship between wavelet scale and Fourier frequency. New statistical significance tests for wavelet power spectra are developed by deriving theoretical wavelet spectra for white and red noise processes and using these to establish significance levels and confidence intervals. It is shown that smoothing in time or scale can be used to increase the confidence of the wavelet spectrum. Empirical formulas are given for the effect of smoothing on significance levels and confidence intervals. Extensions to wavelet analysis such as filtering, the power Hovmöller, cross-wavelet spectra, and coherence are described. The statistical significance tests are used to give a quantitative measure of changes in ENSO variance on interdecadal timescales. Using new datasets that extend back to 1871, the Niño3 sea surface temperature and the Southern Oscillation index show significantly higher power during 1880–1920 and 1960–90, and lower power during 1920–60, as well as a possible 15-yr modulation of variance. The power Hovmöller of sea level pressure shows significant variations in 2–8-yr wavelet power in both longitude and time. Corresponding author address: Dr. Christopher Torrence, Advanced Study Program, National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307-3000. E-mail: torrence@ucar.edu In final form 20 October 1997. ©1998 American Meteorological Society", "title": "" }, { "docid": "56a35139eefd215fe83811281e4e2279", "text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f95f65ae362ceeaaa924c33e02899553", "text": "The massive proliferation of affordable computers, Internet broadband connectivity and rich education content has created a global phenomenon in which information an d communication technology (ICT) is being used to tra nsform education. Therefore, there is a need to redesign t he educational system to meet the needs better. The advent of comp uters with sophisticated software has made it possible to solv e many complex problems very fast and at a lower cost. 
This paper introduces the characteristics of the current E-Learning and then analyses the concept of cloud computing and describes the archit e ture of cloud computing platform by combining the features of E-L earning. The authors have tried to introduce cloud computing to e-learning, build an e-learning cloud, and make an active research an d exploration for it from the following aspects: architecture, const ruc ion method and external interface with the model. Keywords—Architecture, Cloud Computing, E-learning, Information Technology", "title": "" }, { "docid": "c5021fd377f1d7ebd8f87fb114ed07d9", "text": "In this essay a new theory of stress and linguistic rhythm will be elaborated, based on the proposals of Liberman (1975).' It will be argued that certain features of prosodic systems like that of English, in particular the phenomenon of \"stress subordination\", are not to be referred primarily to the properties of individual segments (or syllables), but rather reflect a hierarchical rhythmic structuring that organizes the syllables, words, and syntactic phrases of a sentence. The character of this structuring, properly understood, will give fresh insight into phenomena that have been apprehended in terms of the phonological cycle, the stress-subordination convention, the theory of disjunctive ordering, and the use of crucial variables in phonological rules. Our theory will employ two basic ideas about the representation of traditional prosodic concepts: first, we represent the notion relative prominence in terms of a relation defined on constituent structure; and second, we represent certain aspects of the notion linguistic rhythm in terms of the alignment of linguistic material with a \"metrical grid\". The perceived \"stressing\" of an utterance, we think, reflects the combined influence of a constituent-structure pattern and its grid alignment. This pattern-grid combination is reminiscent of the traditional picture of verse scansion, so that the theory as a whole deserves the name \"metrical\". We will also use the expression \"'metrical theory\" as a convenient term for that portion of the theory which deals with the assignment of relative prominence in terms of a relation defined on constituent structure. Section 1 will apply the metrical theory of stress-pattern assignment to the system of English phrasal stress, arguing this theory's value in rationalizing otherwise arbitrary characteristics of stress features and stress rules. Section 2 will extend this treatment to the domain of English word stress, adopting a somewhat traditional view of the assignment of the feature [+stress], but explaining the generation of word-level * We would like to thank", "title": "" }, { "docid": "e57168f624200cdfd6798cfd42ecce23", "text": "Recurrent neural networks (RNNs) are typically considered as relatively simple architectures, which come along with complicated learning algorithms. This paper has a different view: We start from the fact that RNNs can model any high dimensional, nonlinear dynamical system. Rather than focusing on learning algorithms, we concentrate on the design of network architectures. Unfolding in time is a well-known example of this modeling philosophy. Here a temporal algorithm is transferred into an architectural framework such that the learning can be performed by an extension of standard error backpropagation. 
We introduce 12 tricks that not only provide deeper insights in the functioning of RNNs but also improve the identification of the underlying dynamical system from data.", "title": "" }, { "docid": "d725c63647485fd77412f16e1f6485f2", "text": "The ongoing discussions about a “digital revolution” and “disruptive competitive advantages” have led to the creation of such a business vision as “Industry 4.0”. Yet, the term and even more its actual impact on businesses is still unclear. This paper addresses this gap and explores more specifically the consequences and potentials of Industry 4.0 for the procurement, supply and distribution management functions. A blend of literature-based deductions and results from a qualitative study are used to explore the phenomenon. The findings indicate that technologies of Industry 4.0 legitimate the next level of maturity in procurement (Procurement & Supply Management 4.0). Empirical findings support these conceptual considerations, revealing the ambitious expectations. The sample comprises seven industries and the employed method is qualitative (telephone and face-to-face interviews). The empirical findings are only a basis for further quantitative investigation; however, they support the necessity and existence of the maturity level. The findings also reveal skepticism due to high investment costs but also very high expectations. As recent studies about digitalization are rather rare in the context of single company functions, this research work contributes to the understanding of digitalization and supply management.", "title": "" }, { "docid": "aa27a709441a6991412a16c93481675b", "text": "We develop a reduced-complexity approach for the detection of coded shaped-offset quadrature phase-shift keying (SOQPSK), a highly bandwidth-efficient and popular constant-envelope modulation. The complexity savings result from viewing the signal as a continuous-phase modulation (CPM). We give a simple and convenient closed-form expression for a recursive binary-to-ternary precoder for SOQPSK. The recursive nature of this formulation is necessary in serially concatenated systems where SOQPSK serves as the inner code. We show that the proposed detectors are optimal in the full-response case, and are near-optimal in the partial-response case due to some additional complexity reducing approximations. In all cases, the proposed detectors achieve large coding gains for serially concatenated coded SOQPSK. These gains are similar to those reported recently by Li and Simon, which were obtained using a more complicated cross-correlated trellis-coded quadrature modulation (XTCQM) interpretation.", "title": "" }, { "docid": "bcb71f55375c1948283281d60ace5549", "text": "This paper proposes a novel approach named AGM to efficiently mine the association rules among the frequently appearing substructures in a given graph data set. A graph transaction is represented by an adjacency matrix, and the frequent patterns appearing in the matrices are mined through the extended algorithm of the basket analysis. Its performance has been evaluated for the artificial simulation data and the carcinogenesis data of Oxford University and NTP. Its high efficiency has been confirmed for the size of a real-world problem.", "title": "" }, { "docid": "ce21a811ea260699c18421d99221a9f2", "text": "Medical image processing is the most challenging and emerging field; nowadays, the processing of MRI images is one of the parts of this field. 
The quantitative analysis of MRI brain tumours allows obtaining useful key indicators of disease progression. This is a computer-aided diagnosis system for detecting malignant texture in biological studies. This paper presents an approach in computer-aided diagnosis for early prediction of brain cancer using texture features and neuro-classification logic. This paper describes the proposed strategy for detection, extraction and classification of brain tumour from MRI scan images of the brain, which incorporates segmentation and morphological functions, the basic functions of image processing. Here we detect the tumour, segment the tumour and calculate the area of the tumour. The severity of the disease can be assessed through the classes of brain tumour, which is done with a neuro-fuzzy classifier, and a user-friendly environment is created using a GUI in MATLAB. In this paper, cases of 10 patients are taken, the severity of the disease is shown, and different features of the images are calculated.", "title": "" }, { "docid": "851de4b014dfeb6f470876896b0416b3", "text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such an approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of low-frequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique suitable in early detection scenarios. The full data set is made publicly available to the community.", "title": "" }, { "docid": "ffaef3eae73ce46f8844646e12fd8309", "text": "A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps. First, we generate causal embeddings cost-effectively by bootstrapping cause-effect pairs extracted from free text using a small set of seed patterns. Second, we train dedicated embeddings over this data, by using task-specific contexts, i.e., the context of a cause is its effect. 
Finally, we extend a state-of-the-art reranking approach for QA to incorporate these causal embeddings. We evaluate the causal embedding models both directly with a casual implication task, and indirectly, in a downstream causal QA task using data from Yahoo! Answers. We show that explicitly modeling causality improves performance in both tasks. In the QA task our best model achieves 37.3% P@1, significantly outperforming a strong baseline by 7.7% (relative).", "title": "" }, { "docid": "5eb9c6540de63be3e7c645286f263b4d", "text": "Inductive Power Transfer (IPT) is a practical method for recharging Electric Vehicles (EVs) because is it safe, efficient and convenient. Couplers or Power Pads are the power transmitters and receivers used with such contactless charging systems. Due to improvements in power electronic components, the performance and efficiency of an IPT system is largely determined by the coupling or flux linkage between these pads. Conventional couplers are based on circular pad designs and due to their geometry have fundamentally limited magnetic flux above the pad. This results in poor coupling at any realistic spacing between the ground pad and the vehicle pickup mounted on the chassis. Performance, when added to the high tolerance to misalignment required for a practical EV charging system, necessarily results in circular pads that are large, heavy and expensive. A new pad topology termed a flux pipe is proposed in this paper that overcomes difficulties associated with conventional circular pads. Due to the magnetic structure, the topology has a significantly improved flux path making more efficient and compact IPT charging systems possible.", "title": "" }, { "docid": "04908df0a617a240edb3d1b69c44f64e", "text": "Deep Neural Networks (DNNs) have substantially improved the state-of-the-art in salient object detection. However, training DNNs requires costly pixel-level annotations. In this paper, we leverage the observation that image-level tags provide important cues of foreground salient objects, and develop a weakly supervised learning method for saliency detection using image-level tags only. The Foreground Inference Network (FIN) is introduced for this challenging task. In the first stage of our training method, FIN is jointly trained with a fully convolutional network (FCN) for image-level tag prediction. A global smooth pooling layer is proposed, enabling FCN to assign object category tags to corresponding object regions, while FIN is capable of capturing all potential foreground regions with the predicted saliency maps. In the second stage, FIN is fine-tuned with its predicted saliency maps as ground truth. For refinement of ground truth, an iterative Conditional Random Field is developed to enforce spatial label consistency and further boost performance. Our method alleviates annotation efforts and allows the usage of existing large scale training sets with image-level tags. Our model runs at 60 FPS, outperforms unsupervised ones with a large margin, and achieves comparable or even superior performance than fully supervised counterparts.", "title": "" }, { "docid": "b3a80316fc98ded7c106018afb5acc0a", "text": "Adaptive antenna array processing is widely known to provide significant anti-interference capabilities within a Global Navigation Satellite Systems (GNSS) receiver. A main challenge in the quest for such receiver architecture has always been the computational/processing requirements. 
Even more demanding would be to try and incorporate the flexibility of the Software-Defined Radio (SDR) design philosophy in such an implementation. This paper documents a feasible approach to a real-time SDR implementation of a beam-steered GNSS receiver and validates its performance. This research implements a real-time software receiver on a widely-available x86-based multi-core microprocessor to process four-element antenna array data streams sampled with 16-bit resolution. The software receiver is capable of 12 channels all-in-view Controlled Reception Pattern Antenna (CRPA) array processing capable of rejecting multiple interferers. Single Instruction Multiple Data (SIMD) instructions assembly coding and multithreaded programming, the key to such an implementation to reduce computational complexity, are fully documented within the paper. In conventional antenna array systems, receivers use the geometry of antennas and cable lengths known in advance. The documented CRPA implementation is architected to operate without extensive set-up and pre-calibration and leverages Space-Time Adaptive Processing (STAP) to provide adaptation in both the frequency and space domains. The validation component of the paper demonstrates that the developed software receiver operates in real time with live Global Positioning System (GPS) and Wide Area Augmentation System (WAAS) L1 C/A code signal. Further, interference rejection capabilities of the implementation are also demonstrated using multiple synthetic interferers which are added to the live data stream.", "title": "" }, { "docid": "e4f79788494b0bee0a313c794ba56fdc", "text": "The identification of bacterial secretion systems capable of translocating substrates into eukaryotic cells via needle-like appendages has opened fruitful and exciting areas of microbial pathogenesis research. The recent discovery of the type VI secretion system (T6SS) was met with early speculation that it too acts as a 'needle' that pathogens aim at host cells. New reports demonstrate that certain T6SSs are potent mediators of interbacterial interactions. In light of these findings, we examined earlier data indicating its role in pathogenesis. We conclude that although T6S can, in rare instances, directly influence interactions with higher organisms, the broader physiological significance of the system is likely to provide defense against simple eukaryotic cells and other bacteria in the environment. The crucial role of T6S in bacterial interactions, along with its presence in many organisms relevant to disease, suggests that it might be a key determinant in the progression and outcome of certain human polymicrobial infections.", "title": "" }, { "docid": "bbc936a3b4cd942ba3f2e1905d237b82", "text": "Silkworm silk is among the most widely used natural fibers for textile and biomedical applications due to its extraordinary mechanical properties and superior biocompatibility. A number of physical and chemical processes have also been developed to reconstruct silk into various forms or to artificially produce silk-like materials. In addition to the direct use and the delicate replication of silk's natural structure and properties, there is a growing interest to introduce more new functionalities into silk while maintaining its advantageous intrinsic properties. In this review we assess various methods and their merits to produce functional silk, specifically those with color and luminescence, through post-processing steps as well as biological approaches. 
There is a highlight on intrinsically colored and luminescent silk produced directly from silkworms for a wide range of applications, and a discussion on the suitable molecular properties for being incorporated effectively into silk while it is being produced in the silk gland. With these understanding, a new generation of silk containing various functional materials (e.g., drugs, antibiotics and stimuli-sensitive dyes) would be produced for novel applications such as cancer therapy with controlled release feature, wound dressing with monitoring/sensing feature, tissue engineering scaffolds with antibacterial, anticoagulant or anti-inflammatory feature, and many others.", "title": "" }, { "docid": "71d1ec46c47aacab15e2c34f279a3c7a", "text": "Although additive layer manufacturing is well established for rapid prototyping the low throughput and historic costs have prevented mass-scale adoption. The recent development of the RepRap, an open source self-replicating rapid prototyper, has made low-cost 3-D printers readily available to the public at reasonable prices (<$1,000). The RepRap (Prusa Mendell variant) currently prints 3-D objects in a 200x200x140 square millimeters build envelope from acrylonitrile butadiene styrene (ABS) and polylactic acid (PLA). ABS and PLA are both thermoplastics that can be injection-molded, each with their own benefits, as ABS is rigid and durable, while PLA is plant-based and can be recycled and composted. The melting temperature of ABS and PLA enable use in low-cost 3-D printers, as these temperature are low enough to use in melt extrusion in the home, while high enough for prints to retain their shape at average use temperatures. Using 3-D printers to manufacture provides the ability to both change the fill composition by printing voids and fabricate shapes that are impossible to make using tradition methods like injection molding. This allows more complicated shapes to be created while using less material, which could reduce environmental impact. As the open source 3-D printers continue to evolve and improve in both cost and performance, the potential for economically-viable distributed manufacturing of products increases. Thus, products and components could be customized and printed on-site by individual consumers as needed, reversing the historical trend towards centrally mass-manufactured and shipped products. Distributed manufacturing reduces embodied transportation energy from the distribution of conventional centralized manufacturing, but questions remain concerning the potential for increases in the overall embodied energy of the manufacturing due to reduction in scale. In order to quantify the environmental impact of distributed manufacturing using 3-D printers, a life cycle analysis was performed on a plastic juicer. The energy consumed and emissions produced from conventional large-scale production overseas are compared to experimental measurements on a RepRap producing identical products with ABS and PLA. The results of this LCA are discussed in relation to the environmental impact of distributed manufacturing with 3-D printers and polymer selection for 3-D printing to reduce this impact. The results of this study show that distributed manufacturing uses less energy than conventional manufacturing due to the RepRap's unique ability to reduce fill composition. Distributed manufacturing also has less emissions than conventional manufacturing when using PLA and when using ABS with solar photovoltaic power. 
The results of this study indicate that open-source additive layer distributed manufacturing is both technically viable and beneficial from an ecological perspective. Mater. Res. Soc. Symp. Proc. Vol. 1492 © 2013 Materials Research Society DOI: 10.1557/opl.2013.319", "title": "" }, { "docid": "e4e60c0ea93a2297636c265c00277bb1", "text": "Event studies, which look at stock market reactions to assess corporate business events, represent a relatively new research approach in the information systems field. In this paper we present a systematic review of thirty event studies related to information technology. After a brief discussion of each of the papers included in our review, we call attention to several limitations of the published studies and propose possible future research avenues.", "title": "" }, { "docid": "27b5cf1967c6dc0a91d04565ae5dbf70", "text": "Crowdsourcing provides a popular paradigm for data collection at scale. We study the problem of selecting subsets of workers from a given worker pool to maximize the accuracy under a budget constraint. One natural question is whether we should hire as many workers as the budget allows, or restrict to a small number of top-quality workers. By theoretically analyzing the error rate of a typical setting in crowdsourcing, we frame the worker selection problem as a combinatorial optimization problem and propose an algorithm to solve it efficiently. Empirical results on both simulated and real-world datasets show that our algorithm is able to select a small number of high-quality workers, and performs as well as, sometimes even better than, the much larger crowds that the budget allows.", "title": "" }, { "docid": "c302699cb7dec9f813117bfe62d3b5fb", "text": "Pipe networks constitute a widely used means of transporting fluids nowadays. Increasing the operational reliability of these systems is crucial to minimize the risk of leaks, which can cause serious pollution problems to the environment and have disastrous consequences if the leak occurs near residential areas. Considering the importance of developing efficient systems for detecting leaks in pipelines, this work aims to detect the characteristic (predominant) frequencies in cases of leakage and no leakage. The methodology consisted of capturing the experimental data through a microphone installed inside the pipeline and coupled to a data acquisition card and a computer. The Fast Fourier Transform (FFT) was used as the mathematical approach for the signal analysis from the microphone, generating a frequency response (spectrum) which reveals the characteristic frequencies for each operating situation. The tests were carried out using distinct sizes of leaks, situations without leaks and cases with blows to the pipe caused by metal instruments. From the leakage tests, characteristic peaks were found in the FFT frequency spectrum using the signal generated by the microphone. Such peaks were not observed in situations with no leaks. It was therefore possible to distinguish, through spectral analysis, an event of leakage from an event without leakage.", "title": "" } ]
scidocsrr
7e132f870dc7a4973f7d10e66b0b09f5
A Comprehensive Review on Metabolic Syndrome
[ { "docid": "f1e5f8ab0b2ce32553dd5e08f1113b36", "text": "We examined the hypothesis that an excess accumulation of intramuscular lipid (IMCL) is associated with insulin resistance and that this may be mediated by the oxidative capacity of muscle. Nine sedentary lean (L) and 11 obese (O) subjects, 8 obese subjects with type 2 diabetes mellitus (D), and 9 lean, exercise-trained (T) subjects volunteered for this study. Insulin sensitivity (M) determined during a hyperinsulinemic (40 mU x m(-2)min(-1)) euglycemic clamp was greater (P < 0.01) in L and T, compared with O and D (9.45 +/- 0.59 and 10.26 +/- 0.78 vs. 5.51 +/- 0.61 and 1.15 +/- 0.83 mg x min(-1)kg fat free mass(-1), respectively). IMCL in percutaneous vastus lateralis biopsy specimens by quantitative image analysis of Oil Red O staining was approximately 2-fold higher in D than in L (3.04 +/- 0.39 vs. 1.40 +/- 0.28% area as lipid; P < 0.01). IMCL was also higher in T (2.36 +/- 0.37), compared with L (P < 0.01). The oxidative capacity of muscle determined with succinate dehydrogenase staining of muscle fibers was higher in T, compared with L, O, and D (50.0 +/- 4.4, 36.1 +/- 4.4, 29.7 +/- 3.8, and 33.4 +/- 4.7 optical density units, respectively; P < 0.01). IMCL was negatively associated with M (r = -0.57, P < 0.05) when endurance-trained subjects were excluded from the analysis, and this association was independent of body mass index. However, the relationship between IMCL and M was not significant when trained individuals were included. There was a positive association between the oxidative capacity and M among nondiabetics (r = 0.37, P < 0.05). In summary, skeletal muscle of trained endurance athletes is markedly insulin sensitive and has a high oxidative capacity, despite having an elevated lipid content. In conclusion, the capacity for lipid oxidation may be an important mediator of the association between excess muscle lipid accumulation and insulin resistance.", "title": "" } ]
[ { "docid": "8d0086cd2c32798231170749dcbf1ce1", "text": "Despite their many advantages, e-Businesses lag behind brick and mortar businesses in several fundamental respects. This paper concerns one of these: relationships based on trust and reputation. Recent studies on simple reputation systems for eBusinesses such as eBay have pointed to the importance of such rating systems for deterring moral hazard and encouraging trusting interactions. However, despite numerous studies on trust and reputation systems, few have taken studies across disciplines to provide an integrated account of these concepts and their relationships. This paper first surveys existing literatures on trust, reputation and a related concept: reciprocity. Based on sociological and biological understandings of these concepts, a computational model is proposed. This model can be implemented in a real system to consistently calculate agents’ trust and reputation scores.", "title": "" }, { "docid": "b886fbb9b40e6d634f59288bb60960a7", "text": "Antithrombotic therapy has recently become more frequent for the treatment of venous thromboembolism (VTE) in the paediatric population. This can be explained by the increased awareness of morbidities and mortalities of VTE in children, as well as the improved survival rate of children with various kinds of serious illnesses. Considering the large number of years a child is expected to survive, associated morbidities such as postthrombotic syndrome and risk of recurrence can significantly impact on the quality of life in children. Therefore, timely diagnosis, evidence-based treatment and prophylaxis strategies are critical to avoid such complications. This review summarizes the current literature about the antithrombotic treatment for VTE in infants and children. It guides the paediatric medical care provider for making a logical and justifiable decision.", "title": "" }, { "docid": "c347f61d0bb2c61ba5b9e66fe5f6681e", "text": "Interaction with mobile applications is often awkward due to the limited and miniaturized input modalities available. This is especially problematic for games where the only incentive to use an application is the pleasure derived from the interaction. It is therefore interesting to examine novel forms of interaction in order to increase the \"playability\" of mobile games.In this paper we present a simple mobile gaming application on a standard Pocket PC PDA that employs computer vision (CV) as it's main interaction modality. Practical experience with the application demonstrates the feasibility of CV as a primary interaction modality and indicates the high potential of CV as an input modality for mobile devices in the future. Our approach exploits the video capabilities that are becoming ubiquitous on camera equipped smart-phones and PDAs to provide a fun solution for interaction tasks in games like \"Pong\", \"Break-out\" or soccer.", "title": "" }, { "docid": "e90755afe850d597ad7b3f4b7e590b66", "text": "Privacy is considered to be a fundamental human right (Movius and Krup, 2009). Around the world this has led to a large amount of legislation in the area of privacy. Nearly all national governments have imposed local privacy legislation. In the United States several states have imposed their own privacy legislation. In order to maintain a manageable scope this paper only addresses European Union wide and federal United States laws. In addition several US industry (self) regulations are also considered. 
Privacy regulations in emerging technologies are surrounded by uncertainty. This paper aims to clarify the uncertainty relating to privacy regulations with respect to Cloud Computing and to identify the main open issues that need to be addressed for further research. This paper is based on existing literature and a series of interviews and questionnaires with various Cloud Service Providers (CSPs) that have been performed for the first author’s MSc thesis (Ruiter, 2009). The interviews and questionnaires resulted in data on privacy and security procedures from ten CSPs and while this number is by no means large enough to make any definite conclusions the results are, in our opinion, interesting enough to publish in this paper. The remainder of the paper is organized as follows: the next section gives some basic background on Cloud Computing. Section 3 provides", "title": "" }, { "docid": "b83641785927e3788479d67af9804fb7", "text": "In recent years, an increasing popularity of deep learning model for intelligent condition monitoring and diagnosis as well as prognostics used for mechanical systems and structures has been observed. In the previous studies, however, a major assumption accepted by default, is that the training and testing data are taking from same feature distribution. Unfortunately, this assumption is mostly invalid in real application, resulting in a certain lack of applicability for the traditional diagnosis approaches. Inspired by the idea of transfer learning that leverages the knowledge learnt from rich labeled data in source domain to facilitate diagnosing a new but similar target task, a new intelligent fault diagnosis framework, i.e., deep transfer network (DTN), which generalizes deep learning model to domain adaptation scenario, is proposed in this paper. By extending the marginal distribution adaptation (MDA) to joint distribution adaptation (JDA), the proposed framework can exploit the discrimination structures associated with the labeled data in source domain to adapt the conditional distribution of unlabeled target data, and thus guarantee a more accurate distribution matching. Extensive empirical evaluations on three fault datasets validate the applicability and practicability of DTN, while achieving many state-of-the-art transfer results in terms of diverse operating conditions, fault severities and fault types.", "title": "" }, { "docid": "efc4af51a92facff03e1009b039139fe", "text": "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.", "title": "" }, { "docid": "5a8f8b9094c62b77d9f71cf5b2a3a562", "text": "Recent experiments have established that information can be encoded in the spike times of neurons relative to the phase of a background oscillation in the local field potential-a phenomenon referred to as \"phase-of-firing coding\" (PoFC). 
These firing phase preferences could result from combining an oscillation in the input current with a stimulus-dependent static component that would produce the variations in preferred phase, but it remains unclear whether these phases are an epiphenomenon or really affect neuronal interactions-only then could they have a functional role. Here we show that PoFC has a major impact on downstream learning and decoding with the now well established spike timing-dependent plasticity (STDP). To be precise, we demonstrate with simulations how a single neuron equipped with STDP robustly detects a pattern of input currents automatically encoded in the phases of a subset of its afferents, and repeating at random intervals. Remarkably, learning is possible even when only a small fraction of the afferents ( approximately 10%) exhibits PoFC. The ability of STDP to detect repeating patterns had been noted before in continuous activity, but it turns out that oscillations greatly facilitate learning. A benchmark with more conventional rate-based codes demonstrates the superiority of oscillations and PoFC for both STDP-based learning and the speed of decoding: the oscillation partially formats the input spike times, so that they mainly depend on the current input currents, and can be efficiently learned by STDP and then recognized in just one oscillation cycle. This suggests a major functional role for oscillatory brain activity that has been widely reported experimentally.", "title": "" }, { "docid": "3c31cd33da9c07604a91dcb7f52bf2de", "text": "The Distributed Constraint Optimization Problem (DCOP) is able to model a wide variety of distributed reasoning tasks that arise in multiagent systems. Unfortunately, existing methods for DCOP are not able to provide theoretical guarantees on global solution quality while allowing agents to operate asynchronously. We show how this failure can be remedied by allowing agents to make local decisions based on conservative cost estimates rather than relying on global certainty as previous approaches have done. This novel approach results in a polynomial-space algorithm for DCOP named Adopt that is guaranteed to find the globally optimal solution while allowing agents to execute asynchronously and in parallel. Detailed experimental results show that on benchmark problems Adopt obtains speedups of several orders of magnitude over other approaches. Adopt can also perform bounded-error approximation – it has the ability to quickly find approximate solutions and, unlike heuristic search methods, still maintain a theoretical guarantee on solution quality.", "title": "" }, { "docid": "2272325860332d5d41c02f317ab2389e", "text": "For a developing nation, deploying big data (BD) technology and introducing data science in higher education is a challenge. A pessimistic scenario is: Mis-use of data in many possible ways, waste of trained manpower, poor BD certifications from institutes, under-utilization of resources, disgruntled management staff, unhealthy competition in the market, poor integration with existing technical infrastructures. Also, the questions in the minds of students, scientists, engineers, teachers and managers deserve wider attention. Besides the stated perceptions and analyses perhaps ignoring socio-political and scientific temperaments in developing nations, the following questions arise: How did the BD phenomenon naturally occur, post technological developments in Computer and Communications Technology and how did different experts react to it? 
Are academicians elsewhere agreeing on the fact that BD is a new science? Granted that big data science is a new science what are its foundations as compared to conventional topics in Physics, Chemistry or Biology? Or, is it similar in an esoteric sense to astronomy or nuclear science? What are the technological and engineering implications locally and globally and how these can be advantageously used to augment business intelligence, for example? In other words, will the industry adopt the changes due to tactical advantages? How can BD success stories be faithfully carried over elsewhere? How will BD affect the Computer Science and other curricula? How will BD benefit different segments of our society on a large scale? To answer these, an appreciation of the BD as a science and as a technology is necessary. This paper presents a quick BD overview, relying on the contemporary literature; it addresses: characterizations of BD and the BD people, the background required for the students and teachers to join the BD bandwagon, the management challenges in embracing BD so that the bottomline is clear.", "title": "" }, { "docid": "51a859f71bd2ec82188826af18204f02", "text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters’ self-reported accuracy, (b) independent judges’ perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.", "title": "" }, { "docid": "34cf5a712504c8adf15b0bb89d5185f7", "text": "Due to deregulation and the use of renewable power, especially solar power, the proportion of PV system is rapidly increasing and most of them are connected to distribution networks to supply power to the grid as well as local loads. There are many power quality issues to be considered with PV systems and one of the main issues is islanding. It is important to estimate and detect an islanding situation quickly and accurately as it can lead to serious plant damage if the grid is suddenly reconnected. This paper investigates the characteristics of ROCOF relays which affect the operation responses and analyze the ROCOF performance for PV systems islanding detection. The equivalent impedance value V2/P is introduced as an interlock function to cooperate with ROCOF relays to avoid false operation during the non-islanding situation. It is shown that it can block nuisance tripping signals quickly and accurately. 
The range of effectiveness is established for the small power imbalances where there is a small potentially non-detection zone due to the overlap range of the impedance value.", "title": "" }, { "docid": "babe00fba5a009b116f7b8d438146447", "text": "The ability to predict cyber incidents before they occur will help mitigate malicious activities and their impact. This is a challenging task and a departure from intrusion detection where observables of malicious activities are analyzed. Since there is no direct observable before the cyber incident actually happens, the predictive analysis need to be based on non-conventional signals that may or may not be directly related to the potential victim entity. This paper presents our preliminary findings through the use of Bayesian classifier to process signals drawn from global events and social media. The preliminary results show promising prediction performance for an anonymized organization even though the signals are not specific to that organization.", "title": "" }, { "docid": "e1298fb35b1bdc3f0d70071a6514a793", "text": "With the wide availability of GPS trajectory data, sustainable development on understanding travel behaviors has been achieved in recent years. But relatively less attention has been paid to uncovering the trip purposes, i.e., why people make the trips. Unlike to the GPS trajectory data, the trip purposes cannot be easily and directly collected on a large scale, which necessitates the inference of trip purposes automatically. To this end, in this paper, we propose a device-free and novel model called Trip2Vec, which consists of three components. In the first component, it augments the context on trip origins and destinations, respectively, by extracting the information about the nearby point of interest configurations and human activity popularity at particular time periods (i.e., activity period popularity) from two crowdsourced datasets. Such context is well-recognized as the clear clue of trip purposes. In the second component, on the top of the augmented context, a deep embedding approach is developed to get a more semantical and discriminative context representation in the latent space. In the third component, we simply adopt the common clustering algorithm (i.e., K-means) to aggregate trips with similar latent representation, then conduct trip purpose interpretation based on the clustering results, followed by understanding the time-evolving tendency of trip purpose patterns (i.e., profiling) in the city-wide level. Finally, we present extensive experiment results with real-world taxi trajectory and Foursquare check-in data generated in New York City (NYC) to demonstrate the effectiveness of the proposed model, and moreover, the obtained city-wide trip purpose patterns are quite consistent with real situations.", "title": "" }, { "docid": "18c8fcba57c295568942fa40b605c27e", "text": "The Internet of Things (IoT), an emerging global network of uniquely identifiable embedded computing devices within the existing Internet infrastructure, is transforming how we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. In addition to increased communication efficiency between connected objects, the IoT also brings new security and privacy challenges. Comprehensive measures that enable IoT device authentication and secure access control need to be established. 
Existing hardware, software, and network protection methods, however, are designed against fraction of real security issues and lack the capability to trace the provenance and history information of IoT devices. To mitigate this shortcoming, we propose an RFID-enabled solution that aims at protecting endpoint devices in IoT supply chain. We take advantage of the connection between RFID tag and control chip in an IoT device to enable data transfer from tag memory to centralized database for authentication once deployed. Finally, we evaluate the security of our proposed scheme against various attacks.", "title": "" }, { "docid": "9f538d6f447f1e536b7109620156cdf7", "text": "We present a demonstration of Ropossum, an authoring tool for the generation and testing of levels of the physics-based game, Cut the Rope. Ropossum integrates many features: (1) automatic design of complete solvable content, (2) incorporation of designer’s input through the creation of complete or partial designs, (3) automatic check for playability and (4) optimization of a given design based on playability. The system includes a physics engine to simulate the game and an evolutionary framework to evolve content as well as an AI reasoning agent to check for playability. The system is optimised to allow on-line feedback and realtime interaction.", "title": "" }, { "docid": "eb6643fba28b6b84b4d51a565fc97be0", "text": "The spiral antenna is a well known kind of wideband antenna. The challenges to improve its design are numerous, such as creating a compact wideband matched feeding or controlling the radiation pattern. Here we propose a self matched and compact slot spiral antenna providing a unidirectional pattern.", "title": "" }, { "docid": "d394d5d1872bbb6a38c28ecdc0e24f06", "text": "An ever increasing number of configuration parameters are provided to system users. But many users have used one configuration setting across different workloads, leaving untapped the performance potential of systems. A good configuration setting can greatly improve the performance of a deployed system under certain workloads. But with tens or hundreds of parameters, it becomes a highly costly task to decide which configuration setting leads to the best performance. While such task requires the strong expertise in both the system and the application, users commonly lack such expertise.\n To help users tap the performance potential of systems, we present Best Config, a system for automatically finding a best configuration setting within a resource limit for a deployed system under a given application workload. BestConfig is designed with an extensible architecture to automate the configuration tuning for general systems. To tune system configurations within a resource limit, we propose the divide-and-diverge sampling method and the recursive bound-and-search algorithm. BestConfig can improve the throughput of Tomcat by 75%, that of Cassandra by 63%, that of MySQL by 430%, and reduce the running time of Hive join job by about 50% and that of Spark join job by about 80%, solely by configuration adjustment.", "title": "" }, { "docid": "eec9bd3e2c187c23f3d99fd3b98433ce", "text": "Optimum sample size is an essential component of any research. The main purpose of the sample size calculation is to determine the number of samples needed to detect significant changes in clinical parameters, treatment effects or associations after data gathering. 
It is not uncommon for studies to be underpowered and thereby fail to detect the existing treatment effects due to inadequate sample size. In this paper, we explain briefly the basic principles of sample size calculations in medical studies.", "title": "" }, { "docid": "d4eb3631b1cc8edd2f1eafe678d04a31", "text": "Social media being a prolific source of rumours, stance classification of individual posts towards rumours has gained attention in the past few years. Classification of stance in individual posts can then be useful to determine the veracity of a rumour. Research in this direction has looked at rumours in different domains, such as politics, natural disasters or terrorist attacks. However, work has been limited to in-domain experiments, i.e. training and testing data belong to the same domain. This presents the caveat that when one wants to deal with rumours in domains that are more obscure, training data tends to be scarce. This is the case of mental health disorders, which we explore here. Having annotated collections of tweets around rumours emerged in the context of breaking news, we study the performance stability when switching to the new domain of mental health disorders. Our study confirms that performance drops when we apply our trained model on a new domain, emphasising the differences in rumours across domains. We overcome this issue by using a little portion of the target domain data for training, which leads to a substantial boost in performance. We also release the new dataset with mental health rumours annotated for stance.", "title": "" }, { "docid": "b485b27da4b17469a5c519538f4dcf1b", "text": "The research described in this work focuses on identifying key components for the task of irony detection. By means of analyzing a set of customer reviews, which are considered as ironic both in social and mass media, we try to find hints about how to deal with this task from a computational point of view. Our objective is to gather a set of discriminating elements to represent irony. In particular, the kind of irony expressed in such reviews. To this end, we built a freely available data set with ironic reviews collected from Amazon. Such reviews were posted on the basis of an online viral effect; i.e. contents whose effect triggers a chain reaction on people. The findings were assessed employing three classifiers. The results show interesting hints regarding the patterns and, especially, regarding the implications for sentiment analysis.", "title": "" } ]
scidocsrr
67f6ef8cf4238ed8554b090179401fe8
TransCut: Transparent Object Segmentation from a Light-Field Image
[ { "docid": "80b514540933a9cc31136c8cb86ec9b3", "text": "We tackle the problem of detecting occluded regions in a video stream. Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.", "title": "" } ]
[ { "docid": "b1a0a76e73aa5b0a893e50b2fadf0ad2", "text": "The field of occupational therapy, as with all facets of health care, has been profoundly affected by the changing climate of health care delivery. The combination of cost-effectiveness and quality of care has become the benchmark for and consequent drive behind the rise of managed health care delivery systems. The spawning of outcomes research is in direct response to the need for comparative databases to provide results of effectiveness in health care treatment protocols, evaluations of health-related quality of life, and cost containment measures. Outcomes management is the application of outcomes research data by all levels of health care providers. The challenges facing occupational therapists include proving our value in an economic trend of downsizing, competing within the medical profession, developing and affiliating with new payer sources, and reengineering our careers to meet the needs of the new, nontraditional health care marketplace.", "title": "" }, { "docid": "d3049fee1ed622515f5332bcfa3bdd7b", "text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. 
We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.", "title": "" }, { "docid": "f1d7e1b222e1ae313c3e751e8ba443f3", "text": "INTRODUCTION\nLapatinib, an orally active tyrosine kinase inhibitor of epidermal growth factor receptor ErbB1 (EGFR) and ErbB2 (HER2), has activity as monotherapy and in combination with chemotherapy in HER2-overexpressing metastatic breast cancer (MBC).\n\n\nMETHODS\nThis phase II single-arm trial assessed the safety and efficacy of first-line lapatinib in combination with paclitaxel in previously untreated patients with HER2-overexpressing MBC. The primary endpoint was the overall response rate (ORR). Secondary endpoints were the duration of response (DoR), time to response, time to progression, progression-free survival (PFS), overall survival, and the incidence and severity of adverse events. All endpoints were investigator- and independent review committee (IRC)-assessed.\n\n\nRESULTS\nThe IRC-assessed ORR was 51% (29/57 patients with complete or partial response) while the investigator-assessed ORR was 77% (44/57). As per the IRC, the median DoR was 39.7 weeks, and the median PFS was 47.9 weeks. The most common toxicities were diarrhea (56%), neutropenia (44%), rash (40%), fatigue (25%), and peripheral sensory neuropathy (25%).\n\n\nCONCLUSIONS\nFirst-line lapatinib plus paclitaxel for HER2-overexpressing MBC produced an encouraging ORR with manageable toxicities. This combination may be useful in first-line treatment for patients with HER2-overexpressing MBC and supports the ongoing evaluation of this combination as first-line therapy in HER2-overexpressing MBC.", "title": "" }, { "docid": "617ec3be557749e0646ad7092a1afcb6", "text": "The difficulty of directly measuring gene flow has lead to the common use of indirect measures extrapolated from genetic frequency data. These measures are variants of FST, a standardized measure of the genetic variance among populations, and are used to solve for Nm, the number of migrants successfully entering a population per generation. Unfortunately, the mathematical model underlying this translation makes many biologically unrealistic assumptions; real populations are very likely to violate these assumptions, such that there is often limited quantitative information to be gained about dispersal from using gene frequency data. While studies of genetic structure per se are often worthwhile, and FST is an excellent measure of the extent of this population structure, it is rare that FST can be translated into an accurate estimate of Nm.", "title": "" }, { "docid": "a92efa40799017f16c9ae624b97d02aa", "text": "BLEU is the de facto standard automatic evaluation metric in machine translation. While BLEU is undeniably useful, it has a number of limitations. Although it works well for large documents and multiple references, it is unreliable at the sentence or sub-sentence levels, and with a single reference. In this paper, we propose new variants of BLEU which address these limitations, resulting in a more flexible metric which is not only more reliable, but also allows for more accurate discriminative training. Our best metric has better correlation with human judgements than standard BLEU, despite using a simpler formulation. 
Moreover, these improvements carry over to a system tuned for our new metric.", "title": "" }, { "docid": "05e4168615c39071bb9640bd5aa6f3d9", "text": "The intestinal microbiome plays an important role in the metabolism of chemical compounds found within food. Bacterial metabolites are different from those that can be generated by human enzymes because bacterial processes occur under anaerobic conditions and are based mainly on reactions of reduction and/or hydrolysis. In most cases, bacterial metabolism reduces the activity of dietary compounds; however, sometimes a specific product of bacterial transformation exhibits enhanced properties. Studies on the metabolism of polyphenols by the intestinal microbiota are crucial for understanding the role of these compounds and their impact on our health. This review article presents possible pathways of polyphenol metabolism by intestinal bacteria and describes the diet-derived bioactive metabolites produced by gut microbiota, with a particular emphasis on polyphenols and their potential impact on human health. Because the etiology of many diseases is largely correlated with the intestinal microbiome, a balance between the host immune system and the commensal gut microbiota is crucial for maintaining health. Diet-related and age-related changes in the human intestinal microbiome and their consequences are summarized in the paper.", "title": "" }, { "docid": "b50498964a73a59f54b3a213f2626935", "text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.", "title": "" }, { "docid": "5481f319296c007412e62129d2ec5943", "text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. 
Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.", "title": "" }, { "docid": "316e4fa32d0b000e6f833d146a9e0d80", "text": "Magnetic equivalent circuits (MECs) are becoming an accepted alternative to electrical-equivalent lumped-parameter models and finite-element analysis (FEA) for simulating electromechanical devices. Their key advantages are moderate computational effort, reasonable accuracy, and flexibility in model size. MECs are easily extended into three dimensions. But despite the successful use of MEC as a modeling tool, a generalized 3-D formulation useable for a comprehensive computer-aided design tool has not yet emerged (unlike FEA, where general modeling tools are readily available). This paper discusses the framework of a 3-D MEC modeling approach, and presents the implementation of a variable-sized reluctance network distribution based on 3-D elements. Force calculation and modeling of moving objects are considered. Two experimental case studies, a soft-ferrite inductor and an induction machine, show promising results when compared to measurements and simulations of lumped parameter and FEA models.", "title": "" }, { "docid": "f8d256bf6fea179847bfb4cc8acd986d", "text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.", "title": "" }, { "docid": "e5edb616b5d0664cf8108127b0f8684c", "text": "Night vision systems have become an important research area in recent years. Due to variations in weather conditions such as snow, fog, and rain, night images captured by camera may contain high level of noise. These conditions, in real life situations, may vary from no noise to extreme amount of noise corrupting images. Thus, ideal image restoration systems at night must consider various levels of noise and should have a technique to deal with wide range of noisy situations. In this paper, we have presented a new method that works well with different signal to noise ratios ranging from -1.58 dB to 20 dB. For moderate noise, Wigner distribution based algorithm gives good results, whereas for extreme amount of noise 2nd order Wigner distribution is used. The performance of our restoration technique is evaluated using MSE criteria. The results show that our method is capable of dealing with the wide range of Gaussian noise and gives consistent performance throughout.", "title": "" }, { "docid": "64d839525e2d9c71478d862a30aa0277", "text": "The theory of extreme learning machine (ELM) has become very popular on the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (as the multilayer perceptron or the radial basis function neural network). 
Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; obtains the confidence intervals (CIs) without the need of applying methods that are computationally intensive, e.g., bootstrap; and presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM in several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. Achieved results show that the proposed approach produces a competitive accuracy with some additional advantages, namely, automatic production of CIs, reduction of probability of model overfitting, and use of a priori knowledge.", "title": "" }, { "docid": "db7bc8bbfd7dd778b2900973f2cfc18d", "text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.", "title": "" }, { "docid": "f7792dbc29356711c2170d5140030142", "text": "A C-Ku band GaN monolithic microwave integrated circuit (MMIC) transmitter/receiver (T/R) frontend module with a novel RF interface structure has been successfully developed by using multilayer ceramics technology. This interface improves the insertion loss with wideband characteristics operating up to 40 GHz. The module contains a GaN power amplifier (PA) with output power higher than 10 W over 6–18 GHz and a GaN low-noise amplifier (LNA) with a gain of 15.9 dB over 3.2–20.4 GHz and noise figure (NF) of 2.3–3.7 dB over 4–18 GHz. A fabricated T/R module occupying only 12 × 30 mm2 delivers an output power of 10 W up to the Ku-band. To our knowledge, this is the first demonstration of a C-Ku band T/R frontend module using GaN MMICs with wide bandwidth, 10W output power, and small size operating up to the Ku-band.", "title": "" }, { "docid": "b39d1f4f6caed09030a87faeb2c1beeb", "text": "In the present paper we examine the moderating effects of age diversity and team coordination on the relationship between shared leadership and team performance. Using a field sample of 96 individuals in 26 consulting project teams, team members assessed their team’s shared leadership and coordination. Six to eight weeks later, supervisors rated their teams’ performance. Results indicated that shared leadership predicted team performance and both age diversity and coordination moderated the impact of shared leadership on team performance. 
Thereby shared leadership was positively related to team performance when age diversity and coordination were low, whereas higher levels of age diversity and coordination appeared to compensate for lower levels of shared leadership effectiveness. In particular strong effects of shared leadership on team performance were evident when both age diversity and coordination were low, whereas shared leadership was not related to team performance when both age diversity and coordination were high.", "title": "" }, { "docid": "d045e59441a16874f3ccb1d8068e4e6d", "text": "In two experiments, we tested the hypotheses that (a) the difference between liars and truth tellers will be greater when interviewees report their stories in reverse order than in chronological order, and (b) instructing interviewees to recall their stories in reverse order will facilitate detecting deception. In Experiment 1, 80 mock suspects told the truth or lied about a staged event and did or did not report their stories in reverse order. The reverse order interviews contained many more cues to deceit than the control interviews. In Experiment 2, 55 police officers watched a selection of the videotaped interviews of Experiment 1 and made veracity judgements. Requesting suspects to convey their stories in reverse order improved police observers' ability to detect deception and did not result in a response bias.", "title": "" }, { "docid": "a0c126480f0bce527a893853f6f3bec9", "text": "Word problems are an established technique for teaching mathematical modeling skills in K-12 education. However, many students find word problems unconnected to their lives, artificial, and uninteresting. Most students find them much more difficult than the corresponding symbolic representations. To account for this phenomenon, an ideal pedagogy might involve an individually crafted progression of unique word problems that form a personalized plot. We propose a novel technique for automatic generation of personalized word problems. In our system, word problems are generated from general specifications using answer-set programming (ASP). The specifications include tutor requirements (properties of a mathematical model), and student requirements (personalization, characters, setting). Our system takes a logical encoding of the specification, synthesizes a word problem narrative and its mathematical model as a labeled logical plot graph, and realizes the problem in natural language. Human judges found our problems as solvable as the textbook problems, with a slightly more artificial language.", "title": "" }, { "docid": "5f20df3abf9a4f7944af6b3afd16f6f8", "text": "An important step towards the successful integration of information and communication technology (ICT) in schools is to facilitate their capacity to develop a school-based ICT policy resulting in an ICT policy plan. Such a plan can be defined as a school document containing strategic and operational elements concerning the integration of ICT in education. To write such a plan in an efficient way is challenging for schools. Therefore, an online tool [Planning for ICT in Schools (pICTos)] has been developed to guide schools in this process. A multiple case study research project was conducted with three Flemish primary schools to explore the process of developing a school-based ICT policy plan and the supportive role of pICTos within this process. Data from multiple sources (i.e. 
interviews with school leaders and ICT coordinators, school policy documents analysis and a teacher questionnaire) were collected and analysed. The results indicate that schools shape their ICT policy based on specific school data collected and presented by the pICTos environment. School teams learned about the actual and future place of ICT in teaching and learning. Consequently, different policy decisions were made according to each school’s vision on ‘good’ education and ICT integration.", "title": "" }, { "docid": "b4f06236b0babb6cd049c8914170d7bf", "text": "We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability of using synthetic images for training a Deep Network is extremely valuable as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand pose estimation from depth maps. We show that it allows us to outperform the state-of-the-art on both datasets.", "title": "" }, { "docid": "8224818f838fd238879dca0a4b5531c1", "text": "Intelligence plays an important role in supporting military operations. In the course of military intelligence a vast amount of textual data in different languages needs to be analyzed. In addition to information provided by traditional military intelligence, nowadays the internet offers important resources of potential militarily relevant information. However, we are not able to manually handle this vast amount of data. The science of natural language processing (NLP) provides technology to efficiently handle this task, in particular by means of machine translation and text mining. In our research project ISAF-MT we created a statistical machine translation (SMT) system for Dari to German. In this paper we describe how NLP technologies and in particular SMT can be applied to different intelligence processes. We therefore argue that multilingual NLP technology can strongly support military operations.", "title": "" } ]
scidocsrr
bff2b507340e06e3a0027ef914f0bf4f
KNOWROB-MAP - knowledge-linked semantic object maps
[ { "docid": "db897ae99b6e8d2fc72e7d230f36b661", "text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.", "title": "" } ]
[ { "docid": "c8bc1f8fc7bf3a63eb09cdd0935a1878", "text": "On the way to achieving higher degrees of autonomy for vehicles in complicated, ever changing scenarios, the localization problem poses a very important role. Especially the Simultaneous Localization and Mapping (SLAM) problem has been studied greatly in the past. For an autonomous system in the real world, we present a very cost-efficient, robust and very precise localization approach based on GraphSLAM and graph optimization using radar sensors. We are able to prove on a dynamically changing parking lot layout that both mapping and localization accuracy are very high. To evaluate the performance of the mapping algorithm, a highly accurate ground truth map generated from a total station was used. Localization results are compared to a high precision DGPS/INS system. Utilizing these methods, we can show the strong performance of our algorithm.", "title": "" }, { "docid": "c9bec74bcb607b0dc4a8372ba28eb4b0", "text": "Alarm fatigue, a condition in which clinical staff become desensitized to alarms due to the high frequency of unnecessary alarms, is a major patient safety concern. Alarm fatigue is particularly prevalent in the pediatric setting, due to the high level of variation in vital signs with patient age. Existing studies have shown that the current default pediatric vital sign alarm thresholds are inappropriate, and lead to a larger than necessary alarm load. This study leverages a large database containing over 190 patient-years of heart rate data to accurately identify the 1st and 99th percentiles of an individual's heart rate on their first day of vital sign monitoring. These percentiles are then used as personalized vital sign thresholds, which are evaluated by comparing to non-default alarm thresholds used in practice, and by using the presence of major clinical events to infer alarm labels. Using the proposed personalized thresholds would decrease low and high heart rate alarms by up to 50% and 44% respectively, while maintaining sensitivity of 62% and increasing specificity to 49%. The proposed personalized vital sign alarm thresholds will reduce alarm fatigue, thus contributing to improved patient outcomes, shorter hospital stays, and reduced hospital costs.", "title": "" }, { "docid": "4f5494d38d72adc52554b4c9638b0791", "text": "Approximating the semantic similarity between entities in the learned Hamming space is the key for supervised hashing techniques. The semantic similarities between entities are often non-transitive since they could share different latent similarity components. For example, in social networks, we connect with people for various reasons, such as sharing common interests, working in the same company, being alumni and so on. Obviously, these social connections are non-transitive if people are connected due to different reasons. However, existing supervised hashing methods treat the pairwise similarity relationships in a simple and unified way and project data into a single Hamming space, while neglecting that the non-transitive property cannot be ade- quately captured by a single Hamming space. In this paper, we propose a non-transitive hashing method, namely Multi-Component Hashing (MuCH), to identify the latent similarity components to cope with the non-transitive similarity relationships. MuCH generates multiple hash tables with each hash table corresponding to a similarity component, and preserves the non-transitive similarities in different hash table respectively. 
Moreover, we propose a similarity measure, called Multi-Component Similarity, aggregating Hamming similarities in multiple hash tables to capture the non-transitive property of semantic similarity. We conduct extensive experiments on one synthetic dataset and two public real-world datasets (i.e. DBLP and NUS-WIDE). The results clearly demonstrate that the proposed MuCH method significantly outperforms the state-of-the-art hashing methods especially on search efficiency.", "title": "" }, { "docid": "bf6434b4498aa3cdaaf482cb15ca7e12", "text": "Multicore processors represent the latest significant development in microprocessor technology. Computer System Performance and Evaluation deals with the investigation of computer components (both hardware and software) with a view to establish the level of their performances. This research work carried out performance evaluation studies on AMD dual-core and Intel dual-core processors to know which of the processors has better execution time and throughput. The architecture of AMD and Intel duo-core processors was studied. SPEC CPU2006 benchmarks suite was used to measure the performance of AMD and Intel duo core processors. The overall execution and throughput time measurement of AMD and Intel duo core processors were reported and compared to each other. Results showed that the execution time of CQ56 Intel Pentium Dual-Core Processor is about 6.62% faster than AMD Turion II P520 Dual-Core Processor while the throughput of Intel Pentium Dual-Core Processor was found to be 1.06 times higher than AMD Turion (tm) II P520 Dual Core Processor. Therefore, Intel Pentium Dual-Core Processors exhibit better performance probably due to the following architectural features: faster core-to-core communication, dynamic cache sharing between cores and smaller size of level 2 cache.", "title": "" }, { "docid": "d4878e0d2aaf33bb5d9fc9c64605c4d2", "text": "Labeled Faces in the Wild (LFW) database has been widely utilized as the benchmark of unconstrained face verification and due to big data driven machine learning methods, the performance on the database approaches nearly 100%. However, we argue that this accuracy may be too optimistic because of some limiting factors. Besides different poses, illuminations, occlusions and expressions, cross-age face is another challenge in face recognition. Different ages of the same person result in large intra-class variations and aging process is unavoidable in real world face verification. However, LFW does not pay much attention to it. Thereby we construct a Cross-Age LFW (CALFW) which deliberately searches and selects 3,000 positive face pairs with age gaps to add aging process intra-class variance. Negative pairs with same gender and race are also selected to reduce the influence of attribute difference between positive/negative pairs and achieve face verification instead of attributes classification. We evaluate several metric learning and deep learning methods on the new database. Compared to the accuracy on LFW, the accuracy drops about 10%-17% on CALFW.", "title": "" }, { "docid": "5cdb99bf928039bd5377b3eca521d534", "text": "Thanks to advances in information and communication technologies, there is a prominent increase in the amount of information produced specifically in the form of text documents. In order to effectively deal with this “information explosion” problem and utilize the huge amount of text databases, efficient and scalable tools and techniques are indispensable. 
In this study, text clustering which is one of the most important techniques of text mining that aims at extracting useful information by processing data in textual form is addressed. An improved variant of spherical K-Means (SKM) algorithm named multi-cluster SKM is developed for clustering high dimensional document collections with high performance and efficiency. Experiments were performed on several document data sets and it is shown that the new algorithm provides significant increase in clustering quality without causing considerable difference in CPU time usage when compared to SKM algorithm.", "title": "" }, { "docid": "2547e6e8138c49b76062e241391dfc1d", "text": "Methods of deep neural networks (DNNs) have recently demonstrated superior performance on a number of natural language processing tasks. However, in most previous work, the models are learned based on either unsupervised objectives, which does not directly optimize the desired task, or singletask supervised objectives, which often suffer from insufficient training data. We develop a multi-task DNN for learning representations across multiple tasks, not only leveraging large amounts of cross-task data, but also benefiting from a regularization effect that leads to more general representations to help tasks in new domains. Our multi-task DNN approach combines tasks of multiple-domain classification (for query classification) and information retrieval (ranking for web search), and demonstrates significant gains over strong baselines in a comprehensive set of domain adaptation.", "title": "" }, { "docid": "f21ac64e54b23ab671c5fc038bef4686", "text": "This paper presents the working methodology and results on Code Mix Entity Extraction in Indian Languages (CMEE-IL) shared the task of FIRE-2016. The aim of the task is to identify various entities such as a person, organization, movie and location names in a given code-mixed tweets. The tweets in code mix are written in English mixed with Hindi or Tamil. In this work, Entity Extraction system is implemented for both Hindi-English and Tamil-English code-mix tweets. The system employs context based character embedding features to train Support Vector Machine (SVM) classifier. The training data was tokenized such that each line containing a single word. These words were further split into characters. Embedding vectors of these characters are appended with the I-O-B tags and used for training the system. During the testing phase, we use context embedding features to predict the entity tags for characters in test data. We observed that the cross-validation accuracy using character embedding gave better results for Hindi-English twitter dataset compare to TamilEnglish twitter dataset. CCS Concepts • Information Retrieval ➝ Retrieval tasks and goals ➝ Information Extraction • Machine Learning ➝ Machine Learning approaches ➝ Kernel Methods ➝ Support Vector Machines", "title": "" }, { "docid": "3efdab4cf72ecaff61539c10c1196263", "text": "Design of high gain and high efficiency antennas is one of the key challenges in antenna engineering and especially in millimeter wave communication systems. Various types of planar waveguide arrays with series-fed traveling wave operation have been developed in Tokyo Tech. Key features of single-layer arrays in terms of mass production and fabrication cost have been studied, developed and demonstrated there. 
In this talk, in addition to the array design techniques, the novel methods for loss evaluation of materials and also the diffusion bonding for fabricating waveguide fine structure, specially developed for the millimeter wave are discussed. These arrays are now applied to two kinds of systems in the Tokyo Tech millimeter wave project; the indoor short range file-transfer systems and the outdoor communication systems for the medium range backhaul links. The latter has been field-tested in the model network built in Tokyo Tech Okayama campus. The talk covers early stage progress of the project including unique propagation data.", "title": "" }, { "docid": "fdd14b086d77b95b7ca00ab744f39458", "text": "While eWOM advertising has recently emerged as an effective marketing strategy among marketing practitioners, comparatively few studies have been conducted to examine the eWOM from the perspective of pass-along emails. Based on social capital theory and social cognitive theory, this paper develops a model involving social enablers and personal cognition factors to explore the eWOM behavior and its efficacy. Data collected from 347 email users have lent credit to the model proposed. Tested by LISREL 8.70, the results indicate that the factors such as message involvement, social interaction tie, affection outcome expectations and message passing self-efficacy exert significant influences on pass-along email intentions (PAEIs). The study result may well be useful to marketing practitioners who are considering email marketing, especially to those who are in the process of selecting key email users and/or designing product advertisements to heighten the eWOM effect.", "title": "" }, { "docid": "da47eb6c793f4afff5aecf6f52194e12", "text": "An inline chalcogenide phase change RF switch utilizing germanium telluride (GeTe) and driven by an integrated, electrically isolated thin film heater for thermal actuation has been fabricated. A voltage or current pulse applied to the heater terminals was used to transition the phase change material between the crystalline and amorphous states. An on-state resistance of 1.2 Ω (0.036 Ω-mm), with an off-state capacitance and resistance of 18.1 fF and 112 kΩ respectively were measured. This results in an RF switch cut-off frequency (Fco) of 7.3 THz, and an off/on DC resistance ratio of 9 × 10^4. The heater pulse power required to switch the GeTe between the two states was as low as 0.5 W, with zero power consumption during steady state operation, making it a non-volatile RF switch. To the authors' knowledge, this is the first reported implementation of an RF phase change switch in a 4-terminal, inline configuration.", "title": "" }, { "docid": "93e358533212f7a87aef146fe69f3ca3", "text": "A remote real time AMR (automatic meter reading) system based on wireless sensor networks is presented in this paper. The useful remote AMR sensors were analyzed and efficient wireless network was suggested. The remote measurement system for water supply is taken as a typical example in experiments. The structure of system employs distributed structure based on wireless sensor networks, which consists of measure meters, sensor nodes, data collectors, server and wireless communication network. 
For a short distance transmission, the data collector collects data from the water meter sensors using the RF and ZigBee communication. For a long distance transmission, from the data collector to the server, the system uses a CDMA cellular network. The water meter data are received at the server through LAN using TCP/IP protocol. The proposed system has broad application prospects in real-world applications.", "title": "" }, { "docid": "326b0cb75e92e216cbac8f3c648b0efc", "text": "Scholarly content is increasingly being discussed, shared and bookmarked online by researchers. Altmetric is a start-up that focuses on tracking, collecting and measuring this activity on behalf of publishers and here we describe our approach and general philosophy. Over the past year we've seen sharing and discussion activity around approximately 750k articles. The average number of articles shared each day grows by 5-10% a month. We look at examples of how people are interacting with papers online and at how publishers can collect and present the resulting data to deliver real value to their authors and readers. Introduction Scholars are increasingly visible on the web and social media 1. While the majority of their online activities may not be directly related to their research they are nevertheless discussing, sharing and bookmarking scholarly articles online in large numbers. We know this because our job at Altmetric is to track the attention paid to papers online. Founded in January 2011 and with investment from Digital Science we're a London based start-up that identifies, tracks and collects article level metrics on behalf of publishers. Article level metrics are quantitative or qualitative indicators of the impact that a single article has had. Examples of the former would be a count of the number of times the article has been downloaded, or shared on Twitter. Examples of the latter would be media coverage or a blog post from somebody well respected in the field. Tracking the conversations around papers Encouraging audiences to engage with articles online isn't anything new for many publishers. The Public Library of Science (PLoS), BioMed Central, Cell Press and Nature Publishing Group have all tried encouraging users to leave comments on papers with varying degrees of success but the response from users has generally been poor, with only a small fraction of papers ever receiving notable attention 2. A larger proportion of papers are discussed in some depth on academic blogs and a larger still proportion shared on social networks like Twitter, Facebook and Google+. Scholars seem to feel more comfortable sharing or discussing content in more informal environments tied to their personal identity and where", "title": "" }, { "docid": "40479536efec6311cd735f2bd34605d7", "text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. 
To this end, this paper devotes to reviewing state-of-the-art scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.", "title": "" }, { "docid": "d798bc49068356495074f92b3bfe7a4b", "text": "This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The effects of three main factors (input nodes, hidden nodes and sample size) are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, large sample is helpful to ease the overfitting problem.", "title": "" }, { "docid": "aed7f6b54aeaf11ec6596d1f04b9db48", "text": "Discourse modes play an important role in writing composition and evaluation. This paper presents a study on the manual and automatic identification of narration, exposition, description, argument and emotion-expressing sentences in narrative essays. We annotate a corpus to study the characteristics of discourse modes and describe a neural sequence labeling model for identification. Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7. We further demonstrate that discourse modes can be used as features that improve automatic essay scoring (AES). The impacts of discourse modes for AES are also discussed.", "title": "" }, { "docid": "5f72b6caef9b67dfbc1d31ad0675872d", "text": "The squirrel-cage induction motor remains the workhorse of the petrochemical industry because of its versatility and ruggedness. However, it has its limitations, which if exceeded will cause premature failure of the stator, rotor, bearings or shaft. This paper is the final abridgement and update of six previous papers for the Petroleum and Chemical Industry Committee of the IEEE Industry Applications Society presented over the last 24 years and includes the final piece dealing with shaft failures. A methodology is provided that will lead operations personnel to the most likely root causes of failure. Check-off sheets are provided to assist in the orderly collection of data to assist in the analysis. As the petrochemical industry evolves from reactive to time based, to preventive, to trending, to diagnostics, and to a predictive maintenance attitude, more and more attention to root cause analysis will be required. This paper will help provide a platform for the establishment of such an evolution. 
The product scope includes low- and medium-voltage squirrel-cage induction motors in the 1–3000 hp range with antifriction bearings. However, much of this material is applicable to other types and sizes.", "title": "" }, { "docid": "4e502571f06df2789dc77a7233b17a6f", "text": "With this work we aim to make a three-fold contribution. We first address the issue of supporting efficiently queries over string-attributes involving prefix, suffix, containment, and equality operators in large-scale data networks. Our first design decision is to employ distributed hash tables (DHTs) for the data network's topology, harnessing their desirable properties. Our next design decision is to derive DHT-independent solutions, treating DHT as a black box. Second, we exploit this infrastructure to develop efficient content based publish/subscribe systems. The main contributions here are algorithms for the efficient processing of queries (subscriptions) and events (publications). Specifically, we show that our subscription processing algorithms require O(logN) messages for an N-node network, and our event processing algorithms require O(l x logN) messages (with l being the average string length). Third, we develop algorithms for optimizing the processing of multi-dimensional events, involving several string attributes. Further to our analysis, we provide simulation-based experiments showing promising performance results in terms of number of messages, required bandwidth, load balancing, and response times.", "title": "" }, { "docid": "648a2e662f3d0c54d0d0ae8774928b38", "text": "Storytelling systems are computational systems designed to tell stories. Every story generation system defines its specific knowledge representation for supporting the storytelling process. Thus, there is a shared need amongst all the systems: the knowledge must be expressed unambiguously to avoid inconsistencies. However, when trying to make a comparative assessment between the storytelling systems, there is not a common way for expressing this knowledge. That is when a form of expression that covers the different aspects of the knowledge representations becomes necessary. A suitable solution is the use of a Controlled Natural Language (CNL) which is a good half-way point between natural and formal languages. A CNL can be used as a common medium of expression for this heterogeneous set of systems. This paper proposes the use of Controlled Natural Language for expressing every storytelling system's knowledge as a collection of natural language sentences. In this respect, an initial grammar for a CNL is proposed, focusing on certain aspects of this knowledge.", "title": "" }, { "docid": "93d4d58e974e66c11c9b41d12a833da0", "text": "OBJECTIVE\nButyrate enemas may be effective in the treatment of active distal ulcerative colitis. Because colonic fermentation of Plantago ovata seeds (dietary fiber) yields butyrate, the aim of this study was to assess the efficacy and safety of Plantago ovata seeds as compared with mesalamine in maintaining remission in ulcerative colitis.\n\n\nMETHODS\nAn open label, parallel-group, multicenter, randomized clinical trial was conducted. A total of 105 patients with ulcerative colitis who were in remission were randomized into groups to receive oral treatment with Plantago ovata seeds (10 g b.i.d.), mesalamine (500 mg t.i.d.), and Plantago ovata seeds plus mesalamine at the same doses. 
The primary efficacy outcome was maintenance of remission for 12 months.\n\n\nRESULTS\nOf the 105 patients, 102 were included in the final analysis. After 12 months, treatment failure rate was 40% (14 of 35 patients) in the Plantago ovata seed group, 35% (13 of 37) in the mesalamine group, and 30% (nine of 30) in the Plantago ovata plus mesalamine group. Probability of continued remission was similar (Mantel-Cox test, p = 0.67; intent-to-treat analysis). Therapy effects remained unchanged after adjusting for potential confounding variables with a Cox's proportional hazards survival analysis. Three patients were withdrawn because of the development of adverse events consisting of constipation and/or flatulence (Plantago ovata seed group = 1 and Plantago ovata seed plus mesalamine group = 2). A significant increase in fecal butyrate levels (p = 0.018) was observed after Plantago ovata seed administration.\n\n\nCONCLUSIONS\nPlantago ovata seeds (dietary fiber) might be as effective as mesalamine to maintain remission in ulcerative colitis.", "title": "" } ]
scidocsrr
623562e48e41b37e383da109b004f532
Temporal-Kernel Recurrent Neural Networks
[ { "docid": "2bd0a839de94ae46df13422e0515f88d", "text": "The extended Kalman lter (EKF) can be used as an on-line algorithm to determine the weights in a recurrent network given target outputs as it runs. This paper notes some relationships between the EKF as applied to recurrent net learning and some simpler techniques that are more widely used. In particular, making certain simpliications to the EKF gives rise to an algorithm essentially identical to the real-time recurrent learning (RTRL) algorithm. Since the EKF involves adjusting unit activity in the network, it also provides a principled generalization of the teacher forcing technique. Prelinary simulation experiments on simple nite-state Boolean tasks indicate that the EKF can provide substantial speed-up in number of time steps required for training on such problems when compared with simpler on-line gradient algorithms. The computational requirements of the EKF are steep, but turn out to scale with network size at the same rate as RTRL. These observations are intended to provide insights that may lead to recurrent net training techniques that allow better control over the tradeoo between computational cost and convergence time.", "title": "" } ]
[ { "docid": "38f84febbfc9b36ea722789e22282361", "text": "Soft tissue augmentation with temporary dermal fillers is a continuously growing field, supported by the ongoing development and advances in technology and biocompatibility of the products marketed. The longer lasting, less immunogenic and thus more convenient hyaluronic acid (HA) fillers are encompassing by far the biggest share of the temporary dermal filler market. Since the approval of the first HA filler, Restylane, there are at least 10 HA fillers that have been approved by the FDA. Not all of the approved HA fillers are available on the market, and many more are coming. The Juvéderm product line (Allergan, Irvine, CA), consisting of Juvéderm Plus and Juvéderm Ultra Plus, was approved by the FDA in 2006. Juvéderm is a bacterium-derived nonanimal stabilized HA. Juvéderm Ultra and Ultra Plus are smooth, malleable gels with a homologous consistency that use a new technology called \"Hylacross technology\". They have a high concentration of cross-linked HAs, which accounts for its longevity. Juvéderm Ultra Plus is used for volumizing and correcting deeper folds, whereas Juvéderm Ultra is best for contouring and volumizing medium depth facial wrinkles and lip augmentation. Various studies have shown the superiority of the HA filler products compared with collagen fillers for duration, volume needed, and patient satisfaction. Restylane, Perlane, and Juvéderm are currently the most popular dermal fillers used in the United States.", "title": "" }, { "docid": "f8cb44e765ad86bd18e5401283c7e0bf", "text": "Distributional models represent a word through the contexts in which it has been observed. They can be used to predict similarity in meaning, based on the distributional hypothesis, which states that two words that occur in similar contexts tend to have similar meanings. Distributional approaches are often implemented in vector space models. They represent a word as a point in high-dimensional space, where each dimension stands for a context item, and a word’s coordinates represent its context counts. Occurrence in similar contexts then means proximity in space. In this survey we look at the use of vector space models to describe the meaning of words and phrases: the phenomena that vector space models address, and the techniques that they use to do so. Many word meaning phenomena can be described in terms of semantic similarity: synonymy, priming, categorization, and the typicality of a predicate’s arguments. But vector space models can do more than just predict semantic similarity. They are a very flexible tool, because they can make use of all of linear algebra, with all its data structures and operations. The dimensions of a vector space can stand for many things: context words, or non-linguistic context like images, or properties of a concept. And vector space models can use matrices or higher-order arrays instead of vectors for representing more complex relationships. Polysemy is a tough problem for distributional approaches, as a representation that is learned from all of a word’s contexts will conflate the different senses of the word. It can be addressed, using either clustering or vector combination techniques. Finally, we look at vector space models for phrases, which are usually constructed by combining word vectors. 
Vector space models for phrases can predict phrase similarity, and some argue that they can form the basis for a general-purpose representation framework for natural language semantics.", "title": "" }, { "docid": "5411326f95abd20a141ad9e9d3ff72bf", "text": "media files and almost universal use of email, information sharing is almost instantaneous anywhere in the world. Because many of the procedures performed in dentistry represent established protocols that should be read, learned and then practiced, it becomes clear that photography aids us in teaching or explaining to our patients what we think are common, but to them are complex and mysterious procedures. Clinical digital photography. Part 1: Equipment and basic documentation", "title": "" }, { "docid": "032db9c2dba42ca376e87b28ecb812fa", "text": "This paper tries to put various ways in which Natural Language Processing (NLP) and Software Engineering (SE) can be seen as inter-disciplinary research areas. We survey the current literature, with the aim of assessing use of Software Engineering and Natural Language Processing tools in the researches undertaken. An assessment of how various phases of SDLC can employ NLP techniques is presented. The paper also provides the justification of the use of text for automating or combining both these areas. A short research direction while undertaking multidisciplinary research is also provided.", "title": "" }, { "docid": "2efe1390e6d63cb4623ad1591340fc5a", "text": "This paper finds strong evidence of time-variations in the joint distribution of returns on a stock market portfolio and portfolios tracking size and value effects. Mean returns, volatilities and correlations between these equity portfolios are found to be driven by underlying regimes that introduce short-run market timing opportunities for investors. The magnitude of the premia on the size and value portfolios and their hedging properties are found to vary significantly across regimes. Regimes are also found to have a large impact on the optimal asset allocation especially under rebalancing and on investors' welfare.", "title": "" }, { "docid": "07c185c21c9ce3be5754294a73ab5e3c", "text": "In order to support efficient workflow design, recent commercial workflow systems are providing templates of common business processes. These templates, called cases, can be modified individually or collectively into a new workflow to meet the business specification. However, little research has been done on how to manage workflow models, including issues such as model storage, model retrieval, model reuse and assembly. In this paper, we propose a novel framework to support workflow modeling and design by adapting workflow cases from a repository of process models. Our approach to workflow model management is based on a structured workflow lifecycle and leverages recent advances in model management and case-based reasoning techniques. Our contributions include a conceptual model of workflow cases, a similarity flooding algorithm for workflow case retrieval, and a domain-independent AI planning approach to workflow case composition. We illustrate the workflow model management framework with a prototype system called Case-Oriented Design Assistant for Workflow Modeling (CODAW). © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "3b0f2413234109c6df1b643b61dc510b", "text": "Most people think computers will never be able to think. That is, really think. Not now or ever. 
To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what's happening.", "title": "" }, { "docid": "7219c24cdc89d12f7d661e9bba697719", "text": "OBJECTIVE\nThis study sought to examine the effects of media images on men's attitudes toward their body appearance.\n\n\nMETHOD\nA group of college men viewed advertisements showing muscular men, whereas a control group viewed neutral advertisements. Immediately thereafter, participants performed a computerized test of body image perception while unaware of the hypotheses being tested in the study.\n\n\nRESULTS\nThe students exposed to the muscular images showed a significantly greater discrepancy between their own perceived muscularity and the level of muscularity that they ideally wanted to have.\n\n\nDISCUSSION\nThese findings suggest that media images, even in a brief presentation, can affect men's views of their bodies.", "title": "" }, { "docid": "f2911f66107de4778dbc9d0b4c290038", "text": "We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different than conventional English syntactic structures.", "title": "" }, { "docid": "b5927458f6d34f2ff326f0f631a0e450", "text": "Bipolar disorder (BD) is a common and disabling psychiatric condition with a severe socioeconomic impact. BD is treated with mood stabilizers, among which lithium represents the first-line treatment. Lithium alone or in combination is effective in 60% of chronically treated patients, but response remains heterogenous and a large number of patients require a change in therapy after several weeks or months. Many studies have so far tried to identify molecular and genetic markers that could help us to predict response to mood stabilizers or the risk for adverse drug reactions. Pharmacogenetic studies in BD have been for the most part focused on lithium, but the complexity and variability of the response phenotype, together with the unclear mechanism of action of lithium, limited the power of these studies to identify robust biomarkers. Recent pharmacogenomic studies on lithium response have provided promising findings, suggesting that the integration of genome-wide investigations with deep phenotyping, in silico analyses and machine learning could lead us closer to personalized treatments for BD. 
Nevertheless, to date none of the genes suggested by pharmacogenetic studies on mood stabilizers have been included in any of the genetic tests approved by the Food and Drug Administration (FDA) for drug efficacy. On the other hand, genetic information has been included in drug labels to test for the safety of carbamazepine and valproate. In this review, we will outline available studies investigating the pharmacogenetics and pharmacogenomics of lithium and other mood stabilizers, with a specific focus on the limitations of these studies and potential strategies to overcome them. We will also discuss FDA-approved pharmacogenetic tests for treatments commonly used in the management of BD.", "title": "" }, { "docid": "c496424323fa958e09bbe0f6504f842d", "text": "In this research a new hybrid prediction algorithm for breast cancer has been made from a breast cancer data set. Many approaches are available in diagnosing the medical diseases like genetic algorithm, ant colony optimization, particle swarm optimization, cuckoo search algorithm, etc. The proposed algorithm uses a ReliefF attribute reduction with entropy based genetic algorithm for breast cancer detection. The hybrid combination of these techniques is used to handle the dataset with high dimension and uncertainties. The data are obtained from the Wisconsin breast cancer dataset; these data have been categorized based on different properties. The performance of the proposed method is evaluated and the results are compared with other well known feature selection methods. The obtained result shows that the proposed method has a remarkable ability to generate reduced-size subset of salient features while yielding significant classification accuracy for large datasets.", "title": "" }, { "docid": "4d0889329f9011adc05484382e4f5dc0", "text": "A high level of sustained personal plaque control is fundamental for successful treatment outcomes in patients with active periodontal disease and, hence, oral hygiene instructions are the cornerstone of periodontal treatment planning. Other risk factors for periodontal disease also should be identified and modified where possible. Many restorative dental treatments in particular require the establishment of healthy periodontal tissues for their clinical success. Failure by patients to control dental plaque because of inappropriate designs and materials for restorations and prostheses will result in the long-term failure of the restorations and the loss of supporting tissues. Periodontal treatment planning considerations are also very relevant to endodontic, orthodontic and osseointegrated dental implant conditions and proposed therapies.", "title": "" }, { "docid": "0ecb65da4effb562bfa29d06769b1a4c", "text": "A new algorithm for testing primality is presented. The algorithm is distinguishable from the lovely algorithms of Solovay and Strassen [36], Miller [27] and Rabin [32] in that its assertions of primality are certain (i.e., provable from Peano's axioms) rather than dependent on unproven hypothesis (Miller) or probability (Solovay-Strassen, Rabin). An argument is presented which suggests that the algorithm runs within time c1 · ln(n)^(c2 · ln(ln(ln(n)))) where n is the input, and c1, c2 constants independent of n. 
Unfortunately no rigorous proof of this running time is yet available.", "title": "" }, { "docid": "06f8b713ed4020c99403c28cbd1befbc", "text": "In the last decade, deep learning algorithms have become very popular thanks to the achieved performance in many machine learning and computer vision tasks. However, most of the deep learning architectures are vulnerable to so-called adversarial examples. This questions the security of deep neural networks (DNN) for many security- and trust-sensitive domains. The majority of the proposed existing adversarial attacks are based on the differentiability of the DNN cost function. Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations and completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs's cryptographic principle which states that the defence and classification algorithm are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we will primarily focus on a gray-box scenario and do not address a white-box one. More particularly, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing and (c) he can observe the output of the classifier for each given input. We show empirically that our system is efficient against most famous state-of-the-art attacks in black-box and gray-box scenarios.", "title": "" }, { "docid": "d2eefcb0a03f769c5265a66be89c5ca3", "text": "The computational treatment of subjectivity and sentiment in natural language is usually significantly improved by applying features exploiting lexical resources where entries are tagged with semantic orientation (e.g., positive, negative values). In spite of the fair amount of work on Arabic sentiment analysis over the past few years, e.g., (Abbasi et al., 2008; Abdul-Mageed et al., 2014; Abdul-Mageed et al., 2012; Abdul-Mageed and Diab, 2012a; Abdul-Mageed and Diab, 2012b; Abdul-Mageed et al., 2011a; Abdul-Mageed and Diab, 2011), the language remains under-resourced as to these polarity repositories compared to the English language. In this paper, we report efforts to build and present SANA, a large-scale, multi-genre, multi-dialect multi-lingual lexicon for the subjectivity and sentiment analysis of the Arabic language and dialects.", "title": "" }, { "docid": "c501b2c5d67037b7ca263ec9c52503a9", "text": "Edith Penrose's (1959) book, The Theory of the Growth of the Firm, is considered by many scholars in the strategy field to be the seminal work that provided the intellectual foundations for the modern, resource-based theory of the firm. However, the present paper suggests that Penrose's direct or intended contribution to resource-based thinking has been misinterpreted. Penrose never aimed to provide useful strategy prescriptions for managers to create a sustainable stream of rents; rather, she tried to rigorously describe the processes through which firms grow. In her theory, rents were generally assumed not to occur. If they arose this reflected an inefficient macro-level outcome of an otherwise efficient micro-level growth process. 
Nevertheless, her ideas have undoubtedly stimulated ‘good conversation’ within the strategy field in the spirit of Mahoney and Pandian (1992); their emerging use by some scholars as building blocks in models that show how sustainable competitive advantage and rents can be achieved is undeniable, although such use was never intended by Edith Penrose herself. Copyright © 2002 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "8862b581cd7a8a12e4666a5e1a2388be", "text": "Following the Arab Spring, a debate broke out among both academics and pundits as to how important social media had been in bringing about what may have been the least anticipated political development of the 21st century. Critics of the importance of social media pointed in particular to two factors: (a) the proportion of social media messages that were transmitted in English; and (b) the proportion of Arab Spring related social media posts that originated from outside the Arab world. In our chapter, we will test whether two important subsequent unanticipated protests, Turkey's 2013 Gezi Park protests and Ukraine's 2013-14 Euromaidan protest, also are susceptible to such criticisms. To do so, we draw on millions of tweets from both protests including millions of geolocated tweets from Turkey to test hypotheses related to the use of Twitter by protest participants and, more generally, in-country supporters of protest movements, effectively refuting the idea that emerged after the Arab Spring that Twitter use during protests only reflects international attention to an event. ∗The authors are all members of the New York University Social Media and Political Participation (SMaPP) laboratory. The writing of this article was supported by the INSPIRE program of the National Science Foundation (Award # SES-1248077) and Dean Thomas Carew and the Research Investment Fund (RIF) of New York University.", "title": "" }, { "docid": "55b1eb2df97e5d8e871e341c80514ab1", "text": "Modern digital still cameras sample the color spectrum using a color filter array coated to the CCD array such that each pixel samples only one color channel. The result is a mosaic of color samples which is used to reconstruct the full color image by taking the information of the pixels' neighborhood. This process is called demosaicking. While standard literature evaluates the performance of these reconstruction algorithms by comparison of a ground-truth image with a reconstructed Bayer pattern image in terms of grayscale comparison, this work gives an evaluation concept to assess the geometrical accuracy of the resulting color images. Only if no geometrical distortions are created during the demosaicking process, it is allowed to use such images for metric calculations, e.g. 3D reconstruction or arbitrary metrical photogrammetric processing.", "title": "" }, { "docid": "7228ebec1e9ffddafab50e3ac133ebad", "text": "Building robust low and mid-level image representations, beyond edge primitives, is a long-standing goal in vision. Many existing feature detectors spatially pool edge information which destroys cues such as edge intersections, parallelism and symmetry. We present a learning framework where features that capture these mid-level cues spontaneously emerge from image data. Our approach is based on the convolutional decomposition of images under a sparsity constraint and is totally unsupervised. 
By building a hierarchy of such decompositions we can learn rich feature sets that are a robust image representation for both the analysis and synthesis of images.", "title": "" } ]
scidocsrr
b1b28d8b6a9f36025d31c3d1466d4f48
CauseInfer: Automatic and distributed performance diagnosis with hierarchical causality graph in large distributed systems
[ { "docid": "700a6c2741affdbdc2a5dd692130ebb0", "text": "Automated tools for understanding application behavior and its changes during the application lifecycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of enterprise-wide application can effect a large population of customers, lead to delayed projects, and ultimately can result in company financial loss. Significantly shortened time between new software releases further exacerbates the problem of thoroughly evaluating the performance of an updated application. Our thesis is that online performance modeling should be a part of routine application monitoring. Early, informative warnings on significant changes in application performance should help service providers to timely identify and prevent performance problems and their negative impact on the service. We propose a novel framework for automated anomaly detection and application change analysis. It is based on integration of two complementary techniques: (i) a regression-based transaction model that reflects a resource consumption model of the application, and (ii) an application performance signature that provides a compact model of runtime behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: It is not intrusive and is based on monitoring data that is typically available in enterprise production environments. The introduced solution further enables the automation of capacity planning and resource provisioning tasks of multitier applications in rapidly evolving IT environments.", "title": "" }, { "docid": "e06cc2a4291c800a76fd2a107d2230e4", "text": "Surprisingly, console logs rarely help operators detect problems in large-scale datacenter services, for they often consist of the voluminous intermixing of messages from many software components written by independent developers. We propose a general methodology to mine this rich source of information to automatically detect system runtime problems. We first parse console logs by combining source code analysis with information retrieval to create composite features. We then analyze these features using machine learning to detect operational problems. We show that our method enables analyses that are impossible with previous methods because of its superior ability to create sophisticated features. We also show how to distill the results of our analysis to an operator-friendly one-page decision tree showing the critical messages associated with the detected problems. We validate our approach using the Darkstar online game server and the Hadoop File System, where we detect numerous real problems with high accuracy and few false positives. In the Hadoop case, we are able to analyze 24 million lines of console logs in 3 minutes. Our methodology works on textual console logs of any size and requires no changes to the service software, no human input, and no knowledge of the software's internals.", "title": "" } ]
[ { "docid": "0f3cad05c9c267f11c4cebd634a12c59", "text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.", "title": "" }, { "docid": "738df0dc6483e50d1480d27fe45e1034", "text": "A novel approach is proposed for the extraction of legal and courtesy amounts and date from cheque images based on the structural description of cheques. A method for the representation of cheques is presented. Several image processing techniques and algorithms have been developed in this approach. Experimental results show that the approach is effective and the proposed techniques and algorithms perform well.", "title": "" }, { "docid": "39db226d1f8980b3f0bc008c42248f2f", "text": "In vitro studies have demonstrated antibacterial activity of essential oils (EOs) against Listeria monocytogenes, Salmonella typhimurium, Escherichia coli O157:H7, Shigella dysenteria, Bacillus cereus and Staphylococcus aureus at levels between 0.2 and 10 microl ml(-1). Gram-negative organisms are slightly less susceptible than gram-positive bacteria. A number of EO components has been identified as effective antibacterials, e.g. carvacrol, thymol, eugenol, perillaldehyde, cinnamaldehyde and cinnamic acid, having minimum inhibitory concentrations (MICs) of 0.05-5 microl ml(-1) in vitro. A higher concentration is needed to achieve the same effect in foods. Studies with fresh meat, meat products, fish, milk, dairy products, vegetables, fruit and cooked rice have shown that the concentration needed to achieve a significant antibacterial effect is around 0.5-20 microl g(-1) in foods and about 0.1-10 microl ml(-1) in solutions for washing fruit and vegetables. EOs comprise a large number of components and it is likely that their mode of action involves several targets in the bacterial cell. 
The hydrophobicity of EOs enables them to partition in the lipids of the cell membrane and mitochondria, rendering them permeable and leading to leakage of cell contents. Physical conditions that improve the action of EOs are low pH, low temperature and low oxygen levels. Synergism has been observed between carvacrol and its precursor p-cymene and between cinnamaldehyde and eugenol. Synergy between EO components and mild preservation methods has also been observed. Some EO components are legally registered flavourings in the EU and the USA. Undesirable organoleptic effects can be limited by careful selection of EOs according to the type of food.", "title": "" }, { "docid": "ae393c8f1afc39d6f4ad7ce4b5640034", "text": "Generative adversarial networks have gained a lot of attention in general computer vision community due to their capability of data generation without explicitly modelling the probability density function and robustness to overfitting. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into the training and imposing higher order consistency that is proven to be useful in many cases, such as in domain adaptation, data augmentation, and image-to-image translation. These nice properties have attracted researcher in the medical imaging community and we have seen quick adoptions in many traditional tasks and some novel applications. This trend will continue to grow based on our observation therefore we conducted a review of the recent advances in medical imaging using the adversarial training scheme in the hope of benefiting researchers that are interested in this technique.", "title": "" }, { "docid": "7621e0dcdad12367dc2cfcd12d31c719", "text": "Microblogging sites have emerged as major platforms for bloggers to create and consume posts as well as to follow other bloggers and get informed of their updates. Due to the large number of users, and the huge amount of posts they create, it becomes extremely difficult to identify relevant and interesting blog posts. In this paper, we propose a novel convex collective matrix completion (CCMC) method that effectively utilizes user-item matrix and incorporates additional user activity and topic-based signals to recommend relevant content. The key advantage of CCMC over existing methods is that it can obtain a globally optimal solution and can easily scale to large-scale matrices using Hazan’s algorithm. To the best of our knowledge, this is the first work which applies and studies CCMC as a recommendation method in social media. We conduct a large scale study and show significant improvement over existing state-ofthe-art approaches.", "title": "" }, { "docid": "dbfff3ad127f8966aae988f52eeb0b41", "text": "Fingerprint recognition is a method of biometric authentication that uses pattern recognition techniques based on highresolution fingerprints images of the individual. Fingerprints have been used in forensic as well as commercial applications for identification as well as verification. Singular point detection is the most important task of fingerprint image classification operation. Two types of singular points called core and delta points are claimed to be enough to classify the fingerprints. The classification can act as an important indexing mechanism for large fingerprint databases which can reduce the query time and the computational complexity. Usually fingerprint images have noisy background and the local orientation field also changes very rapidly in the singular point area. 
It is difficult to locate the singular point precisely. There already exists many singular point detection algorithms, Most of them can efficiently detect the core point when the image quality is fine, but when the image quality is poor, the efficiency of the algorithm degrades rapidly. In the present work, a new method of detection and localization of core points in a fingerprint image is proposed.", "title": "" }, { "docid": "57f604677a10cb8112c7e254e8e7270b", "text": "This paper introduces building blocks for modular design of elliptic, pseudoelliptic, and self-equalized filters. The first building block is of second order and generates two transmission zeros (TZs), which are either on the real or imaginary axis. Moving the zeros from the real axis (linear phase response) to the imaginary axis (attenuation poles) requires changing the sign of one coupling coefficient. The second building block is a structure of order three, called extended doublet, which allows the generation of two TZs practically anywhere in the complex plane. An important property of this block is its ability to move the two TZs from the real axis to the imaginary axis of the complex s-plane without changing the signs of its coupling coefficients. The third building block is of third order and generates two TZs, which can be moved from the real to the imaginary axis by changing the sign of one coupling coefficient. Simple waveguide structures to implement these blocks are introduced for validation, although this general approach is feasible for all resonator filter types. Higher order filters are designed modularly by cascading an arbitrary number of these building blocks. A novel concept, which allows the independent control of each pair of TZs in higher order filters, is then introduced. It is shown that the new concept allows a filter of order N to generate N TZs without directly coupling the source to the load even when the coupling coefficients are all assumed frequency independent. The same approach can be used to design and reduce the sensitivity of higher order elliptic and pseudoelliptic filters using other building blocks such as doublets or a mixture of building blocks of different orders and properties. Measured results and extensive computer simulation are presented to demonstrate the validity of the concept and the performance of the designed filters.", "title": "" }, { "docid": "41b83a85c1c633785766e3f464cbd7a6", "text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.", "title": "" }, { "docid": "377210d62d3d3cd36f312c1812080a31", "text": "Nowadays air quality data can be easily accumulated by sensors around the world. 
Analysis on air quality data is very useful for society decision. Among five major air pollutants which are calculated for AQI (Air Quality Index), PM2.5 data is the most concerned by the people. PM2.5 data is also cross-impacted with the other factors in the air and which has properties of non-linear nonstationary including high noise level and outlier. Traditional methods cannot solve the problem of PM2.5 data clustering very well because of their inherent characteristics. In this paper, a novel model-based feature extraction method is proposed to address this issue. The EPLS model includes 1) Mode Decomposition, in which EEMD algorithm is applied to the aggregation dataset; 2) Dimension Reduction, which is carried out for a more significant set of vectors; 3) Least Squares Projection, in which all testing data are projected to the obtained vectors. Synthetic dataset and air quality dataset are applied to different clustering methods and similarity measures. Experimental results demonstrate", "title": "" }, { "docid": "080dbf49eca85711f26d4e0d8386937a", "text": "In this work, we investigate the use of directional antennas and beam steering techniques to improve performance of 802.11 links in the context of communication between a moving vehicle and roadside APs. To this end, we develop a framework called MobiSteer that provides practical approaches to perform beam steering. MobiSteer can operate in two modes - cached mode - where it uses prior radio survey data collected during \"idle\" drives, and online mode, where it uses probing. The goal is to select the best AP and beam combination at each point along the drive given the available information, so that the throughput can be maximized. For the cached mode, an optimal algorithm for AP and beam selection is developed that factors in all overheads.\n We provide extensive experimental results using a commercially available eight element phased-array antenna. In the experiments, we use controlled scenarios with our own APs, in two different multipath environments, as well as in situ scenarios, where we use APs already deployed in an urban region - to demonstrate the performance advantage of using MobiSteer over using an equivalent omni-directional antenna. We show that MobiSteer improves the connectivity duration as well as PHY-layer data rate due to better SNR provisioning. In particular, MobiSteer improves the throughput in the controlled experiments by a factor of 2 - 4. In in situ experiments, it improves the connectivity duration by more than a factor of 2 and average SNR by about 15 dB.", "title": "" }, { "docid": "cec3d18ea5bd7eba435e178e2fcb38b0", "text": "The synthesis of three-degree-of-freedom planar parallel manipulators is performed using a genetic algorithm. The architecture of a manipulator and its position and orientation with respect to a prescribed workspace are determined. The architectural parameters are optimized so that the manipulator’s constant-orientation workspace is as close as possible to a prescribed workspace.
The manipulator’s workspace is discretized and its dexterity is computed as a global property of the manipulator. An analytical expression of the singularity loci (local null dexterity) can be obtained from the Jacobian matrix determinant, and its intersection with the manipulator’s workspace may be verified and avoided. Results are shown for different conditions. First, the manipulators’ workspaces are optimized for a prescribed workspace, without considering whether the singularity loci intersect it or not. Then the same type of optimization is performed, taking intersections with the singularity loci into account. In the following results, the optimization of the manipulator’s dexterity is also included in an objective function, along with the workspace optimization and the avoidance of singularity loci. Results show that the end-effector’s location has a significant effect on the manipulator’s dexterity. ©2002 John Wiley & Sons, Inc.", "title": "" }, { "docid": "0ac7db546c11b9d18897ceeb2e5be70f", "text": "A backstepping approach is proposed in this paper to cope with the failure of a quadrotor propeller. The presented methodology supposes to turn off also the motor which is opposite to the broken one. In this way, a birotor configuration with fixed propellers is achieved. The birotor is controlled to follow a planned emergency landing trajectory. Theory shows that the birotor can reach any point in the Cartesian space losing the possibility to control the yaw angle. Simulation tests are employed to validate the proposed controller design.", "title": "" }, { "docid": "9a30158449c7bc5902bc0005afa4d35e", "text": "DTW algorithm compares the parameters of an unknown spoken word with the parameters of one or more reference templates. The more reference templates are used for the same word, the higher is the recognition rate. But increasing the number of reference templates for the same word to recognize, leads to an increase in memory resources and computing time. The proposed algorithm is used in the learning phase and combines the advantages of DTW and Vector Quantization (VQ); instead of storing multiple reference templates, it stores only one reference model for each word and that reference is based on classes (like in the vector quantization method), each class is represented by a centroid (or codeword). In the recognition phase, the parameters of the unknown utterance are compared to the centroids of the reference model. This solution increases the speed of calculation in the recognition phase and reduces the quantity of used memory.", "title": "" }, { "docid": "e0db3c5605ea2ea577dda7d549e837ae", "text": "This paper presents a system based on new operators for handling sets of propositional clauses represented by means of ZBDDs. The high compression power of such data structures allows efficient encodings of structured instances. A specialized operator for the distribution of sets of clauses is introduced and used for performing multi-resolution on clause sets. Cut eliminations between sets of clauses of exponential size may then be performed using polynomial size data structures. The ZRES system, a new implementation of the Davis-Putnam procedure of 1960, solves two hard problems for resolution, that are currently out of the scope of the best SAT provers.", "title": "" }, { "docid": "19d53b5a9ee4e4e6731b572bdc7dfbd7", "text": "Today, crowdfunding has emerged as a popular means for fundraising. Among various crowdfunding platforms, reward-based ones are the most well received. 
However, to the best knowledge of the authors, little research has been performed on rewards. In this paper, we analyze a Kickstarter dataset, which consists of approximately 3K projects and 30K rewards. The analysis employs various statistical methods, including Pearson correlation tests, Kolmogorov-Smirnov test and Kaplan-Meier estimation, to study the relationships between various reward characteristics and project success. We find that projects with more rewards, with limited offerings and late-added rewards are more likely to succeed.", "title": "" }, { "docid": "218b2f7a8e088c1023202bd27164b780", "text": "The explanation of crime has been preoccupied with individuals and communities as units of analysis. Recent work on offender decision making (Cornish and Clarke, 1986), situations (Clarke, 1983, 1992), environments (Brantingham and Brantingham 1981, 1993), routine activities (Cohen and Felson, 1979; Felson, 1994), and the spatial organization of drug dealing in the U.S. suggest a new unit of analysis: places. Crime is concentrated heavily in a few \"hot spots\" of crime (Sherman et al. 1989). The concentration of crime among repeat places is more intensive than it is among repeat offenders (Spelman and Eck, 1989). The components of this concentration are analogous to the components of the criminal careers of persons: onset, desistance, continuance, specialization, and desistance. The theoretical explanation for variance in these components is also stronger at the level of places than it is for individuals. These facts suggest a need for rethinking theories of crime, as well as a new approach to theorizing about crime for", "title": "" }, { "docid": "cb2f5ac9292df37860b02313293d2f04", "text": "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner — with the objective of searching for individuals that have similar pattern of behavior as the known seeds — based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube’s engagement graph spanning over a", "title": "" }, { "docid": "bda419b065c53853f86f7fdbf0e330f2", "text": "In current e-learning studies, one of the main challenges is to keep learners motivated in performing desirable learning behaviours and achieving learning goals. Towards tackling this challenge, social e-learning contributes favourably, but it requires solutions that can reduce side effects, such as abusing social interaction tools for ‘chitchat’, and further enhance learner motivation.
In this paper, we propose a set of contextual gamification strategies, which apply flow and self-determination theory for increasing intrinsic motivation in social e-learning environments. This paper also presents a social e-learning environment that applies these strategies, followed by a user case study, which indicates increased learners’ perceived intrinsic motivation.", "title": "" }, { "docid": "4e08aba1ff8d0a5d0d23763dad627cb8", "text": "Abstraction: Real systems are difficult to specify and verify without abstractions. We need to identify different kinds of abstractions, perhaps tailored for certain kinds of systems or problem domains, and we need to develop ways to justify them formally, perhaps using mechanical help. Reusable models and theories: Rather than defining models and theories from scratch each time a new application is tackled, it would be better to have reusable and parameterized models and theories. Combinations of mathematical theories: Many safety critical systems have both digital and analog components. These hybrid systems require reasoning about both discrete and continuous mathematics. System developers would like to be able to predict how well their system will operate in the field. Indeed, they often care more about performance than correctness. Performance modeling borrows strongly from probability, statistics, and queueing theory. Data structures and algorithms: To handle larger search spaces and larger systems, new data structures and algorithms, e.g. more concise data structures for representing boolean functions, are needed.", "title": "" } ]
scidocsrr
5ba192d81e40a975b186f7faf44c34d7
Depth map denoising using graph-based transform and group sparsity
[ { "docid": "a19c27371c6bf366fddabc2fd3f277b7", "text": "Simultaneous sparse coding (SSC) or nonlocal image representation has shown great potential in various low-level vision tasks, leading to several state-of-the-art image restoration techniques, including BM3D and LSSC. However, it still lacks a physically plausible explanation about why SSC is a better model than conventional sparse coding for the class of natural images. Meanwhile, the problem of sparsity optimization, especially when tangled with dictionary learning, is computationally difficult to solve. In this paper, we take a low-rank approach toward SSC and provide a conceptually simple interpretation from a bilateral variance estimation perspective, namely that singular-value decomposition of similar packed patches can be viewed as pooling both local and nonlocal information for estimating signal variances. Such perspective inspires us to develop a new class of image restoration algorithms called spatially adaptive iterative singular-value thresholding (SAIST). For noise data, SAIST generalizes the celebrated BayesShrink from local to nonlocal models; for incomplete data, SAIST extends previous deterministic annealing-based solution to sparsity optimization through incorporating the idea of dictionary learning. In addition to conceptual simplicity and computational efficiency, SAIST has achieved highly competent (often better) objective performance compared to several state-of-the-art methods in image denoising and completion experiments. Our subjective quality results compare favorably with those obtained by existing techniques, especially at high noise levels and with a large amount of missing data.", "title": "" }, { "docid": "e1b83ecf08498491b8d70043cc67d523", "text": "We give a brief discussion of denoising algorithms for depth data and introduce a novel technique based on the NL-means filter. A unified approach is presented that removes outliers from depth data and accordingly achieves an unbiased smoothing result. This robust denoising algorithm takes intra-patch similarity and optional color information into account in order to handle strong discontinuities and to preserve fine detail structure in the data. We achieve fast computation times with a GPU-based implementation. Results using data from a time-of-flight camera system show a significant gain in visual quality.", "title": "" }, { "docid": "0771cd99e6ad19deb30b5c70b5c98183", "text": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. 
A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.", "title": "" }, { "docid": "b5453d9e4385d5a5ff77997ad7e3f4f0", "text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "title": "" } ]
[ { "docid": "befa45cc1b2f9820b5f990d4f2971a3c", "text": "The stromal vasculature in tumors is a vital conduit of nutrients and oxygen for cancer cells. To date, the vast majority of studies have focused on unraveling the genetic basis of vessel sprouting (also termed angiogenesis). In contrast to the widely studied changes in cancer cell metabolism, insight in the metabolic regulation of angiogenesis is only just emerging. These studies show that metabolic pathways in endothelial cells (ECs) importantly regulate angiogenesis in conjunction with genetic signals. In this review, we will highlight these emerging insights in EC metabolism and discuss them in perspective of cancer cell metabolism. While it is generally assumed that cancer cells have unique metabolic adaptations, not shared by healthy non-transformed cells, we will discuss parallels and highlight differences between endothelial and cancer cell metabolism and consider possible novel therapeutic opportunities arising from targeting both cancer and endothelial cells.", "title": "" }, { "docid": "be96717240a76265febc4041d3f09338", "text": "We consider the problem of property testing for differential privacy: with black-box access to a purportedly private algorithm, can we verify its privacy guarantees? In particular, we show that any privacy guarantee that can be efficiently verified is also efficiently breakable in the sense that there exist two databases between which we can efficiently distinguish. We give lower bounds on the query complexity of verifying pure differential privacy, approximate differential privacy, random pure differential privacy, and random approximate differential privacy. We also give algorithmic upper bounds. The lower bounds obtained in the work are infeasible for the scale of parameters that are typically considered reasonable in the differential privacy literature, even when we suppose that the verifier has access to an (untrusted) description of the algorithm. A central message of this work is that verifying privacy requires compromise by either the verifier or the algorithm owner. Either the verifier has to be satisfied with a weak privacy guarantee, or the algorithm owner has to compromise on side information or access to the algorithm.", "title": "" }, { "docid": "884ea5137f9eefa78030608097938772", "text": "In this paper, we propose a new concept - the \"Reciprocal Velocity Obstacle\"- for real-time multi-agent navigation. We consider the case in which each agent navigates independently without explicit communication with other agents. Our formulation is an extension of the Velocity Obstacle concept [3], which was introduced for navigation among (passively) moving obstacles. Our approach takes into account the reactive behavior of the other agents by implicitly assuming that the other agents make a similar collision-avoidance reasoning. We show that this method guarantees safe and oscillation- free motions for each of the agents. We apply our concept to navigation of hundreds of agents in densely populated environments containing both static and moving obstacles, and we show that real-time and scalable performance is achieved in such challenging scenarios.", "title": "" }, { "docid": "a13d1144c4a719b1d6d5f4f0e645c2e3", "text": "Array antennas for 77GHz automotive radar application are designed and measured. Linear series-fed patch array (SFPA) antenna is designed for transmitters of middle range radar (MRR) and all the receivers. 
A planar SFPA based on the linear one and substrate integrated waveguide (SIW) feeding network is proposed for transmitter of long range radar (LRR), which can decline the radiation from feeding network itself. The array antennas are fabricated, both the performances with and without radome of these array antennas are measured. Good agreement between simulation and measurement has been achieved. They can be good candidates for 77GHz automotive application.", "title": "" }, { "docid": "80c4198d97b42988aa2fccaa97667bcc", "text": "Although the principles of gossip protocols are relatively easy to grasp, their variety can make their design and evaluation highly time consuming. This problem is compounded by the lack of a unified programming framework for gossip, which means developers cannot easily reuse, compose, or adapt existing solutions to fit their needs, and have limited opportunities to share knowledge and ideas. In this paper, we consider how component frameworks, which have been widely applied to implement middleware solutions, can facilitate the development of gossip-based systems in a way that is both generic and simple. We show how such an approach can maximize code reuse, simplify the implementation of gossip protocols, and facilitate dynamic evolution and redeployment.Also known as “epidemic” protocols.", "title": "" }, { "docid": "2e3bd582d0984f687032f03eb51b5fc0", "text": "We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is available at", "title": "" }, { "docid": "3feb565be1dc3439fd2fdf6b0e25d65b", "text": "Previous research demonstrated that a single amnesic patient could acquire complex knowledge and processes required for the performance of a computer data-entry task. The present study extends the earlier work to a larger group of brain-damaged patients with memory disorders of varying severity and of various etiologies and with other accompanying cognitive deficits. All patients were able to learn both the data-entry procedures and the factual information associated with the task. Declarative knowledge was acquired by patients at a much slower rate than normal whereas procedural learning proceeded at approximately the same rate in patients and control subjects. Patients also showed evidence of transfer of declarative knowledge to the procedural task, as well as transfer of the data-entry procedures across changes in materials.", "title": "" }, { "docid": "cca61271fe31513cb90c2ac7ecb0b708", "text": "This paper deals with the synthesis of fuzzy state feedback controller of induction motor with optimal performance. 
First, the Takagi-Sugeno (T-S) fuzzy model is employed to approximate a non linear system in the synchronous d-q frame rotating with electromagnetic field-oriented. Next, a fuzzy controller is designed to stabilise the induction motor and guaranteed a minimum disturbance attenuation level for the closed-loop system. The gains of fuzzy control are obtained by solving a set of Linear Matrix Inequality (LMI). Finally, simulation results are given to demonstrate the controller’s effectiveness. Keywords—Rejection disturbance, fuzzy modelling, open-loop control, Fuzzy feedback controller, fuzzy observer, Linear Matrix Inequality (LMI)", "title": "" }, { "docid": "913ea886485fae9b567146532ca458ac", "text": "This article presents a new method to illustrate the feasibility of 3D topology creation. We base the 3D construction process on testing real cases of implementation of 3D parcels construction in a 3D cadastral system. With the utilization and development of dense urban space, true 3D geometric volume primitives are needed to represent 3D parcels with the adjacency and incidence relationship. We present an effective straightforward approach to identifying and constructing the valid volumetric cadastral object from the given faces, and build the topological relationships among 3D cadastral objects on-thefly, based on input consisting of loose boundary 3D faces made by surveyors. This is drastically different from most existing methods, which focus on the validation of single volumetric objects after the assumption of the object’s creation. Existing methods do not support the needed types of geometry/ topology (e.g. non 2-manifold, singularities) and how to create and maintain valid 3D parcels is still a challenge in practice. We will show that the method does not change the faces themselves and faces in a given input are independently specified. Various volumetric objects, including non-manifold 3D cadastral objects (legal spaces), can be constructed correctly by this method, as will be shown from the", "title": "" }, { "docid": "59597ab549189c744aae774259f84745", "text": "This paper addresses the problem of multi-view people occupancy map estimation. Existing solutions either operate per-view, or rely on a background subtraction preprocessing. Both approaches lessen the detection performance as scenes become more crowded. The former does not exploit joint information, whereas the latter deals with ambiguous input due to the foreground blobs becoming more and more interconnected as the number of targets increases. Although deep learning algorithms have proven to excel on remarkably numerous computer vision tasks, such a method has not been applied yet to this problem. In large part this is due to the lack of large-scale multi-camera data-set. The core of our method is an architecture which makes use of monocular pedestrian data-set, available at larger scale than the multi-view ones, applies parallel processing to the multiple video streams, and jointly utilises it. Our end-to-end deep learning method outperforms existing methods by large margins on the commonly used PETS 2009 data-set. 
Furthermore, we make publicly available a new three-camera HD data-set.", "title": "" }, { "docid": "e7d26ac50053c8ff3799bbeafbf84f84", "text": "Social-emotional competence is a critical factor to target with universal preventive interventions that are conducted in schools because the construct (a) associates with social, behavioral, and academic outcomes that are important for healthy development; (b) predicts important life outcomes in adulthood; (c) can be improved with feasible and cost-effective interventions; and (d) plays a critical role in the behavior change process. This article reviews this research and what is known about effective intervention approaches. Based on that, an intervention model is proposed for how schools should enhance the social and emotional learning of students in order to promote resilience. Suggestions are also offered for how to support implementation of this intervention model at scale.", "title": "" }, { "docid": "29c32c8c447b498f43ec215633305923", "text": "A growing body of evidence suggests that empathy for pain is underpinned by neural structures that are also involved in the direct experience of pain. In order to assess the consistency of this finding, an image-based meta-analysis of nine independent functional magnetic resonance imaging (fMRI) investigations and a coordinate-based meta-analysis of 32 studies that had investigated empathy for pain using fMRI were conducted. The results indicate that a core network consisting of bilateral anterior insular cortex and medial/anterior cingulate cortex is associated with empathy for pain. Activation in these areas overlaps with activation during directly experienced pain, and we link their involvement to representing global feeling states and the guidance of adaptive behavior for both self- and other-related experiences. Moreover, the image-based analysis demonstrates that depending on the type of experimental paradigm this core network was co-activated with distinct brain regions: While viewing pictures of body parts in painful situations recruited areas underpinning action understanding (inferior parietal/ventral premotor cortices) to a stronger extent, eliciting empathy by means of abstract visual information about the other's affective state more strongly engaged areas associated with inferring and representing mental states of self and other (precuneus, ventral medial prefrontal cortex, superior temporal cortex, and temporo-parietal junction). In addition, only the picture-based paradigms activated somatosensory areas, indicating that previous discrepancies concerning somatosensory activity during empathy for pain might have resulted from differences in experimental paradigms. We conclude that social neuroscience paradigms provide reliable and accurate insights into complex social phenomena such as empathy and that meta-analyses of previous studies are a valuable tool in this endeavor.", "title": "" }, { "docid": "6cf7a5286a03190b0910380830968351", "text": "In this paper, the mechanical and aerodynamic design, carbon composite production, hierarchical control system design and vertical flight tests of a new unmanned aerial vehicle, which is capable of VTOL (vertical takeoff and landing) like a helicopter and long range horizontal flight like an airplane, are presented. Real flight tests show that the aerial vehicle can successfully operate in VTOL mode. 
Kalman filtering is employed to obtain accurate roll and pitch angle estimations.", "title": "" }, { "docid": "88bc4f8a24a2e81a9c133d11a048ca10", "text": "In this paper, we give an overview of the HDF5 technology suite and some of its applications. We discuss the HDF5 data model, the HDF5 software architecture and some of its performance enhancing capabilities.", "title": "" }, { "docid": "9994825fcf1d5a5e252937af66255c8d", "text": "Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions this causal knowledge must be learned without access to supervised data. To solve this problem, we present a novel method that incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and to learn them efficiently. It learns to discover objects and to model physical interactions between them from raw visual images in a purely unsupervised fashion. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches, that do not incorporate such prior knowledge. We show its ability to handle occlusion and that it can extrapolate learned knowledge to environments with different numbers of objects.", "title": "" }, { "docid": "99d9dcef0e4441ed959129a2a705c88e", "text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. 
For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions (as of March 2011) defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that reference the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. However, in the setting of Wikipedia infoboxes these techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages.
This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2. (1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs. As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions", "title": "" }, { "docid": "0b52e4be9b45d109e13750f522aa84a3", "text": "This dissertation presents, discusses, and sheds some light on the problems that appear when computers try to automatically classify musical genres from audio signals. 
In particular, a method is proposed for the automatic music genre classification by using a computational approach that is inspired in music cognition and musicology in addition to Music Information Retrieval techniques. In this context, we design a set of experiments by combining the different elements that may affect the accuracy in the classification (audio descriptors, machine learning algorithms, etc.). We evaluate, compare and analyze the obtained results in order to explain the existing glass-ceiling in genre classification, and propose new strategies to overcome it. Moreover, starting from the polyphonic audio content processing we include musical and cultural aspects of musical genre that have usually been neglected in the current state of the art approaches. This work studies different families of audio descriptors related to timbre, rhythm, tonality and other facets of music, which have not been frequently addressed in the literature. Some of these descriptors are proposed by the author and others come from previous existing studies. We also compare machine learning techniques commonly used for classification and analyze how they can deal with the genre classification problem. We also present a discussion on their ability to represent the different classification models proposed in cognitive science. Moreover, the classification results using the machine learning techniques are contrasted with the results of some listening experiments proposed. This comparison drive us to think of a specific architecture of classifiers that will be justified and described in detail. It is also one of the objectives of this dissertation to compare results under different data configurations, that is, using different datasets, mixing them and reproducing some real scenarios in which genre classifiers could be used (huge datasets). As a conclusion, we discuss how the classification architecture here proposed can break the existing glass-ceiling effect in automatic genre classification. To sum up, this dissertation contributes to the field of automatic genre classification: a) It provides a multidisciplinary review of musical genres and its classification; b) It provides a qualitative and quantitative evaluation of families of audio descriptors used for automatic classification; c) It evaluates different machine learning techniques and their pros and cons in the context of genre classification; d) It proposes a new architecture of classifiers after analyzing music genre classification from different disciplines; e) It analyzes the behavior of this proposed architecture in different environments consisting of huge or mixed datasets.", "title": "" }, { "docid": "88f0adb7573ce611a654b53cd07f23fb", "text": "In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low resolution \"thumb-nail\" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. 
Specifically, we first applied kernel transforms to each possible training pair of faces sample and then mapped the high-dimensional feature space back to input space. Our scheme using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages has been proven. By associative memory, all the prototypical faces of one particular person are used to reconstruct themselves and the reconstruction error for a probe face image is used to decide if the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided and our proposed scheme offers better recognition accuracy on all of the face datasets.", "title": "" }, { "docid": "820b38bf53b58c0557a574a3c210955a", "text": "Modern students will work in the “Industry 4.0” and create digital economy of Russia. Digital economy is based on the infrastructure organization of production, which is based on the network interaction of production and technology. The world infrastructure of transport, trading, finance provides a technological organization of production and consumers. Industry 4.0 begins with the creation of redundant infrastructure networks - Industrial Internet of Things (IIoT). Today, university education corresponds to the processing form of the technological organization. The infrastructure form of the organization of educational process is necessary. Information technologies supporting the entire training process and representing it on the Internet, do not exist. A new methodology of educational process is suggested. Training in the training process, lectures, seminars, laboratory works are organized according to the logical fulfillment of the target works. The task of the teacher is the design of the target works and the analysis of the result. The simulation training system is a form of the activity organization that is directed to the development of participants and obtaining a result. The result will be a course work as a part of the project in the simulation system. Students act in specially developed problem, training and developing situations. Practical formation in the field of wireless communications technologies is carried out on the basis of equipment of software defined radio (SDR) or universal software radio peripheral (USRP) systems as a combination of hardware and software platforms for creating prototypes of real radio systems. Laboratory and research works are performed on the basis of this firmware radio system (SDR) in the remote Internet access Mode. This conforms to the principles of the Industry4.0. The project is carried out under the control of the teacher. The activity of each student is monitored. The Internet provides individual activities and simulation, communication, scheduling, and group activity analysis. As a result, there is no knowledge of the sum of knowledge, and the integral technical culture, which is a criterion of the education.", "title": "" } ]
scidocsrr
e94dffb1b53c6fd2937ee59e687d511d
Privacy Preserving Data Mining
[ { "docid": "5ed4c23e1fcfb3f18c18bb1eb6f408ab", "text": "In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with confidential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more efficient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using efficient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs.", "title": "" }, { "docid": "0a968f1dcba70ab1a42c25b1a6ec2a5c", "text": "In recent years, privacy-preserving data mining has been studied extensively, because of the wide proliferation of sensitive information on the internet. A number of algorithmic techniques have been designed for privacy-preserving data mining. In this paper, we provide a review of the state-of-the-art methods for privacy. We discuss methods for randomization, k-anonymization, and distributed privacy-preserving data mining. We also discuss cases in which the output of data mining applications needs to be sanitized for privacy-preservation purposes. We discuss the computational and theoretical limits associated with privacy-preservation over high dimensional data sets.", "title": "" } ]
[ { "docid": "bb782cfc4528de63c38dfc2165f9c4b4", "text": "Many studies have investigated the smart grid architecture and communication models in the past few years. However, the communication model and architecture for a smart grid still remain unclear. Today's electric power distribution is very complex and maladapted because of the lack of efficient and cost-effective energy generation, distribution, and consumption management systems. A wireless smart grid communication system can playan important role in achieving these goals. In thispaper, we describe a smart grid communication architecture in which we merge customers and distributors into a single domain. In the proposed architecture, all the home area networks, neighborhood area networks, and local electrical equipment form a local wireless mesh network (LWMN). Each device or meter can act as a source, router, or relay. The data generated in any node (device/meter) reaches the data collector via other nodes. The data collector transmits this data via the access point of a wide area network (WAN). Finally, data is transferred to the service provider or to the control center of the smart grid. We propose a wireless cooperative communication model for the LWMN. We deploy a limited number of smart relays to improve the performance of the network. A novel relay selection mechanism is also proposed to reduce the relay selection overhead. Simulation results show that our cooperative smart grid (coopSG) communication model improves the end-to-end packet delivery latency, throughput, and energy efficiency over both the Wang et al. and Niyato et al. models.", "title": "" }, { "docid": "df701752c19f1b0ff56555a89201d0a9", "text": "This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization.", "title": "" }, { "docid": "4d43bf711d1d756c2067369bbc9f8137", "text": "This paper develops a framework for examining the effect of demand uncertainty and forecast error on unit costs and customer service levels in the supply chain, including Material Requirements Planning (MRP) type manufacturing systems. The aim is to overcome the methodological limitations and confusion that has arisen in much earlier research. To illustrate the issues, the problem of estimating the value of improving forecasting accuracy for a manufacturer was simulated. The topic is of practical importance because manufacturers spend large sums of money in purchasing and staffing forecasting support systems to achieve more accurate forecasts. In order to estimate the value a two-level MRP system with lot sizing where the product is manufactured for stock was simulated. Final product demand was generated by two commonly occurring stochastic processes and with different variances. Different levels of forecasting error were then introduced to arrive at corresponding values for improving forecasting accuracy. The quantitative estimates of improved accuracy were found to depend on both the demand generating process and the forecasting method. 
Within this more complete framework, the substantive results confirm earlier research that the best lot sizing rules for the deterministic situation are the worst whenever there is uncertainty in demand. However, size matters, both in the demand uncertainty and forecasting errors. The quantitative differences depend on service level and also the form of demand uncertainty. Unit costs for a given service level increase exponentially as the uncertainty in the demand data increases. The paper also estimates the effects of mis-specification of different sizes of forecast error in addition to demand uncertainty. In those manufacturing problems with high demand uncertainty and high forecast error, improved forecast accuracy should lead to substantial percentage improvements in unit costs. Methodologically, the results demonstrate the need to simulate demand uncertainty and the forecasting process separately. Journal of the Operational Research Society (2011) 62, 483–500. doi:10.1057/jors.2010.40 Published online 16 June 2010", "title": "" }, { "docid": "78db8b57c3221378847092e5283ad754", "text": "This paper analyzes correlations and causalities between Bitcoin market indicators and Twitter posts containing emotional signals on Bitcoin. Within a timeframe of 104 days (November 23 2013 March 7 2014), about 160,000 Twitter posts containing ”bitcoin” and a positive, negative or uncertainty related term were collected and further analyzed. For instance, the terms ”happy”, ”love”, ”fun”, ”good”, ”bad”, ”sad” and ”unhappy” represent positive and negative emotional signals, while ”hope”, ”fear” and ”worry” are considered as indicators of uncertainty. The static (daily) Pearson correlation results show a significant positive correlation between emotional tweets and the close price, trading volume and intraday price spread of Bitcoin. However, a dynamic Granger causality analysis does not confirm a causal effect of emotional Tweets on Bitcoin market values. To the contrary, the analyzed data shows that a higher Bitcoin trading volume Granger causes more signals of uncertainty within a 24 to 72hour timeframe. This result leads to the interpretation that emotional sentiments rather mirror the market than that they make it predictable. Finally, the conclusion of this paper is that the microblogging platform Twitter is Bitcoins virtual trading floor, emotionally reflecting its trading dynamics.2", "title": "" }, { "docid": "ce17d4ecfe780d5dcc4e2910063c87f5", "text": "Article history: Transgender people face ma Received 14 December 2007 Received in revised form 31 December 2008 Accepted 20 January 2009 Available online 24 January 2009", "title": "" }, { "docid": "b00c6771f355577437dee2cdd63604b8", "text": "A person gets frustrated when he faces slow speed as many devices are connected to the same network. As the number of people accessing wireless internet increases, it’s going to result in clogged airwaves. Li-Fi is transmission of data through illumination by taking the fiber out of fiber optics by sending data through a LED light bulb that varies in intensity faster than the human eye can follow.", "title": "" }, { "docid": "9b06026e998df745d820fbd835554b13", "text": "There have been significant advances in the field of Internet of Things (IoT) recently. At the same time there exists an ever-growing demand for ubiquitous healthcare systems to improve human health and well-being. 
In most of IoT-based patient monitoring systems, especially at smart homes or hospitals, there exists a bridging point (i.e., gateway) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks. These gateways have beneficial knowledge and constructive control over both the sensor network and the data to be transmitted through the Internet. In this paper, we exploit the strategic position of such gateways to offer several higher-level services such as local storage, real-time local data processing, embedded data mining, etc., proposing thus a Smart e-Health Gateway. By taking responsibility for handling some burdens of the sensor network and a remote healthcare center, a Smart e-Health Gateway can cope with many challenges in ubiquitous healthcare systems such as energy efficiency, scalability, and reliability issues. A successful implementation of Smart e-Health Gateways enables massive deployment of ubiquitous health monitoring systems especially in clinical environments. We also present a case study of a Smart e-Health Gateway called UTGATE where some of the discussed higher-level features have been implemented. Our proof-of-concept design demonstrates an IoT-based health monitoring system with enhanced overall system energy efficiency, performance, interoperability, security, and reliability.", "title": "" }, { "docid": "1ada0fc6b22bba07d9baf4ccab437671", "text": "Tree-based path planners have been shown to be well suited to solve various high dimensional motion planning problems. Here we present a variant of the Rapidly-Exploring Random Tree (RRT) path planning algorithm that is able to explore narrow passages or difficult areas more effectively. We show that both workspace obstacle information and C-space information can be used when deciding which direction to grow. The method includes many ways to grow the tree, some taking into account the obstacles in the environment. This planner works best in difficult areas when planning for free flying rigid or articulated robots. Indeed, whereas the standard RRT can face difficulties planning in a narrow passage, the tree based planner presented here works best in these areas", "title": "" }, { "docid": "2cc97c407494310f500525b938e8aaa4", "text": "OBJECTIVE\nIn this paper, we aim to investigate the effect of computer-aided triage system, which is implemented for the health checkup of lung lesions involving tens of thousands of chest X-rays (CXRs) that are required for diagnosis. Therefore, high accuracy of diagnosis by an automated system can reduce the radiologist's workload on scrutinizing the medical images.\n\n\nMETHOD\nWe present a deep learning model in order to efficiently detect abnormal levels or identify normal levels during mass chest screening so as to obtain the probability confidence of the CXRs. Moreover, a convolutional sparse denoising autoencoder is designed to compute the reconstruction error. We employ four publicly available radiology datasets pertaining to CXRs, analyze their reports, and utilize their images for mining the correct disease level of the CXRs that are to be submitted to a computer aided triaging system. Based on our approach, we vote for the final decision from multi-classifiers to determine which three levels of the images (i.e. 
normal, abnormal, and uncertain cases) that the CXRs fall into.\n\n\nRESULTS\nWe only deal with the grade diagnosis for physical examination and propose multiple new metric indices. Combining predictors for classification by using the area under a receiver operating characteristic curve, we observe that the final decision is related to the threshold from reconstruction error and the probability value. Our method achieves promising results in terms of precision of 98.7 and 94.3% based on the normal and abnormal cases, respectively.\n\n\nCONCLUSION\nThe results achieved by the proposed framework show superiority in classifying the disease level with high accuracy. This can potentially save the radiologists time and effort, so as to allow them to focus on higher-level risk CXRs.", "title": "" }, { "docid": "727add0c0e44d0044d7f58b3633160d2", "text": "Case II: Deterministic transitions, continuous state. Case III: “Mildly” stochastic transitions, finite state: P(s, a, s') ≥ 1 − δ. Case IV: Bounded-noise stochastic transitions, continuous state: s_{t+1} = T(s_t, a_t) + w_t, ||w_t|| ≤ ∆. Planning and Learning in Environments with Delayed Feedback. Thomas J. Walsh, Ali Nouri, Lihong Li, Michael L. Littman; Rutgers Laboratory for Real Life Reinforcement Learning, Computer Science Department, Rutgers University, Piscataway NJ", "title": "" }, { "docid": "78bc13c6b86ea9a8fda75b66f665c39f", "text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves the state-of-the-art results on three benchmarks: Stanford Natural Language Inference (SNLI) dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora Question Pairs dataset.", "title": "" }, { "docid": "a8c4b84175074e654cf1facfc65bde50", "text": "We propose monotonic classification with selection of monotonic features as a defense against evasion attacks on classifiers for malware detection. The monotonicity property of our classifier ensures that an adversary will not be able to evade the classifier by adding more features. We train and test our classifier on over one million executables collected from VirusTotal. Our secure classifier has 62% temporal detection rate at a 1% false positive rate. In comparison with a regular classifier with unrestricted features, the secure malware classifier results in a drop of approximately 13% in detection rate. Since this degradation in performance is a result of using a classifier that cannot be evaded, we interpret this performance hit as the cost of security in classifying malware.", "title": "" }, { "docid": "3cc84fda5e04ccd36f5b632d9da3a943", "text": "We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data.
Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.", "title": "" }, { "docid": "301715c650ee5f918ddeaf0c18889183", "text": "Keyframe-based Learning from Demonstration has been shown to be an effective method for allowing end-users to teach robots skills. We propose a method for using multiple keyframe demonstrations to learn skills as sequences of positional constraints (c-keyframes) which can be planned between for skill execution. We also introduce an interactive GUI which can be used for displaying the learned c-keyframes to the teacher, for altering aspects of the skill after it has been taught, or for specifying a skill directly without providing kinesthetic demonstrations. We compare 3 methods of teaching c-keyframe skills: kinesthetic teaching, GUI teaching, and kinesthetic teaching followed by GUI editing of the learned skill (K-GUI teaching). Based on user evaluation, the K-GUI method of teaching is found to be the most preferred, and the GUI to be the least preferred. Kinesthetic teaching is also shown to result in more robust constraints than GUI teaching, and several use cases of K-GUI teaching are discussed to show how the GUI can be used to improve the results of kinesthetic teaching.", "title": "" }, { "docid": "53afafd2fc1087989a975675ff4098d8", "text": "The sixth generation of IEEE 802.11 wireless local area networks is under developing in the Task Group 802.11ax. One main physical layer (PHY) novel feature in the IEEE 802.11ax amendment is the specification of orthogonal frequency division multiplexing (OFDM) uplink multi-user multiple-input multiple-output (UL MU-MIMO) techniques. A challenge issue to implement UL MU-MIMO in OFDM PHY is the mitigation of the relative carrier frequency offset (CFO), which can cause intercarrier interference and rotation of the constellation of received symbols, and, consequently, degrading the system performance dramatically if it is not properly mitigated. In this paper, we show that a frequency domain CFO estimation and correction scheme implemented at both transmitter (Tx) and receiver (Rx) coupled with pre-compensation approach at the Tx can decrease the negative effects of the relative CFO.", "title": "" }, { "docid": "c5ccbeec002977a2722f7b1e017112e1", "text": "Distributed processing of real-world graphs is challenging due to their size and the inherent irregular structure of graph computations. We present HipG, a distributed framework that facilitates programming parallel graph algorithms by composing the parallel application automatically from the user-defined pieces of sequential work on graph nodes. To make the user code high-level, the framework provides a unified interface to executing methods on local and non-local graph nodes and an abstraction of exclusive execution. The graph computations are managed by logical objects called synchronizers, which we used, for example, to implement distributed divide-and-conquer decomposition into strongly connected components. The code written in HipG is independent of a particular graph representation, to the point that the graph can be created on-the-fly, i.e. by the algorithm that computes on this graph, which we used to implement a distributed model checker. 
HipG programs are in general short and elegant; they achieve good portability, memory utilization, and performance.", "title": "" }, { "docid": "8d176debd26505d424dcbf8f5cfdb4d1", "text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.", "title": "" }, { "docid": "4af06d0e333f681a2d9afdb3298b549b", "text": "In this paper we present CRF-net, a CNN-based solution for estimating the camera response function from a single photograph. We follow the recent trend of using synthetic training data, and generate a large set of training pairs based on a small set of radio-metrically linear images and the DoRF database of camera response functions. The resulting CRF-net estimates the parameters of the EMoR camera response model directly from a single photograph. Experimentally, we show that CRF-net is able to accurately recover the camera response function from a single photograph under a wide range of conditions.", "title": "" }, { "docid": "66b7ed8c1d20bceafb0a1a4194cd91e8", "text": "In this paper a novel watermarking scheme for image authentication and recovery is presented. The algorithm can detect modified regions in images and is able to recover a good approximation of the original content of the tampered regions. For this purpose, two different watermarks have been used: a semi-fragile watermark for image authentication and a robust watermark for image recovery, both embedded in the Discrete Wavelet Transform domain. The proposed method achieves good image quality with mean Peak Signal-to-Noise Ratio values of the watermarked images of 42 dB and identifies image tampering of up to 20% of the original image.", "title": "" }, { "docid": "697ae7ff6a0ace541ea0832347ba044f", "text": "The repair of wounds is one of the most complex biological processes that occur during human life. After an injury, multiple biological pathways immediately become activated and are synchronized to respond. In human adults, the wound repair process commonly leads to a non-functioning mass of fibrotic tissue known as a scar. By contrast, early in gestation, injured fetal tissues can be completely recreated, without fibrosis, in a process resembling regeneration. Some organisms, however, retain the ability to regenerate tissue throughout adult life. Knowledge gained from studying such organisms might help to unlock latent regenerative pathways in humans, which would change medical practice as much as the introduction of antibiotics did in the twentieth century.", "title": "" } ]
scidocsrr
981688ee2081d695d8ec1090608de8b8
DENFIS: dynamic evolving neural-fuzzy inference system and its application for time-series prediction
[ { "docid": "b6de0b3fb29edff86afc4fadac687e9d", "text": "An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the \"neural gas\" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation.", "title": "" }, { "docid": "93dba45f5309d77b63c8957609f146b7", "text": "Research papers available on the World Wide Web (WWW or Web) areoften poorly organized, often exist in forms opaque to searchengines (e.g. Postscript), and increase in quantity daily.Significant amounts of time and effort are typically needed inorder to find interesting and relevant publications on the Web. Wehave developed a Web based information agent that assists the userin the process of performing a scientific literature search. Givena set of keywords, the agent uses Web search engines and heuristicsto locate and download papers. The papers are parsed in order toextract information features such as the abstract and individuallyidentified citations. The agents Web interface can be used to findrelevant papers in the database using keyword searches, or bynavigating the links between papers formed by the citations. Linksto both citing and cited publications can be followed. In additionto simple browsing and keyword searches, the agent can find paperswhich are similar to a given paper using word information and byanalyzing common citations made by the papers.", "title": "" } ]
[ { "docid": "b1394b4534d1a2d62767f885c180903b", "text": "OBJECTIVE\nTo determine the value of measuring fetal femur and humerus length at 11-14 weeks of gestation in screening for chromosomal defects.\n\n\nMETHODS\nFemur and humerus lengths were measured using transabdominal ultrasound in 1018 fetuses immediately before chorionic villus sampling for karyotyping at 11-14 weeks of gestation. In the group of chromosomally normal fetuses, regression analysis was used to determine the association between long bone length and crown-rump length (CRL). Femur and humerus lengths in fetuses with trisomy 21 were compared with those of normal fetuses.\n\n\nRESULTS\nThe median gestation was 12 (range, 11-14) weeks. The karyotype was normal in 920 fetuses and abnormal in 98, including 65 cases of trisomy 21. In the chromosomally normal group the fetal femur and humerus lengths increased significantly with CRL (femur length = - 6.330 + 0.215 x CRL in mm, r = 0.874, P < 0.0001; humerus length = - 6.240 + 0.220 x CRL in mm, r = 0.871, P < 0.0001). In the Bland-Altman plot the mean difference between paired measurements of femur length was 0.21 mm (95% limits of agreement - 0.52 to 0.48 mm) and of humerus length was 0.23 mm (95% limits of agreement - 0.57 to 0.55 mm). In the trisomy 21 fetuses the median femur and humerus lengths were significantly below the appropriate normal mean for CRL by 0.4 and 0.3 mm, respectively (P = 0.002), but they were below the respective 5th centile of the normal range in only six (9.2%) and three (4.6%) of the cases, respectively.\n\n\nCONCLUSION\nAt 11-14 weeks of gestation the femur and humerus lengths in trisomy 21 fetuses are significantly reduced but the degree of deviation from normal is too small for these measurements to be useful in screening for trisomy 21.", "title": "" }, { "docid": "840c42456a69d20deead9f8574f6ee14", "text": "Millimeter wave (mmWave) is a promising approach for the fifth generation cellular networks. It has a large available bandwidth and high gain antennas, which can offer interference isolation and overcome high frequency-dependent path loss. In this paper, we study the non-uniform heterogeneous mmWave network. Non-uniform heterogeneous networks are more realistic in practical scenarios than traditional independent homogeneous Poisson point process (PPP) models. We derive the signal-to-noise-plus-interference ratio (SINR) and rate coverage probabilities for a two-tier non-uniform millimeter-wave heterogeneous cellular network, where the macrocell base stations (MBSs) are deployed as a homogeneous PPP and the picocell base stations (PBSs) are modeled as a Poisson hole process (PHP), dependent on the MBSs. Using tools from stochastic geometry, we derive the analytical results for the SINR and rate coverage probabilities. The simulation results validate the analytical expressions. Furthermore, we find that there exists an optimum density of the PBS that achieves the best coverage probability and the change rule with different radii of the exclusion region. Finally, we show that as expected, mmWave outperforms microWave cellular network in terms of rate coverage probability for this system.", "title": "" }, { "docid": "357e03d12dc50cf5ce27cadd50ac99fa", "text": "This paper presents a linear solution for reconstructing the 3D trajectory of a moving point from its correspondence in a collection of 2D perspective images, given the 3D spatial pose and time of capture of the cameras that produced each image. 
Triangulation-based solutions do not apply, as multiple views of the point may not exist at each instant in time. A geometric analysis of the problem is presented and a criterion, called reconstructibility, is defined to precisely characterize the cases when reconstruction is possible, and how accurate it can be. We apply the linear reconstruction algorithm to reconstruct the time evolving 3D structure of several real-world scenes, given a collection of non-coincidental 2D images.", "title": "" }, { "docid": "1ace2a8a8c6b4274ac0891e711d13190", "text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.", "title": "" }, { "docid": "3840043afe85979eb901ad05b5b8952f", "text": "Cross media retrieval systems have received increasing interest in recent years. Due to the semantic gap between low-level features and high-level semantic concepts of multimedia data, many researchers have explored joint-model techniques in cross media retrieval systems. Previous joint-model approaches usually focus on two traditional ways to design cross media retrieval systems: (a) fusing features from different media data; (b) learning different models for different media data and fusing their outputs. However, the process of fusing features or outputs will lose both low- and high-level abstraction information of media data. Hence, both ways do not really reveal the semantic correlations among the heterogeneous multimedia data. In this paper, we introduce a novel method for the cross media retrieval task, named Parallel Field Alignment Retrieval (PFAR), which integrates a manifold alignment framework from the perspective of vector fields. Instead of fusing original features or outputs, we consider the cross media retrieval as a manifold alignment problem using parallel fields. The proposed manifold alignment algorithm can effectively preserve the metric of data manifolds, model heterogeneous media data and project their relationship into intermediate latent semantic spaces during the process of manifold alignment. After the alignment, the semantic correlations are also determined. In this way, the cross media retrieval task can be resolved by the determined semantic correlations. Comprehensive experimental results have demonstrated the effectiveness of our approach.", "title": "" }, { "docid": "cb00e564a81ace6b75e776f1fe41fb8f", "text": "INDIVIDUAL PROCESSES IN INTERGROUP BEHAVIOR ................................ 3 From Individual to Group Impressions ...................................................................... 3 GROUP MEMBERSHIP AND INTERGROUP BEHAVIOR .................................. 7 The Scope and Range of Ethnocentrism .................................................................... 
8 The Development of Ethnocentrism .......................................................................... 9 Intergroup Conflict and Competition ........................................................................ 12 Interpersonal and intergroup behavior ........................................................................ 13 Intergroup conflict and group cohesion ........................................................................ 15 Power and status in intergroup behavior ...................................................................... 16 Social Categorization and Intergroup Behavior ........................................................ 20 Social categorization: cognitions, values, and groups ...................................................... 20 Social categorization and intergroup discrimination ...................................................... 23 Social identity and social comparison .......................................................................... 24 THE REDUCTION OF INTERGROUP DISCRIMINATION ................................ 27 Intergroup Cooperation and Superordinate Goals ........................................................ 28 Intergroup Contact ................................................................................................ 28 Multigroup Membership and \"Individualization\" of the Outgroup .......................... 29 SUMMARY .................................................................................................................... 30", "title": "" }, { "docid": "6d41ec322f71c32195119807f35fde53", "text": "Input current distortion in the vicinity of input voltage zero crossings of boost single-phase power factor corrected (PFC) ac-dc converters is studied in this paper. Previously known causes for the zero-crossing distortion are reviewed and are shown to be inadequate in explaining the observed input current distortion, especially under high ac line frequencies. A simple linear model is then presented which reveals two previously unknown causes for zero-crossing distortion, namely, the leading phase of the input current and the lack of critical damping in the current loop. Theoretical and practical limitations in reducing the phase lead and increasing the damping factor are discussed. A simple phase compensation technique to reduce the zero-crossing distortion is also presented. Numerical simulation and experimental results are presented to validate the theory.", "title": "" }, { "docid": "b97e58184a94d6827bf294a3b1f91687", "text": "A good and robust sensor data fusion in diverse weather conditions is a quite challenging task. There are several fusion architectures in the literature, e.g. the sensor data can be fused right at the beginning (Early Fusion), or they can be first processed separately and then concatenated later (Late Fusion). In this work, different fusion architectures are compared and evaluated by means of object detection tasks, in which the goal is to recognize and localize predefined objects in a stream of data. Usually, state-of-the-art object detectors based on neural networks are highly optimized for good weather conditions, since the well-known benchmarks only consist of sensor data recorded in optimal weather conditions. Therefore, the performance of these approaches decreases enormously or even fails in adverse weather conditions. In this work, different sensor fusion architectures are compared for good and adverse weather conditions for finding the optimal fusion architecture for diverse weather situations.
A new training strategy is also introduced such that the performance of the object detector is greatly enhanced in adverse weather scenarios or if a sensor fails. Furthermore, the paper responds to the question if the detection accuracy can be increased further by providing the neural network with a-priori knowledge such as the spatial calibration of the sensors.", "title": "" }, { "docid": "76d10dc3b823d7cae01269b2b7f15745", "text": "The new challenge for designers and HCI researchers is to develop software tools for effective e-learning. Learner-Centered Design (LCD) provides guidelines to make new learning domains accessible in an educationally productive manner. A number of new issues have been raised because of the new \"vehicle\" for education. Effective e-learning systems should include sophisticated and advanced functions, yet their interface should hide their complexity, providing an easy and flexible interaction suited to catch students' interest. In particular, personalization and integration of learning paths and communication media should be provided.It is first necessary to dwell upon the difference between attributes for platforms (containers) and for educational modules provided by a platform (contents). In both cases, it is hard to go deeply into pedagogical issues of the provided knowledge content. This work is a first step towards identifying specific usability attributes for e-learning systems, capturing the peculiar features of this kind of applications. We report about a preliminary users study involving a group of e-students, observed during their interaction with an e-learning system in a real situation. We then propose to adapt to the e-learning domain the so called SUE (Systematic Usability Evaluation) inspection, providing evaluation patterns able to drive inspectors' activities in the evaluation of an e-learning tool.", "title": "" }, { "docid": "33d36d081564bb08e95323b17945e86b", "text": "Sparse matrix-vector multiplication (SpMV) is an important kernel in scientific and engineering computing. Straightforward parallel implementations of SpMV often perform poorly, and with the increasing variety of architectural features in multicore processors, it is getting more difficult to determine the sparse matrix data structure and corresponding SpMV implementation that optimize performance. In this paper we present pOSKI, an autotuning system for SpMV that automatically searches over a large set of possible data structures and implementations to optimize SpMV performance on multicore platforms. pOSKI explores a design space that depends on both the nonzero pattern of the sparse matrix, typically not known until run-time, and the architecture, which is explored off-line as much as possible, in order to reduce tuning time. We demonstrate significant performance improvements compared to previous serial and parallel implementations, and compare performance to upper bounds based on architectural models. General Terms: Design, Experimentation, Performance Additional", "title": "" }, { "docid": "72aef0bd0793116983c11883ebfb5525", "text": "Building facade classification by architectural styles allows categorization of large databases of building images into semantic categories belonging to certain historic periods, regions and cultural influences. Image databases sorted by architectural styles permit effective and fast image search for the purposes of content-based image retrieval, 3D reconstruction, 3D city-modeling, virtual tourism and indexing of cultural heritage buildings. 
Building facade classification is viewed as a task of classifying separate architectural structural elements, like windows, domes, towers, columns, etc, as every architectural style applies certain rules and characteristic forms for the design and construction of the structural parts mentioned. In the context of building facade architectural style classification the current paper objective is to classify the architectural style of facade windows. Typical windows belonging to Romanesque, Gothic and Renaissance/Baroque European main architectural periods are classified. The approach is based on clustering and learning of local features, applying intelligence that architects use to classify windows of the mentioned architectural styles in the training stage.", "title": "" }, { "docid": "07b889a2b1a18bc1f91021f3b889474a", "text": "In this study, we show a correlation between electrical properties (relative permittivity-εr and conductivity-σ) of blood plasma and plasma glucose concentration. In order to formulate that correlation, we performed electrical property measurements on blood samples collected from 10 adults between the ages of 18 and 40 at University of Alabama Birmingham (UAB) Children's hospital. The measurements are conducted between 500 MHz and 20 GHz band. Using the data obtained from measurements, we developed a single-pole Cole-Cole model for εr and σ as a function of plasma blood glucose concentration. To provide an application, we designed a microstrip patch antenna that can be used to predict the glucose concentration within a given plasma sample. Simulation results regarding antenna design and its performance are also presented.", "title": "" }, { "docid": "e45fe4344cf0d6c3077389ea73e427c6", "text": "Vehicle tracking data is an essential “raw” material for a broad range of applications such as traffic management and control, routing, and navigation. An important issue with this data is its accuracy. The method of sampling vehicular movement using GPS is affected by two error sources and consequently produces inaccurate trajectory data. To become useful, the data has to be related to the underlying road network by means of map matching algorithms. We present three such algorithms that consider especially the trajectory nature of the data rather than simply the current position as in the typical map-matching case. An incremental algorithm is proposed that matches consecutive portions of the trajectory to the road network, effectively trading accuracy for speed of computation. In contrast, the two global algorithms compare the entire trajectory to candidate paths in the road network. The algorithms are evaluated in terms of (i) their running time and (ii) the quality of their matching result. Two novel quality measures utilizing the Fréchet distance are introduced and subsequently used in an experimental evaluation to assess the quality of matching real tracking data to a road network.", "title": "" }, { "docid": "441f80a25e7a18760425be5af1ab981d", "text": "This paper proposes efficient algorithms for group sparse optimization with mixed `2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. 
The `2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional `1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the `2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.", "title": "" }, { "docid": "f77495366909b9713463bebf2b4ff2fc", "text": "This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 % for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.", "title": "" }, { "docid": "c8e4450de63dc54b5802566d589d4cdc", "text": "BACKGROUND\nMore than 1.5 million Americans have Parkinson disease (PD), and this figure is expected to rise as the population ages. However, the dental literature offers little information about the illness.\n\n\nTYPES OF STUDIES REVIEWED\nThe authors conducted a MEDLINE search using the key terms \"Parkinson's disease,\" \"medical management\" and \"dentistry.\" They selected contemporaneous articles published in peer-reviewed journals and gave preference to articles reporting randomized controlled trials.\n\n\nRESULTS\nPD is a progressive neurodegenerative disorder caused by loss of dopaminergic and nondopaminergic neurons in the brain. These deficits result in tremor, slowness of movement, rigidity, postural instability and autonomic and behavioral dysfunction. Treatment consists of administering medications that replace dopamine, stimulate dopamine receptors and modulate other neurotransmitter systems.\n\n\nCLINICAL IMPLICATIONS\nOral health may decline because of tremors, muscle rigidity and cognitive deficits. The dentist should consult with the patient's physician to establish the patient's competence to provide informed consent and to determine the presence of comorbid illnesses. Scheduling short morning appointments that begin 90 minutes after administration of PD medication enhances the patient's ability to cooperate with care. Inclination of the dental chair at 45 degrees, placement of a bite prop, use of a rubber dam and high-volume oral evacuation enhance airway protection. 
To avoid adverse drug interactions with levodopa and entacapone, the dentist should limit administration of local anesthetic agents to three cartridges of 2 percent lidocaine with 1:100,000 epinephrine per half hour, and patients receiving selegiline should not be given agents containing epinephrine or levonordefrin. The dentist should instruct the patient and the caregiver in good oral hygiene techniques.", "title": "" }, { "docid": "acf6a62e487b79fc0500aa5e6bbb0b0b", "text": "This paper proposes a low-cost, easily realizable strategy to equip a reinforcement learning (RL) agent the capability of behaving ethically. Our model allows the designers of RL agents to solely focus on the task to achieve, without having to worry about the implementation of multiple trivial ethical patterns to follow. Based on the assumption that the majority of human behavior, regardless which goals they are achieving, is ethical, our design integrates human policy with the RL policy to achieve the target objective with less chance of violating the ethical code that human beings normally obey.", "title": "" }, { "docid": "e573d85271e3f3cc54b774de8a5c6dd9", "text": "This paper explores the use of a learned classifier for post-OCR text correction. Experiments with the Arabic language show that this approach, which integrates a weighted confusion matrix and a shallow language model, improves the vast majority of segmentation and recognition errors, the most frequent types of error on our dataset.", "title": "" }, { "docid": "fb1e23b956c5b60f581f9a32001a9783", "text": "Deep convolutional neural networks (CNNs) have recently shown very high accuracy in a wide range of cognitive tasks, and due to this, they have received significant interest from the researchers. Given the high computational demands of CNNs, custom hardware accelerators are vital for boosting their performance. The high energy efficiency, computing capabilities and reconfigurability of FPGA make it a promising platform for hardware acceleration of CNNs. In this paper, we present a survey of techniques for implementing and optimizing CNN algorithms on FPGA. We organize the works in several categories to bring out their similarities and differences. This paper is expected to be useful for researchers in the area of artificial intelligence, hardware architecture and system design.", "title": "" } ]
scidocsrr
1d279b2a5f4a5a28c40c315d9c12ab38
Automated Void Detection in Solder Balls in the Presence of Vias and Other Artifacts
[ { "docid": "3ef23f2c076837f804819e11f39734f9", "text": "Non-wet solder joints in processor sockets are causing mother board failures. These board failures can escape to customers resulting in returns and dissatisfaction. The current process to identify these non-wets is to use a 2D or advanced X-ray tool with multidimension capability to image solder joints in processor sockets. The images are then examined by an operator who determines if each individual joint is good or bad. There can be an average of 150 images for an operator to examine for each socket. Each image contains more than 30 joints. These factors make the inspection process time consuming and the output variable depending on the skill and alertness of the operator. This paper presents an automatic defect identification and classification system for the detection of non-wet solder joints. The main components of the proposed system consist of region of interest (ROI) segmentation, feature extraction, reference-free classification, and automatic mapping. The ROI segmentation process is a noise-resilient segmentation method for the joint area. The centroids of the segmented joints (ROIs) are used as feature parameters to detect the suspect joints. The proposed reference-free classification can detect defective joints in the considered images with high accuracy without the need for training data or reference images. An automatic mapping procedure which maps the positions of all joints to a known Master Ball Grid Array file is used to get the precise label and location of the suspect joint for display to the operator and collection of non-wet statistics. The accuracy of the proposed system was determined to be 95.8% based on the examination of 56 sockets (76 496 joints). The false alarm rate is 1.1%. In comparison, the detection rate of a currently available advanced X-ray tool with multidimension capability is in the range of 43% to 75%. The proposed method reduces the operator effort to examine individual images by 89.6% (from looking at 154 images to 16 images) by presenting only images with suspect joints for inspection. When non-wet joints are missed, the presented system has been shown to identify the neighboring joints. This fact provides the operator with the capability to make 100% detection of all non-wets when utilizing a user interface that highlights the suspect joint area. The system works with a 2D X-ray imaging device, which saves cost over more expensive advanced X-ray tools with multidimension capability. The proposed scheme is relatively inexpensive to implement, easy to set up and can work with a variety of 2D X-ray tools.", "title": "" } ]
[ { "docid": "06df4096b54d72eb415f9ad9c18cdf68", "text": "This paper concerns automated cell counting and detection in microscopy images. The approach we take is to use convolutional neural networks (CNNs) to regress a cell spatial density map across the image. This is applicable to situations where traditional single-cell segmentation-based methods do not work well due to cell clumping or overlaps. We make the following contributions: (i) we develop and compare architectures for two fully convolutional regression networks (FCRNs) for this task; (ii) since the networks are fully convolutional, they can predict a density map for an input image of arbitrary size, and we exploit this to improve efficiency by end-to-end training on image patches; (iii) we show that FCRNs trained entirely on synthetic data are able to give excellent predictions on microscopy images from real biological experiments without fine-tuning, and that the performance can be further improved by fine-tuning on these real images. Finally, (iv) by inverting feature representations, we show to what extent the information from an input image has been encodedby feature responses in different layers.We set a new state-of-the-art performance for cell counting on standard synthetic image benchmarks and show that the FCRNs trained entirely with synthetic data can generalise well to real microscopy images both for cell counting and detections for the case of overlapping cells. ARTICLE HISTORY Received 15 Nov 2015 Accepted 28 Jan 2016", "title": "" }, { "docid": "f58d69de4b5bcc4100a3bfe3426fa81f", "text": "This study evaluated the factor structure of the Rosenberg Self-Esteem Scale (RSES) with a diverse sample of 1,248 European American, Latino, Armenian, and Iranian adolescents. Adolescents completed the 10-item RSES during school as part of a larger study on parental influences and academic outcomes. Findings suggested that method effects in the RSES are more strongly associated with negatively worded items across three diverse groups but also more pronounced among ethnic minority adolescents. Findings also suggested that accounting for method effects is necessary to avoid biased conclusions regarding cultural differences in selfesteem and how predictors are related to the RSES. Moreover, the two RSES factors (positive self-esteem and negative self-esteem) were differentially predicted by parenting behaviors and academic motivation. Substantive and methodological implications of these findings for crosscultural research on adolescent self-esteem are discussed.", "title": "" }, { "docid": "c02cc2c217da6614bccb90ac8b7c7506", "text": "This paper presents a method by which a reinforcement learning agent can automatically discover certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to other, related tasks through the reuse of its ability to attain subgoals. The agent discovers subgoals based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. We illustrate this approach using several gridworld tasks.", "title": "" }, { "docid": "ff3142da3a2047e0b7ba00082b87180a", "text": "Static code analysis automates time-consuming and error-prone manual code review processes. 
Unlike dynamic testing, which requires the software to be executed, static code analysis is performed directly on the source code, enabling quality checks to begin before the code is ready for integration and test. During analysis, Polyspace tools calculate complexity metrics and check the code for compliance with development standards, including MISRA C®, MISRA C++, and JSF++.", "title": "" }, { "docid": "3e2df9d6ed3cad12fcfda19d62a0b42e", "text": "We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.", "title": "" }, { "docid": "7f583b52f995504e7f8ac1fe933c30e6", "text": "Contemporary solutions for cloud-supported, edge-data analytics mostly apply analytics techniques in a rigid bottom-up approach, regardless of the data's origin. Typically, data are generated at the edge of the infrastructure and transmitted to the cloud, where traditional data analytics techniques are applied. Currently, developers are forced to resort to ad hoc solutions specifically tailored for the available infrastructure (for example, edge devices) when designing, developing, and operating the data analytics applications. Here, a novel approach implements cloud-supported, real-time data analytics in edge-computing applications. The authors introduce their serverless edge-data analytics platform and application model and discuss their main design requirements and challenges, based on real-life healthcare use case scenarios.", "title": "" }, { "docid": "6b72358b5cbbe349ee09f88773762ab1", "text": "Estimating virtual CT(vCT) image from MRI data is in crucial need for medical application due to the relatively high dose of radiation exposure in CT scan and redundant workflow of both MR and CT. Among the existing work, the fully convolutional neural network(FCN) shows its superiority in generating vCT of high fidelity which merits further investigation. However, the most widely used evaluation metrics mean absolute error (MAE) and peak signal to noise ratio (PSNR) may not be adequate enough to reflect the structure quality of the vCT, while most of the current FCN based approaches focus more on the architectures but have little attention on the loss functions which are closely related to the final evaluation. The objective of this thesis is to apply Structure Similarity(SSIM) as loss function for predicting vCT from MRI based on FCN and see whether the prediction has improvement in terms of structure compared with conventionally used l or l loss. Inspired by the SSIM, the contextual l has been proposed to investigate the impact of introducing context information to the loss function. CT data was non-rigidly registered to MRI for training and evaluation. 
Patch-based 3D FCNs were optimized for different loss functions to predict vCT from MRI data. Specifically, for optimizing SSIM the training data should be normalized to [0, 1] and the architecture should be slightly changed by adding a ReLU layer before the output to guarantee the convexity of the SSIM during training. Evaluation was carried out with 7-fold cross-validation over the 14 patients; MAE, PSNR and SSIM were evaluated both for the whole volume and tissue-wise. All optimizations converged successfully; the contextual l loss (cl) outperformed the other losses in terms of PSNR and MAE but gave the worst SSIM, while DSSIM worked better at preserving structures and produced smoother output. Yuan Zhou, Delft, August 2017", "title": "" }, { "docid": "d6c477f7d4e3453dedbace0b4281f83a", "text": "Electronic intermediaries have become pervasive in sales transactions for many durables, such as cars, power tools, and apartments. Yet only recently have they successfully tackled the challenge of enabling parties to share such goods. A key impediment to sharing is a lender’s concern about damage due to unobservable actions by a renter, usually resulting in moral hazard. This paper shows how an intermediary can eliminate the moral-hazard problem by providing optimal insurance to the lender and first-best incentives to the renter to exert care, as long as market participants are risk-neutral. The solution is illustrated for the collaborative housing market but applies in principle to any sharing market with vertically differentiated goods. A population of renters, heterogeneous both in their preferences for housing quality and with respect to the amounts of care they exert in a rental situation, faces a choice between collaborative housing and staying at a local hotel. The private hosts choose their prices strategically and the intermediary sets commission rates on both sides of the market as well as insurance terms for the rental agreement. The latter are set so as to eliminate moral hazard. The intermediary is able to extract the gains the hosts would earn compared to transacting directly. Finally, even if hotels set their prices at the outset so as to maximize collusive profits, we find that collaborative housing persists at substantial market shares, regardless of the difference between the efficiencies of hosts and hotels to reduce renters’ cost of effort. The aggregate of hosts, intermediary, and hotels benefits from (a variety in) these effort costs, which indicates that the intermediated sharing of goods is an economically viable, robust phenomenon.
JEL-Classification: C72, D43, D47, L13, L14, O18, R31.", "title": "" }, { "docid": "753a4af9741cd3fec4e0e5effaf5fc67", "text": "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.", "title": "" }, { "docid": "900ba3bb682c3543c2855d7487701864", "text": "Lines of waiting customers are always very long in most of banks. The essence of this phenomenon is the low efficiency of queuing system. In this paper, the queuing number, the service windows number and the optimal service rate are investigated by means of the queuing theory. In technology, the optimal problem of the bank queuing is solved. The time of customer queuing is reduced. The customer satisfaction is increased. It was proved that this optimal model of the queuing is feasible. By the example, the results are effective and practical.", "title": "" }, { "docid": "6d882c210047b3851cb0514083cf448e", "text": "Child sexual abuse is a serious global problem and has gained public attention in recent years. Due to the popularity of digital cameras, many perpetrators take images of their sexual activities with child victims. Traditionally, it was difficult to use cutaneous vascular patterns for forensic identification, because they were nearly invisible in color images. Recently, this limitation was overcome using a computational method based on an optical model to uncover vein patterns from color images for forensic verification. This optical-based vein uncovering (OBVU) method is sensitive to the power of the illuminant and does not utilize skin color in images to obtain training parameters to optimize the vein uncovering performance. Prior publications have not included an automatic vein matching algorithm for forensic identification. As a result, the OBVU method only supported manual verification. In this paper, we propose two new schemes to overcome limitations in the OBVU method. Specifically, a color optimization scheme is used to derive the range of biophysical parameters to obtain training parameters and an automatic intensity adjustment scheme is used to enhance the robustness of the vein uncovering algorithm. We also developed an automatic matching algorithm for vein identification. This algorithm can handle rigid and non-rigid deformations and has an explicit pruning function to remove outliers in vein patterns. 
The proposed algorithms were examined on a database with 300 pairs of color and near infrared (NIR) images collected from the forearms of 150 subjects. The experimental results are encouraging and indicate that the proposed vein uncovering algorithm performs better than the OBVU method and that the uncovered patterns can potentially be used for automatic criminal and victim identification.", "title": "" }, { "docid": "056c84ed6a65e219d40e244bf9f456f3", "text": "An astronomical set of sentences can be produced in natural language by combining relatively simple sentence structures with a human-size lexicon. These sentences are within the range of human language performance. Here, we investigate the ability of simple recurrent networks (SRNs) to handle such combinatorial productivity. We successfully trained SRNs to process sentences formed by combining sentence structures with different groups of words. Then, we tested the networks with test sentences in which words from different training sentences were combined. The networks failed to process these sentences, even though the sentence structures remained the same and all words appeared on the same syntactic positions as in the training sentences. In these combination cases, the networks produced work–word associations, similar to the condition in which words are presented in the context of a random word sequence. The results show that SRNs have serious difficulties in handling the combinatorial productivity that underlies human language performance. We discuss implications of this result for a potential neural architecture of human language processing.", "title": "" }, { "docid": "7b731f1e128e0fd5cbda58bb2dbf6ba9", "text": "An electromagnetic (EM) wave with orbital angular momentum (OAM) has a helical wave front, which is different from that of the plane wave. The phase gradient can be found perpendicular to the direction of propagation and proportional to the number of OAM modes. Herein, we study the backscattering property of the EM wave with different OAM modes, i.e., the radar cross section (RCS) of the target is measured and evaluated with different OAM waves. As indicated by the experimental results, different OAM waves have the same RCS fluctuation for the simple target, e.g., a small metal ball as the target. However, for complicated targets, e.g., two transverse-deployed small metal balls, different RCSs can be identified from the same incident angle. This valuable fact helps to obtain RCS diversity, e.g., equal gain or selective combining of different OAM wave scattering. The majority of the targets are complicated targets or expanded targets; the RCS diversity can be utilized to detect a weak target traditionally measured by the plane wave, which is very helpful for anti-stealth radar to detect the traditional stealth target by increasing the RCS with OAM waves.", "title": "" }, { "docid": "aa0d6d4fb36c2a1d18dac0930e89179e", "text": "The interest in biomass is increasing in the light of the growing concern about global warming and the resulting climate change. The emission of the greenhouse gas CO2 can be reduced when 'green' biomass-derived transportation fuels are used. One of the most promising routes to produce green fuels is the combination of biomass gasification (BG) and Fischer-Tropsch (FT) synthesis, wherein biomass is gasified and after cleaning the biosyngas is used for FT synthesis to produce long-chain hydrocarbons that are converted into ‘green diesel’. 
To demonstrate this route, a small FT unit based on Shell technology was operated for in total 650 hours on biosyngas produced by gasification of willow. In the investigated system, tars were removed in a high-temperature tar cracker and other impurities, like NH3 and H2S were removed via wet scrubbing followed by active-carbon and ZnO filters. The experimental work and the supporting system analysis afforded important new insights on the desired gas cleaning and the optimal line-up for biomass gasification processes with a maximised conversion to FT liquids. Two approaches were considered: a front-end approach with reference to the (small) scale of existing CFB gasifiers (1-100 M Wth) and a back-end approach with reference to the desired (large) scale for FT synthesis (500-1000 MWth). In general, the sum of H2 and CO in the raw biosyngas is an important parameter, whereas the H2/CO ratio is less relevant. BTX (i.e . benzene, toluene, and xylenes) are the design guideline for the gas cleaning and with this the tar issue is de-facto solved (as tars are easier to remove than BTX). To achieve high yields of FT products the presence of a tar cracker in the system is required. Oxygen gasification allows a further increase in yield of FT products as a N2-free gas is required for off-gas recycling. The scale of the BG-FT installation determines the line-up of the gas cleaning and the integrated process. It is expected that the future of BG-FT systems will be large plants with pressurised oxygen blown gasifiers and maximised Fischer-Tropsch synthesis.", "title": "" }, { "docid": "7a4849a839b41e8c4c170f4b4b5a241b", "text": "A practical approach for generating motion paths with continuous steering for car-like mobile robots is presented here. This paper addresses two key issues in robot motion planning; path continuity and maximum curvature constraint for nonholonomic robots. The advantage of this new method is that it allows robots to account for their constraints in an efficient manner that facilitates real-time planning. Bspline curves are leveraged for their robustness and practical synthesis to model the vehicle’s path. Comparative navigational-based analyses are presented to selected appropriate curve and nominate its parameters. Path continuity is achieved by utilizing a single path, to represent the trajectory, with no limitations on path, or orientation. The path parameters are formulated with respect to the robot’s constraints. Maximum curvature is satisfied locally, in every segment using a smoothing algorithm, if needed. It is M. Elbanhawi ( ) · M. Simic · R. N. Jazar School of Aerospace, Mechanical, and Manufacturing Engineering (SAMME), RMIT University.Bundoora East Campus, Corner of Plenty Road, McKimmies Road, Bundoora VIC 3083, Melbourne, Australia e-mail: mohamed.elbenhawi@rmit.edu.au M. Simic e-mail: milan.simic@rmit.edu.au R. N. Jazar e-mail: reza.jazar@rmit.edu.au demonstrated that any local modifications of single sections have minimal effect on the entire path. Rigorous simulations are presented, to highlight the benefits of the proposed method, in comparison to existing approaches with regards to continuity, curvature control, path length and resulting acceleration. Experimental results validate that our approach mimics human steering with high accuracy. Accordingly, efficiently formulated continuous paths ultimately contribute towards passenger comfort improvement. 
Using presented approach, autonomous vehicles generate and follow paths that humans are accustomed to, with minimum disturbances.", "title": "" }, { "docid": "a6d8fadb1e0e05929dbca89ee7188088", "text": "The polymorphic nature of the cytochrome P450 (CYP) genes affects individual drug response and adverse reactions to a great extent. This variation includes copy number variants (CNV), missense mutations, insertions and deletions, and mutations affecting gene expression and activity of mainly CYP2A6,CYP2B6, CYP2C9, CYP2C19 andCYP2D6,which have been extensively studied andwell characterized. CYP1A2 andCYP3A4 expression varies significantly, and the cause has been suggested to bemainly of genetic origin but the exact molecular basis remains unknown.We present a review of the major polymorphic CYP alleles and conclude that this variability is of greatest importance for treatment with several antidepressants, antipsychotics, antiulcer drugs, anti-HIV drugs, anticoagulants, antidiabetics and the anticancer drug tamoxifen. We also present tables illustrating the relative importance of specific common CYP alleles for the extent of enzyme functionality. The field of pharmacoepigenetics has just opened, and we present recent examples wherein gene methylation influences the expression of CYP. In addition microRNA (miRNA) regulation of P450 has been described. Furthermore, this review updates the fieldwith respect to regulatory initiatives and experience of predictive pharmacogenetic investigations in the clinics. It is concluded that the pharmacogenetic knowledge regarding CYP polymorphism now developed to a stage where it can be implemented in drug development and in clinical routine for specific drug treatments, thereby improving the drug response and reducing costs for drug treatment. © 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "1eb415cae9b39655849537cdc007f51f", "text": "Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspect of aesthetic stimuli has a huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have a effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. 
Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joined research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view.", "title": "" }, { "docid": "10584d580f626fe5937dd3855a7be987", "text": "This paper presents virtual asymmetric multiprocessor, a new scheme of virtual desktop scheduling on multi-core processors for user-interactive performance. The proposed scheme enables virtual CPUs to be dynamically performance-asymmetric based on their hosted workloads. To enhance user experience on consolidated desktops, our scheme provides interactive workloads with fast virtual CPUs, which have more computing power than those hosting background workloads in the same virtual machine. To this end, we devise a hypervisor extension that transparently classifies background tasks from potentially interactive workloads. In addition, we introduce a guest extension that manipulates the scheduling policy of an operating system in favor of our hypervisor-level scheme so that interactive performance can be further improved. Our evaluation shows that the proposed scheme significantly improves interactive performance of application launch, Web browsing, and video playback applications when CPU-intensive workloads highly disturb the interactive workloads.", "title": "" }, { "docid": "de96b6b43f68972faac8eec246e34c25", "text": "The idea that chemotherapy can be used in combination with immunotherapy may seem somewhat counterproductive, as it can theoretically eliminate the immune cells needed for antitumour immunity. However, much preclinical work has now demonstrated that in addition to direct cytotoxic effects on cancer cells, a proportion of DNA damaging agents may actually promote immunogenic cell death, alter the inflammatory milieu of the tumour microenvironment and/or stimulate neoantigen production, thereby activating an antitumour immune response. Some notable combinations have now moved forward into the clinic, showing promise in phase I–III trials, whereas others have proven toxic, and challenging to deliver. In this review, we discuss the emerging data of how DNA damaging agents can enhance the immunogenic properties of malignant cells, focussing especially on immunogenic cell death, and the expansion of neoantigen repertoires. We discuss how best to strategically combine DNA damaging therapeutics with immunotherapy, and the challenges of successfully delivering these combination regimens to patients. With an overwhelming number of chemotherapy/immunotherapy combination trials in process, clear hypothesis-driven trials are needed to refine the choice of combinations, and determine the timing and sequencing of agents in order to stimulate antitumour immunological memory and improve maintained durable response rates, with minimal toxicity.", "title": "" } ]
scidocsrr
47d1c4e9c0c85c68605c285176da5c12
Towards Fast Computation of Certified Robustness for ReLU Networks
[ { "docid": "4e847c4acec420ef833a08a17964cb28", "text": "Machine learning models are vulnerable to adversarial examples, inputs maliciously perturbed to mislead the model. These inputs transfer between models, thus enabling black-box attacks against deployed models. Adversarial training increases robustness to attacks by injecting adversarial examples into training data. Surprisingly, we find that although adversarially trained models exhibit strong robustness to some white-box attacks (i.e., with knowledge of the model parameters), they remain highly vulnerable to transferred adversarial examples crafted on other models. We show that the reason for this vulnerability is the model’s decision surface exhibiting sharp curvature in the vicinity of the data points, thus hindering attacks based on first-order approximations of the model’s loss, but permitting black-box attacks that use adversarial examples transferred from another model. We harness this observation in two ways: First, we propose a simple yet powerful novel attack that first applies a small random perturbation to an input, before finding the optimal perturbation under a first-order approximation. Our attack outperforms prior “single-step” attacks on models trained with or without adversarial training. Second, we propose Ensemble Adversarial Training, an extension of adversarial training that additionally augments training data with perturbed inputs transferred from a number of fixed pre-trained models. On MNIST and ImageNet, ensemble adversarial training vastly improves robustness to black-box attacks.", "title": "" }, { "docid": "22accfa74592e8424bdfe74224365425", "text": "In the SQuaD reading comprehension task systems are given a paragraph from Wikipedia and have to answer a question about it. The answer is guaranteed to be contained within the paragraph. There are 107,785 such paragraph-question-answer tuples in the dataset. Human performance on this task achieves 91.2% accuracy (F1), and the current state-of-the-art system obtains a respectably close 84.7%. Not so fast though! If we adversarially add a single sentence to those paragraphs, in such a way that the added sentences do not contradict the correct answer, nor do they confuse humans, the accuracy of the published models studied plummets from an average of 75% to just 36%.", "title": "" } ]
[ { "docid": "e749b355c41ca254a0ee249d7c4e9ab1", "text": "This paper explores a framework to permit the creation of modules as part of a robot creation and combat game. We explore preliminary work that offers a design solution to generate and test robots comprised of modular components. This current implementation, which is reliant on a constraint-driven process is then assessed to indicate the expressive range of content it can create and the total number of unique combinations it can establish.", "title": "" }, { "docid": "5db1e7db73ae18802d04ed122ace42b0", "text": "Phishing is an online identity theft that aims to steal sensitive information such as username, password and online banking details from its victims. Phishing education needs to be considered as a means to combat this threat. This paper reports on a design and development of a mobile game prototype as an educational tool helping computer users to protect themselves against phishing attacks. The elements of a game design framework for avoiding phishing attacks were used to address the game design issues. Our mobile game design aimed to enhance the users' avoidance behaviour through motivation to protect themselves against phishing threats. A think-aloud study was conducted, along with a preand post-test, to assess the game design framework though the developed mobile game prototype. The study results showed a significant improvement of participants' phishing avoidance behaviour in their post-test assessment. Furthermore, the study findings suggest that participants' threat perception, safeguard effectiveness, self-efficacy, perceived severity and perceived susceptibility elements positively impact threat avoidance behaviour, whereas safeguard cost had a negative impact on it. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "be05abd038de9b32cc255ca221634a2c", "text": "This paper sees a smart city not as a status of how smart a city is but as a city's effort to make itself smart. The connotation of a smart city represents city innovation in management and policy as well as technology. Since the unique context of each city shapes the technological, organizational and policy aspects of that city, a smart city can be considered a contextualized interplay among technological innovation, managerial and organizational innovation, and policy innovation. However, only little research discusses innovation in management and policy while the literature of technology innovation is abundant. This paper aims to fill the research gap by building a comprehensive framework to view the smart city movement as innovation comprised of technology, management and policy. We also discuss inevitable risks from innovation, strategies to innovate while avoiding risks, and contexts underlying innovation and risks.", "title": "" }, { "docid": "5ea123d6e93daf3c1bd3de8110cbf92f", "text": "Recent work in human cognitive neuroscience has linked self-consciousness to the processing of multisensory bodily signals (bodily self-consciousness [BSC]) in fronto-parietal cortex and more posterior temporo-parietal regions. We highlight the behavioral, neurophysiological, neuroimaging, and computational laws that subtend BSC in humans and non-human primates. We propose that BSC includes body-centered perception (hand, face, and trunk), based on the integration of proprioceptive, vestibular, and visual bodily inputs, and involves spatio-temporal mechanisms integrating multisensory bodily stimuli within peripersonal space (PPS). 
We develop four major constraints of BSC (proprioception, body-related visual information, PPS, and embodiment) and argue that the fronto-parietal and temporo-parietal processing of trunk-centered multisensory signals in PPS is of particular relevance for theoretical models and simulations of BSC and eventually of self-consciousness.", "title": "" }, { "docid": "5faa1d3acdd057069fb1dab75d7b0803", "text": "The past 10 years of event ordering research has focused on learning partial orderings over document events and time expressions. The most popular corpus, the TimeBank, contains a small subset of the possible ordering graph. Many evaluations follow suit by only testing certain pairs of events (e.g., only main verbs of neighboring sentences). This has led most research to focus on specific learners for partial labelings. This paper attempts to nudge the discussion from identifying some relations to all relations. We present new experiments on strongly connected event graphs that contain ∼10 times more relations per document than the TimeBank. We also describe a shift away from the single learner to a sieve-based architecture that naturally blends multiple learners into a precision-ranked cascade of sieves. Each sieve adds labels to the event graph one at a time, and earlier sieves inform later ones through transitive closure. This paper thus describes innovations in both approach and task. We experiment on the densest event graphs to date and show a 14% gain over state-of-the-art.", "title": "" }, { "docid": "c73f982451170dd00bc0dbde925bc8d4", "text": "Embedded memory remains a major bottleneck in current integrated circuit design in terms of silicon area, power dissipation, and performance; however, static random access memories (SRAMs) are almost exclusively supplied by a small number of vendors through memory generators, targeted at rather generic design specifications. As an alternative, standard cell memories (SCMs) can be defined, synthesized, and placed and routed as an integral part of a given digital system, providing complete design flexibility, good energy efficiency, low-voltage operation, and even area efficiency for small memory blocks. Yet implementing an SCM block with a standard digital flow often fails to exploit the distinct and regular structure of such an array, leaving room for optimization. In this article, we present a design methodology for optimizing the physical implementation of SCM macros as part of the standard design flow. This methodology introduces controlled placement, leading to a structured, noncongested layout with close to 100% placement utilization, resulting in a smaller silicon footprint, reduced wire length, and lower power consumption compared to SCMs without controlled placement. This methodology is demonstrated on SCM macros of various sizes and aspect ratios in a state-of-the-art 28nm fully depleted silicon-on-insulator technology, and compared with equivalent macros designed with the noncontrolled, standard flow, as well as with foundry-supplied SRAM macros. The controlled SCMs provide an average 25% reduction in area as compared to noncontrolled implementations while achieving a smaller size than SRAM macros of up to 1Kbyte. 
Power and performance comparisons of controlled SCM blocks of a commonly found 256 × 32 (1 Kbyte) memory with foundry-provided SRAMs show greater than 65% and 10% reduction in read and write power, respectively, while providing faster access than their SRAM counterparts, despite being of an aspect ratio that is typically unfavorable for SCMs. In addition, the SCM blocks function correctly with a supply voltage as low as 0.3V, well below the lower limit of even the SRAM macros optimized for low-voltage operation. The controlled placement methodology is applied within a full-chip physical implementation flow of an OpenRISC-based test chip, providing more than 50% power reduction compared to equivalently sized compiled SRAMs under a benchmark application.", "title": "" }, { "docid": "e4628211d0d2657db387c093228e9b9b", "text": "BACKGROUND\nMindfulness-based stress reduction (MBSR) is a clinically standardized meditation that has shown consistent efficacy for many mental and physical disorders. Less attention has been given to the possible benefits that it may have in healthy subjects. The aim of the present review and meta-analysis is to better investigate current evidence about the efficacy of MBSR in healthy subjects, with a particular focus on its benefits for stress reduction.\n\n\nMATERIALS AND METHODS\nA literature search was conducted using MEDLINE (PubMed), the ISI Web of Knowledge, the Cochrane database, and the references of retrieved articles. The search included articles written in English published prior to September 2008, and identified ten, mainly low-quality, studies. Cohen's d effect size between meditators and controls on stress reduction and spirituality enhancement values were calculated.\n\n\nRESULTS\nMBSR showed a nonspecific effect on stress reduction in comparison to an inactive control, both in reducing stress and in enhancing spirituality values, and a possible specific effect compared to an intervention designed to be structurally equivalent to the meditation program. A direct comparison study between MBSR and standard relaxation training found that both treatments were equally able to reduce stress. Furthermore, MBSR was able to reduce ruminative thinking and trait anxiety, as well as to increase empathy and self-compassion.\n\n\nCONCLUSIONS\nMBSR is able to reduce stress levels in healthy people. However, important limitations of the included studies as well as the paucity of evidence about possible specific effects of MBSR in comparison to other nonspecific treatments underline the necessity of further research.", "title": "" }, { "docid": "57fc7b0e377c38830b579e58c88e3dd7", "text": "Integrated model-based specification techniques facilitate the definition of seamless development processes for electronic control units (ECUs) including support for domain specific issues such as management of signals, the integration of isolated logical functions or the deployment of functions to distributed networks of ECUs. A fundamental prerequisite of such approaches is the existence of an adequate modeling notation tailored to the specific needs of the application domain together with a precise definition of its syntax and its semantics. However, although these constituents are necessary, they are not sufficient for guaranteeing an efficient development process of ECU networks. In addition, methodical support which guides the application of the modeling notation must be an integral part of a model-based approach. 
Therefore we propose the introduction of a so-called ’system model’ which comprises all of these constituents. A major part of this system model constitutes the Automotive Modeling Language (AML), an architecture centric modeling language. The system model further comprises specifically tailored modeling notations derived from the Unified Modeling Language (UML) or the engineering tool ASCET-SD or general applicable structuring mechanisms like abstraction levels which support the definition of an AML relevant well-structured development process.", "title": "" }, { "docid": "cc6b258a38ff6954ddee7e9789a89b30", "text": "One of the main challenges in digital forensics is the increasing volume of data that needs to be analyzed. This problem has become even more pronounced with the emergence of big data and calls for a rethink on the way digital forensics investigations have been handled over the past years. This paper briefly discusses the challenges and needs of digital forensics in the face of the current trends and requirements of different investigations. A digital forensics analysis framework that puts into consideration the existing techniques as well as the current challenges is proposed. The purpose of the framework is to reassess the various stages of the digital forensics examination process and introduce into each stage the required techniques to enhance better collection, analysis, preservation and presentation in the face of big data and other challenges facing digital forensics.", "title": "" }, { "docid": "a602a532a7b95eae050d084e10606951", "text": "Municipal solid waste management has emerged as one of the greatest challenges facing environmental protection agencies in developing countries. This study presents the current solid waste management practices and problems in Nigeria. Solid waste management is characterized by inefficient collection methods, insufficient coverage of the collection system and improper disposal. The waste density ranged from 280 to 370 kg/m3 and the waste generation rates ranged from 0.44 to 0.66 kg/capita/day. The common constraints faced environmental agencies include lack of institutional arrangement, insufficient financial resources, absence of bylaws and standards, inflexible work schedules, insufficient information on quantity and composition of waste, and inappropriate technology. The study suggested study of institutional, political, social, financial, economic and technical aspects of municipal solid waste management in order to achieve sustainable and effective solid waste management in Nigeria.", "title": "" }, { "docid": "a278abfa0501077eb2f71cbb272689d6", "text": "Among the many emerging non-volatile memory technologies, chalcogenide (i.e. GeSbTe/GST) based phase change random access memory (PRAM) has shown particular promise. While accurate simulations are required for reducing programming current and enabling higher integration density, many challenges remain for improved simulation of PRAM cell operation including nanoscale thermal conduction and phase change. This work simulates the fully coupled electrical and thermal transport and phase change in 2D PRAM geometries, with specific attention to the impact of thermal boundary resistance between the GST and surrounding materials. For GST layer thicknesses between 25 and 75nm, the interface resistance reduces the predicted programming current and power by 31% and 53%, respectively, for a typical reset transition. 
The calculations also show the large sensitivity of programming voltage to the GST thermal conductivity. These results show the importance of temperature-dependent thermal properties of materials and interfaces in PRAM cells", "title": "" }, { "docid": "638e0059bf390b81de2202c22427b937", "text": "Oral and gastrointestinal mucositis is a toxicity of many forms of radiotherapy and chemotherapy. It has a significant impact on health, quality of life and economic outcomes that are associated with treatment. It also indirectly affects the success of antineoplastic therapy by limiting the ability of patients to tolerate optimal tumoricidal treatment. The complex pathogenesis of mucositis has only recently been appreciated and reflects the dynamic interactions of all of the cell and tissue types that comprise the epithelium and submucosa. The identification of the molecular events that lead to treatment-induced mucosal injury has provided targets for mechanistically based interventions to prevent and treat mucositis.", "title": "" }, { "docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb", "text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" }, { "docid": "fb34a0868942928ada71cf8d1c746c19", "text": "We introduce the new Multimodal Named Entity Disambiguation (MNED) task for multimodal social media posts such as Snapchat or Instagram captions, which are composed of short captions with accompanying images. Social media posts bring significant challenges for disambiguation tasks because 1) ambiguity not only comes from polysemous entities, but also from inconsistent or incomplete notations, 2) very limited context is provided with surrounding words, and 3) there are many emerging entities often unseen during training. To this end, we build a new dataset called SnapCaptionsKB, a collection of Snapchat image captions submitted to public and crowd-sourced stories, with named entity mentions fully annotated and linked to entities in an external knowledge base. We then build a deep zeroshot multimodal network for MNED that 1) extracts contexts from both text and image, and 2) predicts correct entity in the knowledge graph embeddings space, allowing for zeroshot disambiguation of entities unseen in training set as well. The proposed model significantly outperforms the stateof-the-art text-only NED models, showing efficacy and potentials of the MNED task.", "title": "" }, { "docid": "abf7ee5b09e679bfaabefc49cb45371a", "text": "The work to be performed on open source systems, whether feature developments or defects, is typically described as an issue (or bug). Developers self-select bugs from the many open bugs in a repository when they wish to perform work on the system. 
This paper evaluates a recommender, called NextBug, that considers the textual similarity of bug descriptions to predict bugs that require handling of similar code fragments. First, we evaluate this recommender using 69 projects in the Mozilla ecosystem. We show that for detecting similar bugs, a technique that considers just the bug components and short descriptions perform just as well as a more complex technique that considers other features. Second, we report a field study where we monitored the bugs fixed for Mozilla during a week. We sent mails to the developers who fixed these bugs, asking whether they would consider working on the recommendations provided by NextBug, 39 developers (59%) stated that they would consider working on these recommendations, 44 developers (67%) also expressed interest in seeing the recommendations in their bug tracking system.", "title": "" }, { "docid": "29aa7084f7d6155d4626b682a5fc88ef", "text": "There is an underlying cascading behavior over road networks. Traffic cascading patterns are of great importance to easing traffic and improving urban planning. However, what we can observe is individual traffic conditions on different road segments at discrete time intervals, rather than explicit interactions or propagation (e.g., A→B) between road segments. Additionally, the traffic from multiple sources and the geospatial correlations between road segments make it more challenging to infer the patterns. In this paper, we first model the three-fold influences existing in traffic propagation and then propose a data-driven approach, which finds the cascading patterns through maximizing the likelihood of observed traffic data. As this is equivalent to a submodular function maximization problem, we solve it by using an approximate algorithm with provable near-optimal performance guarantees based on its submodularity. Extensive experiments on real-world datasets demonstrate the advantages of our approach in both effectiveness and efficiency.", "title": "" }, { "docid": "c1220e87212f8a9f005cc8a62eda58f8", "text": "This paper argues that double-object verbs decompose into two heads, an external-argumentselecting CAUSE predicate (vCAUSE) and a prepositional element, PHAVE. Two primary types of argument are presented. First, a consideration of the well-known Oerhle’s generalization effects in English motivate such a decomposition, in combination with a consideration of idioms in ditransitive structures. These facts mitigate strongly against a Transform approach to the dative alternation, like that of Larson 1988, and point towards an Alternative Projection approach, similar in many respects to that of Pesetsky 1995. Second, the PHAVE prepositional element is identified with the prepositional component of verbal have, treated in the literature by Benveniste 1966; Freeze 1992; Kayne 1993; Guéron 1995. Languages without PHAVE do not allow possessors to c-command possessees, and show no evidence of a double-object construction, in which Goals c-command Themes. On the current account, these two facts receive the same explanation: PHAVE does not form part of the inventory of morphosyntactic primitives of these languages.", "title": "" }, { "docid": "7f9f32390be9a86d8a5776e9ec5fc980", "text": "Commonly, HoG/SVM classifier uses rectangular images for HoG feature descriptor extraction and training. This means significant additional work has to be done to process irrelevant pixels belonging to the background surrounding the object of interest. 
While some objects may indeed be square or rectangular, most of objects are not easily representable by simple geometric shapes. In Bitmap-HoG approach we propose in this paper, the irregular shape of object is represented by a bitmap to avoid processing of extra background pixels. Bitmap, derived from the training dataset, encodes those portions of an image to be used to train a classifier. Experimental results show that not only the proposed algorithm decreases the workload associated with HoG/SVM classifiers by 75% compared to the state-of-the-art, but also it shows an average increase about 5% in recall and a decrease about 2% in precision in comparison with standard HoG.", "title": "" }, { "docid": "6e6237011de5348d9586fb70941b4037", "text": "BACKGROUND\nAlthough clinicians frequently add a second medication to an initial, ineffective antidepressant drug, no randomized controlled trial has compared the efficacy of this approach.\n\n\nMETHODS\nWe randomly assigned 565 adult outpatients who had nonpsychotic major depressive disorder without remission despite a mean of 11.9 weeks of citalopram therapy (mean final dose, 55 mg per day) to receive sustained-release bupropion (at a dose of up to 400 mg per day) as augmentation and 286 to receive buspirone (at a dose of up to 60 mg per day) as augmentation. The primary outcome of remission of symptoms was defined as a score of 7 or less on the 17-item Hamilton Rating Scale for Depression (HRSD-17) at the end of this study; scores were obtained over the telephone by raters blinded to treatment assignment. The 16-item Quick Inventory of Depressive Symptomatology--Self-Report (QIDS-SR-16) was used to determine the secondary outcomes of remission (defined as a score of less than 6 at the end of this study) and response (a reduction in baseline scores of 50 percent or more).\n\n\nRESULTS\nThe sustained-release bupropion group and the buspirone group had similar rates of HRSD-17 remission (29.7 percent and 30.1 percent, respectively), QIDS-SR-16 remission (39.0 percent and 32.9 percent), and QIDS-SR-16 response (31.8 percent and 26.9 percent). Sustained-release bupropion, however, was associated with a greater reduction (from baseline to the end of this study) in QIDS-SR-16 scores than was buspirone (25.3 percent vs. 17.1 percent, P<0.04), a lower QIDS-SR-16 score at the end of this study (8.0 vs. 9.1, P<0.02), and a lower dropout rate due to intolerance (12.5 percent vs. 20.6 percent, P<0.009).\n\n\nCONCLUSIONS\nAugmentation of citalopram with either sustained-release bupropion or buspirone appears to be useful in actual clinical settings. Augmentation with sustained-release bupropion does have certain advantages, including a greater reduction in the number and severity of symptoms and fewer side effects and adverse events. (ClinicalTrials.gov number, NCT00021528.).", "title": "" }, { "docid": "1b7db8f3a7273a77d5303951666238b0", "text": "Graph theory is a valuable framework to study the organization of functional and anatomical connections in the brain. Its use for comparing network topologies, however, is not without difficulties. Graph measures may be influenced by the number of nodes (N) and the average degree (k) of the network. The explicit form of that influence depends on the type of network topology, which is usually unknown for experimental data. Direct comparisons of graph measures between empirical networks with different N and/or k can therefore yield spurious results. 
We list benefits and pitfalls of various approaches that intend to overcome these difficulties. We discuss the initial graph definition of unweighted graphs via fixed thresholds, average degrees or edge densities, and the use of weighted graphs. For instance, choosing a threshold to fix N and k does eliminate size and density effects but may lead to modifications of the network by enforcing (ignoring) non-significant (significant) connections. Opposed to fixing N and k, graph measures are often normalized via random surrogates but, in fact, this may even increase the sensitivity to differences in N and k for the commonly used clustering coefficient and small-world index. To avoid such a bias we tried to estimate the N,k-dependence for empirical networks, which can serve to correct for size effects, if successful. We also add a number of methods used in social sciences that build on statistics of local network structures including exponential random graph models and motif counting. We show that none of the here-investigated methods allows for a reliable and fully unbiased comparison, but some perform better than others.", "title": "" } ]
scidocsrr
64905b863e9648b9748c3001fdb05856
Image super-resolution : Historical overview and future challenges
[ { "docid": "8d29b510fb10f8f7dc4563bca36b9e6d", "text": "Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.", "title": "" } ]
[ { "docid": "539610ab3c9fa9c522e878c1270a4de4", "text": "The use of a latent heat storage system using phase change materials (PCMs) is an effective way of storing thermal energy and has the advantages of high-energy storage density and the isothermal nature of the storage process. PCMs have been widely used in latent heat thermalstorage systems for heat pumps, solar engineering, and spacecraft thermal control applications. The uses of PCMs for heating and cooling applications for buildings have been investigated within the past decade. There are large numbers of PCMs that melt and solidify at a wide range of temperatures, making them attractive in a number of applications. This paper also summarizes the investigation and analysis of the available thermal energy storage systems incorporating PCMs for use in different applications. # 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9b1fa27fda7e50de3ced6e80dc6a0a3d", "text": "For years the HCI community has struggled to integrate design in research and practice. While design has gained a strong foothold in practice, it has had much less impact on the HCI research community. In this paper we propose a new model for interaction design research within HCI. Following a research through design approach, designers produce novel integrations of HCI research in an attempt to make the right thing: a product that transforms the world from its current state to a preferred state. This model allows interaction designers to make research contributions based on their strength in addressing under-constrained problems. To formalize this model, we provide a set of four lenses for evaluating the research contribution and a set of three examples to illustrate the benefits of this type of research.", "title": "" }, { "docid": "6af7bb1d2a7d8d44321a5b162c9781a2", "text": "In this paper, we propose a deep metric learning (DML) approach for robust visual tracking under the particle filter framework. Unlike most existing appearance-based visual trackers, which use hand-crafted similarity metrics, our DML tracker learns a nonlinear distance metric to classify the target object and background regions using a feed-forward neural network architecture. Since there are usually large variations in visual objects caused by varying deformations, illuminations, occlusions, motions, rotations, scales, and cluttered backgrounds, conventional linear similarity metrics cannot work well in such scenarios. To address this, our proposed DML tracker first learns a set of hierarchical nonlinear transformations in the feed-forward neural network to project both the template and particles into the same feature space where the intra-class variations of positive training pairs are minimized and the interclass variations of negative training pairs are maximized simultaneously. Then, the candidate that is most similar to the template in the learned deep network is identified as the true target. Experiments on the benchmark data set including 51 challenging videos show that our DML tracker achieves a very competitive performance with the state-of-the-art trackers.", "title": "" }, { "docid": "9869bc5dfc8f20b50608f0d68f7e49ba", "text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. 
We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.", "title": "" }, { "docid": "414160c5d5137def904c38cccc619628", "text": "Side-channel attacks, particularly differential power analysis (DPA) attacks, are efficient ways to extract secret keys of the attacked devices by leaked physical information. To resist DPA attacks, hiding and masking methods are commonly used, but it usually resulted in high area overhead and performance degradation. In this brief, a DPA countermeasure circuit based on digital controlled ring oscillators is presented to efficiently resist the first-order DPA attack. The implementation of the critical S-box of the advanced encryption standard (AES) algorithm shows that the area overhead of a single S-box is about 19% without any extra delay in the critical path. Moreover, the countermeasure circuit can be mounted onto different S-box implementations based on composite field or look-up table (LUT). Based on our approach, a DPA-resistant AES chip can be proposed to maintain the same throughput with less than 2K extra gates.", "title": "" }, { "docid": "8c3fe85634bd2126b91a0ebf88b63086", "text": "Few academic management theories have had as much influence in the business world as Clayton M. Christensen's theory of disruptive innovation. But how well does the theory describe what actually happens in business?", "title": "" }, { "docid": "19518604892789208000e970747d0c3d", "text": "Given a partial symmetric matrixA with only certain elements specified, the Euclidean distance matrix completion problem (EDMCP) is to find the unspecified elements of A that makeA a Euclidean distance matrix (EDM). In this paper, we follow the successful approach in [20] and solve the EDMCP by generalizing the completion problem to allow for approximate completions. In particular, we introduce a primal-dual interiorpoint algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low dimensional solutions when no completion existed.", "title": "" }, { "docid": "3e5d5ddd691f82a7caa99896d618f667", "text": "We propose a novel approach for ranking and retrieval of images based on multi-attribute queries. Existing image retrieval methods train separate classifiers for each word and heuristically combine their outputs for retrieving multiword queries. Moreover, these approaches also ignore the interdependencies among the query terms. In contrast, we propose a principled approach for multi-attribute retrieval which explicitly models the correlations that are present between the attributes. Given a multi-attribute query, we also utilize other attributes in the vocabulary which are not present in the query, for ranking/retrieval. 
Furthermore, we integrate ranking and retrieval within the same formulation, by posing them as structured prediction problems. Extensive experimental evaluation on the Labeled Faces in the Wild(LFW), FaceTracer and PASCAL VOC datasets show that our approach significantly outperforms several state-of-the-art ranking and retrieval methods.", "title": "" }, { "docid": "c378c44b68f8c27c12215f1d87056cde", "text": "Head motion systematically alters correlations in resting state functional connectivity fMRI (RSFC). In this report we examine impact of motion on signal intensity and RSFC correlations. We find that motion-induced signal changes (1) are often complex and variable waveforms, (2) are often shared across nearly all brain voxels, and (3) often persist more than 10s after motion ceases. These signal changes, both during and after motion, increase observed RSFC correlations in a distance-dependent manner. Motion-related signal changes are not removed by a variety of motion-based regressors, but are effectively reduced by global signal regression. We link several measures of data quality to motion, changes in signal intensity, and changes in RSFC correlations. We demonstrate that improvements in data quality measures during processing may represent cosmetic improvements rather than true correction of the data. We demonstrate a within-subject, censoring-based artifact removal strategy based on volume censoring that reduces group differences due to motion to chance levels. We note conditions under which group-level regressions do and do not correct motion-related effects.", "title": "" }, { "docid": "50c78e339e472f1b1814687f7d0ec8c6", "text": "Frontonasal dysplasia (FND) refers to a class of midline facial malformations caused by abnormal development of the facial primordia. The term encompasses a spectrum of severities but characteristic features include combinations of ocular hypertelorism, malformations of the nose and forehead and clefting of the facial midline. Several recent studies have drawn attention to the importance of Alx homeobox transcription factors during craniofacial development. Most notably, loss of Alx1 has devastating consequences resulting in severe orofacial clefting and extreme microphthalmia. In contrast, mutations of Alx3 or Alx4 cause milder forms of FND. Whilst Alx1, Alx3 and Alx4 are all known to be expressed in the facial mesenchyme of vertebrate embryos, little is known about the function of these proteins during development. Here, we report the establishment of a zebrafish model of Alx-related FND. Morpholino knock-down of zebrafish alx1 expression causes a profound craniofacial phenotype including loss of the facial cartilages and defective ocular development. We demonstrate for the first time that Alx1 plays a crucial role in regulating the migration of cranial neural crest (CNC) cells into the frontonasal primordia. Abnormal neural crest migration is coincident with aberrant expression of foxd3 and sox10, two genes previously suggested to play key roles during neural crest development, including migration, differentiation and the maintenance of progenitor cells. This novel function is specific to Alx1, and likely explains the marked clinical severity of Alx1 mutation within the spectrum of Alx-related FND.", "title": "" }, { "docid": "4118cc1ed5ae11289029338c99964c1b", "text": "The concept of t-designs in compact symmetric spaces of rank 1 is a generalization of the theory of classical t-designs. 
In this paper we obtain new lower bounds on the cardinality of designs in projective compact symmetric spaces of rank 1. With one exception our bounds are the first improvements of the classical bounds by more than one. We use the linear programming technique and follow the approach we have proposed for spherical codes and designs. Some examples are shown and compared with the classical bounds.", "title": "" }, { "docid": "17d17cc62c89c87c173d9e17ede291d3", "text": "A search engine recommends to the user a list of web pages. The user examines this list, from the first page to the last, and clicks on all attractive pages until the user is satisfied. This behavior of the user can be described by the dependent click model (DCM). We propose DCM bandits, an online learning variant of the DCM where the goal is to maximize the probability of recommending satisfactory items, such as web pages. The main challenge of our learning problem is that we do not observe which attractive item is satisfactory. We propose a computationally-efficient learning algorithm for solving our problem, dcmKL-UCB; derive gap-dependent upper bounds on its regret under reasonable assumptions; and also prove a matching lower bound up to logarithmic factors. We evaluate our algorithm on synthetic and realworld problems, and show that it performs well even when our model is misspecified. This work presents the first practical and regret-optimal online algorithm for learning to rank with multiple clicks in a cascade-like click model.", "title": "" }, { "docid": "67f1e137b42906ea3e23c32c90388166", "text": "Semantic matching of natural language sentences or identifying the relationship between two sentences is a core research problem underlying many natural language tasks. Depending on whether training data is available, prior research has proposed both unsupervised distance-based schemes and supervised deep learning schemes for sentence matching. However, previous approaches either omit or fail to fully utilize the ordered, hierarchical, and flexible structures of language objects, as well as the interactions between them. In this paper, we propose Hierarchical Sentence Factorization— a technique to factorize a sentence into a hierarchical representation, with the components at each different scale reordered into a “predicate-argument” form. The proposed sentence factorization technique leads to the invention of: 1) a new unsupervised distance metric which calculates the semantic distance between a pair of text snippets by solving a penalized optimal transport problem while preserving the logical relationship of words in the reordered sentences, and 2) new multi-scale deep learning models for supervised semantic training, based on factorized sentence hierarchies. We apply our techniques to text-pair similarity estimation and text-pair relationship classification tasks, based on multiple datasets such as STSbenchmark, the Microsoft Research paraphrase identification (MSRP) dataset, the SICK dataset, etc. 
Extensive experiments show that the proposed hierarchical sentence factorization can be used to significantly improve the performance of existing unsupervised distance-based metrics as well as multiple supervised deep learning models based on the convolutional neural network (CNN) and long short-term memory (LSTM).", "title": "" }, { "docid": "3ce6c3b6a23e713bf9af419ce2d7ded3", "text": "Two measures of financial performance that are being applied increasingly in investor-owned and not-for-profit healthcare organizations are market value added (MVA) and economic value added (EVA). Unlike traditional profitability measures, both MVA and EVA measures take into account the cost of equity capital. MVA is most appropriate for investor-owned healthcare organizations and EVA is the best measure for not-for-profit organizations. As healthcare financial managers become more familiar with MVA and EVA and understand their potential, these two measures may become more widely accepted accounting tools for assessing the financial performance of investor-owned and not-for-profit healthcare organizations.", "title": "" }, { "docid": "78eb86088042c2d0e6330067798a2270", "text": "This paper presents a list of heuristics to evaluate smartphone apps for older adults. It further verifies the usefulness of the proposed list through a study performed with two groups of five expert evaluators, who inspected two popular health and fitness smartphone apps -- Nike+ and Run keeper -- through heuristic evaluation. Additionally, the evaluators completed a post-evaluation survey to provide feedback to the researchers about the usefulness, strengths and gaps of the heuristics list. The results of the heuristic evaluations and post questionnaires demonstrate both a comprehensive application of the heuristics as well as an overall positive assessment of their quality and potential to identify usability issues of smartphone apps targeted at older adults.", "title": "" }, { "docid": "93fe562da15b8babc98fb2c10d0f1082", "text": "In this paper we address the problem of estimating the intrinsic parameters of a 3D LIDAR while at the same time computing its extrinsic calibration with respect to a rigidly connected camera. Existing approaches to solve this nonlinear estimation problem are based on iterative minimization of nonlinear cost functions. In such cases, the accuracy of the resulting solution hinges on the availability of a precise initial estimate, which is often not available. In order to address this issue, we divide the problem into two least-squares sub-problems, and analytically solve each one to determine a precise initial estimate for the unknown parameters. We further increase the accuracy of these initial estimates by iteratively minimizing a batch nonlinear least-squares cost function. In addition, we provide the minimal identifiability conditions, under which it is possible to accurately estimate the unknown parameters. Experimental results consisting of photorealistic 3D reconstruction of indoor and outdoor scenes, as well as standard metrics of the calibration errors, are used to assess the validity of our approach.", "title": "" }, { "docid": "2801a7eea00bc4db7d6aacf71071de20", "text": "Internet of Things (IoT) devices are rapidly becoming ubiquitous while IoT services are becoming pervasive. Their success has not gone unnoticed and the number of threats and attacks against IoT devices and services are on the increase as well. 
Cyber-attacks are not new to IoT, but as IoT will be deeply interwoven in our lives and societies, it is becoming necessary to step up and take cyber defense seriously. Hence, there is a real need to secure IoT, which has consequently resulted in a need to comprehensively understand the threats and attacks on IoT infrastructure. This paper is an attempt to classify threat types, besides analyze and characterize intruders and attacks facing IoT devices and services.", "title": "" }, { "docid": "04913d0003cbc32648f685995b6761da", "text": "Selectional preferences have long been claimed to be essential for coreference resolution. However, they are mainly modeled only implicitly by current coreference resolvers. We propose a dependencybased embedding model of selectional preferences which allows fine-grained compatibility judgments with high coverage. We show that the incorporation of our model improves coreference resolution performance on the CoNLL dataset, matching the state-of-the-art results of a more complex system. However, it comes with a cost that makes it debatable how worthwhile such improvements are.", "title": "" }, { "docid": "1dbaaa804573e9a834616cce38547d8d", "text": "This paper combines traditional fundamentals, such as earnings and cash flows, with measures tailored for growth firms, such as earnings stability, growth stability and intensity of R&D, capital expenditure and advertising, to create an index – GSCORE. A long–short strategy based on GSCORE earns significant excess returns, though most of the returns come from the short side. Results are robust in partitions of size, analyst following and liquidity and persist after controlling for momentum, book-tomarket, accruals and size. High GSCORE firms have greater market reaction and analyst forecast surprises with respect to future earnings announcements. Further, the results are inconsistent with a riskbased explanation as returns are positive in most years, and firms with lower risk earn higher returns. Finally, a contextual approach towards fundamental analysis works best, with traditional analysis appropriate for high BM stocks and growth oriented fundamental analysis appropriate for low BM stocks.", "title": "" }, { "docid": "b91cf13547266547b14e5520e3a12749", "text": "The objective of this article is to review radio frequency identification (RFID) technology, its developments on RFID transponders, design and operating principles, so that end users can benefit from knowing which transponder meets their requirements. In this article, RFID system definition, RFID transponder architecture and RFID transponder classification based on a comprehensive literature review on the field of research are presented. Detailed descriptions of these tags are also presented, as well as an in-house developed semiactive tag in a compact package.", "title": "" } ]
scidocsrr
7eba65702da78df3635b2487df6cb649
Smart Home-Control and Monitoring System Using Smart Phone
[ { "docid": "fa3c52e9b3c4a361fd869977ba61c7bf", "text": "The combination of the Internet and emerging technologies such as near-field communications, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex applications.", "title": "" }, { "docid": "e67986714c6bda56c03de25168c51e6b", "text": "With the development of modern technology and Android Smartphones, Smart Living is gradually changing people’s lives. Bluetooth technology, which aims to exchange data wirelessly over a short distance using short-wavelength radio transmissions, provides a necessary technology to create convenience, intelligence and controllability. In this paper, a new Smart Living system, namely a home lighting control system using a Bluetooth-based Android Smartphone, is proposed and prototyped. First, Smartphone, Smart Living and Bluetooth technology are reviewed. Second, the system architecture, communication protocol and hardware design are described. Then the design of a Bluetooth-based Smartphone application and the prototype are presented. It is shown that an Android Smartphone can provide a platform to implement Bluetooth-based applications for Smart Living.", "title": "" } ]
[ { "docid": "d4e43f79578230a13333894de239ae6c", "text": "Facial expressions play an important role in interpersonal relations as well as for security purposes. The malicious intentions of a thief can be recognized with the help of his gestures, facial expressions being a major part of them. This is because humans demonstrate and convey a lot of evident information visually rather than verbally. Although humans recognize facial expressions virtually without effort or delay, reliable expression recognition by machine remains a challenge. A picture portrays much more than its equivalent textual description. Along this theory, we assert that although verbal and gestural methods convey valuable information, facial expressions are unparalleled in this regard. In support of this idea, a facial expression is considered to consist of deformations of facial components and their spatial relations, along with changes in the pigmentation of the same. This paper envisages the detection of faces and localization of features, thus leading to emotion recognition in images. Key Terms: Facial Gestures, Action Units, Neural Networks, Fiducial Points, Feature Contours. INTRODUCTION Facial expression recognition is a basic process performed by every human every day. Each one of us analyses the expressions of the individuals we interact with, to best understand their response to us. The malicious intentions of a thief or a person to be interviewed can be recognized with the help of his gestures, facial expressions being a major part of them. In this paper we have tried to highlight the role of facial expressions for security reasons. In the next step toward Human-Computer interaction, we endeavor to empower the computer with this ability — to be able to discern the emotions depicted on a person's visage. This seemingly effortless task for us needs to be broken down into several parts for a computer to perform. For this purpose, we consider a facial expression to represent, fundamentally, a deformation of the original features of the face. On a day-to-day basis, humans commonly recognize emotions by characteristic features displayed as part of a facial expression. For instance, happiness is undeniably associated with a smile, or an upward movement of the corners of the lips. This could be accompanied by upward movement of the cheeks and wrinkles directed outward from the outer corners of the eyes. Similarly, other emotions are characterized by other deformations typical to the particular expression. More often than not, emotions are depicted by subtle changes in some facial elements rather than their obvious contortion to represent the typical expression as it is defined. In order to detect these slight variations, it is important to track fine-grained changes in the facial features. The general trend of comprehending observable components of facial gestures utilizes the FACS, which is also a commonly used psychological approach. This system, as described by Ekman, interprets facial information in terms of Action Units, which isolate localized changes in features such as eyes, lips, eyebrows and cheeks.
The actual process is akin to a divide-and-conquer approach: a step-by-step isolation of facial features, and then recombination of the interpretations of the same in order to finally arrive at a conclusion about the emotion depicted.", "title": "" }, { "docid": "1dcae3f9b4680725d2c7f5aa1736967c", "text": "Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment.", "title": "" }, { "docid": "b7ed4926681f3e43dea8775519b5a79a", "text": "In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active towards a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria) it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery.", "title": "" }, { "docid": "39b4b7e77e357c9cc73038498f0f2cd1", "text": "Traditional machine learning algorithms often fail to generalize to new input distributions, causing reduced accuracy. Domain adaptation attempts to compensate for the performance degradation by transferring and adapting source knowledge to the target domain. Existing unsupervised methods project domains into a lower-dimensional space and attempt to align the subspace bases, effectively learning a mapping from source to target points or vice versa. However, they fail to take into account the difference of the two distributions in the subspaces, resulting in misalignment even after adaptation. We present a unified view of existing subspace mapping based methods and develop a generalized approach that also aligns the distributions as well as the subspace bases. Background.
Domain adaptation, or covariate shift, is a fundamental problem in machine learning, and has attracted a lot of attention in the machine learning and computer vision community. Domain adaptation methods for visual data attempt to learn classifiers on a labeled source domain and transfer them to a target domain. There are two settings for visual domain adaptation: (1) unsupervised domain adaptation, where there are no labeled examples available in the target domain; and (2) semi-supervised domain adaptation, where there are a few labeled examples in the target domain. Most existing algorithms operate in the semi-supervised setting. However, in real world applications, unlabeled target data is often much more abundant and labeled examples are very limited, so the question of how to utilize the unlabeled target data is more important for practical visual domain adaptation. Thus, in this paper, we focus on the unsupervised scenario. Most of the existing unsupervised approaches have pursued adaptation by separately projecting the source and target data into a lower-dimensional manifold, and finding a transformation that brings the subspaces closer together. This process is illustrated in Figure 1. Geodesic methods [2, 3] find a path along the subspace manifold, and either project source and target onto points along that path [3], or find a closed-form linear map that projects source points to target [2]. Alternatively, the subspaces can be aligned by computing the linear map that minimizes the Frobenius norm of the difference between them, a method known as Subspace Alignment [1]. Approach. The intuition behind our approach is that although the existing approaches might align the subspaces (the bases of the subspaces), they might not fully align the data distributions in the subspaces, as illustrated in Figure 1. We use the first- and second-order statistics, namely the mean and the variance, to describe a distribution in this paper. Since the mean after data preprocessing (i.e. normalization) is zero and is not affected [...] Figure 2: Mean accuracy across all 12 experiment settings (domain shifts) of the k-NN Classifier on the Office-Caltech10 dataset. Both our methods SDA-IS and SDA-TS outperform GFK and SA consistently. Left: k-NN Classifier with k=1; Right: k-NN Classifier with k=3.", "title": "" }, { "docid": "cf074f806c9b78947c54fb7f41167d9e", "text": "Applications of Machine Learning to Support Dementia Care through Commercially Available Off-the-Shelf Sensing", "title": "" }, { "docid": "76ad212ccd103c93d45c1ffa0e208b45", "text": "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause.
We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.", "title": "" }, { "docid": "d59c6a2dd4b6bf7229d71f3ae036328a", "text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a special-purpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.", "title": "" }, { "docid": "b2124dfd12529c1b72899b9866b34d03", "text": "In today's world, the amount of stored information has been increasing enormously day by day. Much of it is unstructured and cannot be used directly to extract useful information, so several techniques such as summarization, classification, clustering, information extraction and visualization, which come under the category of text mining, are available for this purpose. Text mining can be defined as a technique which is used to extract interesting information or knowledge from text documents. Text mining, also known as text data mining or knowledge discovery from textual databases, refers to the process of extracting interesting and non-trivial patterns or knowledge from text documents. Regarded by many as the next wave of knowledge discovery, text mining has very high commercial value.", "title": "" }, { "docid": "a4a4c67e0ca81a099f58146fccc5a2eb", "text": "Chinese calligraphy is among the finest and most important of all Chinese art forms and an inseparable part of Chinese history. Its delicate aesthetic effects are generally considered to be unique among all calligraphic arts. Its subtle power is integral to traditional Chinese painting. A novel intelligent system uses a constraint-based analogous-reasoning process to automatically generate original Chinese calligraphy that meets visually aesthetic requirements.
We propose an intelligent system that can automatically create novel, aesthetically appealing Chinese calligraphy from a few training examples of existing calligraphic styles. To demonstrate the proposed methodology's feasibility, we have implemented a prototype system that automatically generates new Chinese calligraphic art from a small training set.", "title": "" }, { "docid": "e742aa091dae6227994cffcdb5165769", "text": "In this paper, a new adaptive multi-batch experience replay scheme is proposed for proximal policy optimization (PPO) for continuous action control. In contrast to the original PPO, the proposed scheme uses the batch samples of past policies as well as the current policy for the update of the next policy, where the number of used past batches is adaptively determined based on the oldness of the past batches, measured by the average importance sampling (IS) weight. The new algorithm constructed by combining PPO with the proposed multi-batch experience replay scheme maintains the advantages of original PPO, such as random mini-batch sampling and small bias due to low IS weights, by storing the pre-computed advantages and values and adaptively determining the mini-batch size. Numerical results show that the proposed method significantly increases the speed and stability of convergence on various continuous control tasks compared to original PPO.", "title": "" }, { "docid": "97b2aa3fc92ac3772c16ea92506414cb", "text": "BACKGROUND\nAcute exacerbation of asthma is divided qualitatively into mild, moderate, and severe attacks and respiratory failure. This system is, however, not suitable for estimating small changes in respiratory condition with time and for determining the efficacy of treatments, because it has a qualitative, but not quantitative nature.\n\n\nMETHODS\nTo evaluate the usefulness of quantitative estimation of asthma exacerbation, modified Pulmonary Index Score (mPIS) values were measured in 87 asthmatic children (mean age, 5.0 ± 0.4 years) during hospitalization. mPIS was calculated by summing the scores for 6 items (scores of 0-3 were given for each item). These consisted of heart rate, respiratory rate, accessory muscle use, inspiratory-to-expiratory flow ratio, degree of wheezing, and oxygen saturation in room air. Measurements were made at visits and at hospitalization and were then made twice a day until discharge.\n\n\nRESULTS\nmPIS values were highly correlated among raters. mPIS values at visits were 9.1 ± 0.1 and 12.6 ± 0.4 in subjects with moderate and severe attacks, respectively (p < 0.001). mPIS values of subjects requiring continuous inhalation therapy (CIT) with isoproterenol in addition to systemic steroids were significantly higher than the values of those without CIT (12.0 ± 0.5 and 9.3 ± 0.2, respectively, p < 0.001). A score of 10 was suggested to be the optimal cutoff for distinguishing between subjects requiring and not requiring CIT, from the perspectives of both sensitivity and specificity.
mPIS at hospitalization correlated well with the period until discharge, suggesting that this score was a useful predictor for the clinical course after hospitalization.\n\n\nCONCLUSIONS\nmPIS could be a useful tool for several aspects of acute asthma attacks, including the determination of a treatment plan and prediction of the period of hospitalization in admitted patients, although prospective studies would be required to establish our hypothesis.", "title": "" }, { "docid": "a3e1eb38273f67a283063bce79b20b9d", "text": "In this article, we examine the impact of digital screen devices, including television, on cognitive development. Although we know that young infants and toddlers are using touch screen devices, we know little about their comprehension of the content that they encounter on them. In contrast, research suggests that children begin to comprehend child-directed television starting at ∼2 years of age. The cognitive impact of these media depends on the age of the child, the kind of programming (educational programming versus programming produced for adults), the social context of viewing, as well as the particular kind of interactive media (eg, computer games). For children <2 years old, television viewing has mostly negative associations, especially for language and executive function. For preschool-aged children, television viewing has been found to have both positive and negative outcomes, and a large body of research suggests that educational television has a positive impact on cognitive development. Beyond the preschool years, children mostly consume entertainment programming, and cognitive outcomes are not well explored in research. The use of computer games as well as educational computer programs can lead to gains in academically relevant content and other cognitive skills. This article concludes by identifying topics and goals for future research and provides recommendations based on current research-based knowledge.", "title": "" }, { "docid": "685a3c1eee19ee71c36447c49aca757f", "text": "Advanced diagnostic technologies, such as polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA), have been widely used in well-equipped laboratories. However, they are not affordable or accessible in resource-limited settings due to the lack of basic infrastructure and/or trained operators. Paper-based diagnostic technologies are affordable, user-friendly, rapid, robust, and scalable for manufacturing, thus holding great potential to deliver point-of-care (POC) diagnostics to resource-limited settings. In this review, we present the working principles and reaction mechanisms of paper-based diagnostics, including dipstick assays, lateral flow assays (LFAs), and microfluidic paper-based analytical devices (μPADs), as well as the selection of substrates and fabrication methods. Further, we report the advances in improving detection sensitivity, quantification readout, procedure simplification and multi-functionalization of paper-based diagnostics, and discuss the disadvantages of paper-based diagnostics. We envision that miniaturized and integrated paper-based diagnostic devices with the sample-in-answer-out capability will meet the diverse requirements for diagnosis and treatment monitoring at the POC.", "title": "" }, { "docid": "dab7239495ae05dec9a0e35d87c5bd4c", "text": "INTRODUCTION\nReduction mammaplasty in patients with gigantomastia is challenging even to very experienced plastic surgeons.
Extremely elongated pedicles impair the vascular supply of the nipple-areola complex. Breast shaping and effective reduction are difficult due to the severely stretched skin envelope. The Ribeiro technique is the standard technique for reduction mammaplasty in our clinic. The aim of this study is to review our approach in patients with gigantomastia in comparison to the current literature.\n\n\nPATIENTS AND METHODS\nFrom 01/2009 to 12/2016, we performed 1247 reduction mammaplasties in 760 patients. In 294 reduction mammaplasties (23.6 %), resection weight was more than 1000 g per breast, corresponding to the definition of gigantomastia. The Ribeiro technique with a superomedial pedicle and inferior dermoglandular flap for autologous augmentation of the upper pole was implemented as the standard procedure. In cases with a sternal notch-nipple distance > 40 cm, free nipple grafting was performed. The outcome parameters complication rate, patient satisfaction with the aesthetic result, nipple sensitivity and surgical revision rate were obtained and retrospectively analysed.\n\n\nRESULTS\nIn 174 patients, 294 reduction mammaplasties were performed with a resection weight of more than 1000 g per breast. Average resection weight was 1389.6 g (range, 1000-4580 g). Average age was 43.5 years (range, 18-76 years), average body mass index (BMI) was 29.2 kg/m2 (range, 19-40 kg/m2), average sternal notch-nipple distance was 34.8 cm (range, 27-52 cm), average operation time was 117 minutes (range, 72-213 minutes). A free nipple graft was necessary in 30 breasts. Overall complication rate was 7.8 %; secondary surgical revision rate was 16 %. 93 % of the patients were \"very satisfied\" and \"satisfied\" with the aesthetic result. Nipple sensitivity was rated \"very good\" and \"good\" in 88 %.\n\n\nCONCLUSION\nThe Ribeiro technique is a well-established, versatile standard technique for reduction mammaplasty, which helps to create high-quality, reproducible results with a long-term, form-stable shape. In gigantomastia, this procedure is also very effective to achieve volume reduction and aesthetically pleasing results with a low complication rate.", "title": "" }, { "docid": "e2991def3d4b03340b0fc9b708aa1efc", "text": "Author: Samuli Laine. Title: Efficient Physically-Based Shadow Algorithms. This research focuses on developing efficient algorithms for computing shadows in computer-generated images. A distinctive feature of the shadow algorithms presented in this thesis is that they produce correct, physically-based results, instead of giving approximations whose quality is often hard to ensure or evaluate. Light sources that are modeled as points without any spatial extent produce hard shadows with sharp boundaries. Shadow mapping is a traditional method for rendering such shadows. A shadow map is a depth buffer computed from the scene, using a point light source as the viewpoint. The finite resolution of the shadow map requires that its contents are resampled when determining the shadows on visible surfaces. This causes various artifacts such as incorrect self-shadowing and jagged shadow boundaries. A novel method is presented that avoids the resampling step, and provides exact shadows for every point visible in the image. The shadow volume algorithm is another commonly used algorithm for real-time rendering of hard shadows. This algorithm gives exact results and does not suffer from any resampling problems, but it tends to consume a lot of fillrate, which leads to performance problems.
This thesis presents a new technique for locally choosing between two previous shadow volume algorithms with different performance characteristics. A simple criterion for making the local choices is shown to yield better performance than using either of the algorithms alone. Light sources with nonzero spatial extent give rise to soft shadows with smooth boundaries. A novel method is presented that transposes the classical processing order for soft shadow computation in offline rendering. Instead of casting shadow rays, the algorithm first conceptually collects every ray that would need to be cast, and then processes the shadow-casting primitives one by one, hierarchically finding the rays that are blocked. Another new soft shadow algorithm takes a different approach to computing the shadows. Only the silhouettes of the shadow casters are used for determining the shadows, and an unintrusive execution model makes the algorithm practical for production use in offline rendering. The proposed techniques accelerate the computation of physically-based shadows in real-time and offline rendering. These improvements make it possible to use correct, physically-based shadows in a broad range of scenes that previous methods cannot handle efficiently enough.", "title": "" }, { "docid": "945ac1f93e8bc636880a8ce3b1d1e18e", "text": "This paper presents the development of a wideband power divider using a radial stub for a six-port interferometer. The performance of the designed power divider is evaluated over the ultra-wideband (UWB) frequency range from 3.1 GHz to 10.6 GHz. The design of the power divider starts from the conventional design of the Wilkinson power divider. The operating bandwidth of the power divider is improved by introducing a radial stub into the design. To observe the significant bandwidth enhancement, the performance of the power divider with the radial stub is compared with the conventional design of the power divider. The comparison is investigated using the CST Microwave Studio 2010 simulation tool. The overall simulated percentage bandwidth of the radial stub power divider is 37.5%, covering the 5 to 11 GHz frequency band. To validate the proposed design, the design is fabricated and its S-parameter performance is measured using a vector network analyzer. The simulated and measured results of the proposed Wilkinson power divider are compared and analyzed.", "title": "" }, { "docid": "935fb5a196358764fda82ac50b87cf1b", "text": "Linear dimensionality reduction methods, such as LDA, are often used in object recognition for feature extraction, but do not address the problem of how to use these features for recognition. In this paper, we propose Probabilistic LDA, a generative probability model with which we can both extract the features and combine them for recognition. The latent variables of PLDA represent both the class of the object and the view of the object within a class. By making examples of the same class share the class variable, we show how to train PLDA and use it for recognition on previously unseen classes. The usual LDA features are derived as a result of training PLDA, but in addition have a probability model attached to them, which automatically gives more weight to the more discriminative features. With PLDA, we can build a model of a previously unseen class from a single example, and can combine multiple examples for a better representation of the class.
We show applications to classification, hypothesis testing, class inference, and clustering, on classes not observed during training.", "title": "" }, { "docid": "f7bed669e86a76f707e0f22e58f15de9", "text": "A new stream cipher, Grain, is proposed. The design targets hardware environments where gate count, power consumption and memory are very limited. It is based on two shift registers and a nonlinear output function. The cipher has the additional feature that the speed can be increased at the expense of extra hardware. The key size is 80 bits and no attack faster than exhaustive key search has been identified. The hardware complexity and throughput compare favourably to other hardware oriented stream ciphers like E0 and A5/1.", "title": "" }, { "docid": "29cc827b8990bed2b8fba1c974a51fdf", "text": "The calibration parameters of a mobile robot play a substantial role in navigation tasks. Often these parameters are subject to variations that depend either on environmental changes or on the wear of the devices. In this paper, we propose an approach to simultaneously estimate a map of the environment, the position of the on-board sensors of the robot, and its kinematic parameters. Our method requires no prior knowledge about the environment and relies only on a rough initial guess of the platform parameters. The proposed approach performs on-line estimation of the parameters and it is able to adapt to non-stationary changes of the configuration. We tested our approach in simulated environments and on a wide range of real world data using different types of robotic platforms.", "title": "" }, { "docid": "c017d31c31b2fad57a4b714dbdf4fdf2", "text": "This paper presents a miniature 5-6-GHz 8 × 8 Butler matrix in a 0.13-μm CMOS implementation. The 8 × 8 design results in an insertion loss of 3.5 dB at 5.5 GHz with a bandwidth of 5-6 GHz and no power consumption. The chip area is 2.5 × 1.9 mm2 including all pads. The 8 × 8 matrix is mounted on a Teflon board with eight antennas, and the measured patterns agree well with theory and show an isolation of > 12 dB at 5-6 GHz. CMOS Butler matrices offer a simple and low-power alternative to replace eight-element phased-array systems for high gain transceivers. The application areas are in high data-rate communications at 5-6 and at 57-66 GHz. They can also be excellent candidates for multiple-input-multiple-output systems.", "title": "" } ]
scidocsrr
1d641eac2b9a3f50b512bfb27cfcd0fe
Digitally Designed Stochastic Flash Adc
[ { "docid": "df374fcdaf0b7cd41ca5ef5932378655", "text": "This paper is concerned with the design of precision MOS anafog circuits. Section ff of the paper discusses the characterization and modeling of mismatch in MOS transistors. A characterization methodology is presented that accurately predicts the mismatch in drain current over a wide operating range using a minimumset of measured data. The physical causes of mismatch are discussed in detail for both pand n-channel devices. Statistieal methods are used to develop analytical models that relate the mismatchto the devicedimensions.It is shownthat these models are valid for smafl-geometrydevices also. Extensive experimental data from a 3-pm CMOS process are used to verify these models. Section 111of the paper demonstrates the applicationof the transistor matching studies to the design of a high-performance digital-to-analog converter (DAC). A circuit designmethodologyis presented that highfights the close interaction between the circuit yield and the matching accuracy of devices. It has been possibleto achievea circuit yieldof greater than 97 percent as a result of the knowledgegenerated regarding the matching behavior of transistors and due to the systematicdesignapproach.", "title": "" } ]
[ { "docid": "aa459fbae09caf816e71e882a9a63624", "text": "Microstrip to Parallel-Strip transitions are frequently used for feeding balanced antenna structures, such as dipoles and printed spiral antennas. In this paper, we propose an analytical method to compute the gradual taper using a Hecken approach in order to minimize the return losses and to have continuity. The proposed method is verified experimentally with the aid of three transitions including matching capabilities with different ratios, suitable for spiral antenna structures in, at least, the range from 450 MHz to 2 GHz.", "title": "" }, { "docid": "8850b66d131088dbf99430d2c76f5bca", "text": "The richness of visual details in most computer graphics images nowadays is largely due to the extensive use of texture mapping techniques. Texture mapping is the main tool in computer graphics to integrate a given shape with a given pattern. Despite its power it has problems and limitations. Current solutions cannot handle complex shapes properly. The definition of the mapping function and problems like distortions can turn the process into a very cumbersome one for the application programmer and consequently for the final user. An associated problem is the synthesis of patterns which are used as texture. The available options are usually limited to scanning in real pictures. This document is a PhD proposal to investigate techniques to integrate complex shapes and patterns which will not only overcome problems usually associated with texture mapping but also give us more control and make less ad hoc the task of combining shape and pattern. We break the problem into three parts: modeling of patterns, modeling of shape and integration. The integration step will use common information to drive both the modeling of patterns and shape in an integrated manner. Our approach is inspired by observations on how these processes happen in real life, where there is no pattern without a shape associated with it. The proposed solutions will hopefully extend the generality, applicability and flexibility of existing integration methods in computer graphics.", "title": "" }, { "docid": "f7e004c4e506681f2419878b59ad8b53", "text": "We examine unsupervised machine learning techniques to learn features that best describe configurations of the two-dimensional Ising model and the three-dimensional XY model. The methods range from principal component analysis over manifold and clustering methods to artificial neural-network-based variational autoencoders. They are applied to Monte Carlo-sampled configurations and have, a priori, no knowledge about the Hamiltonian or the order parameter. We find that the most promising algorithms are principal component analysis and variational autoencoders. Their predicted latent parameters correspond to the known order parameters. The latent representations of the models in question are clustered, which makes it possible to identify phases without prior knowledge of their existence. Furthermore, we find that the reconstruction loss function can be used as a universal identifier for phase transitions.", "title": "" }, { "docid": "821be0a049a74abf5b009b012022af2f", "text": "BACKGROUND\nIn theory, infections that arise after female genital mutilation (FGM) in childhood might ascend to the internal genitalia, causing inflammation and scarring and subsequent tubal-factor infertility.
Our aim was to investigate this possible association between FGM and primary infertility.\n\n\nMETHODS\nWe did a hospital-based case-control study in Khartoum, Sudan, in which we enrolled women (n=99) with primary infertility not caused by hormonal or iatrogenic factors (previous abdominal surgery), or the result of male-factor infertility. These women underwent diagnostic laparoscopy. Our controls were primigravidae women (n=180) recruited from antenatal care. We used exact conditional logistic regression, stratifying for age and controlling for socioeconomic status, level of education, gonorrhoea, and chlamydia, to compare these groups with respect to FGM.\n\n\nFINDINGS\nOf the 99 infertile women examined, 48 had adnexal pathology indicative of previous inflammation. After controlling for covariates, these women had a significantly higher risk than controls of having undergone the most extensive form of FGM, involving the labia majora (odds ratio 4.69, 95% CI 1.49-19.7). Among women with primary infertility, both those with tubal pathology and those with normal laparoscopy findings were at a higher risk than controls of extensive FGM, both with borderline significance (p=0.054 and p=0.055, respectively). The anatomical extent of FGM, rather than whether or not the vulva had been sutured or closed, was associated with primary infertility.\n\n\nINTERPRETATION\nOur findings indicate a positive association between the anatomical extent of FGM and primary infertility. Laparoscopic postinflammatory adnexal changes are not the only explanation for this association, since cases without such pathology were also affected. The association between FGM and primary infertility is highly relevant for preventive work against this ancient practice.", "title": "" }, { "docid": "3667adb02ff66fee9a77ba02a774f42f", "text": "This report points out a correlation between asthma and dental caries. It also gives certain guidelines on the measures to be taken in an asthmatic to negate the risk of dental caries.", "title": "" }, { "docid": "4c20c48a5b1d86930c7e3cc9e6d8aa11", "text": "Although transnational corporations play a crucial role as transplanters of technology, skills and access to the world market, how they facilitate structural upgrading and economic growth in developing countries has not been adequately conceptualized in terms of a theory of economic development. This article develops a dynamic paradigm of TNC-assisted development by recognizing five key structural characteristics of the global economy as underlying determinants. The phenomena of trade augmentation through foreign direct investment, increasing factor incongruity, and localized (but increasingly transnationalized) learning and technological accumulation are identified as three principles that govern the process of rapid growth in the labour-driven stage of economic development; eventually, the emergence of TNCs from the developing countries themselves also plays a role in this process.", "title": "" }, { "docid": "05a77d687230dc28697ca1751586f660", "text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor.
Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality, and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well-functioning autonomous vehicles.", "title": "" }, { "docid": "4502ba935124c2daa9a49fc24ec5865b", "text": "Medical image processing is one of the most challenging and emerging fields nowadays. In this field, detection of brain tumors from MRI brain scans has become one of the most challenging problems, due to the complex structure of the brain. The quantitative analysis of MRI brain tumors allows obtaining useful key indicators of disease progression. A computer-aided diagnostic system has been proposed here for detecting the tumor texture in biological study. This work describes the proposed strategy for detection of a tumor with the help of segmentation techniques in MATLAB, which incorporates preprocessing stages of noise removal, image enhancement and edge detection. Processing stages include segmentation, such as intensity- and watershed-based segmentation, and thresholding to extract the area of unwanted cells from the whole image. Here, algorithms are proposed to calculate the area and percentage of the tumor. Keywords— MRI, FCM, MKFCM, SVM, Otsu, threshold, fudge factor", "title": "" }, { "docid": "794e379f4a3953ab16157e09b6fa346c", "text": "Recent years have seen a surge of research on social recommendation techniques for improving recommender systems due to the growing influence of social networks on our daily life. The intuition of social recommendation is that users tend to show affinities with items favored by their social ties due to social influence. Despite the extensive studies, no existing work has attempted to distinguish and learn the personalized preferences between strong and weak ties, two important terms widely used in social sciences, for each individual in social recommendation. In this paper, we first highlight the importance of different types of ties in social relations originated from social sciences, and then propose a novel social recommendation method based on a new Probabilistic Matrix Factorization model that incorporates the distinction of strong and weak ties for improving recommendation performance. The proposed method is capable of simultaneously classifying different types of social ties in a social network w.r.t. optimal recommendation accuracy, and learning a personalized tie type preference for each user in addition to other parameters.
We conduct extensive experiments on four real-world datasets by comparing our method with state-of-the-art approaches, and find encouraging results that validate the efficacy of the proposed method in exploiting the personalized preferences of strong and weak ties for social recommendation.", "title": "" }, { "docid": "5e858796f025a9e2b91109835d827c68", "text": "Several divergent application protocols have been proposed for Internet of Things (IoT) solutions, including CoAP, REST, XMPP, AMQP, MQTT, DDS, and others. Each protocol focuses on a specific aspect of IoT communications. The lack of a protocol that can handle the vertical market requirements of IoT applications, including machine-to-machine, machine-to-server, and server-to-server communications, has resulted in a market fragmented between many protocols. In turn, this fragmentation is a main hindrance in the development of new services that require the integration of multiple IoT services to unlock new capabilities and provide horizontal integration among services. In this work, after articulating the major shortcomings of the current IoT protocols, we outline a rule-based intelligent gateway that bridges the gap between existing IoT protocols to enable the efficient integration of horizontal IoT services. While this intelligent gateway enhances the gloomy picture of protocol fragmentation in the context of IoT, it does not address the root cause of this fragmentation, which lies in the inability of the current protocols to offer a wide range of QoS guarantees. To offer a solution that addresses the root cause of this protocol fragmentation issue, we propose a generic IoT protocol that is flexible enough to address the IoT vertical market requirements. In this regard, we enhance the baseline MQTT protocol by allowing it to support rich QoS features by exploiting a mix of IP multicasting, intelligent broker queuing management, and traffic analytics techniques. Our initial evaluation of the lightweight enhanced MQTT protocol reveals significant improvement over the baseline protocol in terms of the delay performance.", "title": "" }, { "docid": "ecbd9201a7f8094a02fcec2c4f78240d", "text": "Neural network compression has recently received much attention due to the computational requirements of modern deep models. In this work, our objective is to transfer knowledge from a deep and accurate model to a smaller one. Our contributions are threefold: (i) we propose an adversarial network compression approach to train the small student network to mimic the large teacher, without the need for labels during training; (ii) we introduce a regularization scheme to prevent a trivially-strong discriminator without reducing the network capacity; and (iii) our approach generalizes on different teacher-student models. In an extensive evaluation on five standard datasets, we show that our student has a small accuracy drop, achieves better performance than other knowledge transfer approaches and surpasses the performance of the same network trained with labels. In addition, we demonstrate state-of-the-art results compared to other compression strategies.", "title": "" }, { "docid": "b8e193262d1a70ab3b28d45b480dc1ca", "text": "Artificial neural networks have been part of an attempt to emulate the learning curve of the human nervous system. However, the vital difference persists: the nervous system is highly parallel, while computer processing units remain largely sequential.
Here an attempt is made to bridge that gap with the help of Graphics Processing Units (GPUs), which are designed to be highly parallel. In particular, back-propagation networks, which use supervised learning, are considered. Back-propagation algorithms, with no data dependencies, are embarrassingly parallel, and hence only a totally parallel system can exploit them fully. However, it has also been observed that GPUs underperform when either significant overhead in calculations is incurred or the algorithm is not sufficiently parallel.", "title": "" }, { "docid": "ca5eaacea8702798835ca585200b041d", "text": "Occupational Health Psychology concerns the application of psychology to improving the quality of work life and to protecting and promoting the safety, health, and well-being of workers. Contrary to what its name suggests, Occupational Health Psychology has almost exclusively dealt with ill health and poor well-being. For instance, a simple count reveals that about 95% of all articles that have been published so far in the leading Journal of Occupational Health Psychology have dealt with negative aspects of workers' health and well-being, such as cardiovascular disease, repetitive strain injury, and burnout. In contrast, only about 5% of the articles have dealt with positive aspects such as job satisfaction, commitment, and motivation. However, times appear to be changing. Since the beginning of this century, more attention has been paid to what has been coined positive psychology: the scientific study of human strength and optimal functioning. This approach is considered to supplement the traditional focus of psychology on psychopathology, disease, illness, disturbance, and malfunctioning. The emergence of positive (organizational) psychology has naturally led to the increasing popularity of positive aspects of health and well-being in Occupational Health Psychology. One of these positive aspects is work engagement, which is considered to be the antithesis of burnout. While burnout is usually defined as a syndrome of exhaustion, cynicism, and reduced professional efficacy, engagement is defined as a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption. Engaged employees have a sense of energetic and effective connection with their work activities. Since this new concept was proposed by Wilmar Schaufeli (Utrecht University, the Netherlands) in 2001, 93 academic articles mainly focusing on the measurement of work engagement and its possible antecedents and consequences have been published (see www.schaufeli.com). In addition, major international academic conferences organized by the International Commission on Occupational Health [...]", "title": "" }, { "docid": "75b4640071754d331783d26020f9ac7a", "text": "Traditionally, positive emotions and thoughts, strengths, and the satisfaction of basic psychological needs for belonging, competence, and autonomy have been seen as the cornerstones of psychological health. Without disputing their importance, these foci fail to capture many of the fluctuating, conflicting forces that are readily apparent when people navigate the environment and social world. In this paper, we review literature to offer evidence for the prominence of psychological flexibility in understanding psychological health. Thus far, the importance of psychological flexibility has been obscured by the isolation and disconnection of research conducted on this topic.
Psychological flexibility spans a wide range of human abilities to: recognize and adapt to various situational demands; shift mindsets or behavioral repertoires when these strategies compromise personal or social functioning; maintain balance among important life domains; and be aware, open, and committed to behaviors that are congruent with deeply held values. In many forms of psychopathology, these flexibility processes are absent. In hopes of creating a more coherent understanding, we synthesize work in emotion regulation, mindfulness and acceptance, social and personality psychology, and neuropsychology. Basic research findings provide insight into the nature, correlates, and consequences of psychological flexibility and applied research provides details on promising interventions. Throughout, we emphasize dynamic approaches that might capture this fluid construct in the real-world.", "title": "" }, { "docid": "d559ace14dcc42f96d0a96b959a92643", "text": "Graphs are an integral data structure for many parts of computation. They are highly effective at modeling many varied and flexible domains, and are excellent for representing the way humans themselves conceive of the world. Nowadays, there is lots of interest in working with large graphs, including social network graphs, “knowledge” graphs, and large bipartite graphs (for example, the Netflix movie matching graph).", "title": "" }, { "docid": "b1ef75c4a0dc481453fb68e94ec70cdc", "text": "Autonomous Land Vehicles (ALVs), due to their considerable potential applications in areas such as mining and defence, are currently the focus of intense research at robotics institutes worldwide. Control systems that provide reliable navigation, often in complex or previously unknown environments, is a core requirement of any ALV implementation. Three key aspects for the provision of such autonomous systems are: 1) path planning, 2) obstacle avoidance, and 3) path following. The work presented in this thesis, under the general umbrella of the ACFR’s own ALV project, the ‘High Speed Vehicle Project’, addresses these three mobile robot competencies in the context of an ALV based system. As such, it develops both the theoretical concepts and the practical components to realise an initial, fully functional implementation of such a system. This system, which is implemented on the ACFR’s (ute) test vehicle, allows the user to enter a trajectory and follow it, while avoiding any detected obstacles along the path.", "title": "" }, { "docid": "8d4d34d8eddf39b9ce276d6c098d128a", "text": "For any stream of time-stamped edges that form a dynamic network, an important choice is the aggregation granularity that an analyst uses to bin the data. Picking such a windowing of the data is often done by hand, or left up to the technology that is collecting the data. However, the choice can make a big difference in the properties of the dynamic network. Finding a good windowing is the time scale detection problem. In previous work, this problem is often solved with an unsupervised heuristic. As an unsupervised problem, it is difficult to measure how well a given windowing algorithm performs. In addition, we show that there is little correlation between the quality of a windowing across different tasks. Therefore the time scale detection problem should not be handled independently from the rest of the analysis of the network. 
Given this, in accordance with standard supervised machine learning practices, we introduce new windowing algorithms that automatically adapt to the task the analyst wants to perform by treating windowing as a hyperparameter for the task, rather than using heuristics. This approach measures the quality of the windowing by how well a given task is accomplished on the resulting network. This also allows us, for the first time, to directly compare different windowing algorithms to each other, by comparing how well the task is accomplished using that windowing algorithm. We compare this approach to previous approaches and several baselines", "title": "" }, { "docid": "df2bc3dce076e3736a195384ae6c9902", "text": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.", "title": "" }, { "docid": "7bb1d856e5703afb571cf781d48ce403", "text": "RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction.", "title": "" }, { "docid": "f66ebffa2efda9a4728a85c0b3a94fc7", "text": "The vulnerability of face recognition systems is a growing concern that has drawn the interest from both academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure or antispoofing) schemes, there exists no superior PAD technique due to evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective for face presentation attack detection by introducing light field camera (LFC). Since the use of a LFC can record the direction of each incoming ray in addition to the intensity, it exhibits an unique characteristic of rendering multiple depth (or focus) images in a single capture. Thus, we present a novel approach that involves exploring the variation of the focus between multiple depth (or focus) images rendered by the LFC that in turn can be used to reveal the presentation attacks. 
To this extent, we first collect a new face artefact database using LFC that comprises of 80 subjects. Face artefacts are generated by simulating two widely used attacks, such as photo print and electronic screen attack. Extensive experiments carried out on the light field face artefact database have revealed the outstanding performance of the proposed PAD scheme when benchmarked with various well established state-of-the-art schemes.", "title": "" } ]
scidocsrr
87c2855f2b712864f155454060d9e021
The effect of plant growth promoting rhizobacteria (PGPR) and zinc fertilizer on forage yield of maize under water deficit stress conditions
[ { "docid": "c35eb92007a41be4b4011a5b83b05642", "text": "Soil bacteria are very important in biogeochemical cycles and have been used for crop production for decades. Plant–bacterial interactions in the rhizosphere are the determinants of plant health and soil fertility. Free-living soil bacteria beneficial to plant growth, usually referred to as plant growth promoting rhizobacteria (PGPR), are capable of promoting plant growth by colonizing the plant root. PGPR are also termed plant health promoting rhizobacteria (PHPR) or nodule promoting rhizobacteria (NPR). These are associated with the rhizosphere, which is an important soil ecological environment for plant–microbe interactions. Symbiotic nitrogen-fixing bacteria include the cyanobacteria of the genera Rhizobium, Bradyrhizobium, Azorhizobium, Allorhizobium, Sinorhizobium and Mesorhizobium. Free-living nitrogen-fixing bacteria or associative nitrogen fixers, for example bacteria belonging to the species Azospirillum, Enterobacter, Klebsiella and Pseudomonas, have been shown to attach to the root and efficiently colonize root surfaces. PGPR have the potential to contribute to sustainable plant growth promotion. Generally, PGPR function in three different ways: synthesizing particular compounds for the plants, facilitating the uptake of certain nutrients from the soil, and lessening or preventing the plants from diseases. Plant growth promotion and development can be facilitated both directly and indirectly. Indirect plant growth promotion includes the prevention of the deleterious effects of phytopathogenic organisms. This can be achieved by the production of siderophores, i.e. small metal-binding molecules. Biological control of soil-borne plant pathogens and the synthesis of antibiotics have also been reported in several bacterial species. Another mechanism by which PGPR can inhibit phytopathogens is the production of hydrogen cyanide (HCN) and/or fungal cell wall degrading enzymes, e.g., chitinase and ß-1,3-glucanase. Direct plant growth promotion includes symbiotic and non-symbiotic PGPR which function through production of plant hormones such as auxins, cytokinins, gibberellins, ethylene and abscisic acid. Production of indole-3-ethanol or indole-3-acetic acid (IAA), the compounds belonging to auxins, have been reported for several bacterial genera. Some PGPR function as a sink for 1-aminocyclopropane-1-carboxylate (ACC), the immediate precursor of ethylene in higher plants, by hydrolyzing it into α-ketobutyrate and ammonia, and in this way promote root growth by lowering indigenous ethylene levels in the micro-rhizo environment. PGPR also help in solubilization of mineral phosphates and other nutrients, enhance resistance to stress, stabilize soil aggregates, and improve soil structure and organic matter content. PGPR retain more soil organic N, and other nutrients in the plant–soil system, thus reducing the need for fertilizer N and P and enhancing release of the nutrients.", "title": "" } ]
[ { "docid": "39daa09f2e57903abe1109335127d4b9", "text": "Semantic search promises to provide more accurate result than present-day keyword search. However, progress with semantic search has been delayed due to the complexity of its query languages. In this paper, we explore a novel approach of adapting keywords to querying the semantic web: the approach automatically translates keyword queries into formal logic queries so that end users can use familiar keywords to perform semantic search. A prototype system named ‘SPARK’ has been implemented in light of this approach. Given a keyword query, SPARK outputs a ranked list of SPARQL queries as the translation result. The translation in SPARK consists of three major steps: term mapping, query graph construction and query ranking. Specifically, a probabilistic query ranking model is proposed to select the most likely SPARQL query. In the experiment, SPARK achieved an encouraging translation result.", "title": "" }, { "docid": "e939e98e090c57e269444ae5d503884b", "text": "Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.", "title": "" }, { "docid": "6a2a77224ac9f54160b6c4a38b4758e9", "text": "The increasing ubiquity of the mobile phone is creating many opportunities for personal context sensing, and will result in massive databases of individuals' sensitive information incorporating locations, movements, images, text annotations, and even health data. In existing system architectures, users upload their raw (unprocessed or filtered) data streams directly to content-service providers and have little control over their data once they \"opt-in\".\n We present Personal Data Vaults (PDVs), a privacy architecture in which individuals retain ownership of their data. Data are routinely filtered before being shared with content-service providers, and users or data custodian services can participate in making controlled data-sharing decisions. Introducing a PDV gives users flexible and granular access control over data. To reduce the burden on users and improve usability, we explore three mechanisms for managing data policies: Granular ACL, Trace-audit and Rule Recommender. 
We have implemented a proof-of-concept PDV and evaluated it using real data traces collected from two personal participatory sensing applications.", "title": "" }, { "docid": "ff71aa2caed491f9bf7b67a5377b4d66", "text": "In this paper, we propose a hybrid architecture that combines the image modeling strengths of the bag of words framework with the representational power and adaptability of learning deep architectures. Local gradient-based descriptors, such as SIFT, are encoded via a hierarchical coding scheme composed of spatial aggregating restricted Boltzmann machines (RBM). For each coding layer, we regularize the RBM by encouraging representations to fit both sparse and selective distributions. Supervised fine-tuning is used to enhance the quality of the visual representation for the categorization task. We performed a thorough experimental evaluation using three image categorization data sets. The hierarchical coding scheme achieved competitive categorization accuracies of 79.7% and 86.4% on the Caltech-101 and 15-Scenes data sets, respectively. The visual representations learned are compact and the model's inference is fast, as compared with sparse coding methods. The low-level representations of descriptors that were learned using this method result in generic features that we empirically found to be transferrable between different image data sets. Further analysis reveal the significance of supervised fine-tuning when the architecture has two layers of representations as opposed to a single layer.", "title": "" }, { "docid": "1023cd0b40e24429cb39b4d38477cada", "text": "Organizations that migrate from identity-centric to role-based Identity Management face the initial task of defining a valid set of roles for their employees. Due to its capabilities of automated and fast role detection, role mining as a solution for dealing with this challenge has gathered a rapid increase of interest in the academic community. Research activities throughout the last years resulted in a large number of different approaches, each covering specific aspects of the challenge. In this paper, firstly, a survey of the research area provides insight into the development of the field, underlining the need for a comprehensive perspective on role mining. Consecutively, a generic process model for role mining including preand post-processing activities is introduced and existing research activities are classified according to this model. The goal is to provide a basis for evaluating potentially valuable combinations of those approaches in the future.", "title": "" }, { "docid": "ef2cc160033a30ed1341b45468d93464", "text": "A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection was taken from theses.com and contents analysed for their sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies, presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. 
They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research.", "title": "" }, { "docid": "a96f219a2a1baac2c0d964a5a7d9fb62", "text": "Spam-reduction techniques have developed rapidly over the last few years, as spam volumes have increased. We believe that no one anti-spam solution is the “right” answer, and that the best approach is a multifaceted one, combining various forms of filtering with infrastructure changes, financial changes, legal recourse, and more, to provide a stronger barrier to spam than can be achieved with one solution alone. SpamGuru addresses the part of this multi-faceted approach that can be handled by technology on the recipient’s side, using plug-in tokenizers and parsers, plug-in classification modules, and machine-learning techniques to achieve high hit rates and low false-positive rates.", "title": "" }, { "docid": "f9c938a98621f901c404d69a402647c7", "text": "The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of \"PCI passthrough\" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication.\n In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance.\n VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.", "title": "" }, { "docid": "c7d629a83de44e17a134a785795e26d8", "text": "How can firms profitably give away free products? This paper provides a novel answer and articulates tradeoffs in a space of information product design. We introduce a formal model of two-sided network externalities based in textbook economics—a mix of Katz & Shapiro network effects, price discrimination, and product differentiation. Externality-based complements, however, exploit a different mechanism than either tying or lock-in even as they help to explain many recent strategies such as those of firms selling operating systems, Internet browsers, games, music, and video. The model presented here argues for three simple but useful results. First, even in the absence of competition, a firm can rationally invest in a product it intends to give away into perpetuity. Second, we identify distinct markets for content providers and end consumers and show that either can be a candidate for a free good. Third, product coupling across markets can increase consumer welfare even as it increases firm profits. The model also generates testable hypotheses on the size and direction of network effects while offering insights to regulators seeking to apply antitrust law to network markets. 
ACKNOWLEDGMENTS: We are grateful to participants of the 1999 Workshop on Information Systems and Economics, the 2000 Association for Computing Machinery SIG E-Commerce, the 2000 International Conference on Information Systems, the 2002 Stanford Institute for Theoretical Economics (SITE) workshop on Internet Economics, the 2003 Insitut D’Economie Industrielle second conference on “The Economics of the Software and Internet Industries,” as well as numerous participants at university seminars. We wish to thank Tom Noe for helpful observations on oligopoly markets, Lones Smith, Kai-Uwe Kuhn, and Jovan Grahovac for corrections and model generalizations, Jeff MacKie-Mason for valuable feedback on model design and bundling, and Hal Varian for helpful comments on firm strategy and model implications. Frank Fisher provided helpful advice on and knowledge of the Microsoft trial. Jean Tirole provided useful suggestions and examples, particularly in regard to credit card markets. Paul Resnick proposed the descriptive term “internetwork” externality to describe two-sided network externalities. Tom Eisenmann provided useful feedback and examples. We also thank Robert Gazzale, Moti Levi, and Craig Newmark for their many helpful observations. This research has been supported by NSF Career Award #IIS 9876233. For an earlier version of the paper that also addresses bundling and competition, please see “Information Complements, Substitutes, and Strategic Product Design,” November 2000, http://ssrn.com/abstract=249585.", "title": "" }, { "docid": "62d1a93aa2f458ca592bcfbd1ee03229", "text": "The alternating direction method of multipliers (ADMM) is a versatile tool for solving a wide range of constrained optimization problems, with differentiable or non-differentiable objective functions. Unfortunately, its performance is highly sensitive to a penalty parameter, which makes ADMM often unreliable and hard to automate for a non-expert user. We tackle this weakness of ADMM by proposing a method to adaptively tune the penalty parameters to achieve fast convergence. The resulting adaptive ADMM (AADMM) algorithm, inspired by the successful Barzilai-Borwein spectral method for gradient descent, yields fast convergence and relative insensitivity to the initial stepsize and problem scaling.", "title": "" }, { "docid": "dadcea041dcc49d7d837cb8c938830f3", "text": "Software Defined Networking (SDN) has been proposed as a drastic shift in the networking paradigm, by decoupling network control from the data plane and making the switching infrastructure truly programmable. The key enabler of SDN, OpenFlow, has seen widespread deployment on production networks and its adoption is constantly increasing. Although openness and programmability are primary features of OpenFlow, security is of core importance for real-world deployment. In this work, we perform a security analysis of OpenFlow using STRIDE and attack tree modeling methods, and we evaluate our approach on an emulated network testbed. The evaluation assumes an attacker model with access to the network data plane. Finally, we propose appropriate counter-measures that can potentially mitigate the security issues associated with OpenFlow networks. Our analysis and evaluation approach are not exhaustive, but are intended to be adaptable and extensible to new versions and deployment contexts of OpenFlow.", "title": "" }, { "docid": "1121e6d94c1e545e0fa8b0d8b0ef5997", "text": "Research is a continuous phenomenon. It is recursive in nature. 
Every research is based on some earlier research outcome. A general approach in reviewing the literature for a problem is to categorize earlier work for the same problem as positive and negative citations. In this paper, we propose a novel automated technique, which classifies whether an earlier work is cited as sentiment positive or sentiment negative. Our approach first extracted the portion of the cited text from citing paper. Using a sentiment lexicon we classify the citation as positive or negative by picking a window of at most five (5) sentences around the cited place (corpus). We have used Naïve-Bayes Classifier for sentiment analysis. The algorithm is evaluated on a manually annotated and class labelled collection of 150 research papers from the domain of computer science. Our preliminary results show an accuracy of 80%. We assert that our approach can be generalized to classification of scientific research papers in different disciplines.", "title": "" }, { "docid": "a3960e34df2846baa277389ba01229de", "text": "Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack highfrequency textures and do not look natural despite yielding high PSNR values.,,We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixelaccurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.", "title": "" }, { "docid": "fcbf97bfbcf63ee76f588a05f82de11e", "text": "The Deliberation without Attention (DWA) effect refers to apparent improvements in decision-making following a period of distraction. It has been presented as evidence for beneficial unconscious cognitive processes. We identify two major concerns with this claim: first, as these demonstrations typically involve subjective preferences, the effects of distraction cannot be objectively assessed as beneficial; second, there is no direct evidence that the DWA manipulation promotes unconscious decision processes. We describe two tasks based on the DWA paradigm in which we found no evidence that the distraction manipulation led to decision processes that are subjectively unconscious, nor that it reduced the influence of presentation order upon performance. Crucially, we found that a lack of awareness of decision process was associated with poorer performance, both in terms of subjective preference measures used in traditional DWA paradigm and in an equivalent task where performance can be objectively assessed. Therefore, we argue that reliance on conscious memory itself can explain the data. 
Thus the DWA paradigm is not an adequate method of assessing beneficial unconscious thought.", "title": "" }, { "docid": "5db42e1ef0e0cf3d4c1c3b76c9eec6d2", "text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "title": "" }, { "docid": "ce55485a60213c7656eb804b89be36cc", "text": "In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.", "title": "" }, { "docid": "17a475b655134aafde0f49db06bec127", "text": "Estimating the number of persons in a public place provides useful information for video-based surveillance and monitoring applications. In the case of oblique camera setup, counting is either achieved by detecting individuals or by statistically establishing relations between values of simple image features (e.g. amount of moving pixels, edge density, etc.) to the number of people. While the methods of the first category exhibit poor accuracy in cases of occlusions, the second category of methods are sensitive to perspective distortions, and require people to move in order to be counted. In this paper we investigate the possibilities of developing a robust statistical method for people counting. To maximize its applicability scope, we choose-in contrast to the majority of existing methods from this category-not to require prior learning of categories corresponding to different number of people. Second, we search for a suitable way of correcting the perspective distortion. 
Finally, we link the estimation to a confidence value that takes into account the known factors being of influence on the result. The confidence is then used to refine final results.", "title": "" }, { "docid": "c534935b7ba93e32d8138ecc2046f4e9", "text": "This paper reviews the findings of several studies and surveys that address the increasing popularity and usage of so-called fitness “gamification.” Fitness gamification is used as an overarching and information term for the use of video game elements in non-gaming systems to improve user experience and user engagement. In this usage, game components such as a scoreboard, competition amongst friends, and awards and achievements are employed to motivate users to achieve personal health goals. The rise in smartphone usage has also increased the number of mobile fitness applications that utilize gamification principles. The most popular and successful fitness applications are the ones that feature an assemblage of workout tracking, social sharing, and achievement systems. This paper provides an overview of gamification, a description of gamification characteristics, and specific examples of how fitness gamification applications function and is used.", "title": "" }, { "docid": "a1d22637d0b1dbcd355b9b2f6e10d9a4", "text": "Accurate monitoring of the sub-visible particle load in protein biopharmaceuticals is increasingly important to drug development. Manufacturers are expected to characterize and control sub-visible protein particles in their products due to their potential immunogenicity. Light obscuration, the most commonly used analytical tool to count microscopic particles, does not allow discrimination between potentially harmful protein aggregates and harmless pharmaceutical components, e.g. silicone oil, commonly present in drug products. Microscopic image analysis in flow-microscopy techniques allows not only counting, but also classification of sub-visible particles based on morphology. We present a novel approach to define software filters for analysis of particle morphology in flow-microscopic images enhancing the capabilities of flow-microscopy. Image morphology analysis was applied to analyze flow-microscopic images from experimental test sets of protein aggregates and silicone oil suspensions. A combination of four image morphology parameters was found to provide a reliable basis for automatic distinction between silicone oil droplets and protein aggregates in protein biopharmaceuticals resulting in low misclassification errors. A novel, custom-made software filter for discrimination between proteinaceous particles and silicone oil droplets in flow-microscopy imaging analysis was successfully developed.", "title": "" }, { "docid": "c22a769dee080ec2e145a12c8588f0f8", "text": "Chicken chicken chicken: chicken chicken. This is actually no major typing error but well and truly the title of both a publication in the 2006 Annals of Improbable Research [2] and a homonymous talk at the 2007 AAAS conference by the software engineer Doug Zongker, composed of totally serious looking texts, graphs and diagrams based exclusively on a vocabulary restricted to the common name of Gallus gallus domesticus. Apart from having caused the open-plan hilarity of the scientific public, chicken and more generally fowl also happen to be the main reservoir and evolutionary playground for influenza A viruses that occasionally jump over to humans and even more occasionally cause a media-centered commotion. 
Indeed, manifold combinations of hemagglutinin H1 to H16 and neuraminidase N1 to N9, the two surface proteins used to subtype the virus, are all found in wild water birds and at least seven of them have made it into humans until now [3]. Interestingly, that renders birds apparently better at handing down infectious agents to humans than those closest relatives, African apes, who managed to pass “only” two diseases on to us. Admittedly, the agents in question are Plasmodium falciparum and HIV-1, but considering the broad range of available pathogens, the number remains low [4]. It remains nevertheless ambiguous if the simian pathogens are untalented or rather lacked the opportunities of their winged relatives, spreading their gatecrashers through direct contact on live poultry markets [3]. The latest edition of the series is H7N9, which caused its first human victims in February 2013 [5,6], after about two years of patching up starting from H7N3, H9N9 and a preliminary version of H7N9 [7], with the involuntary participation of ducks, bramblings and other migratory birds as vectors [6]. Above all praised as a victory of the rapid and coordinated reaction of the Chinese health instances and the fast and efficient sharing of information at the international level [4], from the media coverage point of view, things have been remarkably calm around H7N9 lately, despite the fact that the season-related second outbreak in China at the end of this winter caused way more fatalities than the one in 2013 [8]. Papers were busily", "title": "" } ]
scidocsrr
848040afa5a646666cfe10f834325e41
Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention
[ { "docid": "b0bd9a0b3e1af93a9ede23674dd74847", "text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "title": "" }, { "docid": "346349308d49ac2d3bb1cfa5cc1b429c", "text": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.1 Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 EnglishGerman and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "title": "" } ]
[ { "docid": "7bb079fd51771a9dc45a73bc53a797ee", "text": "This paper analyzes a recently published algorithm for page replacement in hierarchical paged memory systems [O'Neil et al. 1993]. The algorithm is called the LRU-<italic>K</italic> method, and reduces to the well-known LRU (Least Recently Used) method for <italic>K</italic> = 1. Previous work [O'Neil et al. 1993; Weikum et al. 1994; Johnson and Shasha 1994] has shown the effectiveness for <italic>K</italic> > 1 by simulation, especially in the most common case of <italic>K</italic> = 2. The basic idea in LRU-<italic>K</italic> is to keep track of the times of the last <italic>K</italic> references to memory pages, and to use this statistical information to rank-order the pages as to their expected future behavior. Based on this the page replacement policy decision is made: which memory-resident page to replace when a newly accessed page must be read into memory. In the current paper, we prove, under the assumptions of the independent reference model, that LRU-<italic>K</italic> is optimal. Specifically we show: given the times of the (up to) <italic>K</italic> most recent references to each disk page, no other algorithm <italic>A</italic> making decisions to keep pages in a memory buffer holding <italic>n</italic> - 1 pages based on this infomation can improve on the expected number of I/Os to access pages over the LRU-<italic>K</italic> algorithm using a memory buffer holding <italic>n</italic> pages. The proof uses the Bayesian formula to relate the space of actual page probabilities of the model to the space of observable page numbers on which the replacement decision is acutally made.", "title": "" }, { "docid": "9157378112fedfd9959683effe7a0a47", "text": "Studies indicate that substance use among Ethiopian adolescents is considerably rising; in particular college and university students are the most at risk of substance use. The aim of the study was to assess substance use and associated factors among university students. A cross-sectional survey was carried out among 1040 Haramaya University students using self-administered structured questionnaire. Multistage sampling technique was used to select students. Descriptive statistics, bivariate, and multivariate analysis were done. About two-thirds (62.4%) of the participants used at least one substance. The most commonly used substance was alcohol (50.2%). Being male had strong association with substance use (AOR (95% CI), 3.11 (2.20, 4.40)). The odds of substance use behaviour is higher among third year students (AOR (95% CI), 1.48 (1.01, 2.16)). Being a follower of Muslim (AOR (95% CI), 0.62 (0.44, 0.87)) and Protestant (AOR (95% CI), 0.25 (0.17, 0.36)) religions was shown to be protective of substance use. Married (AOR (95% CI), 1.92 (1.12, 3.30)) and depressed (AOR (95% CI), 3.30 (2.31, 4.72)) students were more likely to use substances than others. The magnitude of substance use was high. This demands special attention, emergency preventive measures, and targeted information, education and communication activity.", "title": "" }, { "docid": "66c49b0dbdbdf29ace0f60839b867e43", "text": "The job shop scheduling problem with the makespan criterion is a certain NP-hard case from OR theory having excellent practical applications. This problem, having been examined for years, is also regarded as an indicator of the quality of advanced scheduling algorithms. 
In this paper we provide a new approximate algorithm that is based on the big valley phenomenon, and uses some elements of so-called path relinking technique as well as new theoretical properties of neighbourhoods. The proposed algorithm owns, unprecedented up to now, accuracy, obtainable in a quick time on a PC, which has been confirmed after wide computer tests.", "title": "" }, { "docid": "cb2e602af2467b3d8ad7abdd98e6ddfd", "text": "The ephemeral content popularity seen with many content delivery applications can make indiscriminate on-demand caching in edge networks highly inefficient, since many of the content items that are added to the cache will not be requested again from that network. In this paper, we address the problem of designing and evaluating more selective edge-network caching policies. The need for such policies is demonstrated through an analysis of a dataset recording YouTube video requests from users on an edge network over a 20-month period. We then develop a novel workload modelling approach for such applications and apply it to study the performance of alternative edge caching policies, including indiscriminate caching and <italic>cache on <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math> <alternatives><inline-graphic xlink:href=\"carlsson-ieq1-2614805.gif\"/></alternatives></inline-formula></italic>th <italic>request</italic> for different <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math><alternatives> <inline-graphic xlink:href=\"carlsson-ieq2-2614805.gif\"/></alternatives></inline-formula>. The latter policies are found able to greatly reduce the fraction of the requested items that are inserted into the cache, at the cost of only modest increases in cache miss rate. Finally, we quantify and explore the potential room for improvement from use of other possible predictors of further requests. We find that although room for substantial improvement exists when comparing performance to that of a perfect “oracle” policy, such improvements are unlikely to be achievable in practice.", "title": "" }, { "docid": "a6834bf39e84e4aa9964a7b01e79095f", "text": "As in many neural network architectures, the use of Batch Normalization (BN) has become a common practice for Generative Adversarial Networks (GAN). In this paper, we propose using Euclidean reconstruction error on a test set for evaluating the quality of GANs. Under this measure, together with a careful visual analysis of generated samples, we found that while being able to speed training during early stages, BN may have negative effects on the quality of the trained model and the stability of the training process. Furthermore, Weight Normalization, a more recently proposed technique, is found to improve the reconstruction, training speed and especially the stability of GANs, and thus should be used in place of BN in GAN training.", "title": "" }, { "docid": "e672d12d5e0163fae74639ca0384a131", "text": "The greater sophistication and complexity of machines increases the necessity to equip them with human friendly interfaces. As we know, voice is the main support for human-human communication, so it is desirable to interact with machines, namely robots, using voice. In this paper we present the recent evolution of the Natural Language Understanding capabilities of Carl, our mobile intelligent robot capable of interacting with humans using spoken natural language. The new design is based on a hybrid approach, combining a robust parser with Memory Based Learning. 
This hybrid architecture is capable of performing deep analysis if the sentence is (almost) completely accepted by the grammar, and capable of performing a shallow analysis if the sentence has severe errors.", "title": "" }, { "docid": "3c118c4f2b418f801faee08050e3a165", "text": "Unsupervised learning from visual data is one of the most difficult challenges in computer vision. It is essential for understanding how visual recognition works. Learning from unsupervised input has an immense practical value, as huge quantities of unlabeled videos can be collected at low cost. Here we address the task of unsupervised learning to detect and segment foreground objects in single images. We achieve our goal by training a student pathway, consisting of a deep neural network that learns to predict, from a single input image, the output of a teacher pathway that performs unsupervised object discovery in video. Our approach is different from the published methods that perform unsupervised discovery in videos or in collections of images at test time. We move the unsupervised discovery phase during the training stage, while at test time we apply the standard feed-forward processing along the student pathway. This has a dual benefit: firstly, it allows, in principle, unlimited generalization possibilities during training, while remaining fast at testing. Secondly, the student not only becomes able to detect in single images significantly better than its unsupervised video discovery teacher, but it also achieves state of the art results on two current benchmarks, YouTube Objects and Object Discovery datasets. At test time, our system is two orders of magnitude faster than other previous methods.", "title": "" }, { "docid": "899349ba5a7adb31f5c7d24db6850a82", "text": "Sampling is a core process for a variety of graphics applications. Among existing sampling methods, blue noise sampling remains popular thanks to its spatial uniformity and absence of aliasing artifacts. However, research so far has been mainly focused on blue noise sampling with a single class of samples. This could be insufficient for common natural as well as man-made phenomena requiring multiple classes of samples, such as object placement, imaging sensors, and stippling patterns.\n We extend blue noise sampling to multiple classes where each individual class as well as their unions exhibit blue noise characteristics. We propose two flavors of algorithms to generate such multi-class blue noise samples, one extended from traditional Poisson hard disk sampling for explicit control of sample spacing, and another based on our soft disk sampling for explicit control of sample count. Our algorithms support uniform and adaptive sampling, and are applicable to both discrete and continuous sample space in arbitrary dimensions. We study characteristics of samples generated by our methods, and demonstrate applications in object placement, sensor layout, and color stippling.", "title": "" }, { "docid": "31a3750823b0c8dc4302fae37c81c022", "text": "Automatic Number Plate Recognition (ANPR) is a mass surveillance system that captures the image of vehicles and recognizes their license number. ANPR can be assisted in the detection of stolen vehicles. The detection of stolen vehicles can be done in an efficient manner by using the ANPR systems located in the highways. This paper presents a recognition method in which the vehicle plate image is obtained by the digital cameras and the image is processed to get the number plate information. 
A rear image of a vehicle is captured and processed using various algorithms. In this context, the number plate area is localized using a novel „feature-based number plate localization‟ method which consists of many algorithms. But our study mainly focusing on the two fast algorithms i.e., Edge Finding Method and Window Filtering Method for the better development of the number plate detection system", "title": "" }, { "docid": "025932fa63b24d65f3b61e07864342b7", "text": "The realization of the Internet of Things (IoT) paradigm relies on the implementation of systems of cooperative intelligent objects with key interoperability capabilities. One of these interoperability features concerns the cooperation among nodes towards a collaborative deployment of applications taking into account the available resources, such as electrical energy, memory, processing, and object capability to perform a given task, which are", "title": "" }, { "docid": "78744205cf17be3ee5a61d12e6a44180", "text": "Modeling of photovoltaic (PV) systems is essential for the designers of solar generation plants to do a yield analysis that accurately predicts the expected power output under changing environmental conditions. This paper presents a comparative analysis of PV module modeling methods based on the single-diode model with series and shunt resistances. Parameter estimation techniques within a modeling method are used to estimate the five unknown parameters in the single diode model. Two sets of estimated parameters were used to plot the I-V characteristics of two PV modules, i.e., SQ80 and KC200GT, for the different sets of modeling equations, which are classified into models 1 to 5 in this study. Each model is based on the different combinations of diode saturation current and photogenerated current plotted under varying irradiance and temperature. Modeling was done using MATLAB/Simulink software, and the results from each model were first verified for correctness against the results produced by their respective authors. Then, a comparison was made among the different models (models 1 to 5) with respect to experimentally measured and datasheet I-V curves. The resultant plots were used to draw conclusions on which combination of parameter estimation technique and modeling method best emulates the manufacturer specified characteristics.", "title": "" }, { "docid": "32097bd3faa683f451ae982554f8ef5b", "text": "According to the growth of the Internet technology, there is a need to develop strategies in order to maintain security of system. One of the most effective techniques is Intrusion Detection System (IDS). This system is created to make a complete security in a computerized system, in order to pass the Intrusion system through the firewall, antivirus and other security devices detect and deal with it. The Intrusion detection techniques are divided into two groups which includes supervised learning and unsupervised learning. Clustering which is commonly used to detect possible attacks is one of the branches of unsupervised learning. Fuzzy sets play an important role to reduce spurious alarms and Intrusion detection, which have uncertain quality.This paper investigates k-means fuzzy and k-means algorithm in order to recognize Intrusion detection in system which both of the algorithms use clustering method.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. 
Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "20fb7c98ad124e6d7a84fa63e2db30d8", "text": "Purpose – The purpose of this paper is to study the e-government service quality and risk perceptions of personal income taxpayers on e-government service value. Design/methodology/approach – The study uses qualitative in-depth interview and content analysis to explore the determinants of e-government service quality and risk dimensions of e-government service value. Findings – The findings suggest that perceived value of e-government service is e-government service quality, which consists of service design, web site design, technical support, and customer support quality. On the other hand, the three perceived risk concerns are performance, privacy, and financial audit risk. Research limitations/implications – The study interviews the small samples of income taxpayers to develop the determinants of e-government service value, future studies should utilize a quantitative study to strengthen the results. Future researchers could also expand the results to other groups of taxpayers (e.g. corporate tax) to explore and compare factors that contribute to e-government service value. Practical implications – The results can assist e-government service design not only to increase electronic service quality but also to reduce risk facets in order to enhance e-government service value and enlarge acceptance from income taxpayers. E-government service providers can use the research model to detect electronic service weaknesses and risks so that the appropriate resources can be allocated to improve the system more effectively. 
Originality/value – This study outlines e-government service value in terms of e-government service quality and risk perspectives or the E-GOVSQUAL-RISK model which contributes to the different knowledge on e-government service.", "title": "" }, { "docid": "693c5cb15aea4398c95fd9d67f6615e9", "text": "With the renaissance of neural network in recent years, relation classification has again become a research hotspot in natural language processing, and leveraging parse trees is a common and effective method of tackling this problem. In this work, we offer a new perspective on utilizing syntactic information of dependency parse tree and present a position encoding convolutional neural network (PECNN) based on dependency parse tree for relation classification. First, treebased position features are proposed to encode the relative positions of words in dependency trees and help enhance the word representations. Then, based on a redefinition of “context”, we design two kinds of tree-based convolution kernels for capturing the semantic and structural information provided by dependency trees. Finally, the features extracted by convolution module are fed to a classifier for labelling the semantic relations. Experiments on the benchmark dataset show that PECNN outperforms state-of-the-art approaches. We also compare the effect of different position features and visualize the influence of treebased position feature by tracing back the convolution process.", "title": "" }, { "docid": "3007cf623eff81d46a496e16a0d2d5bc", "text": "Grounded language learning bridges words like ‘red’ and ‘square’ with robot perception. The vast majority of existing work in this space limits robot perception to vision. In this paper, we build perceptual models that use haptic, auditory, and proprioceptive data acquired through robot exploratory behaviors to go beyond vision. Our system learns to ground natural language words describing objects using supervision from an interactive humanrobot “I Spy” game. In this game, the human and robot take turns describing one object among several, then trying to guess which object the other has described. All supervision labels were gathered from human participants physically present to play this game with a robot. We demonstrate that our multi-modal system for grounding natural language outperforms a traditional, vision-only grounding framework by comparing the two on the “I Spy” task. We also provide a qualitative analysis of the groundings learned in the game, visualizing what words are understood better with multi-modal sensory information as well as identifying learned word meanings that correlate with physical object properties (e.g. ‘small’ negatively correlates with object weight).", "title": "" }, { "docid": "ea7acc555f2cb2de898a3706c31006db", "text": "Securing the supply chain of integrated circuits is of utmost importance to computer security. In addition to counterfeit microelectronics, the theft or malicious modification of designs in the foundry can result in catastrophic damage to critical systems and large projects. In this letter, we describe a 3-D architecture that splits a design into two separate tiers: one tier that contains critical security functions is manufactured in a trusted foundry; another tier is manufactured in an unsecured foundry. 
We argue that a split manufacturing approach to hardware trust based on 3-D integration is viable and provides several advantages over other approaches.", "title": "" }, { "docid": "db6f363420ca19469a2b850147dfcdfb", "text": "This paper presents our approach for producing graphical user interfaces (GUIs) for functionally rich business information system (BIS) prototypes, upon a mobile platform. Those prototypes are specified with annotated UML class diagrams. Navigation in the generated GUIs is allowed through the semantic links that match the associations and cardinalities among the conceptual domain entities, as expressed in the model. We start by reviewing the Android scaffolding for producing flexible GUIs for mobile devices. The latter can present rather different displays, in terms of size, orientation and resolution. Then we show how our model-based generative technique allows producing prototypes that match both the Android GUIs requirements, while implementing our model-driven approach for user navigation.", "title": "" }, { "docid": "40a0e4f114b066ef7c090517a6befad5", "text": "Utility asset managers and engineers are concerned about the life and reliability of their power transformers which depends on the continued life of the paper insulation. The ageing rate of the paper is affected by water, oxygen and acids. Traditionally, the ageing rate of paper has been studied in sealed vessels however this approach does not allow the possibility to assess the affect of oxygen on paper with different water content. The ageing rate of paper has been studied for dry paper in air (excess oxygen). In these experiments we studied the ageing rate of Kraft and thermally upgraded Kraft paper in medium and high oxygen with varying water content. Furthermore, the oxygen content of the oil in sealed vessels is low which represents only sealed transformers. The ageing rate of the paper has not been determined for free breathing transformers with medium or high oxygen content and for different wetness of paper. In these ageing experiments the water and oxygen content was controlled using a special test rig to compare the ageing rate to previous work and to determine the ageing effect of paper by combining temperature, water content of paper and oxygen content of the oil. We found that the ageing rate of paper with the same water content increased with oxygen content in the oil. Hence, new life curves were developed based on the water content of the paper and the oxygen content of the oil.", "title": "" }, { "docid": "8cbe0ff905a58e575f2d84e4e663a857", "text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. This survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specifically, we list and review the different protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-Things (IoT). 
We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.", "title": "" } ]
scidocsrr
e4e5cd44c838d50c69a0af61f354c541
Detection of Browser Fingerprinting by Static JavaScript Code Classification
[ { "docid": "6f045c9f48ce87f6b425ac6c5f5d5e9d", "text": "In the modern web, the browser has emerged as the vehicle of choice, which users are to trust, customize, and use, to access a wealth of information and online services. However, recent studies show that the browser can also be used to invisibly fingerprint the user: a practice that may have serious privacy and security implications.\n In this paper, we report on the design, implementation and deployment of FPDetective, a framework for the detection and analysis of web-based fingerprinters. Instead of relying on information about known fingerprinters or third-party-tracking blacklists, FPDetective focuses on the detection of the fingerprinting itself. By applying our framework with a focus on font detection practices, we were able to conduct a large scale analysis of the million most popular websites of the Internet, and discovered that the adoption of fingerprinting is much higher than previous studies had estimated. Moreover, we analyze two countermeasures that have been proposed to defend against fingerprinting and find weaknesses in them that might be exploited to bypass their protection. Finally, based on our findings, we discuss the current understanding of fingerprinting and how it is related to Personally Identifiable Information, showing that there needs to be a change in the way users, companies and legislators engage with fingerprinting.", "title": "" } ]
[ { "docid": "96b47f766be916548226abac36b8f318", "text": "Deep learning approaches have achieved state-of-the-art performance in cardiac magnetic resonance (CMR) image segmentation. However, most approaches have focused on learning image intensity features for segmentation, whereas the incorporation of anatomical shape priors has received less attention. In this paper, we combine a multi-task deep learning approach with atlas propagation to develop a shape-constrained bi-ventricular segmentation pipeline for short-axis CMR volumetric images. The pipeline first employs a fully convolutional network (FCN) that learns segmentation and landmark localisation tasks simultaneously. The architecture of the proposed FCN uses a 2.5D representation, thus combining the computational advantage of 2D FCNs networks and the capability of addressing 3D spatial consistency without compromising segmentation accuracy. Moreover, the refinement step is designed to explicitly enforce a shape constraint and improve segmentation quality. This step is effective for overcoming image artefacts (e.g. due to different breath-hold positions and large slice thickness), which preclude the creation of anatomically meaningful 3D cardiac shapes. The proposed pipeline is fully automated, due to network’s ability to infer landmarks, which are then used downstream in the pipeline to initialise atlas propagation. We validate the pipeline on 1831 healthy subjects and 649 subjects with pulmonary hypertension. Extensive numerical experiments on the two datasets demonstrate that our proposed method is robust and capable of producing accurate, high-resolution and anatomically smooth bi-ventricular 3D models, despite the artefacts in input CMR volumes.", "title": "" }, { "docid": "7dde24346f2df846b9dbbe45cd9a99d6", "text": "The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys.An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. A pretesting was conducted employing cognitive debriefing methods. In sequence, the expert committee evaluated all the documents and reached a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons' Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test-retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. 
Additionally, a cut-off value of PHI to identify a \"happy individual\" was defined using receiver-operating characteristic (ROC) curve methodology. Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test-retest (intraclass correlation coefficient = 0.814) were both considered adequate. Most of the validity hypotheses formulated a priori (convergent and known-group) were further confirmed. The cut-off value of higher than 7 in remembered PHI was identified (AUC = 0.780, sensitivity = 69.2%, specificity = 78.2%) as the best one to identify a happy individual. We concluded that the Universal Portuguese version of the PHI is valid and reliable for use in the Brazilian population using online surveys.", "title": "" }, { "docid": "d763cefd5d584405e1a6c8e32c371c0c", "text": "Abstract: Administrators of educational institutions worldwide, and in our country in particular, are concerned about the regularity of student attendance. A student's overall academic performance is affected by the student's presence in his institute. Mainly, there are two conventional methods of attendance taking: calling student names or taking student signatures on paper. Both are time consuming and inefficient. Hence, there is a requirement for a computer-based student attendance management system which will assist the faculty in maintaining attendance records. The paper reviews various computerized attendance management systems. In this paper, the basic problem of student attendance management, which is traditionally handled manually by faculty, is defined. One alternative for making student attendance systems automatic is provided by Computer Vision. In this paper we review the various computerized systems which have been developed using different techniques. Based on this review, a new approach for student attendance recording and management is proposed for use in various colleges or academic institutes.", "title": "" }, { "docid": "7afe5c6affbaf30b4af03f87a018a5b3", "text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in detail, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment while taking care of negated words. Experiments show that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.", "title": "" }, { "docid": "cbc0e3dff1d86d88c416b1119fd3da82", "text": "One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a-priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how all the distinct components can be integrated to enable smooth robot operation.
We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.", "title": "" }, { "docid": "2bb31e4565edc858453af69296a67ee6", "text": "OBJECTIVES\nNetworks of franchised health establishments, providing a standardized set of services, are being implemented in developing countries. This article examines associations between franchise membership and family planning and reproductive health outcomes for both the member provider and the client.\n\n\nMETHODS\nRegression models are fitted examining associations between franchise membership and family planning and reproductive health outcomes at the service provider and client levels in three settings.\n\n\nRESULTS\nFranchising has a positive association with both general and family planning client volumes, and the number of family planning brands available. Similar associations with franchise membership are not found for reproductive health service outcomes. In some settings, client satisfaction is higher at franchised than other types of health establishments, although the association between franchise membership and client outcomes varies across the settings.\n\n\nCONCLUSIONS\nFranchise membership has apparent benefits for both the provider and the client, providing an opportunity to expand access to reproductive health services, although greater attention is needed to shift the focus from family planning to a broader reproductive health context.", "title": "" }, { "docid": "d9f7d78b6e1802a17225db13edd033f6", "text": "The edit distance between two character strings can be defined as the minimum cost of a sequence of editing operations which transforms one string into the other. The operations we admit are deleting, inserting and replacing one symbol at a time, with possibly different costs for each of these operations. The problem of finding the longest common subsequence of two strings is a special case of the problem of computing edit distances. We describe an algorithm for computing the edit distance between two strings of length n and m, n > m, which requires O(n * max( 1, m/log n)) steps whenever the costs of edit operations are integral multiples of a single positive real number and the alphabet for the strings is finite. These conditions are necessary for the algorithm to achieve the time bound.", "title": "" }, { "docid": "0360bfbb47af9e661114ea8d367a166f", "text": "Critical Discourse Analysis (CDA) is discourse analytical research that primarily studies the way social-power abuse and inequality are enacted, reproduced, legitimated, and resisted by text and talk in the social and political context. With such dissident research, critical discourse analysts take an explicit position and thus want to understand, expose, and ultimately challenge social inequality. This is also why CDA may be characterized as a social movement of politically committed discourse analysts. One widespread misunderstanding of CDA is that it is a special method of doing discourse analysis. There is no such method: in CDA all methods of the cross-discipline of discourse studies, as well as other relevant methods in the humanities and social sciences, may be used (Wodak and Meyer 2008; Titscher et al. 2000). 
To avoid this misunderstanding and to emphasize that many methods and approaches may be used in the critical study of text and talk, we now prefer the more general term critical discourse studies (CDS) for the field of research (van Dijk 2008b). However, since most studies continue to use the well-known abbreviation CDA, this chapter will also continue to use it. As an analytical practice, CDA is not one direction of research among many others in the study of discourse. Rather, it is a critical perspective that may be found in all areas of discourse studies, such as discourse grammar, Conversation Analysis, discourse pragmatics, rhetoric, stylistics, narrative analysis, argumentation analysis, multimodal discourse analysis and social semiotics, sociolinguistics, and ethnography of communication or the psychology of discourse-processing, among others. In other words, CDA is discourse study with an attitude. Some of the tenets of CDA could already be found in the critical theory of the Frankfurt School before World War II (Agger 1992b; Drake 2009; Rasmussen and Swindal 2004). Its current focus on language and discourse was initiated with the", "title": "" }, { "docid": "9a43476b4038e554c28e09bae9140e24", "text": "The success of text-based retrieval motivates us to investigate analogous techniques which can support the querying and browsing of image data. However, images differ significantly from text both syntactically and semantically in their mode of representing and expressing information. Thus, the generalization of information retrieval from the text domain to the image domain is non-trivial. This paper presents a framework for information retrieval in the image domain which supports content-based querying and browsing of images. A critical first step to establishing such a framework is to construct a codebook of \"keywords\" for images which is analogous to the dictionary for text documents. We refer to such \"keywords\" in the image domain as \"keyblocks.\" In this paper, we first present various approaches to generating a codebook containing keyblocks at different resolutions. Then we present a keyblock-based approach to content-based image retrieval. In this approach, each image is encoded as a set of one-dimensional index codes linked to the keyblocks in the codebook, analogous to considering a text document as a linear list of keywords. Generalizing upon text-based information retrieval methods, we then offer various techniques for image-based information retrieval. By comparing the performance of this approach with conventional techniques using color and texture features, we demonstrate the effectiveness of the keyblock-based approach to content-based image retrieval.", "title": "" }, { "docid": "1461157186183f11d7270d89eecd926a", "text": "This review analyzes trends and commonalities among prominent theories of media effects. On the basis of exemplary meta-analyses of media effects and bibliometric studies of well-cited theories, we identify and discuss five features of media effects theories as well as their empirical support. Each of these features specifies the conditions under which media may produce effects on certain types of individuals. Our review ends with a discussion of media effects in newer media environments. This includes theories of computer-mediated communication, the development of which appears to share a similar pattern of reformulation from unidirectional, receiver-oriented views, to theories that recognize the transactional nature of communication. 
We conclude by outlining challenges and promising avenues for future research.", "title": "" }, { "docid": "2917b7b1453f9e6386d8f47129b605fb", "text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "title": "" }, { "docid": "ca67fcc762caa19ce3911c266c458098", "text": "A novel microstrip lowpass filter is proposed to achieve an ultra wide stopband with 12th harmonic suppression and extremely sharp skirt characteristics. The transition band is from 1.26 to 1.37 GHz with -3 and -20 dB, respectively. The operating mechanism of the filter is investigated based on proposed equivalent-circuit model, and the role of each section in creating null points is theoretically discussed. An overall good agreement between measured and simulated results is observed.", "title": "" }, { "docid": "70a07b1aedcb26f7f03ffc636b1d84a8", "text": "This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this resource-sharing model.\n We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph datastructure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data-and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy gets better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.", "title": "" }, { "docid": "1968573cf98307276bf0f10037aa3623", "text": "In many imaging applications, the continuous phase information of the measured signal is wrapped to a single period of 2π, resulting in phase ambiguity. 
In this paper we consider the two-dimensional phase unwrapping problem and propose a Maximum a Posteriori (MAP) framework for estimating the true phase values based on the wrapped phase data. In particular, assuming a joint Gaussian prior on the original phase image, we show that the MAP formulation leads to a binary quadratic minimization problem. The latter can be efficiently solved by semidefinite relaxation (SDR). We compare the performances of our proposed method with the existing L1/L2-norm minimization approaches. The numerical results demonstrate that the SDR approach significantly outperforms the existing phase unwrapping methods.", "title": "" }, { "docid": "e3270182796d7244ef19865ebff581ed", "text": "Hyperscale datacenter providers have struggled to balance the growing need for specialized hardware (efficiency) with the economic benefits of homogeneity (manageability). In this paper we propose a new cloud architecture that uses reconfigurable logic to accelerate both network plane functions and applications. This Configurable Cloud architecture places a layer of reconfigurable logic (FPGAs) between the network switches and the servers, enabling network flows to be programmably transformed at line rate, enabling acceleration of local applications running on the server, and enabling the FPGAs to communicate directly, at datacenter scale, to harvest remote FPGAs unused by their local servers. We deployed this design over a production server bed, and show how it can be used for both service acceleration (Web search ranking) and network acceleration (encryption of data in transit at high-speeds). This architecture is much more scalable than prior work which used secondary rack-scale networks for inter-FPGA communication. By coupling to the network plane, direct FPGA-to-FPGA messages can be achieved at comparable latency to previous work, without the secondary network. Additionally, the scale of direct inter-FPGA messaging is much larger. The average round-trip latencies observed in our measurements among 24, 1000, and 250,000 machines are under 3, 9, and 20 microseconds, respectively. The Configurable Cloud architecture has been deployed at hyperscale in Microsoft's production datacenters worldwide.", "title": "" }, { "docid": "fe383fbca6d67d968807fb3b23489ad1", "text": "In this project, we attempt to apply machine-learning algorithms to predict Bitcoin price. For the first phase of our investigation, we aimed to understand and better identify daily trends in the Bitcoin market while gaining insight into optimal features surrounding Bitcoin price. Our data set consists of over 25 features relating to the Bitcoin price and payment network over the course of five years, recorded daily. Using this information we were able to predict the sign of the daily price change with an accuracy of 98.7%. For the second phase of our investigation, we focused on the Bitcoin price data alone and leveraged data at 10-minute and 10-second interval timepoints, as we saw an opportunity to evaluate price predictions at varying levels of granularity and noisiness. By predicting the sign of the future change in price, we are modeling the price prediction problem as a binomial classification task, experimenting with a custom algorithm that leverages both random forests and generalized linear models. 
These results had 50-55% accuracy in predicting the sign of future price change using 10-minute time intervals.", "title": "" }, { "docid": "59d57e31357eb72464607e89ba4ba265", "text": "Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for the demanding scientific computing workloads. In this work we present an evaluation of the usefulness of the current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks, kernels, and e-Science workloads. We also compare using long-term traces the performance characteristics and cost models of clouds with those of other platforms accessible to scientists. While clouds are still changing, our results indicate that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community.", "title": "" }, { "docid": "9b69254f90c28e0256fdfbefc608c034", "text": "Multiple-station shared-use vehicle systems allow users to travel between different activity centers and are well suited for resort communities, recreational areas, as well as university and corporate campuses. In this type of shared-use vehicle system, trips are more likely to be one-way each time, differing from other shared-use vehicle system models such as neighborhood carsharing and station cars where round-trips are more prevalent. Although convenient to users, a multiple-station system can suffer from a vehicle distribution problem. As vehicles are used throughout the day, they may become disproportionally distributed among the stations. As a result, it is necessary on occasion to relocate vehicles from one station to another. Relocations can be performed by system staff, which can be cumbersome and costly. In order to alleviate the distribution problem and reduce the number of relocations, we introduce two user-based relocation mechanisms called trip joining (or ridesharing) and trip splitting. When the system realizes that it is becoming imbalanced, it urges users that have more than one passenger to take separate vehicles when more vehicles are needed at the destination station (trip splitting). Conversely, if two users are at the origin station at the same time traveling to the same destination, the system can urge them to rideshare (trip joining). We have implemented this concept both on a real-world university campus shared vehicle system and in a high-fidelity computer simulation model. The model results show that there can be as much as a 42% reduction in the number of relocations using these techniques.", "title": "" }, { "docid": "891efd54485c7cf73edd690e0d9b3cfa", "text": "Quantitative-diffusion-tensor MRI consists of deriving and displaying parameters that resemble histological or physiological stains, i.e., that characterize intrinsic features of tissue microstructure and microdynamics. Specifically, these parameters are objective, and insensitive to the choice of laboratory coordinate system.
Here, these two properties are used to derive intravoxel measures of diffusion isotropy and the degree of diffusion anisotropy, as well as intervoxel measures of structural similarity, and fiber-tract organization from the effective diffusion tensor, D, which is estimated in each voxel. First, D is decomposed into its isotropic and anisotropic parts, [D] I and D - [D] I, respectively (where [D] = Trace(D)/3 is the mean diffusivity, and I is the identity tensor). Then, the tensor (dot) product operator is used to generate a family of new rotationally and translationally invariant quantities. Finally, maps of these quantitative parameters are produced from high-resolution diffusion tensor images (in which D is estimated in each voxel from a series of 2D-FT spin-echo diffusion-weighted images) in living cat brain. Due to the high inherent sensitivity of these parameters to changes in tissue architecture (i.e., macromolecular, cellular, tissue, and organ structure) and in its physiologic state, their potential applications include monitoring structural changes in development, aging, and disease.", "title": "" }, { "docid": "916f6f0942a08501139f6d4d1750816d", "text": "The development of local anesthesia in dentistry has marked the beginning of a new era in terms of pain control. Lignocaine is the most commonly used local anesthetic (LA) agent even though it has a vasodilative effect and needs to be combined with adrenaline. Centbucridine is a non-ester, non amide group LA and has not been comprehensively studied in the dental setting and the objective was to compare it to Lignocaine. This was a randomized study comparing the onset time, duration, depth and cardiovascular parameters between Centbucridine (0.5%) and Lignocaine (2%). The study was conducted in the dental outpatient department at the Government Dental College in India on patients attending for the extraction of lower molars. A total of 198 patients were included and there were no significant differences between the LAs except those who received Centbucridine reported a significantly longer duration of anesthesia compared to those who received Lignocaine. None of the patients reported any side effects. Centbucridine was well tolerated and its substantial duration of anesthesia could be attributed to its chemical compound. Centbucridine can be used for dental procedures and can confidently be used in patients who cannot tolerate Lignocaine or where adrenaline is contraindicated.", "title": "" } ]
scidocsrr
dcd91d36d1df29ebd521d0eb57b118e5
Symmetric Variational Autoencoder and Connections to Adversarial Learning
[ { "docid": "aee91ee5d4cbf51d9ce1344be4e5448c", "text": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and have received extensive independent study. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of the classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables techniques to be transferred across research lines in a principled way. For example, we apply the importance weighting method from the VAE literature for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show the generality and effectiveness of the transferred techniques.", "title": "" } ]
[ { "docid": "2908fc6673d28a519c26bd97b2045090", "text": "Early sport specialization (ESS) refers to intense year round training in a specific sport with the exclusion of other sports at a young age. This approach to training is heavily debated and there are claims both in support and against ESS. ESS is considered to be more common in the modern day youth athlete and could be a source of overuse injuries and burnout. This case describes a 16 year old elite level baseball pitcher who engaged in high volume, intense training at a young age which lead to several significant throwing related injuries. The case highlights the historical context of ESS, the potential risk and benefits as well as the evidence for its effectiveness. It is important for health care professionals to be informed on the topic of ESS in order to educate athletes, parents, coaches and organizations of the potential risks and benefits.", "title": "" }, { "docid": "a549abeda438ce7ce001854aadb63d81", "text": "Cyberbullying is a phenomenon which negatively affects the individuals, the victims suffer from various mental issues, ranging from depression, loneliness, anxiety to low self-esteem. In parallel with the pervasive use of social media, cyberbullying is becoming more and more prevalent. Traditional mechanisms to fight against cyberbullying include the use of standards and guidelines, human moderators, and blacklists based on the profane words. However, these mechanisms fall short in social media and cannot scale well. Therefore, it is necessary to develop a principled learning framework to automatically detect cyberbullying behaviors. However, it is a challenging task due to short, noisy and unstructured content information and intentional obfuscation of the abusive words or phrases by social media users. Motivated by sociological and psychological findings on bullying behaviors and the correlation with emotions, we propose to leverage sentiment information to detect cyberbullying behaviors in social media by proposing a sentiment informed cyberbullying detection framework. Experimental results on two realworld, publicly available social media datasets show the superiority of the proposed framework. Further studies validate the effectiveness of leveraging sentiment information for cyberbullying detection.", "title": "" }, { "docid": "896af0db70c6f293a505d3454b6ac1d8", "text": "An algorithm is described for solving large-scale instances of the Symmetric Traveling Salesman Problem (STSP) to optimality. The core of the algorithm is a \"polyhedral\" cutting-plane procedure that exploits a subset of the system of linear inequalities defining the convex hull of the incidence vectors of the hamiltonian cycles of a complete graph. The cuts are generated by several identification procedures that have been described in a companion paper. Whenever the cutting-plane procedure does not terminate with an optimal solution the algorithm uses a tree-search strategy that, as opposed to branch-and-bound, keeps on producing cuts after branching. The algorithm has been implemented in FORTRAN. Two different linear programming (LP) packages have been used as the LP solver. The implementation of the algorithm and the interface with one of the LP solvers is described in sufficient detail to permit the replication of our experiments. Computational results are reported with up to 42 STSPs with sizes ranging from 48 to 2,392 nodes. 
Most of the medium-sized test problems are taken from the literature; all others are large-scale real-world problems. All of the instances considered in this study were solved to optimality by the algorithm in \"reasonable\" computation times.", "title": "" }, { "docid": "1c6078d68891b6600727a82841812666", "text": "Network traffic prediction aims at predicting the subsequent network traffic by using the previous network traffic data. This can serve as a proactive approach for network management and planning tasks. The family of recurrent neural network (RNN) approaches is known for time series data modeling which aims to predict the future time series based on the past information with long time lags of unrevealed size. RNN contains different network architectures like simple RNN, long short term memory (LSTM), gated recurrent unit (GRU), identity recurrent unit (IRNN) which is capable to learn the temporal patterns and long range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches towards traffic matrix estimation in large networks, we use various RNN networks. The performance of various RNN networks is evaluated on the real data from GÉANT backbone networks. To identify the optimal network parameters and network structure of RNN, various experiments are done. All experiments are run up to 200 epochs with learning rate in the range [0.01-0.5]. LSTM has performed well in comparison to the other RNN and classical methods. Moreover, the performance of various RNN methods is comparable to LSTM.", "title": "" }, { "docid": "2fbcd34468edf53ee08e0a76a048c275", "text": "Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which has been used for enlarging training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples - normal samples that occur with small probability, e.g., rare normal events. We show that these samples are responsible for false positives in anomaly detection. However, oversampling of infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. Then, we systematically oversample at the 'edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is a unified one: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. 
We validate our method by demonstrating consistent improvements across several real-world datasets.", "title": "" }, { "docid": "ae595e2c1983a06321799d4e7288fc4b", "text": "Automated traceability applies information-retrieval techniques to generate candidate links, sharply reducing the effort of manual approaches to build and maintain a requirements trace matrix as well as providing after-the-fact traceability in legacy documents.The authors describe nine best practices for implementing effective automated traceability.", "title": "" }, { "docid": "e64320b71675f2a059a50fd9479d2056", "text": "Extreme sports (ES) are usually pursued in remote locations with little or no access to medical care with the athlete competing against oneself or the forces of nature. They involve high speed, height, real or perceived danger, a high level of physical exertion, spectacular stunts, and heightened risk element or death.Popularity for such sports has increased exponentially over the past two decades with dedicated TV channels, Internet sites, high-rating competitions, and high-profile sponsors drawing more participants.Recent data suggest that the risk and severity of injury in some ES is unexpectedly high. Medical personnel treating the ES athlete need to be aware there are numerous differences which must be appreciated between the common traditional sports and this newly developing area. These relate to the temperament of the athletes themselves, the particular epidemiology of injury, the initial management following injury, treatment decisions, and rehabilitation.The management of the injured extreme sports athlete is a challenge to surgeons and sports physicians. Appropriate safety gear is essential for protection from severe or fatal injuries as the margins for error in these sports are small.The purpose of this review is to provide an epidemiologic overview of common injuries affecting the extreme athletes through a focus on a few of the most popular and exciting extreme sports.", "title": "" }, { "docid": "995fca88b7813c5cfed1c92522cc8d29", "text": "Diode rectifiers with large dc-bus capacitors, used in the front ends of variable-frequency drives (VFDs) and other ac-to-dc converters, draw discontinuous current from the power system, resulting in current distortion and, hence, voltage distortion. Typically, the power system can handle current distortion without showing signs of voltage distortion. However, when the majority of the load on a distribution feeder is made up of VFDs, current distortion becomes an important issue since it can cause voltage distortion. Multipulse techniques to reduce input current harmonics are popular because they do not interfere with the existing power system either from higher conducted electromagnetic interference, when active techniques are used, or from possible resonance, when capacitor-based filters are employed. In this paper, a new 18-pulse topology is proposed that has two six-pulse rectifiers powered via a phase-shifting isolation transformer, while the third six-pulse rectifier is fed directly from the ac source via a matching inductor. This idea relies on harmonic current cancellation strategy rather than flux cancellation method and results in lower overall harmonics. It is also seen to be smaller in size and weight and lower in cost compared to an isolation transformer. 
Experimental results are given to validate the concept.", "title": "" }, { "docid": "3861e3655de5593526184df4b17f1493", "text": "A new approach to Image Quality Assessment (IQA) is presented. The idea is based on the fact that two images are similar if their structural relationship within their blocks is preserved. To this end, a transition matrix is defined which exploits structural transitions between corresponding blocks of two images. The matrix contains valuable information about differences of two images, which should be transformed to a quality index. Eigen-value analysis over the transition matrix leads to a new distance measure called Eigen-gap. According to simulation results, the Eigen-gap is not only highly correlated to subjective scores but also, its performance is as good as the SSIM, a trustworthy index.", "title": "" }, { "docid": "11bc0abc0aec11c1cf189eb23fd1be9d", "text": "Web spamming describes behavior that attempts to deceive search engine’s ranking algorithms. TrustRank is a recent algorithm that can combat web spam by propagating trust among web pages. However, TrustRank propagates trust among web pages based on the number of outgoing links, which is also how PageRank propagates authority scores among Web pages. This type of propagation may be suited for propagating authority, but it is not optimal for calculating trust scores for demoting spam sites. In this paper, we propose several alternative methods to propagate trust on the web. With experiments on a real web data set, we show that these methods can greatly decrease the number of web spam sites within the top portion of the trust ranking. In addition, we investigate the possibility of propagating distrust among web pages. Experiments show that combining trust and distrust values can demote more spam sites than the sole use of trust values.", "title": "" }, { "docid": "f073981b6c7893dd904fb04707f5ebeb", "text": "Plant growth-promoting rhizobacteria (PGPR) are the rhizosphere bacteria that can enhance plant growth by a wide variety of mechanisms like phosphate solubilization, siderophore production, biological nitrogen fixation, rhizosphere engineering, production of 1-Aminocyclopropane-1-carboxylate deaminase (ACC), quorum sensing (QS) signal interference and inhibition of biofilm formation, phytohormone production, exhibiting antifungal activity, production of volatile organic compounds (VOCs), induction of systemic resistance, promoting beneficial plant-microbe symbioses, interference with pathogen toxin production etc. The potentiality of PGPR in agriculture is steadily increased as it offers an attractive way to replace the use of chemical fertilizers, pesticides and other supplements. Growth promoting substances are likely to be produced in large quantities by these rhizosphere microorganisms that influence indirectly on the overall morphology of the plants. Recent progress in our understanding on the diversity of PGPR in the rhizosphere along with their colonization ability and mechanism of action should facilitate their application as a reliable component in the management of sustainable agricultural system. 
The progress to date in using the rhizosphere bacteria in a variety of applications related to agricultural improvement along with their mechanism of action with special reference to plant growth-promoting traits are summarized and discussed in this review.", "title": "" }, { "docid": "49cba878e4d36e08abd4acdfd48123a7", "text": "Advances in data storage and image acquisition technologie s have enabled the creation of large image datasets. In this scenario, it is necess ary to develop appropriate information systems to efficiently manage these collect ions. The commonest approaches use the so-called Content-Based Image Retrieval (CBIR) systems . Basically, these systems try to retrieve images similar to a user-define d sp cification or pattern (e.g., shape sketch, image example). Their goal is to suppor t image retrieval based on contentproperties (e.g., shape, color, texture), usually encoded into feature vectors . One of the main advantages of the CBIR approach is the possibi lity of an automatic retrieval process, instead of the traditional keyword-bas ed approach, which usually requires very laborious and time-consuming previous annot ation of database images. The CBIR technology has been used in several applications su ch as fingerprint identification, biodiversity information systems, digital librar ies, crime prevention, medicine, historical research, among others. This paper aims to introduce the problems and challenges con cerned with the creation of CBIR systems, to describe the existing solutions and appl ications, and to present the state of the art of the existing research in this area.", "title": "" }, { "docid": "98269ed4d72abecb6112c35e831fc727", "text": "The goal of this article is to place the role that social media plays in collective action within a more general theoretical structure, using the events of the Arab Spring as a case study. The article presents two broad theoretical principles. The first is that one cannot understand the role of social media in collective action without first taking into account the political environment in which they operate. The second principle states that a significant increase in the use of the new media is much more likely to follow a significant amount of protest activity than to precede it. The study examines these two principles using political, media, and protest data from twenty Arab countries and the Palestinian Authority. The findings provide strong support for the validity of the claims.", "title": "" }, { "docid": "b93825ddae40f61a27435bb255a3cc2e", "text": "Visual programming arguably provides greater benefit in explicit parallel programming, particularly coarse grain MIMD programming, than in sequential programming. Explicitly parallel programs are multi-dimensional objects; the natural representations of a parallel program are annotated directed graphs: data flow graphs, control flow graphs, etc. where the nodes of the graphs are sequential computations. The execution of parallel programs is a directed graph of instances of sequential computations. A visually based (directed graph) representation of parallel programs is thus more natural than a pure text string language where multi-dimensional structures must be implicitly defined. The naturalness of the annotated directed graph representation of parallel programs enables methods for programming and debugging which are qualitatively different and arguably superior to the conventional practice based on pure text string languages. 
Annotation of the graphs is a critical element of a practical visual programming system; text is still the best way to represent many aspects of programs. This paper presents a model of parallel programming and a model of execution for parallel programs which are the conceptual framework for a complete visual programming environment including capture of parallel structure, compilation and behavior analysis (performance and debugging). Two visually-oriented parallel programming systems, CODE 2.0 and HeNCE, each based on a variant of the model of programming, will be used to illustrate the concepts. The benefits of visually-oriented realizations of these models for program structure capture, software component reuse, performance analysis and debugging will be explored and hopefully demonstrated by examples in these representations. It is only by actually implementing and using visual parallel programming languages that we have been able to fully evaluate their merits.", "title": "" }, { "docid": "fdaf0a7bc6dfa30d0c3ed3a96950d8c8", "text": "In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuro-module by two more recurrent neurons leads to versatile highand band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.", "title": "" }, { "docid": "1c12c32599354acabc05adedc1867905", "text": "Curriculum learning is a machine learning technique adapted from the way humans acquire knowledge and skills, initially mastering simple tasks and progressing to more complex tasks. The work explores curriculum training by creating multiple levels of dataset with increasing complexity on which the trainings are performed. The experiments demonstrated that there is an average of 12% improvement test loss when compared to a non-curriculum approach. The experiment also demonstrates the advantage of creating synthetic dataset and how it aids in the overall improvement of accuracy. An improvement of 26% is attained on the test error loss when curriculum trained model was compared to training on a limited real world dataset. The work also goes onto propose a novel learning approach, the Self Paced Learning approach with Error-Diversity (SPL-ED) An overall reduction of 32% in the test loss is observed when compared to the non-curriculum training limited to real-world dataset.", "title": "" }, { "docid": "6edf0db1e517c8786f004fd79f4ef973", "text": "The alarming increase of resistance against multiple currently available antibiotics is leading to a rapid lose of treatment options against infectious diseases. Since the antibiotic resistance is partially due to a misuse or abuse of the antibiotics, this situation can be reverted when improving their use. One strategy is the optimization of the antimicrobial dosing regimens. In fact, inappropriate drug choice and suboptimal dosing are two major factors that should be considered because they lead to the emergence of drug resistance and consequently, poorer clinical outcomes. 
Pharmacokinetic/pharmacodynamic (PK/PD) analysis in combination with Monte Carlo simulation allows to optimize dosing regimens of the antibiotic agents in order to conserve their therapeutic value. Therefore, the aim of this review is to explain the basis of the PK/PD analysis and associated techniques, and provide a brief revision of the applications of PK/PD analysis from a therapeutic point-of-view. The establishment and reevaluation of clinical breakpoints is the sticking point in antibiotic therapy as the clinical use of the antibiotics depends on them. Two methodologies are described to establish the PK/PD breakpoints, which are a big part of the clinical breakpoint setting machine. Furthermore, the main subpopulations of patients with altered characteristics that can condition the PK/PD behavior (such as critically ill, elderly, pediatric or obese patients) and therefore, the outcome of the antibiotic therapy, are reviewed. Finally, some recommendations are provided from a PK/PD point of view to enhance the efficacy of prophylaxis protocols used in surgery.", "title": "" }, { "docid": "ac4edd65e7d81beb66b2f9d765b4ad30", "text": "This paper is concerned with actively predicting search intent from user browsing behavior data. In recent years, great attention has been paid to predicting user search intent. However, the prediction was mostly passive because it was performed only after users submitted their queries to search engines. It is not considered why users issued these queries, and what triggered their information needs. According to our study, many information needs of users were actually triggered by what they have browsed. That is, after reading a page, if a user found something interesting or unclear, he/she might have the intent to obtain further information and accordingly formulate a search query. Actively predicting such search intent can benefit both search engines and their users. In this paper, we propose a series of technologies to fulfill this task. First, we extract all the queries that users issued after reading a given page from user browsing behavior data. Second, we learn a model to effectively rank these queries according to their likelihoods of being triggered by the page. Third, since search intents can be quite diverse even if triggered by the same page, we propose an optimization algorithm to diversify the ranked list of queries obtained in the second step, and then suggest the list to users. We have tested our approach on large-scale user browsing behavior data obtained from a commercial search engine. The experimental results have shown that our approach can predict meaningful queries for a given page, and the search performance for these queries can be significantly improved by using the triggering page as contextual information.", "title": "" }, { "docid": "88968e939e9586666c83c13d4f640717", "text": "The economics of two-sided markets or multi-sided platforms has emerged over the past decade as one of the most active areas of research in economics and strategy. The literature has constantly struggled, however, with a lack of agreement on a proper definition: for instance, some existing definitions imply that retail firms such as grocers, supermarkets and department stores are multi-sided platforms (MSPs). We propose a definition which provides a more precise notion of MSPs by requiring that they enable direct interactions between the multiple customer types which are affiliated to them. Several important implications of this new definition are derived. 
First, cross-group network effects are neither necessary nor sufficient for an organization to be a MSP. Second, our definition emphasizes the difference between MSPs and alternative forms of intermediation such as “re-sellers” which take control over the interactions between the various sides, or input suppliers which have only one customer group affiliated as opposed to multiple. We discuss a number of examples that illustrate the insights that can be derived by applying our definition. Third, we point to the economic considerations that determine where firms choose to position themselves on the continuum between MSPs and resellers, or MSPs and input suppliers. 1 Britta Kelley provided excellent research assistance. We are grateful to Elizabeth Altman, Tom Eisenmann and Marc Rysman for comments on an earlier draft. 2 Harvard University, ahagiu@hbs.edu. 3 National University of Singapore, jwright@nus.edu.sg.", "title": "" }, { "docid": "d1072bc9960fc3697416c9d982ed5a9c", "text": "We compared face identification by humans and machines using images taken under a variety of uncontrolled illumination conditions in both indoor and outdoor settings. Natural variations in a person's day-to-day appearance (e.g., hair style, facial expression, hats, glasses, etc.) contributed to the difficulty of the task. Both humans and machines matched the identity of people (same or different) in pairs of frontal view face images. The degree of difficulty introduced by photometric and appearance-based variability was estimated using a face recognition algorithm created by fusing three top-performing algorithms from a recent international competition. The algorithm computed similarity scores for a constant set of same-identity and different-identity pairings from multiple images. Image pairs were assigned to good, moderate, and poor accuracy groups by ranking the similarity scores for each identity pairing, and dividing these rankings into three strata. This procedure isolated the role of photometric variables from the effects of the distinctiveness of particular identities. Algorithm performance for these constant identity pairings varied dramatically across the groups. In a series of experiments, humans matched image pairs from the good, moderate, and poor conditions, rating the likelihood that the images were of the same person (1: sure same - 5: sure different). Algorithms were more accurate than humans in the good and moderate conditions, but were comparable to humans in the poor accuracy condition. To date, these are the most variable illumination- and appearance-based recognition conditions on which humans and machines have been compared. The finding that machines were never less accurate than humans on these challenging frontal images suggests that face recognition systems may be ready for applications with comparable difficulty. We speculate that the superiority of algorithms over humans in the less challenging conditions may be due to the algorithms' use of detailed, view-specific identity information. Humans may consider this information less important due to its limited potential for robust generalization in suboptimal viewing conditions.", "title": "" } ]
scidocsrr
827c77273a2c1cefb22e9112b8b3cde7
Modelling the Stock Market using
[ { "docid": "a13788dcda6ba9caa99e3b6b5dab73f9", "text": "Our research examines a predictive machine learning approach for financial news articles analysis using several different textual representations: bag of words, noun phrases, and named entities. Through this approach, we investigated 9,211 financial news articles and 10,259,042 stock quotes covering the S&P 500 stocks during a five week period. We applied our analysis to estimate a discrete stock price twenty minutes after a news article was released. Using a support vector machine (SVM) derivative specially tailored for discrete numeric prediction and models containing different stock-specific variables, we show that the model containing both article terms and stock price at the time of article release had the best performance in closeness to the actual future stock price (MSE 0.04261), the same direction of price movement as the future price (57.1% directional accuracy) and the highest return using a simulated trading engine (2.06% return). We further investigated the different textual representations and found that a Proper Noun scheme performs better than the de facto standard of Bag of Words in all three metrics.", "title": "" } ]
[ { "docid": "32754752fc269878fe38626b4aa770a6", "text": "For communication over doubly dispersive channels, we consider the design of multicarrier modulation (MCM) schemes based on time-frequency shifts of prototype pulses. We consider the case where the receiver knows the channel state and the transmitter knows the channel statistics (e.g., delay spread and Doppler spread) but not the channel state. Previous work has examined MCM pulses designed for suppression of inter-symbol/inter-carrier interference (ISI/ICI) subject to orthogonal or biorthogonal constraints. In doubly dispersive channels, however, complete suppression of ISI/ICI is impossible, and the ISI/ICI pattern generated by these (bi)orthogonal schemes can be difficult to equalize, especially when operating at high bandwidth efficiency. We propose a different approach to MCM pulse design, whereby a limited expanse of ISI/ICI is tolerated in modulation/demodulation and treated near-optimally by a downstream equalizer. Specifically, we propose MCM pulse designs that maximize a signal-to-interference-plus-noise ratio (SINR) which suppresses ISI/ICI outside a target pattern. In addition, we propose two low-complexity turbo equalizers, based on minimum mean-squared error and maximum likelihood criteria, respectively, that leverage the structure of the target ISI/ICI pattern. The resulting system exhibits an excellent combination of low complexity, low bit-error rate, and high spectral efficiency.", "title": "" }, { "docid": "e89b6a72083a8a88ad29d43e9e2ecc72", "text": "High-throughput screening (HTS) system has the capability to produce thousands of images containing the millions of cells. An expert could categorize each cell’s phenotype using visual inspection under a microscope. In fact, this manual approach is inefficient because image acquisition systems can produce massive amounts of cell image data per hour. Therefore, we propose an automated and efficient machine-learning model for phenotype detection from HTS system. Our goal is to find the most distinctive features (using feature selection and reduction), which will provide the best phenotype classification both in terms of accuracy and validation time from the feature pool. First, we used minimum redundancy and maximum relevance (MRMR) to select the most discriminant features and evaluate their corresponding impact on the model performance with a support vector machine (SVM) classifier. Second, we used principal component analysis (PCA) to reduce our feature to the most relevant feature list. The main difference is that MRMR does not transform the original features, unlike PCA. Later, we calculated an overall classification accuracy of original features (i.e., 1025 features) and compared with feature selection and reduction accuracies (∼30 features). The feature selection method gives the highest accuracy than reduction and original features. We validated and evaluated our model against well-known benchmark problem (i.e. Hela dataset) with a classification accuracy of 92.70% and validation time in 0.41 seconds.", "title": "" }, { "docid": "c7ed28199d7a8ea4f35ccb26ea9530c1", "text": "In this paper, we study the problem of author identification in big scholarly data, which is to effectively rank potential authors for each anonymous paper by using historical data. 
Most of the existing deanonymization approaches predict relevance score of paper-author pair via feature engineering, which is not only time and storage consuming, but also introduces irrelevant and redundant features or miss important attributes. Representation learning can automate the feature generation process by learning node embeddings in academic network to infer the correlation of paper-author pair. However, the learned embeddings are often for general purpose (independent of the specific task), or based on network structure only (without considering the node content). To address these issues and make a further progress in solving the author identification problem, we propose Camel, a content-aware and meta-path augmented metric learning model. Specifically, first, the directly correlated paper-author pairs are modeled based on distance metric learning by introducing a push loss function. Next, the paper content embedding encoded by the gated recurrent neural network is integrated into the distance loss. Moreover, the historical bibliographic data of papers is utilized to construct an academic heterogeneous network, wherein a meta-path guided walk integrative learning module based on the task-dependent and content-aware Skipgram model is designed to formulate the correlations between each paper and its indirect author neighbors, and further augments the model. Extensive experiments demonstrate that Camel outperforms the state-of-the-art baselines. It achieves an average improvement of 6.3% over the best baseline method.", "title": "" }, { "docid": "e4d21fb10d9ca88902f5b0fa11dd5cc2", "text": "We describe an efficient algorithm for releasing a provably private estimate of the degree distribution of a network. The algorithm satisfies a rigorous property of differential privacy, and is also extremely efficient, running on networks of 100 million nodes in a few seconds. Theoretical analysis shows that the error scales linearly with the number of unique degrees, whereas the error of conventional techniques scales linearly with the number of nodes. We complement the theoretical analysis with a thorough empirical analysis on real and synthetic graphs, showing that the algorithm's variance and bias is low, that the error diminishes as the size of the input graph increases, and that common analyses like fitting a power-law can be carried out very accurately.", "title": "" }, { "docid": "e0c83197770752c9fdfe5e51edcd3d46", "text": "In the last decade, it has become obvious that Alzheimer's disease (AD) is closely linked to changes in lipids or lipid metabolism. One of the main pathological hallmarks of AD is amyloid-β (Aβ) deposition. Aβ is derived from sequential proteolytic processing of the amyloid precursor protein (APP). Interestingly, both, the APP and all APP secretases are transmembrane proteins that cleave APP close to and in the lipid bilayer. Moreover, apoE4 has been identified as the most prevalent genetic risk factor for AD. ApoE is the main lipoprotein in the brain, which has an abundant role in the transport of lipids and brain lipid metabolism. Several lipidomic approaches revealed changes in the lipid levels of cerebrospinal fluid or in post mortem AD brains. Here, we review the impact of apoE and lipids in AD, focusing on the major brain lipid classes, sphingomyelin, plasmalogens, gangliosides, sulfatides, DHA, and EPA, as well as on lipid signaling molecules, like ceramide and sphingosine-1-phosphate. 
As nutritional approaches showed limited beneficial effects in clinical studies, the opportunities of combining different supplements in multi-nutritional approaches are discussed and summarized.", "title": "" }, { "docid": "98b27823b392cc75dc9d74ce06ad1b5c", "text": "Studies have shown that some musical pieces may preferentially activate reward centers in the brain. Less is known, however, about the structural aspects of music that are associated with this activation. Based on the music cognition literature, we propose two hypotheses for why some musical pieces are preferred over others. The first, the Absolute-Surprise Hypothesis, states that unexpected events in music directly lead to pleasure. The second, the Contrastive-Surprise Hypothesis, proposes that the juxtaposition of unexpected events and subsequent expected events leads to an overall rewarding response. We tested these hypotheses within the framework of information theory, using the measure of \"surprise.\" This information-theoretic variable mathematically describes how improbable an event is given a known distribution. We performed a statistical investigation of surprise in the harmonic structure of songs within a representative corpus of Western popular music, namely, the McGill Billboard Project corpus. We found that chords of songs in the top quartile of the Billboard chart showed greater average surprise than those in the bottom quartile. We also found that the different sections within top-quartile songs varied more in their average surprise than the sections within bottom-quartile songs. The results of this study are consistent with both the Absolute- and Contrastive-Surprise Hypotheses. Although these hypotheses seem contradictory to one another, we cannot yet discard the possibility that both absolute and contrastive types of surprise play roles in the enjoyment of popular music. We call this possibility the Hybrid-Surprise Hypothesis. The results of this statistical investigation have implications for both music cognition and the human neural mechanisms of esthetic judgments.", "title": "" }, { "docid": "359d3e06c221e262be268a7f5b326627", "text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.", "title": "" }, { "docid": "d84bd9aecd5e5a5b744bbdbffddfd65f", "text": "Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver. However, the index construction of these variables could result in their strong correlation, thus preventing rated characters from being plotted accurately. Phase 1 of this study tested the indices of the Godspeed questionnaire as measures of humanlike characters. 
The results indicate significant and strong correlations among the relevant indices (Bartneck, Kulić, Croft, & Zoghbi, 2009). Phase 2 of this study developed alternative indices with nonsignificant correlations (p > .05) between the proposed y-axis eeriness and x-axis perceived humanness (r = .02). The new humanness and eeriness indices facilitate plotting relations among rated characters of varying human likeness. 1. Plotting emotional responses to humanlike characters. Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character's degree of human likeness and the emotional response of the human perceiver (Fig. 1). The graph predicts that more human-looking characters will be perceived as more agreeable up to a point at which they become so human people find their nonhuman imperfections unsettling (MacDorman, Green, Ho, & Koch, 2009; MacDorman & Ishiguro, 2006; Mori, 1970). This dip in appraisal marks the start of the uncanny valley (bukimi no tani in Japanese). As characters near complete human likeness, they rise out of the valley, and people once again feel at ease with them. In essence, a character's imperfections expose a mismatch between the human qualities that are expected and the nonhuman qualities that instead follow, or vice versa. As an example of things that lie in the uncanny valley, Mori (1970) cites corpses, zombies, mannequins coming to life, and lifelike prosthetic hands. Assuming the uncanny valley exists, what dependent variable is appropriate to represent Mori's graph? Mori referred to the y-axis as shinwakan, a neologism even in Japanese, which has been variously translated as familiarity, rapport, and comfort level. Bartneck, Kanda, Ishiguro, and Hagita (2009) have proposed using likeability to represent shinwakan, and they applied a likeability index to the evaluation of interactions with Ishiguro's android double, the Geminoid HI-1. Likeability is virtually synonymous with interpersonal warmth (Asch, 1946; Fiske, Cuddy, & Glick, 2007; Rosenberg, Nelson, & Vivekananthan, 1968), which is also strongly correlated with other important measures, such as comfortability, communality, sociability, and positive (vs. negative) affect (Abele & Wojciszke, 2007; MacDorman, Ough, & Ho, 2007; Mehrabian & Russell, 1974; Sproull, Subramani, Kiesler, Walker, & Waters, 1996; Wojciszke, Abele, & Baryla, 2009). Warmth is the primary dimension of human social perception, accounting for 53% of the variance in perceptions of everyday social behaviors (Fiske, Cuddy, Glick, & Xu, 2002; Fiske et al., 2007; Wojciszke, Bazinska, & Jaworski, 1998). Despite the importance of warmth, this concept misses the essence of the uncanny valley. Mori (1970) refers to negative shinwakan as bukimi, which translates as eeriness. However, eeriness is not the negative anchor of warmth. A person can be cold and disagreeable without being eerie—at least not eerie in the way that an artificial human being is eerie. In addition, the set of negative emotions that predict eeriness (e.g., fear, anxiety, and disgust) are more specific than coldness (Ho, MacDorman, & Pramono, 2008). Thus, shinwakan and bukimi appear to constitute distinct dimensions.
Although much has been written on potential benchmarks for anthropomorphic robots (for reviews see Kahn et al., 2007; MacDorman & Cowley, 2006; MacDorman & Kahn, 2007), no indices have been developed and empirically validated for measuring shinwakan or related concepts across a range of humanlike stimuli, such as computer-animated human characters and humanoid robots. The Godspeed questionnaire, compiled by Bartneck, Kulić, Croft, and Zoghbi (2009), includes at least two concepts, anthropomorphism and likeability, that could potentially serve as the x- and y-axes of Mori's graph (Bartneck, Kanda, et al., 2009).", "title": "" }, { "docid": "b3c9bc55f5a9d64a369ec67e1364c4fc", "text": "This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20 × 40 × 1.6 mm³. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted F antenna (PIFA) in the design share a solid ground plane with inter-antenna spacing (Center to Center) of less than 0.095 λ0 or edge to edge separations of just 3.6 mm (0.0294 λ0). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation.", "title": "" }, { "docid": "0924582d3887954562bc0ce3cced62c6", "text": "In this paper a comparative design study has been shown with 6.5kV Si-IGBT/Si-PIN diode, 6.5kV Si-IGBT/SiC-JBS diode, and 10kV SiC-MOSFET/SiC-JBS diode in an active front-end (AFE) converter for medium-voltage shipboard application. Megawatt converters based on the aforementioned technologies are being designed and compared at two different switching frequencies. In this regard, accurate circuit models for 5-10A die for each (a) silicon 6.5kV IGBT, (b) 6.5kV Si-IGBT incorporating a 6.5kV SiC-JBS Diode, and (c) 10kV SiC MOSFET with 10kV SiC JBS Diode, are paralleled to make a 100A switch and used as converter switching devices in SPICE circuit simulation to perform the comparative analysis. Switching waveforms, characteristics, switching power and energy loss measurements are followed by an efficiency comparison of a 1MW converter with 7.5kVdc at 1kHz and 5kHz switching frequencies. It is shown that 6.5kV Si-IGBT/SiC-JBS diode, with its high efficiency performance up to 5kHz, is a strong candidate for MW range converters.
The 10kV SiC-MOSFET/SiC-JBS diode is an option for higher switching frequency MW converters.", "title": "" }, { "docid": "a7f1565d548359c9f19bed304c2fbba6", "text": "Handwritten character generation is a popular research topic with various applications. Various methods have been proposed in the literature based on approaches such as pattern recognition, machine learning, deep learning, and others. However, few methods can generate realistic and natural handwritten characters with a built-in determination mechanism that enhances the quality of the generated images and makes observers unable to tell whether they are written by a person. To address these problems, in this paper we propose a novel generative adversarial network, the multi-scale multi-class generative adversarial network (MSMC-CGAN). It is a neural network based on the conditional generative adversarial network (CGAN), and it is designed for realistic multi-scale character generation. MSMC-CGAN combines the global and partial image information as condition, and the condition can also help us to generate multi-class handwritten characters. Our model is designed with unique neural network structures, image features and training method. To validate the performance of our model, we utilized it in Chinese handwriting generation, and an evaluation method called mean opinion score (MOS) was used. The MOS results show that MSMC-CGAN achieved good performance.", "title": "" }, { "docid": "32c5bbc07cba1aac769ee618e000a4a5", "text": "In this paper we present Jimple, a 3-address intermediate representation that has been designed to simplify analysis and transformation of Java bytecode. We motivate the need for a new intermediate representation by illustrating several difficulties with optimizing the stack-based Java bytecode directly. In general, these difficulties are due to the fact that bytecode instructions affect an expression stack, and thus have implicit uses and definitions of stack locations. We propose Jimple as an alternative representation, in which each statement refers explicitly to the variables it uses. We provide both the definition of Jimple and a complete procedure for translating from Java bytecode to Jimple. This definition and translation have been implemented using Java, and finally we show how this implementation forms the heart of the Sable research projects.", "title": "" }, { "docid": "458dacc4d32c5a80bd88b88bf537e50e", "text": "The aim of the study is to investigate the role of spiritual intelligence in predicting Quchan University students' quality of life. In order to collect data, a sample of 143 students of Quchan University, enrolled in the 89–90 academic year, was selected randomly. The data collection instruments are the World Health Organization Quality of Life (WHOQOL) questionnaire and the Spiritual Intelligence Questionnaire. For analyzing the data, the standard deviation and Pearson's correlation coefficient were used at the descriptive level, and the regression test at the inferential level. The results of the study show that spiritual intelligence has an effective role in predicting quality of life.", "title": "" }, { "docid": "994fa5e298eaeeaa03009e97c46cb575", "text": "Three models of the relations of coping efficacy, coping, and psychological problems of children of divorce were investigated.
A structural equation model using cross-sectional data of 356 nine- to twelve-year-old children of divorce yielded results that supported coping efficacy as a mediator of the relations between both active coping and avoiding coping and psychological problems. In a prospective longitudinal model with a subsample of 162 of these children, support was found for Time 2 coping efficacy as a mediator of the relations between Time 1 active coping and Time 2 internalizing of problems. Individual growth curve models over four waves also found support for coping efficacy as a mediator of the relations between active coping and psychological problems. No support was found for alternative models of coping as a mediator of the relations between efficacy and symptoms or for coping efficacy as a moderator of the relations between coping and symptoms.", "title": "" }, { "docid": "5f63681c406856bc0664ee5a32d04b18", "text": "In 2008, the emergence of the blockchain as the foundation of the first-ever decentralized cryptocurrency not only revolutionized the financial industry but proved a boon for peer-to-peer (P2P) information exchange in the most secure, efficient, and transparent manner. The blockchain is a public ledger that works like a log by keeping a record of all transactions in chronological order, secured by an appropriate consensus mechanism and providing an immutable record. Its exceptional characteristics include immutability, irreversibility, decentralization, persistence, and anonymity.", "title": "" }, { "docid": "ab0541d9ec1ea0cf7ad85d685267c142", "text": "Umbilical catheters have been used in NICUs for drawing blood samples, measuring blood pressure, and administering fluid and medications for more than 25 years. Complications associated with umbilical catheters include thrombosis; embolism; vasospasm; vessel perforation; hemorrhage; infection; gastrointestinal, renal, and limb tissue damage; hepatic necrosis; hydrothorax; cardiac arrhythmias; pericardial effusion and tamponade; and erosion of the atrium and ventricle. A review of the literature provides conflicting accounts of the superiority of high versus low placement of umbilical arterial catheters. This article reviews the current literature regarding use of umbilical catheters in neonates. It also highlights the policy developed for the authors' NICU, a 34-bed tertiary care unit of a children's hospital, and analyzes complications associated with umbilical catheter use for 1 year in that unit.", "title": "" }, { "docid": "8df304fdd0099a836a25414b0bbfb62f", "text": "Diagrams are widely used in several areas of computer science, and their effectiveness is thoroughly recognized. One of the main qualities requested for them is readability; this is especially, but not exclusively, true in the area of information systems, where diagrams are used to model data and functions of the application. Up to now, diagrams have been produced manually or with the aid of a graphic editor; in both cases placement of symbols and routing of connections are under responsibility of the designer. The goal of the work is to investigate how readability of diagrams can be achieved by means of automatic tools. Existing results in the literature are compared, and a comprehensive algorithmic approach to the problem is proposed. The algorithm presented draws graphs on a grid and is suitable for both undirected graphs and mixed graphs that contain as subgraphs hierarchic structures.
Finally, several applications of a graphic tool that embodies the aforementioned facility are shown.", "title": "" }, { "docid": "7b969718bd1f20994f5ce61f9a242973", "text": "Level of Development (LOD) is a protocol to address the basic guidelines information of Building Information Modeling (BIM). LOD is created to identify specific content requirements of a BIM model elements at a given time. It is used to reduce the problem of inadequate information needed in projects. This paper aims to explore the implementation of LOD in projects using BIM in the construction industry. In order to do so, the definition, purposes and content of each level of LOD had been identified based on past literature. In addition, semi-structured interviews were conducted with BIM consultants from the public and private sector. The findings revealed that the implementation of LOD is varied depending on the requirements of construction players. From the use of LOD, it helps construction players to get the information that they need for a specific purpose in various project phases. The use of LOD in projects using BIM shows the capability and the level of understanding of construction players in using BIM.", "title": "" }, { "docid": "8e09b4718b472dbb7df2bc4ab8d8750a", "text": "In this article, we propose an access control mechanism for Web-based social networks, which adopts a rule-based approach for specifying access policies on the resources owned by network participants, and where authorized users are denoted in terms of the type, depth, and trust level of the relationships existing between nodes in the network. Different from traditional access control systems, our mechanism makes use of a semidecentralized architecture, where access control enforcement is carried out client-side. Access to a resource is granted when the requestor is able to demonstrate being authorized to do that by providing a proof. In the article, besides illustrating the main notions on which our access control model relies, we present all the protocols underlying our system and a performance study of the implemented prototype.", "title": "" }, { "docid": "055cb9aca6b16308793944154dc7866a", "text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. 
In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?", "title": "" } ]
scidocsrr
f1c1c1e1d6c71aed01df7a67578d40eb
Secure nearest neighbor revisited
[ { "docid": "83187228617d62fb37f99cf107c7602a", "text": "A very important class of spatial queries consists of nearestneighbor (NN) query and its variations. Many studies in the past decade utilize R-trees as their underlying index structures to address NN queries efficiently. The general approach is to use R-tree in two phases. First, R-tree’s hierarchical structure is used to quickly arrive to the neighborhood of the result set. Second, the R-tree nodes intersecting with the local neighborhood (Search Region) of an initial answer are investigated to find all the members of the result set. While R-trees are very efficient for the first phase, they usually result in the unnecessary investigation of many nodes that none or only a small subset of their including points belongs to the actual result set. On the other hand, several recent studies showed that the Voronoi diagrams are extremely efficient in exploring an NN search region, while due to lack of an efficient access method, their arrival to this region is slow. In this paper, we propose a new index structure, termed VoR-Tree that incorporates Voronoi diagrams into R-tree, benefiting from the best of both worlds. The coarse granule rectangle nodes of R-tree enable us to get to the search region in logarithmic time while the fine granule polygons of Voronoi diagram allow us to efficiently tile or cover the region and find the result. Utilizing VoR-Tree, we propose efficient algorithms for various Nearest Neighbor queries, and show that our algorithms have better I/O complexity than their best competitors.", "title": "" } ]
[ { "docid": "f2f53f1bdf451c945053bb8f2b8ca9a1", "text": "In this paper we investigated cybercrime and examined the relevant laws available to combat this crime in Nigeria. Therefore, we had a critical review of criminal laws in Nigeria and also computer network and internet security. The internet as an instrument to aid crime ranges from business espionage, to banking fraud, obtaining un-authorized and sabotaging data in computer networks of some key organizations. We investigated these crimes and noted some useful observations. From our observations, we profound solution to the inadequacies of existing enabling laws. Prevention of cybercrime requires the co-operation of all the citizens and not necessarily the police alone who presently lack specialists in its investigating units to deal with cybercrime. The eradication of this crime is crucial in view of the devastating effect on the image of Nigeria and the attendant consequence on the economy. Out of over 140 million Nigerians less than 5x10-4% are involved in cybercrime across Nigeria.", "title": "" }, { "docid": "cbe43df21793547c8e86793bdb4fb728", "text": "Optical Character Recognition (OCR) aims to recognize text in natural images. Inspired by a recently proposed model for general image classification, Recurrent Convolution Neural Network (RCNN), we propose a new architecture named Gated RCNN (GRCNN) for solving this problem. Its critical component, Gated Recurrent Convolution Layer (GRCL), is constructed by adding a gate to the Recurrent Convolution Layer (RCL), the critical component of RCNN. The gate controls the context modulation in RCL and balances the feed-forward information and the recurrent information. In addition, an efficient Bidirectional Long ShortTerm Memory (BLSTM) is built for sequence modeling. The GRCNN is combined with BLSTM to recognize text in natural images. The entire GRCNN-BLSTM model can be trained end-to-end. Experiments show that the proposed model outperforms existing methods on several benchmark datasets including the IIIT-5K, Street View Text (SVT) and ICDAR.", "title": "" }, { "docid": "0b0e389556e7c132690d7f2a706664d1", "text": "E-government challenges are well researched in literature and well known by governments. However, being aware of the challenges of e-government implementation is not sufficient, as challenges may interrelate and impact each other. Therefore, a systematic analysis of the challenges and their interrelationships contributes to providing a better understanding of how to tackle the challenges and how to develop sustainable solutions. This paper aims to investigate existing challenges of e-government and their interdependencies in Tanzania. The collection of e-government challenges in Tanzania is implemented through interviews, desk research and observations of actors in their job. In total, 32 challenges are identified. The subsequent PESTEL analysis studied interrelationships of challenges and identified 34 interrelationships. The analysis of the interrelationships informs policy decision makers of issues to focus on along the planning of successfully implementing the existing e-government strategy in Tanzania. 
The study also identified future research needs in evaluating the findings through quantitative analysis.", "title": "" }, { "docid": "d135e72c317ea28a64a187b17541f773", "text": "Automatic face recognition (AFR) is an area with immense practical potential which includes a wide range of commercial and law enforcement applications, and it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in AFR continues to improve, benefiting from advances in a range of different fields including image processing, pattern recognition, computer graphics and physiology. However, systems based on visible spectrum images continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease their accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject.", "title": "" }, { "docid": "de6962a0ec7e03beec0181a14b9f27d3", "text": "The paper presents the LinkSmart middleware platform that addresses the Internet of Things and Services approach. The platform was designed to support the interoperability and seamless integration of various external devices, sensors, and services into the mainstream enterprise systems. The design and development of LinkSmart goes across two integrated European research projects, namely the FP6 IST project Hydra and the FP7 ICT project EBBITS. Modular architecture and functionality of LinkSmart prototype, developed by combining the service-oriented architecture, peer-to-peer networking, and semantic web services technologies, is described with focus on semantic binding of networked devices by means of underlying ontologies and knowledgebased inference mechanisms. Extensions of the solution towards the service orchestration, complex event handling, business process modelling and workflow processing are discussed and described on a mechanism of context-aware processing of sensor data.", "title": "" }, { "docid": "c2fd86b36364ac9c40e873176443c4c8", "text": "In a public service announcement on 17 March 2016, the Federal Bureau of Investigation jointly with the U.S. Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) released a warning regarding the increasing vulnerability of motor vehicles to remote exploits [18]. Engine shutdowns, disabled brakes, and locked doors are a few examples of possible vehicle cybersecurity attacks. Modern cars grow into a new target for cyberattacks as they become increasingly connected. While driving on the road, sharks (i.e., hackers) need only to be within communication range of a vehicle to attack it. However, in some cases, they can hack into it while they are miles away. In this article, we aim to illuminate the latest vehicle cybersecurity threats including malware attacks, on-board diagnostic (OBD) vulnerabilities, and automobile apps threats. We illustrate the in-vehicle network architecture and demonstrate the latest defending mechanisms designed to mitigate such threats.", "title": "" }, { "docid": "ae4cebb3b37c1d168a827249c314af6f", "text": "A broadcast news stream consists of a number of stories and each story consists of several sentences. 
We capture this structure using a hierarchical model based on a word-level Recurrent Neural Network (RNN) sentence modeling layer and a sentence-level bidirectional Long Short-Term Memory (LSTM) topic modeling layer. First, the word-level RNN layer extracts a vector embedding the sentence information from the given transcribed lexical tokens of each sentence. These sentence embedding vectors are fed into a bidirectional LSTM that models the sentence and topic transitions. A topic posterior for each sentence is estimated discriminatively and a Hidden Markov model (HMM) follows to decode the story sequence and identify story boundaries. Experiments on the topic detection and tracking (TDT2) task indicate that the hierarchical RNN topic modeling achieves the best story segmentation performance with a higher F1-measure compared to conventional state-of-the-art methods. We also compare variations of our model to infer the optimal structure for the story segmentation task.", "title": "" }, { "docid": "1b33ca2433ab0846d369a4f8ad278076", "text": "Software-defined networking (SDN), is evolving as a new paradigm for the next generation of network architecture. The separation of control plane and data plane within SDN, brings the flexibility to manage, configure, secure, and optimize network resources using dynamic software programs. From a security point of view SDN has the ability to collect information from the network devices and allow applications to program the forwarding devices, which unleashes a powerful technology for proactive and smart security policy. These functions enable the integration of security tools that can be used in distributed scenarios, unlike the traditional security solutions based on a static firewall programmed by an administrator such as Intrusion Detection and Prevention System (IDS/IPS). This network programmability may be integrated to create a new communication platform for the Internet of Things (IoT). In this paper, we present our preliminary study that is focused on the understanding of an effective approach to build a cluster network using SDN. By using network virtualization and OpenFlow technologies to generate virtual nodes, we simulate a prototype system of over 500 devices controlled by SDN, and it represents a cluster. The results show that the network devices are only able to forward the packets by predefined rules on the controller. For this reason, we propose a method to control the IP header at the application-level to overcome this problem using Opflex within SDN architecture.", "title": "" }, { "docid": "526586bfdce4f8bdd0841dcd05ac05a2", "text": "Systematic reviews show some evidence for the efficacy of group-based social skills group training in children and adolescents with autism spectrum disorder, but more rigorous research is needed to endorse generalizability. In addition, little is known about the perspectives of autistic individuals participating in social skills group training. Using a qualitative approach, the objective of this study was to examine experiences and opinions about social skills group training of children and adolescents with higher functioning autism spectrum disorder and their parents following participation in a manualized social skills group training (\"KONTAKT\"). 
Within an ongoing randomized controlled clinical trial (NCT01854346) and based on outcome data from the Social Responsiveness Scale, six high responders and five low-to-non-responders to social skills group training and one parent of each child (N = 22) were deep interviewed. Interestingly, both high responders and low-to-non-responders (and their parents) reported improvements in social communication and related skills (e.g. awareness of own difficulties, self-confidence, independence in everyday life) and overall treatment satisfaction, although more positive intervention experiences were expressed by responders. These findings highlight the added value of collecting verbal data in addition to quantitative data in a comprehensive evaluation of social skills group training.", "title": "" }, { "docid": "4680bed6fb799e6e181cc1c2a4d56947", "text": "We address the problem of vision-based multi-person tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. Specifically, we are interested in the application of such a system for supporting path planning algorithms in the avoidance of dynamic obstacles. The complexity of the problem calls for an integrated solution, which extracts as much visual information as possible and combines it through cognitive feedback. We propose such an approach, which jointly estimates camera position, stereo depth, object detections, and trajectories based only on visual information. The interplay between these components is represented in a graphical model. For each frame, we first estimate the ground surface together with a set of object detections. Based on these results, we then address object interactions and estimate trajectories. Finally, we employ the tracking results to predict future motion for dynamic objects and fuse this information with a static occupancy map estimated from dense stereo. The approach is experimentally evaluated on several long and challenging video sequences from busy inner-city locations recorded with different mobile setups. The results show that the proposed integration makes stable tracking and motion prediction possible, and thereby enables path planning in complex and highly dynamic scenes.", "title": "" }, { "docid": "60a33ee582ae69fc48b8a3dcb059cd68", "text": "Recent years have witnessed the fast proliferation of mobile devices (e.g., smartphones and wearable devices) in people's lives. In addition, these devices possess powerful computation and communication capabilities and are equipped with various built-in functional sensors. The large quantity and advanced functionalities of mobile devices have created a new interface between human beings and environments. Many mobile crowd sensing applications have thus been designed which recruit normal users to contribute their resources for sensing tasks. To guarantee good performance of such applications, it's essential to recruit sufficient participants. Thus, how to effectively and efficiently motivate normal users draws growing attention in the research community. This paper surveys diverse strategies that are proposed in the literature to provide incentives for stimulating users to participate in mobile crowd sensing applications. The incentives are divided into three categories: entertainment, service, and money. Entertainment means that sensing tasks are turned into playable games to attract participants. Incentives of service exchanging are inspired by the principle of mutual benefits. 
Monetary incentives give participants payments for their contributions. We describe literature works of each type comprehensively and summarize them in a compact form. Further challenges and promising future directions concerning incentive mechanism design are also discussed.", "title": "" }, { "docid": "c4e3e580dc532e2e80c54da698005619", "text": "Proximity search on heterogeneous graphs aims to measure the proximity between two nodes on a graph w.r.t. some semantic relation for ranking. Pioneer work often tries to measure such proximity by paths connecting the two nodes. However, paths as linear sequences have limited expressiveness for the complex network connections. In this paper, we explore a more expressive DAG (directed acyclic graph) data structure for modeling the connections between two nodes. Particularly, we are interested in learning a representation for the DAGs to encode the proximity between two nodes. We face two challenges to use DAGs, including how to efficiently generate DAGs and how to effectively learn DAG embedding for proximity search. We find distance-awareness as important for proximity search and the key to solve the above challenges. Thus we develop a novel Distance-aware DAG Embedding (D2AGE) model. We evaluate D2AGE on three benchmark data sets with six semantic relations, and we show that D2AGE outperforms the state-of-the-art baselines. We release the code on https://github.com/shuaiOKshuai.", "title": "" }, { "docid": "2b540b2e48d5c381e233cb71c0cf36fe", "text": "In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.", "title": "" }, { "docid": "8655653e5a4a64518af8da996ac17c25", "text": "Although a rigorous review of literature is essential for any research endeavor, technical solutions that support systematic literature review approaches are still scarce. Systematic literature searches in particular are often described as complex, error-prone and time-consuming, due to the prevailing lack of adequate technical support. In this study, we therefore aim to learn how to design information systems that effectively facilitate systematic literature searches. Using the design science research paradigm, we develop design principles that intend to increase comprehensiveness, precision, and reproducibility of systematic literature searches. The design principles are derived through multiple design cycles that include the instantiation of the principles in form of a prototype web application and qualitative evaluations. Our design knowledge could serve as a foundation for future research on systematic search systems and support the development of innovative information systems that, eventually, improve the quality and efficiency of systematic literature reviews.", "title": "" }, { "docid": "ba4d30e7ea09d84f8f7d96c426e50f34", "text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. 
You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.", "title": "" }, { "docid": "43cd3b5ac6e2e2f240f4feb44be65b99", "text": "Executive Overview Toyota’s Production System (TPS) is based on “lean” principles including a focus on the customer, continual improvement and quality through waste reduction, and tightly integrated upstream and downstream processes as part of a lean value chain. Most manufacturing companies have adopted some type of “lean initiative,” and the lean movement recently has gone beyond the shop floor to white-collar offices and is even spreading to service industries. Unfortunately, most of these efforts represent limited, piecemeal approaches—quick fixes to reduce lead time and costs and to increase quality—that almost never create a true learning culture. We outline and illustrate the management principles of TPS that can be applied beyond manufacturing to any technical or service process. It is a true systems approach that effectively integrates people, processes, and technology—one that must be adopted as a continual, comprehensive, and coordinated effort for change and learning across the organization.", "title": "" }, { "docid": "d57c094925826fd1745678f2bb25ecbf", "text": "BACKGROUND AND OBJECTIVES\nThe Escherichia coli (E. coli) bacterium is one of the main causative agents of urinary tract infections (UTI) worldwide. The ability of this bacterium to form biofilms on medical devices such as catheters plays an important role in the development of UTI. The aim of the present study was to investigate the possible relationship between virulence factors and biofilm formation of E. coli isolates responsible for urinary tract infection.\n\n\nMATERIALS AND METHODS\nA total of 100 E. coli isolates isolated from patients with UTI were collected and characterized by routine bacteriological methods. In vitro biofilm formation by these isolates was determined using the 96-well microtiter-plate test, and the presence of fimA, papC, and hly virulence genes was examined by PCR assay. Data analysis was performed using SPSS 16.0 software.\n\n\nRESULTS\nFrom 100 E. coli isolates isolated from UTIs, 92% were shown to be biofilm positive. The genes papC, fimA, and hly were detected in 43%, 94% and 26% of isolates, respectively. 
Biofilm formation in isolates that expressed papC, fimA, and hly genes was 100%, 93%, and 100%, respectively. A significant relationship was found between presence of the papC gene and biofilm formation in E. coli isolates isolated from UTI (P<0.01), but there was no statistically significant correlation between presence of fimA and hly genes with biofilm formation (P<0.072, P<0.104).\n\n\nCONCLUSION\nRESULTS showed that fimA and hly genes do not seem to be necessary or sufficient for the production of biofilm in E. coli, but the presence of papC correlates with increased biofilm formation of urinary tract isolates. Overall, the presence of fimA, papC, and hly virulence genes coincides with in vitro biofilm formation in uropathogenic E. coli isolates.", "title": "" }, { "docid": "5b5345a894d726186ba7f6baf76cb65e", "text": "In many applications of classifier learning, training data suffers from label noise. Deep networks are learned using huge training data where the problem of noisy labels is particularly relevant. The current techniques proposed for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy labels. An alternate approach would be to look for loss functions that are inherently noise-tolerant. For binary classification there exist theoretical results on loss functions that are robust to label noise. In this paper, we provide some sufficient conditions on a loss function so that risk minimization under that loss function would be inherently tolerant to label noise for multiclass classification problems. These results generalize the existing results on noise-tolerant loss functions for binary classification. We study some of the widely used loss functions in deep networks and show that the loss function based on mean absolute value of error is inherently robust to label noise. Thus standard back propagation is enough to learn the true classifier even under label noise. Through experiments, we illustrate the robustness of risk minimization with such loss functions for learning neural networks.", "title": "" }, { "docid": "0aeb9567ed3ddf5ca7f33725fb5aa310", "text": "Code-reuse attacks based on return oriented programming are among the most popular exploitation techniques used by attackers today. Few practical defenses are able to stop such attacks on arbitrary binaries without access to source code. A notable exception are the techniques that employ new hardware, such as Intel’s Last Branch Record (LBR) registers, to track all indirect branches and raise an alert when a sensitive system call is reached by means of too many indirect branches to short gadgets—under the assumption that such gadget chains would be indicative of a ROP attack. In this paper, we evaluate the implications. What is “too many” and how short is “short”? Getting the thresholds wrong has serious consequences. In this paper, we show by means of an attack on Internet Explorer that while current defenses based on these techniques raise the bar for exploitation, they can be bypassed. Conversely, tuning the thresholds to make the defenses more aggressive, may flag legitimate program behavior as an attack. 
We analyze the problem in detail and show that determining the right values is difficult.", "title": "" }, { "docid": "8e74a27a3edea7cf0e88317851bc15eb", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
scidocsrr
22f9d159ab57dd293a1faf828c1dcc73
Predicting cyber attacks with bayesian networks using unconventional signals
[ { "docid": "2f565b3b6c5f14a12f93ada87f925791", "text": "Cyber attacks prediction is an important part of risk management. Existing cyber attacks prediction methods did not fully consider the specific environment factors of the target network, which may make the results deviate from the true situation. In this paper, we propose a cyber attacks prediction model based on Bayesian network. We use attack graphs to represent all the vulnerabilities and possible attack paths. Then we capture the using environment factors using Bayesian network model. Cyber attacks predictions are performed on the constructed Bayesian network. Experimental analysis shows that our method gets more accurate results.", "title": "" } ]
[ { "docid": "bfbfb4f363394e8c0e7bd96b6a7b2493", "text": "This study determined the intraand intertester reliability of the Palpation Meter (PALM) in measuring frontal and sagittal plane pelvic positions among asymptomatic adults during static standing. Four examiners measured 24 physical therapy students in two trials. The sagittal plane measurement was taken as the angle formed by a line connecting the ASIS and PSIS versus the horizontal. The frontal plane measurement was taken as the angle formed by a line connecting the superior border of the iliac crests versus the horizontal. Unlike previous studies, this study attempted to replicate the realities of clinical practice by using the PALM to perform measurements over clothing without applying adhesive markers for landmarks, and without controls for postural sway. Intraclass correlation coefficients suggest intratester reliability was high for both frontal (0.84) and sagittal plane measures (0.98), and intertester reliability was high for sagittal plane measures (0.89) but moderate for frontal plane measures (0.65). Standard error of the means for frontal and sagittal plane measures are presented, and clinicians are cautioned to observe the limitations of precision inherent in this device.", "title": "" }, { "docid": "6796d480f95be4605e6651d8fa7162bd", "text": "This paper presents a survey of latest image segmentation techniques using fuzzy clustering. Fuzzy C-Means (FCM) Clustering is the most wide spread clustering approach for image segmentation because of its robust characteristics for data classification. In this paper, four image segmentation algorithms using clustering, taken from the literature are reviewed. To address the drawbacks of conventional FCM, all these approaches have modified the objective function of conventional FCM and have incorporated spatial information in the objective function of the standard FCM. The techniques that have been reviewed in this survey are Segmentation for noisy medical images with spatial probability, Novel Fuzzy C-Means Clustering (NFCM), Fuzzy Local Information C-Means (FLICM) Clustering Algorithm and Improved Spatial Fuzzy C-Means Clustering (ISFCM) algorithm.", "title": "" }, { "docid": "ee027c9ee2f66bc6cf6fb32a5697ee49", "text": "Patellofemoral pain (PFP) is a very common problem in athletes who participate in jumping, cutting and pivoting sports. Several risk factors may play a part in the pathogenesis of PFP. Overuse, trauma and intrinsic risk factors are particularly important among athletes. Physical examination has a key role in PFP diagnosis. Furthermore, common risk factors should be investigated, such as hip muscle dysfunction, poor core muscle endurance, muscular tightness, excessive foot pronation and patellar malalignment. Imaging is seldom needed in special cases. Many possible interventions are recommended for PFP management. Due to the multifactorial nature of PFP, the clinical approach should be individualized, and the contribution of different factors should be considered and managed accordingly. In most cases, activity modification and rehabilitation should be tried before any surgical interventions.", "title": "" }, { "docid": "6861e6f002a6de06d0a07a8fc764e61f", "text": "Autonomous vehicles are now the future of automobile industry. Human drivers can be completely taken out of the loop through the implementation of safe and intelligent autonomous vehicles. 
Although we can say that HW and SW development continues to play a large role in the automotive industry, test and validation of these systems is a must. The ability to test these vehicles thoroughly and efficiently will ensure their proper and flawless operation. When a large number of people with heterogeneous knowledge and skills try to develop an autonomous vehicle together, it is important to use a sensible engineering process. State of the art techniques for such development include Waterfall, Agile & V-model, where test & validation (T&V) process is an integral part of such a development cycle. This paper will propose a new methodology using machine learning & deep neural network (AI-core) for lab & real-world T&V for ADAS (Advanced driver assistance system) and autonomous vehicles. The methodology will initially connect T&V of individual systems in each level of development and that of complete system efficiently, by using the proposed phase methodology, in which autonomous driving functions are grouped under categories, special T&V processes are carried on simulation as well as in HIL systems. The complete transition towards AI in the field of T&V will be a sequence of steps. Initially the AI-core is fed with available test scenarios, boundary conditions for the test cases and scenarios, and examples, the AI-core will conduct virtual tests on simulation environment using available test scenarios and further generates new test cases and scenarios for efficient and precise tests. These test cases and scenarios are meant to cover all available cases and concentrate on the area where bugs or failures occur. The complete surrounding environment in the simulation is also controlled by the AI-core which means that the system can attain endless/all-possible combinations of the surrounding environment which is necessary. Results of the tests are sorted and stored, critical and important tests are again repeated in the real-world environment using automated cars with other real subsystems to depict the surrounding environment, which are all controlled by the AI-core, and meanwhile the AI-core is always in the loop and learning from each and every executed test case and its results/outcomes. The main goal is to achieve efficient and high quality test and validation of systems for automated driving, which can save precious time in the development process. As a future scope of this methodology, we can step-up to make most parts of test and validation completely autonomous.", "title": "" }, { "docid": "7962b3bed9635c7f355f11a79b0564fd", "text": "This paper describes the Ontologies of Linguistic Annotation (OLiA) as one of the data sets currently available as part of Linguistic Linked Open Data (LLOD) cloud. Within the LLOD cloud, the OLiA ontologies serve as a reference hub for annotation terminology for linguistic phenomena on a great band-width of languages, they have been used to facilitate interoperability and information integration of linguistic annotations in corpora, NLP pipelines, and lexical-semantic resources and mediate their linking with multiple community-maintained terminology repositories.", "title": "" }, { "docid": "38292a0baef7edc1e91bb2b07082f0e3", "text": "General value functions (GVFs) are an approach to representing models of an agent’s world as a collection of predictive questions. A GVF is defined by: a policy, a prediction target, and a timescale. Traditionally predictions for a given timescale must be specified by the engineer and each timescale learned independently. 
Here we present γ-nets, a method for generalizing value function estimation over timescale, allowing a given GVF to be trained and queried for any fixed timescale. The key to our approach is to use timescale as one of the network inputs. The prediction target for any fixed timescale is then available at every timestep and we are free to train on any number of timescales. We present preliminary results on a simple test signal. 1. Value Functions and Timescale. Reinforcement learning (RL) studies algorithms in which an agent learns to maximize the amount of reward it receives over its lifetime. A key method in RL is the estimation of value — the expected cumulative sum of discounted future rewards (called the return). In loose terms this tells an agent how good it is to be in a particular state. The agent can then learn a policy — a way of behaving — which maximizes the amount of reward received. Sutton et al. (2011) broadened the use of value estimation by introducing general value functions (GVFs), in which value estimates are made of other sensorimotor signals, not just reward. GVFs can be thought of as representing an agent's model of itself and its environment as a collection of questions about future sensorimotor returns; a predictive representation of state. A GVF is defined by three elements: 1) the policy, 2) the cumulant (the sensorimotor signal to be predicted), and 3) the timescale.", "title": "" }, { "docid": "296ce1f0dd7bf02c8236fa858bb1957c", "text": "As many as one in 20 people in Europe and North America have some form of autoimmune disease. These diseases arise in genetically predisposed individuals but require an environmental trigger. Of the many potential environmental factors, infections are the most likely cause. Microbial antigens can induce cross-reactive immune responses against self-antigens, whereas infections can non-specifically enhance their presentation to the immune system. The immune system uses fail-safe mechanisms to suppress infection-associated tissue damage and thus limits autoimmune responses. The association between infection and autoimmune disease has, however, stimulated a debate as to whether such diseases might also be triggered by vaccines. Indeed there are numerous claims and counter claims relating to such a risk. Here we review the mechanisms involved in the induction of autoimmunity and assess the implications for vaccination in human beings.", "title": "" }, { "docid": "df78964a221e583f886a8707d7868827", "text": "Mobile phones have evolved from devices that are just used for voice and text communication to platforms that are able to capture and transmit a range of data types (image, audio, and location). The adoption of these increasingly capable devices by society has enabled a potentially pervasive sensing paradigm: participatory sensing. A coordinated participatory sensing system engages individuals carrying mobile phones to explore phenomena of interest using in situ data collection. For participatory sensing to succeed, several technical challenges need to be solved.
In this paper, we discuss one particular issue: developing a recruitment framework to enable organizers to identify well-suited participants for data collections based on geographic and temporal availability as well as participation habits. This recruitment system is evaluated through a series of pilot data collections where volunteers explored sustainable processes on a university campus.", "title": "" },
    { "docid": "1e81bb30757f4863dbde4e0a212eaa09", "text": "This paper compares the removal performances of two complete wastewater treatment plants (WWTPs) for all priority substances listed in the Water Framework Directive and additional compounds of interest including flame retardants, surfactants, pesticides, and personal care products (PCPs) (n = 104). First, primary treatments such as physicochemical lamellar settling (PCLS) and primary settling (PS) are compared. Similarly, biofiltration (BF) and conventional activated sludge (CAS) are then examined. Finally, the removal efficiency per unit of nitrogen removed of both WWTPs for micropollutants is discussed, as nitrogenous pollution treatment results in a special design of processes and operational conditions. For primary treatments, hydrophobic pollutants (log K ow > 4) are well removed (>70 %) for both systems despite high variations of removal. PCLS allows an obvious gain of about 20 % regarding pollutant removals, as a result of better suspended solids elimination and possible coagulant impact on soluble compounds. For biological treatments, variations of removal are much weaker, and the majority of pollutants are comparably removed within both systems. Hydrophobic and volatile compounds are well (>60 %) or very well removed (>80 %) by sorption and volatilization. Some readily biodegradable molecules are better removed by CAS, indicating a better biodegradation. A better sorption of pollutants on activated sludge could be also expected considering the differences of characteristics between a biofilm and flocs. Finally, comparison of global processes efficiency using removals of micropollutants load normalized to nitrogen shows that PCLS + BF is as efficient as PS + CAS despite a higher compactness and a shorter hydraulic retention time (HRT). Only some groups of pollutants seem better removed by PS + CAS like alkylphenols, flame retardants, or di-2-ethylhexyl phthalate (DEHP), thanks to better biodegradation and sorption resulting from HRT and biomass characteristics. For both processes, and out of the 68 molecules found in raw water, only half of them are still detected in the water discharged, most of the time close to their detection limit. However, some of them are detected at higher concentrations (>1 μg/L and/or lower than environmental quality standards), which is problematic as they represent a threat to the aquatic environment.", "title": "" },
    { "docid": "ecab65461852051278a59482ad49c225", "text": "We show that a set of gates that consists of all one-bit quantum gates (U(2)) and the two-bit exclusive-or gate (that maps Boolean values (x, y) to (x, x ⊕ y)) is universal in the sense that all unitary operations on arbitrarily many bits n (U(2^n)) can be expressed as compositions of these gates. We investigate the number of the above gates required to implement other gates, such as generalized Deutsch-Toffoli gates, that apply a specific U(2) transformation to one input bit if and only if the logical AND of all remaining input bits is satisfied. 
These gates play a central role in many proposed constructions of quantum computational networks. We derive upper and lower bounds on the exact number of elementary gates required to build up a variety of two- and three-bit quantum gates, the asymptotic number required for n-bit Deutsch-Toffoli gates, and make some observations about the number required for arbitrary n-bit unitary operations.", "title": "" },
    { "docid": "1dfa61f341919dcb4169c167a92c2f43", "text": "This paper presents an algorithm for the detection of micro-crack defects in multicrystalline solar cells. This detection goal is very challenging due to the presence of various types of image anomalies like dislocation clusters, grain boundaries, and other artifacts due to the spurious discontinuities in the gray levels. In this work, an algorithm featuring an improved anisotropic diffusion filter and an advanced image segmentation technique is proposed. The methods and procedures are assessed using 600 electroluminescence images, comprising 313 intact and 287 defective samples. Results indicate that the methods and procedures can accurately detect micro-cracks in solar cells with sensitivity, specificity, and accuracy averaging at 97%, 80%, and 88%, respectively.", "title": "" },
    { "docid": "876c0be7acfa5d7b9e863da5b7cfefdc", "text": "In the era of big data, one is often confronted with the problem of high dimensional data for many machine learning or data mining tasks. Feature selection, as a dimension reduction technique, is useful for alleviating the curse of dimensionality while preserving interpretability. In this paper, we focus on unsupervised feature selection, as class labels are usually expensive to obtain. Unsupervised feature selection is typically more challenging than its supervised counterpart due to the lack of guidance from class labels. Recently, regression-based methods with L2,1 norms have gained much popularity as they are able to evaluate features jointly which, however, consider only linear correlations between features and pseudo-labels. In this paper, we propose a novel nonlinear joint unsupervised feature selection method based on kernel alignment. The aim is to find a succinct set of features that best aligns with the original features in the kernel space. It can evaluate features jointly in a nonlinear manner and provides a good ‘0/1’ approximation for the selection indicator vector. We formulate it as a constrained optimization problem and develop a Spectral Projected Gradient (SPG) method to solve the optimization problem. Experimental results on several real-world datasets demonstrate that our proposed method outperforms the state-of-the-art approaches significantly.", "title": "" },
    { "docid": "c613a7c8bca5b0c198d2a1885ecb0efb", "text": "Botnets have traditionally been seen as a threat to personal computers; however, the recent shift to mobile platforms resulted in a wave of new botnets. Due to its popularity, the Android mobile operating system became the most targeted platform. In spite of rising numbers, there is a significant gap in understanding the nature of mobile botnets and their communication characteristics. In this paper, we address this gap and provide a deep analysis of Command and Control (C&C) and built-in URLs of Android botnets detected since the first appearance of the Android platform. 
By combining both static and dynamic analyses with visualization, we uncover the relationships between the majority of the analyzed botnet families and offer an insight into each malicious infrastructure. As a part of this study we compile and offer to the research community a dataset containing 1929 samples representing 14 Android botnet families.", "title": "" }, { "docid": "4b74b9d4c4b38082f9f667e363f093b2", "text": "We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org.", "title": "" }, { "docid": "aaa2c86f4e2c615799c35277a4000578", "text": "Convolutional sparse coding (CSC) can model local connections between image content and reduce the code redundancy when compared with patch-based sparse coding. However, CSC needs a complicated optimization procedure to infer the codes (i.e., feature maps). In this brief, we proposed a convolutional sparse auto-encoder (CSAE), which leverages the structure of the convolutional AE and incorporates the max-pooling to heuristically sparsify the feature maps for feature learning. Together with competition over feature channels, this simple sparsifying strategy makes the stochastic gradient descent algorithm work efficiently for the CSAE training; thus, no complicated optimization procedure is involved. We employed the features learned in the CSAE to initialize convolutional neural networks for classification and achieved competitive results on benchmark data sets. 
In addition, by building connections between the CSAE and CSC, we proposed a strategy to construct local descriptors from the CSAE for classification. Experiments on Caltech-101 and Caltech-256 clearly demonstrated the effectiveness of the proposed method and verified the CSAE as a CSC model has the ability to explore connections between neighboring image content for classification tasks.", "title": "" }, { "docid": "012e08734efe6af83faef4703a092b16", "text": "Dunlap and Van Liere’s New Environmental Paradigm (NEP) Scale, published in 1978, has become a widely used measure of proenvironmental orientation. This article develops a revised NEP Scale designed to improve upon the original one in several respects: (1) It taps a wider range of facets of an ecological worldview, (2) It offers a balanced set of proand anti-NEP items, and (3) It avoids outmoded terminology. The new scale, termed the New Ecological Paradigm Scale, consists of 15 items. Results of a 1990 Washington State survey suggest that the items can be treated as an internally consistent summated rating scale and also indicate a modest growth in pro-NEP responses among Washington residents over the 14 years since the original study.", "title": "" }, { "docid": "cdc7632a3650ed6d392c9ddcf4003ff9", "text": "An image related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solve this problem by a machine are based on casting the problem as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities. The approach is based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature. The question can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, typical also to human vision. The logical composition of real visual tasks allows using meaningful intermediate results to elaborate the answer (e.g. reasoning) and provide corrections and alternatives when answers are negative. In addition, an external knowledge base is also integrated into the process, to supply common-knowledge information that may be required to understand the question and produce an answer. 
We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provide suggestions for future developments.", "title": "" }, { "docid": "1c8ac344f85ff4d4a711536841168b6a", "text": "Internet Protocol Television (IPTV) is an increasingly popular multimedia service which is used to deliver television, video, audio and other interactive content over proprietary IP-based networks. Video on Demand (VoD) is one of the most popular IPTV services, and is very important for IPTV providers since it represents the second most important revenue stream after monthly subscriptions. In addition to high-quality VoD content, profitable VoD service provisioning requires an enhanced content accessibility to greatly improve end-user experience. Moreover, it is imperative to offer innovative features to attract new customers and retain existing ones. To achieve this goal, IPTV systems typically employ VoD recommendation engines to offer personalized lists of VoD items that are potentially interesting to a user from a large amount of available titles. In practice, a good recommendation engine does not offer popular and well-known titles, but is rather able to identify interesting among less popular items which would otherwise be hard to find. In this paper we report our experience in building a VoD recommendation system. The presented evaluation shows that our recommendation system is able to recommend less popular items while operating under a high load of end-user requests.", "title": "" }, { "docid": "adb6144e24291071f6c80e1190582f4e", "text": "Molecular docking is an important method in computational drug discovery. In large-scale virtual screening, millions of small drug-like molecules (chemical compounds) are compared against a designated target protein (receptor). Depending on the utilized docking algorithm for screening, this can take several weeks on conventional HPC systems. However, for certain applications including large-scale screening tasks for newly emerging infectious diseases such high runtimes can be highly prohibitive. In this paper, we investigate how the massively parallel neo-heterogeneous architecture of Tianhe-2 Supercomputer consisting of thousands of nodes comprising CPUs and MIC coprocessors that can efficiently be used for virtual screening tasks. Our proposed approach is based on a coordinated parallel framework called mD3DOCKxb in which CPUs collaborate with MICs to achieve high hardware utilization. mD3DOCKxb comprises a novel efficient communication engine for dynamic task scheduling and load balancing between nodes in order to reduce communication and I/O latency. This results in a highly scalable implementation with parallel efficiency of over 84% (strong scaling) when executing on 8,000 Tianhe-2 nodes comprising 192,000 CPU cores and 1,368,000 MIC cores.", "title": "" } ]
scidocsrr
4adac0af692f39dcf8880e2752ac89d1
Beyond Fano's inequality: bounds on the optimal F-score, BER, and cost-sensitive risk and their implications
[ { "docid": "b45aae55cc4e7bdb13463eff7aaf6c60", "text": "Text retrieval systems typically produce a ranking of documents and let a user decide how far down that ranking to go. In contrast, programs that filter text streams, software that categorizes documents, agents which alert users, and many other IR systems must make decisions without human input or supervision. It is important to define what constitutes good effectiveness for these autonomous systems, tune the systems to achieve the highest possible effectiveness, and estimate how the effectiveness changes as new data is processed. We show how to do this for binary text classification systems, emphasizing that different goals for the system lead to different optimal behaviors. Optimizing and estimating effectiveness is greatly aided if classifiers that explicitly estimate the probability of class membership are used.", "title": "" }, { "docid": "8366003636c8596841f749d69346deee", "text": "Probabilistic classifiers are developed by assuming generative models which are product distributions over the original attribute space (as in naive Bayes) or more involved spaces (as in general Bayesian networks). While this paradigm has been shown experimentally successful on real world applications, despite vastly simplified probabilistic assumptions, the question of why these approaches work is still open. This paper resolves this question. We show that almost all joint distributions with a given set of marginals (i.e., all distributions that could have given rise to the classifier learned) or, equivalently, almost all data sets that yield this set of marginals, are very close (in terms of distributional distance) to the product distribution on the marginals; the number of these distributions goes down exponentially with their distance from the product distribution. Consequently, as we show, for almost all joint distributions with this set of marginals, the penalty incurred in using the marginal distribution rather than the true one is small. In addition to resolving the puzzle surrounding the success of probabilistic classifiers our results contribute to understanding the tradeoffs in developing probabilistic classifiers and will help in developing better classifiers.", "title": "" } ]
[ { "docid": "b6508d1f2b73b90a0cfe6399f6b44421", "text": "An alternative to land spreading of manure effluents is to mass-culture algae on the N and P present in the manure and convert manure N and P into algal biomass. The objective of this study was to determine how the fatty acid (FA) content and composition of algae respond to changes in the type of manure, manure loading rate, and to whether the algae was grown with supplemental carbon dioxide. Algal biomass was harvested weekly from indoor laboratory-scale algal turf scrubber (ATS) units using different loading rates of raw and anaerobically digested dairy manure effluents and raw swine manure effluent. Manure loading rates corresponded to N loading rates of 0.2 to 1.3 g TN m−2 day−1 for raw swine manure effluent and 0.3 to 2.3 g TN m−2 day−1 for dairy manure effluents. In addition, algal biomass was harvested from outdoor pilot-scale ATS units using different loading rates of raw and anaerobically digested dairy manure effluents. Both indoor and outdoor units were dominated by Rhizoclonium sp. FA content values of the algal biomass ranged from 0.6 to 1.5% of dry weight and showed no consistent relationship to loading rate, type of manure, or to whether supplemental carbon dioxide was added to the systems. FA composition was remarkably consistent among samples and >90% of the FA content consisted of 14:0, 16:0, 16:1ω7, 16:1ω9, 18:0, 18:1ω9, 18:2 ω6, and 18:3ω3.", "title": "" }, { "docid": "1afb10bf586f26417b66b942f8c26586", "text": "A combination of surface energy-guided blade coating and inkjet printing is used to fabricate an all-printed high performance, high yield, and low variability organic thin film transistor (OTFT) array on a plastic substrate. Functional inks and printing processes were optimized to yield self-assembled homogenous thin films in every layer of the OTFT stack. Specifically, we investigated the effect of capillary number, semiconductor ink composition (small molecule-polymer ratio), and additive high boiling point solvent concentrations on film fidelity, pattern design, device performance and yields.", "title": "" }, { "docid": "a5879d5e7934380913cd2683ba2525b9", "text": "This paper deals with the design & development of a theft control system for an automobile, which is being used to prevent/control the theft of a vehicle. The developed system makes use of an embedded system based on GSM technology. The designed & developed system is installed in the vehicle. An interfacing mobile is also connected to the microcontroller, which is in turn, connected to the engine. Once, the vehicle is being stolen, the information is being used by the vehicle owner for further processing. The information is passed onto the central processing insurance system, where by sitting at a remote place, a particular number is dialed by them to the interfacing mobile that is with the hardware kit which is installed in the vehicle. By reading the signals received by the mobile, one can control the ignition of the engine; say to lock it or to stop the engine immediately. Again it will come to the normal condition only after entering a secured password. The owner of the vehicle & the central processing system will know this secured password. The main concept in this design is introducing the mobile communications into the embedded system. The designed unit is very simple & low cost. The entire designed unit is on a single chip. 
When the vehicle is stolen, the owner can inform the central processing system, which then stops the vehicle simply by placing a call to that secret number; with the help of SIM tracking, the system determines the location of the vehicle and informs the local police or stops it from further movement.", "title": "" },
    { "docid": "d89b373abfdb180c8ebbbe486c115fa0", "text": "In this study we examine the antecedents of small independent software vendor (ISV) decisions to join a platform ecosystem. Using data on the history of partnering activities from 1201 ISVs from 1996 to 2004, we find that appropriability strategies based on intellectual property rights and the possession of downstream complementary capabilities by ISVs are positively related to partnership formation, and ISVs use these two mechanisms as substitutes to prevent expropriation by the platform owner. In addition, we show that greater competition in downstream product markets between the ISV and the platform owner is associated with a lower likelihood of partnership formation, while the platform’s penetration into the ISV’s target industries is positively associated with the propensity to partner. The results highlight the role of innovation appropriation, downstream complementary capabilities, and collaborative competition in the formation of a platform ecosystem.", "title": "" },
    { "docid": "ff3229e4afdedd01a936c7e70f8d0d02", "text": "This paper highlights an updated anatomy of parametrial extension with emphasis on magnetic resonance imaging (MRI) assessment of disease spread in the parametrium in patients with locally advanced cervical cancer. Pelvic landmarks were identified to assess the anterior and posterior extensions of the parametria, besides the lateral extension, as defined in a previous anatomical study. A series of schematic drawings and MRI images are shown to document the anatomical delineation of disease on MRI, which is crucial not only for correct image-based three-dimensional radiotherapy but also for the surgical oncologist, since neoadjuvant chemoradiotherapy followed by radical surgery is emerging in Europe as a valid alternative to standard chemoradiation.", "title": "" },
    { "docid": "c58cfb643a35033d59fe50c89fe1d445", "text": "This survey of denial-of-service threats and countermeasures considers wireless sensor platforms' resource constraints as well as the denial-of-sleep attack, which targets a battery-powered device's energy supply. Here, we update the survey of denial-of-service threats with current threats and countermeasures. In particular, we more thoroughly explore the denial-of-sleep attack, which specifically targets the energy-efficient protocols unique to sensor network deployments. We start by exploring such networks' characteristics and then discuss how researchers have adapted general security mechanisms to account for these characteristics.", "title": "" },
    { "docid": "72453a8b2b70c781e1a561b5cfb9eecb", "text": "Pair Programming is an innovative collaborative software development methodology. Anecdotal and empirical evidence suggests that this agile development method produces better quality software in reduced time with higher levels of developer satisfaction. To date, little explanation has been offered as to why these improved performance outcomes occur. In this qualitative study, we focus on how individual differences, and specifically task conflict, impact results of the collaborative software development process and related outcomes. 
We illustrate that low to moderate levels of task conflict actually enhance performance, while high levels mitigate otherwise anticipated positive results.", "title": "" }, { "docid": "3e83d63920d7d8650a2eeaa2e68ec640", "text": "Antibiotic resistance consists of a dynamic web. In this review, we describe the path by which different antibiotic residues and antibiotic resistance genes disseminate among relevant reservoirs (human, animal, and environmental settings), evaluating how these events contribute to the current scenario of antibiotic resistance. The relationship between the spread of resistance and the contribution of different genetic elements and events is revisited, exploring examples of the processes by which successful mobile resistance genes spread across different niches. The importance of classic and next generation molecular approaches, as well as action plans and policies which might aid in the fight against antibiotic resistance, are also reviewed.", "title": "" }, { "docid": "e06704072916406ce462819ee4727f41", "text": "In this paper we address the problem of sports video classification Using Hidden Markov Models (HMMs). For each sports genre, we construct two HMMs representing motion and color features respectively. The observation sequences generated from the principal motion direction and the principal color of each frame are fed to a motion and a color HMM respectively. The outputs are integrated to make a final decision. We tested our scheme on 220 minutes of sports video with four genre types: Ice hockey, basketball, football, and soccer, and achieved an overall classification accuracy of 93%.", "title": "" }, { "docid": "5f70d96454e4a6b8d2ce63bc73c0765f", "text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. 
social media texts) and on the implementation of a syntactic parser for Hungarian, among others.", "title": "" }, { "docid": "b83e537a2c8dcd24b096005ef0cb3897", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "204a2331af6c32a502005d5d19f4fc10", "text": "This paper presents a detailed comparative study of spoke type brushless dc (SPOKE-BLDC) motors due to the operating conditions and designs a new type SPOKE-BLDC with flux barriers for high torque applications, such as tractions. The current dynamic analysis method considering the local magnetic saturation of the rotor and the instantaneous current by pwm driving circuit is developed based on the coupled finite element analysis with rotor dynamic equations. From this analysis, several new structures using the flux barriers are designed and the characteristics are compared in order to reduce the large torque ripple and improve the average torque of SPOKE-BLDC. From these results, it is confirmed that the flux barriers, which are inserted on the optimized position of the rotor, have made remarkable improvement for the torque characteristics of the SPOKE-BLDC.", "title": "" }, { "docid": "2984294e4fd66a8eceab0ca8dd76361f", "text": "The popularization of Bitcoin, a decentralized crypto-currency has inspired the production of several alternative, or “alt”, currencies. Ethereum, CryptoNote, and Zerocash all represent unique contributions to the cryptocurrency space. Although most alt currencies harbor their own source of innovation, they have no means of adopting the innovations of other currencies which may succeed them. We aim to remedy the potential for atrophied evolution in the crypto-currency space by presenting Tezos, a generic and self-amending crypto-ledger. Tezos can instantiate any blockchain based protocol. Its seed protocol specifies a procedure for stakeholders to approve amendments to the protocol, including amendments to the amendment procedure itself. Upgrades to Tezos are staged through a testing environment to allow stakeholders to recall potentially problematic amendments. The philosophy of Tezos is inspired by Peter Suber’s Nomic[1], a game built around a fully introspective set of rules. In this paper, we hope to elucidate the potential benefits of Tezos, our choice to implement as a proof-of-stake system, and our choice to write it", "title": "" }, { "docid": "e418ccca35d3480145b79e129537e43c", "text": "Smart eyewear computing is a relatively new subcategory in ubiquitous computing research, which has enormous potential. 
In this paper we present a first evaluation of soon commercially available Electrooculography (EOG) glasses (J!NS MEME) for the use in activity recognition. We discuss the potential of EOG glasses and other smart eye-wear. Afterwards, we show a first signal level assessment of MEME, and present a classification task using the glasses. We are able to distinguish of 4 activities for 2 users (typing, reading, eating and talking) using the sensor data (EOG and acceleration) from the glasses with an accuracy of 70 % for 6 sec. windows and up to 100 % for a 1 minute majority decision. The classification is done user-independent.\n The results encourage us to further explore the EOG glasses as platform for more complex, real-life activity recognition systems.", "title": "" }, { "docid": "11c8a8b5e99c6150f9d6810b3ab79864", "text": "Finding telecommunications fraud in masses of call records is more difficult than finding a needle in a haystack. In the haystack problem, there is only one needle that does not look like hay, the pieces of hay all look similar, and neither the needle nor the hay changes much over time. Fraudulent calls may be rare like needles in haystacks, but they are much more challenging to find. Callers", "title": "" }, { "docid": "baaf84ec42f3624cb949f37b5cab83e8", "text": "In this paper, we propose a practical method for user grouping and decoding-order setting in a successive interference canceller (SIC) for downlink non-orthogonal multiple access (NOMA). While the optimal user grouping and decoding order, which depend on the instantaneous channel conditions among users within a cell, are assumed in previous work, the proposed method uses user grouping and a decoding order that are unified among all frequency blocks. The proposed decoding order in the SIC enables the application of NOMA with a SIC to a system where all the elements within a codeword for a user are distributed among multiple frequency blocks (resource blocks). The unified user grouping eases the complexity in the SIC process at the user terminal. The unified user grouping also reduces the complexity of the efficient downlink control signaling in NOMA with a SIC. The unified user grouping and decoding order among frequency blocks in principle reduce the achievable throughput compared to the optimal one. However, based on numerical results, we show that the proposed method does not significantly degrade the system-level throughput in downlink cellular networks.", "title": "" }, { "docid": "c0f68f1b8b6fee87203f62baf133b793", "text": "Modern PWM inverter output voltage has high dv/dt, which causes problems such as voltage doubling that can lead to insulation failure, ground currents that results in electromagnetic interference concerns. The IGBT switching device used in such inverter are becoming faster, exacerbating these problems. This paper proposes a new procedure for designing the LC clamp filter. The filter increases the rise time of the output voltage of inverter, resulting in smaller dv/dt. In addition suitable selection of resonance frequency gives LCL filter configuration with improved attenuation. By adding this filter at output terminal of inverter which uses long cable, voltage doubling effect is reduced at the motor terminal. The design procedure is carried out in terms of the power converter based per unit scheme. This generalizes the design procedure to a wide range of power level and to study optimum designs. 
The effectiveness of the design is verified by computer simulation and experimental measurements.", "title": "" }, { "docid": "c515c780d32f051f75de8a06aedc7d1a", "text": "Science and technologies based on terahertz frequency electromagnetic radiation (100 GHz–30 THz) have developed rapidly over the last 30 years. For most of the 20th Century, terahertz radiation, then referred to as sub-millimeter wave or far-infrared radiation, was mainly utilized by astronomers and some spectroscopists. Following the development of laser based terahertz time-domain spectroscopy in the 1980s and 1990s the field of THz science and technology expanded rapidly, to the extent that it now touches many areas from fundamental science to ‘real world’ applications. For example THz radiation is being used to optimize materials for new solar cells, and may also be a key technology for the next generation of airport security scanners. While the field was emerging it was possible to keep track of all new developments, however now the field has grown so much that it is increasingly difficult to follow the diverse range of new discoveries and applications that are appearing. At this point in time, when the field of THz science and technology is moving from an emerging to a more established and interdisciplinary field, it is apt to present a roadmap to help identify the breadth and future directions of the field. The aim of this roadmap is to present a snapshot of the present state of THz science and technology in 2017, and provide an opinion on the challenges and opportunities that the future holds. To be able to achieve this aim, we have invited a group of international experts to write 18 sections that cover most of the key areas of THz science and technology. We hope that The 2017 Roadmap on THz science and technology will prove to be a useful resource by providing a wide ranging introduction to the capabilities of THz radiation for those outside or just entering the field as well as providing perspective and breadth for those who are well established. We also feel that this review should serve as a useful guide for government and funding agencies.", "title": "" }, { "docid": "2bb988a1d2b3269e7ebe989a65f44487", "text": "The future connectivity landscape and, notably, the 5G wireless systems will feature Ultra-Reliable Low Latency Communication (URLLC). The coupling of high reliability and low latency requirements in URLLC use cases makes the wireless access design very challenging, in terms of both the protocol design and of the associated transmission techniques. This paper aims to provide a broad perspective on the fundamental tradeoffs in URLLC as well as the principles used in building access protocols. Two specific technologies are considered in the context of URLLC: massive MIMO and multi-connectivity, also termed interface diversity. The paper also touches upon the important question of the proper statistical methodology for designing and assessing extremely high reliability levels.", "title": "" }, { "docid": "7794902fc9408b431b01f9822328053e", "text": "Echo state networks (ESNs) constitute a novel approach to recurrent neural network (RNN) training, with an RNN (the reservoir) being generated randomly, and only a readout being trained using a simple computationally efficient algorithm. ESNs have greatly facilitated the practical application of RNNs, outperforming classical approaches on a number of benchmark tasks. 
In this paper, we introduce a novel Bayesian approach toward ESNs, the echo state Gaussian process (ESGP). The ESGP combines the merits of ESNs and Gaussian processes to provide a more robust alternative to conventional reservoir computing networks while also offering a measure of confidence on the generated predictions (in the form of a predictive distribution). We exhibit the merits of our approach in a number of applications, considering both benchmark datasets and real-world applications, where we show that our method offers a significant enhancement in the dynamical data modeling capabilities of ESNs. Additionally, we also show that our method is orders of magnitude more computationally efficient compared to existing Gaussian process-based methods for dynamical data modeling, without compromises in the obtained predictive performance.", "title": "" } ]
scidocsrr
467865571e6578d06cc55b87b50660d6
Adoption and Use of Social Media in Small and Medium-Sized Enterprises
[ { "docid": "412a5d414b2ac845a14af5e05abe5f6f", "text": "Organizations often face challenges with the adoption and use of new information systems, and social media is no exception. In this exploratory case study, we aim to identify internal and external challenges related to the adoption and use of social media in a large case company. Our findings show that internal challenges include resources, ownership, authorization, attitudes and economic issues, whereas external challenges are associated with company reputation, legal issues and public/private network identity. We add to the knowledge created by previous studies by introducing the challenges related to ownership and authorization of social media services, which were found to be of high relevance in corporate social media adoption. In order to overcome these obstacles, we propose that organizations prepare strategies and guidelines for social media adoption and use.", "title": "" } ]
[ { "docid": "861e2a3c19dafdd3273dc718416309c2", "text": "For the last 40 years high - capacity Unmanned Air Vehicles have been use mostly for military services such as tracking, surveillance, engagement with active weapon or in the simplest term for data acquisition purpose. Unmanned Air Vehicles are also demanded commercially because of their advantages in comparison to manned vehicles such as their low manufacturing and operating cost, configuration flexibility depending on customer request, not risking pilot in the difficult missions. Nevertheless, they have still open issues such as integration to the manned flight air space, reliability and airworthiness. Although Civil Unmanned Air Vehicles comprise 3% of the UAV market, it is estimated that they will reach 10% level within the next 5 years. UAV systems with their useful equipment (camera, hyper spectral imager, air data sensors and with similar equipment) have been in use more and more for civil applications: Tracking and monitoring in the event of agriculture / forest / marine pollution / waste / emergency and disaster situations; Mapping for land registry and cadastre; Wildlife and ecologic monitoring; Traffic Monitoring and; Geology and mine researches. They can bring minimal risk and cost advantage to many civil applications, in which it was risky and costly to use manned air vehicles before. When the cost of Unmanned Air Vehicles designed and produced for military service is taken into account, civil market demands lower cost and original products which are suitable for civil applications. Most of civil applications which are mentioned above require UAVs that are able to take off and land on limited runway, and moreover move quickly in the operation region for mobile applications but hover for immobile measurement and tracking when necessary. This points to a hybrid unmanned vehicle concept optimally, namely the Vertical Take Off and Landing (VTOL) UAVs. At the same time, this system requires an efficient cost solution for applicability / convertibility for different civil applications. It means an Air Vehicle having easily portability of payload depending on application concept and programmability of operation (hover and cruise flight time) specific to the application. The main topic of this project is designing, producing and testing the TURAC VTOL UAV that have the following features : Vertical takeoff and landing, and hovering like helicopter ; High cruise speed and fixed-wing ; Multi-functional and designed for civil purpose ; The project involves two different variants ; The TURAC A variant is a fully electrical platform which includes 2 tilt electric motors in the front, and a fixed electric motor and ducted fan in the rear ; The TURAC B variant uses fuel cells.", "title": "" }, { "docid": "755f7e93dbe43a0ed12eb90b1d320cb2", "text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. 
“HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).", "title": "" }, { "docid": "bdc1d214884770b979161ba709454486", "text": "The traditional two-stage stochastic programming approach is to minimize the total expected cost with the assumption that the distribution of the random parameters is known. However, in most practices, the actual distribution of the random parameters is not known, and instead, only a series of historical data are available. Thus, the solution obtained from the traditional twostage stochastic program can be biased and suboptimal for the true problem, if the estimated distribution of the random parameters is not accurate, which is usually true when only a limited amount of historical data are available. In this paper, we propose a data-driven risk-averse stochastic optimization approach. Based on the observed historical data, we construct the confidence set of the ambiguous distribution of the random parameters, and develop a riskaverse stochastic optimization framework to minimize the total expected cost under the worstcase distribution within the constructed confidence set. We introduce the Wasserstein metric to construct the confidence set and by using this metric, we can successfully reformulate the risk-averse two-stage stochastic program to its tractable counterpart. In addition, we derive the worst-case distribution and develop efficient algorithms to solve the reformulated problem. Moreover, we perform convergence analysis to show that the risk averseness of the proposed formulation vanishes as the amount of historical data grows to infinity, and accordingly, the corresponding optimal objective value converges to that of the traditional risk-neutral twostage stochastic program. We further precisely derive the convergence rate, which indicates the value of data. Finally, the numerical experiments on risk-averse stochastic facility location and stochastic unit commitment problems verify the effectiveness of our proposed framework.", "title": "" }, { "docid": "93a49a164437d3cc266d8e859f2bb265", "text": "...................................................................................................................................................4", "title": "" }, { "docid": "e32e17bb36f39d6020bced297b3989fe", "text": "Memory networks are a recently introduced model that combines reasoning, attention and memory for solving tasks in the areas of language understanding and dialogue -- where one exciting direction is the use of these models for dialogue-based recommendation. In this talk we describe these models and how they can learn to discuss, answer questions about, and recommend sets of items to a user. The ultimate goal of this research is to produce a full dialogue-based recommendation assistant. We will discuss recent datasets and evaluation tasks that have been built to assess these models abilities to see how far we have come.", "title": "" }, { "docid": "50a3d11065395cf43cd0b5531c73ae83", "text": "Firewalls play a crucial role in assuring the security of today's critical infrastructures, forming a first line of defense by being placed strategically at the front-end of the networks. Sometimes, however, they have exploitable weaknesses, allowing an adversary to bypass them in different ways. Therefore, their design should include improved resilience capabilities to allow them to operate correctly in highly adverse environments. 
This paper proposes SieveQ, a message queue service that protects and regulates access to critical systems, in a way similar to an application-level firewall. SieveQ achieves fault and intrusion tolerance by employing an architecture based on two filtering layers, enabling efficient removal of invalid messages at early stages and decreasing the costs associated with Byzantine Fault-Tolerant (BFT) replication in previous solutions. Our experimental evaluation shows that SieveQ improves the resilience of existing replicated firewalls in the presence of messages corrupted by faulty nodes. Furthermore, it accommodates high loads, as it is able to handle sixteen times more security events per second than what was processed by the Security Information and Event Management (SIEM) infrastructure employed in the 2012 Summer Olympic Games.", "title": "" },
    { "docid": "46b2f2dd5b17fd5108ac7f60144ff017", "text": "Accurately detecting pedestrians in images plays a critically important role in many computer vision applications. Extraction of effective features is the key to this task. Promising features should be discriminative, robust to various variations and easy to compute. In this work, we present novel features, termed dense center-symmetric local binary patterns (CS-LBP) and pyramid center-symmetric local binary/ternary patterns (CS-LBP/LTP), for pedestrian detection. The standard LBP proposed by Ojala et al. [1] mainly captures the texture information. The proposed CS-LBP feature, in contrast, captures the gradient information and some texture information. Moreover, the proposed dense CS-LBP and the pyramid CS-LBP/LTP are easy to implement and computationally efficient, which is desirable for real-time applications. Experiments on the INRIA pedestrian dataset show that the dense CS-LBP feature with linear support vector machines (SVMs) is comparable with the histograms of oriented gradients (HOG) feature with linear SVMs, and the pyramid CS-LBP/LTP features outperform both HOG features with linear SVMs and the state-of-the-art pyramid HOG (PHOG) feature with the histogram intersection kernel SVMs. We also demonstrate that the combination of our pyramid CS-LBP feature and the PHOG feature could significantly improve the detection performance—producing state-of-the-art accuracy on the INRIA pedestrian dataset.", "title": "" },
    { "docid": "babe85fa78ea1f4ce46eb0cfd77ae2b8", "text": "x^n + a_1 x^{n-1} + · · · + a_n = 0. We are mainly interested in solving “by radicals”, that is, in solutions that use only m-th roots of the form a^(1/m). It has been well known since the 16th century that equations of degree n ≤ 4 can be solved by radicals. By contrast, according to a celebrated result of Abel, the general equation of degree n ≥ 5 is not solvable by radicals. The main idea of Galois theory is to associate to each equation its symmetry group. This construction makes it possible to translate properties of the equation (such as solvability by radicals) into properties of the associated group. The course will not follow the historical path. The book [Ti 1, 2] is a pleasant reference for the history of the subject.", "title": "" },
    { "docid": "b9f7c3cbf856ff9a64d7286c883e2640", "text": "Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. 
Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints.", "title": "" }, { "docid": "1dddc78629fabad27975329595569a3f", "text": "The susceptibility of tin-plated contacts to fretting corrosion is a major limitation for its use in electrical connectors. The present paper evaluates the influence of a variety of factors, such as, fretting amplitude (track length), frequency, temperature, humidity, normal load and current load on the fretting corrosion behaviour of tin-plated contacts. This paper also addresses the development of fretting corrosion maps and lubrication as a preventive strategy to increase the life-time of tin-plated contacts. The fretting corrosion tests were carried out using a fretting apparatus in which a hemispherical rider and flat contacts (tin-plated copper alloy) were mated in sphere plane geometry and subjected to fretting under gross-slip conditions. The variation in contact resistance as a function of fretting cycles and the time to reach a threshold value (100mO) of contact resistance enables a better understanding of the influence of various factors on the fretting corrosion behaviour of tin-plated contacts. Based on the change in surface profile and nature of changes in the contact zone assessed by laser scanning microscope (LSM) and surface analytical techniques, the mechanism of fretting corrosion of tin-plated contacts and fretting corrosion maps are proposed. Lubrication increases the life-time of tin-plated contacts by several folds and proved to be a useful preventive strategy. r 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "485474467ef98bcc0a7e765bf97e34e4", "text": "In this study, the microbial ecology of three naturally fermented sausages produced in northeast Italy was studied by culture-dependent and -independent methods. By plating analysis, the predominance of lactic acid bacteria populations was pointed out, as well as the importance of coagulase-negative cocci. Also in the case of one fermentation, the fecal enterocci reached significant counts, highlighting their contribution to the particular transformation process. Yeast counts were higher than the detection limit (> 100 CFU/g) in only one fermented sausage. Analysis of the denaturing gradient gel electrophoresis (DGGE) patterns and sequencing of the bands allowed profiling of the microbial populations present in the sausages during fermentation. The bacterial ecology was mainly characterized by the stable presence of Lactobacillus curvatus and Lactobacillus sakei, but Lactobacillus paracasei was also repeatedly detected. An important piece of evidence was the presence of Lactococcus garvieae, which clearly contributed in two fermentations. Several species of Staphylococcus were also detected. Regarding other bacterial groups, Bacillus sp., Ruminococcus sp., and Macrococcus caseolyticus were also identified at the beginning of the transformations. In addition, yeast species belonging to Debaryomyces hansenii, several Candida species, and Willopsis saturnus were observed in the DGGE gels. 
Finally, cluster analysis of the bacterial and yeast DGGE profiles highlighted the uniqueness of the fermentation processes studied.", "title": "" }, { "docid": "a333cb68e8b851d9eb82be9971c5d98b", "text": "A frequency-domain-based delay estimator is described, designed speci cally for speech signals in a microphone-array environment. It is shown to be capable of obtaining precision delay estimates over a wide range of SNR conditions and is simple enough computationally to make it practical for real-time systems. A location algorithm based upon the delay estimator is then developed. With this algorithm it is possible to localize talker positions to a region only a few centimeters in diameter (not very di erent from the size of the source), and to track a moving source. Experimental results using data from a real 16-element array are presented to indicate the true performance of the algorithms.", "title": "" }, { "docid": "2a0577aa61ca1cbde207306fdb5beb08", "text": "In recent years, researchers have shown that unwanted web tracking is on the rise, as advertisers are trying to capitalize on users' online activity, using increasingly intrusive and sophisticated techniques. Among these, browser fingerprinting has received the most attention since it allows trackers to uniquely identify users despite the clearing of cookies and the use of a browser's private mode. In this paper, we investigate and quantify the fingerprintability of browser extensions, such as, AdBlock and Ghostery. We show that an extension's organic activity in a page's DOM can be used to infer its presence, and develop XHound, the first fully automated system for fingerprinting browser extensions. By applying XHound to the 10,000 most popular Google Chrome extensions, we find that a significant fraction of popular browser extensions are fingerprintable and could thus be used to supplement existing fingerprinting methods. Moreover, by surveying the installed extensions of 854 users, we discover that many users tend to install different sets of fingerprintable browser extensions and could thus be uniquely, or near-uniquely identifiable by extension-based fingerprinting. We use XHound's results to build a proof-of-concept extension-fingerprinting script and show that trackers can fingerprint tens of extensions in just a few seconds. Finally, we describe why the fingerprinting of extensions is more intrusive than the fingerprinting of other browser and system properties, and sketch two different approaches towards defending against extension-based fingerprinting.", "title": "" }, { "docid": "78c6ec58cec2607d5111ee415d683525", "text": "Forty-three normal hearing participants were tested in two experiments, which focused on temporal coincidence in auditory visual (AV) speech perception. In these experiments, audio recordings of/pa/and/ba/were dubbed onto video recordings of /ba/or/ga/, respectively (ApVk, AbVg), to produce the illusory \"fusion\" percepts /ta/, or /da/ [McGurk, H., & McDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-747]. In Experiment 1, an identification task using McGurk pairs with asynchronies ranging from -467 ms (auditory lead) to +467 ms was conducted. Fusion responses were prevalent over temporal asynchronies from -30 ms to +170 ms and more robust for audio lags. In Experiment 2, simultaneity judgments for incongruent and congruent audiovisual tokens (AdVd, AtVt) were collected. McGurk pairs were more readily judged as asynchronous than congruent pairs. 
Characteristics of the temporal window over which simultaneity and fusion responses were maximal were quite similar, suggesting the existence of a 200 ms duration asymmetric bimodal temporal integration window.", "title": "" }, { "docid": "fd5d08971b41e4d80b926416aa9d4c58", "text": "With the widespread deployment of broadband connections worldwide, software development and maintenance are increasingly being done by multiple engineers, often working around-theclock to maximize code churn rates. To ensure rapid quality assurance of such software, techniques such as “nightly/daily building and smoke testing” have become widespread since they often reveal bugs early in the software development process. During these builds, a development version of the software is checked out from the source code repository tree, compiled, linked, and (re)tested with the goal of (re)validating its basic functionality. Although successful for conventional software, smoke tests are difficult to develop and automatically rerun for software that has a graphical user interface (GUI). In this paper, we describe a framework called DART (Daily Automated Regression Tester) that addresses the needs of frequent and automated re-testing of GUI software. The key to our success is automation: DART automates everything from structural GUI analysis, smoke test case generation, test oracle creation, to code instrumentation, test execution, coverage evaluation, regeneration of test cases, and their re-execution. Together with the operating system’s task scheduler, DART can execute frequently with little input from the developer/tester to retest the GUI software. We provide results of experiments showing the time taken and memory required for GUI analysis, test case and test oracle generation, and test execution. We empirically compare the relative costs of employing different levels of detail in the GUI test oracle. We also show the events and statements covered by the smoke test cases.", "title": "" }, { "docid": "da6b8e2a985c20a4659f2436f7701c0e", "text": "The goal of this roadmap paper is to summarize the state-ofthe-art and to identify critical challenges for the systematic software engineering of self-adaptive systems. The paper is partitioned into four parts, one for each of the identified essential views of self-adaptation: modelling dimensions, requirements, engineering, and assurances. For each view, we present the state-of-the-art and the challenges that our community must address. This roadmap paper is a result of the Dagstuhl Seminar 08031 on “Software Engineering for Self-Adaptive Systems, ” which took place", "title": "" }, { "docid": "6a91acaea9a7c668675b2c9d82e34ca0", "text": "Recent years have seen increasing interest in systems that reason about and manipulate executable code. Such systems can generally benefit from information about aliasing. Unfortunately, most existing alias analyses are formulated in terms of high-level language features, and are unable to cope with features, such as pointer arithmetic, that pervade executable programs. This paper describes a simple algorithm that can be used to obtain aliasing information for executabie code. In order to be practical, the algorithm is carefut to keep its memory requirements low, sacrificing precision where necessary to achieve this goal. 
Experimental results indicate that it is nevertheless able to provide a reasonable amount of information about memory references across a variety of benchmark programs.", "title": "" }, { "docid": "af2c56d5b54d45d075be451d0c08be63", "text": "In this paper we present a novel transliteration technique which is based on deep belief networks. Common approaches use finite state machines or other methods similar to conventional machine translation. Instead of using conventional NLP techniques, the approach presented here builds on deep belief networks, a technique which was shown to work well for other machine learning problems. We show that deep belief networks have certain properties which are very interesting for transliteration and possibly also for translation and that a combination with conventional techniques leads to an improvement over both components on an Arabic-English transliteration task.", "title": "" }, { "docid": "75c2b1565c61136bf014d5e67eb52daf", "text": "This paper describes a system for dense depth estimation for multiple images in real-time. The algorithm runs almost entirely on standard graphics hardware, leaving the main CPU free for other tasks as image capture, compression and storage during scene capture. We follow a plain-sweep approach extended by truncated SSD scores, shiftable windows and best camera selection. We do not need specialized hardware and exploit the computational power of freely programmable PC graphics hardware. Dense depth maps are computed with up to 20 fps.", "title": "" } ]
scidocsrr
47505fa90eb189a429ee15ef05c0d58a
Improved Stochastic Gradient Descent Algorithm for SVM
[ { "docid": "e5a7acf6980c93c1d4fe91797a5c119f", "text": "Online algorithms that process one example at a time are advantageous when dealing with very large data or with data streams. Stochastic gradient descent (SGD) is such an algorithm and it is an attractive choice for online SVM training due to its simplicity and effectiveness. When equipped with kernel functions, similarly to other SVM learning algorithms, SGD is susceptible to “the curse of kernelization” that causes unbounded linear growth in model size and update time with data size. This may render SGD inapplicable to large data sets. We address this issue by presenting a class of Budgeted SGD (BSGD) algorithms for large-scale kernel SVM training which have constant space and time complexity per update. BSGD keeps the number of support vectors bounded during training through several budget maintenance strategies. We treat the budget maintenance as a source of the gradient error, and relate the gap between the BSGD and the optimal SVM solutions via the average model degradation due to budget maintenance. To minimize the gap, we study greedy budget maintenance methods based on removal, projection, and merging of support vectors. We propose budgeted versions of several popular online SVM algorithms that belong to the SGD family. We further derive BSGD algorithms for multi-class SVM training. Comprehensive empirical results show that BSGD achieves much higher accuracy than the state-of-the-art budgeted online algorithms and comparable to non-budget algorithms, while achieving impressive computational efficiency both in time and space during training and prediction.", "title": "" } ]
[ { "docid": "f3af9103e6c71f5bf974e2f2be5059a4", "text": "The temporal availability of propagules is a critical factor in sustaining pioneer riparian tree populations along snowmelt-driven rivers because seedling establishment is strongly linked to seasonal hydrology. River regulation in semi-arid regions threatens to decouple seed development and dispersal from the discharge regime to which they evolved. Using the lower Tuolumne River as a model system, we quantified and modeled propagule availability for Populus fremontii (POFR), Salix gooddingii (SAGO), and Salix exigua (SAEX), the tree and shrub species that dominate near-channel riparian stands in the San Joaquin Basin, CA. A degree-day model was fit to field data of seasonal seed density and local temperature from three sites in 2002–2004 to predict the onset of the peak dispersal period. To evaluate historical synchrony of seed dispersal and seasonal river hydrology, we compared peak spring runoff timing to modeled peak seed release periods for the last 75 years. The peak seed release period began on May 15 for POFR (range April 23–June 10), May 30 for SAGO (range May 19–June 11) and May 31 for SAEX (range May 8–June 30). Degree-day models for the onset of seed release reduced prediction error by 40–67% over day-of-year means; the models predicted best the interannual, versus site-to-site, variation in timing. The historical analysis suggests that POFR seed release coincided with peak runoff in almost all years, whereas SAGO and SAEX dispersal occurred during the spring flood recession. The degree-day modeling approach reduce uncertainty in dispersal timing and shows potential for guiding flow releases on regulated rivers to increase riparian tree recruitment at the lowest water cost.", "title": "" }, { "docid": "d258a14fc9e64ba612f2c8ea77f85d08", "text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. 
We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.", "title": "" }, { "docid": "d1ef00d0860b0cab22280415c17430cb", "text": "The FreeBSD project has been engaged in ongoing work to provide scalable support for multi-processor computer systems since version 5. Sufficient progress has been made that the C library’s malloc(3) memory allocator is now a potential bottleneck for multi-threaded applications running on multiprocessor systems. In this paper, I present a new memory allocator that builds on the state of the art to provide scalable concurrent allocation for applications. Benchmarks indicate that with this allocator, memory allocation for multi-threaded applications scales well as the number of processors increases. At the same time, single-threaded allocation performance is similar to the previous allocator implementation.", "title": "" }, { "docid": "fe70c7614c0414347ff3c8bce7da47e7", "text": "We explore a model of stress prediction in Russian using a combination of local contextual features and linguisticallymotivated features associated with the word’s stem and suffix. We frame this as a ranking problem, where the objective is to rank the pronunciation with the correct stress above those with incorrect stress. We train our models using a simple Maximum Entropy ranking framework allowing for efficient prediction. An empirical evaluation shows that a model combining the local contextual features and the linguistically-motivated non-local features performs best in identifying both primary and secondary stress.", "title": "" }, { "docid": "917e970b54d5c1e11750ddbbe21eaa77", "text": "The vision of the smart home is increasingly becoming reality. Devices become smart, interconnected and accessible through the Internet. In the classical building automation domain already a lot of devices are interconnected providing interfaces for control or collecting data. Unfortunately for historical reasons they use specialized protocols for their communication hampering the integration into newly introduced smart home technologies. In order to make use of the valuable information gateways are required. BACnet as a protocol of the building automation domain already can make use of IP and defined a way to represent building data as Web services in general, called BACnet/WS. But using full fledged Web services would require too much resources in the scenario of smart home thus we need a more resource friendly solution. In this work a Devices Profile for Web Services (DPWS) adaptation of the BACnet/WS specification is proposed. DPWS enables Web service conform communication with a focus on a small footprint, which in turn enables interdisciplinary communication of constrained devices.", "title": "" }, { "docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21", "text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. 
Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.", "title": "" }, { "docid": "4d714135a7fba7a01e65bd20cba68a8f", "text": "Internal control regulation effectiveness remains controversial given the recent financial crisis. To address this issue we examine the financial reporting effects of the Federal Depository Insurance Corporation Improvement Act (FDICIA) internal control provisions. Exemptions from these provisions for banks with assets under $500 million and for non-US banks provides two unaffected control samples. Our difference-indifferences method suggests that FDICIA-mandated internal control requirements increased loan-loss provision validity, earnings persistence and cash-flow predictability and reduced benchmark-beating and accounting conservatism for affected versus unaffected banks. More pronounced effects in interim versus fourth quarters suggest that greater auditor presence substitutes for internal control regulation.", "title": "" }, { "docid": "1ec97b7acdd514a200499f6d8f96ab61", "text": "When Tarzan asks Jane Do you like my friends? and Jane answers Some of them, her underinformative reply implicates Not all of them. This scalar inference arises when a lessthan-maximally-informative utterance implies the denial of a more informative proposition. Default Inference accounts (e.g. Levinson, 1983; 2000) argue that this inference is linked to lexical items (e.g. some) and is generated automatically and largely independently of context. Alternatively, Relevance Theory (Sperber and Wilson, 1986/1995) treats such inferences as contextual and as arriving effortfully with deeper processing of utterances. We compare these accounts in four experiments that employ a sentence verification paradigm. We focus on underinformative sentences, such as Some elephants are mammals, because these are false with a scalar inference and true without it. Experiment 1 shows that participants are less accurate and take significantly longer to answer correctly when instructions call for a Some but not all interpretation rather than a Some and possibly all interpretation. Experiment 2, which modified the paradigm of Experiment 1 so that correct responses to both interpretations resulted in the same overt response, reports results that confirm those of the first Experiment. Experiment 3, which imposed no interpretations, reveals that those who employed a Some but not all reading to the underinformative items took longest to respond. Experiment 4 shows that the rate of scalar inferences increased as a permitted response time did. These results argue against a neo-Gricean account and in favor of Relevance theory. Time course of scalars -3 There is a growing body of psycholinguistic work that focuses on the comprehension of logical terms. These studies can be broken down into two sets. One investigates the way logical inferences are made on-line in the context of story comprehension (e.g. Lea, O'Brien, Fisch, Noveck, & et al., 1990; Lea, 1995) . In this approach, the comprehension of a term like or is considered to be tantamount to knowing logical inference schemas attached to it. 
For or it would be or-elimination (where the two premises p or q; not-q – imply p). The other line of research investigates non-standard existential quantifiers, such as few or a few, demonstrating how the meanings of quantifiers besides conveying notions about amount transmit information about the speaker’s prior expectations as well as indicate where the addressee ought to place her focus (Moxey, Sanford, & Dawydiak, 2001; Paterson, Sanford, Moxey, & Dawydiak, 1998, Sanford, Moxey, & Paterson, 1996). For example, positive quantifiers like a few, put the focus on the quantified objects (e.g. those who got to the match in A few of the fans went to the match) while negative quantifiers like few place the focus on the quantified objects’ complement (e.g. those fans who did not go to the match in Few of the fans went to the match). In the present paper, we investigate a class of inference which we will refer to as a scalar inference that is orthogonal to the ones discussed above, but is arguably central to the way listeners treat logical terms. These arise when a less-than-maximally-informative utterance is taken to imply the denial of the more informative proposition (or else to imply a lack of knowledge concerning the more informative one). Consider the following dialogues: 1) Peter: Are Cheryl and Tony coming for dinner? Jill: We are going to have Cheryl or Tony. Time course of scalars -4 2) John: Did you get to meet all of my friends? Robyn: Some of them. In (1), Jill’s statement can be taken to mean that not both Cheryl and Tony are coming for dinner and, in (2), that Robyn did not meet all of John’s friends. These interpretations are the result of scalar inferences, which we will describe in detail below. Before we do so, note that the responses in each case are compatible with the questioner’s stronger expectation from a strictly logical point of view; if Jill knows that both Cheryl and Tony are coming, her reply is still true and if in fact Robyn did meet all of John’s friends, she also spoke truthfully. Or is logically compatible with and and some is logically compatible with all. Linguistic background Scalar inferences are examples of what Paul Grice (1989) called generalized implicatures as he aimed to reconcile logical terms with their non-logical meanings. Grice, who was especially concerned by propositional connectives, focused on logical terms that become, through conversational contexts, part of the speaker’s overall meaning. In one prime example, he described how the disjunction or has a weak sense, which is compatible with formal logic’s ∨ (the inclusive-or), but as benefiting from a stronger sense (but not both) through conversational uses (making the disjunction exclusive). What the disjunction says, he argued, is compatible with the weaker sense, but through conversational principles it often means the stronger one. Any modern account of the way logical terms are understood in context would not be complete without considering these pragmatic inferences. Grice’s generalized implicatures were assumed to occur very systematically although the context may be such that they do not occur. These were contrasted with particularized implicatures, which were assumed to be less systematic and always clearly context Time course of scalars -5 dependent. 
His reasons for making the distinction had to do with his debates with fellow philosophers on the meaning of logical connectives and of quantifiers, and not with the goal of providing a processing model of comprehension, and there is some vagueness in his view of the exact role of the context in the case of generalized implicatures (see Carston, 2002, pages 107-116). In summary, Grice can be said to have inspired work on implicatures (by providing a framework), but there is not enough in the theory to describe, for example, how a scalar inference manifests itself in real time. Pragmatic theorists, who have followed up on Grice and are keen on describing how scalar inferences actually work, can be divided into two camps. On the one hand, there are those who assume that the inference generally goes through unless subsequently cancelled by the context. That is, scalars operate on the (relatively weak) terms the speaker’s choice of a weak term implies the rejection of a stronger term from the same scale. To elucidate with disjunctions, the connectives or and and may be viewed as part of a scale (<or, and>), where and constitutes the more informative element of the scale (since p and q entails p or q). In the event that a speaker chooses to utter a disjunctive sentence, p or q, the hearer will take it as suggesting that the speaker either has no evidence that a stronger element in the scale, i.e. p and q, holds or that she perhaps has evidence that it does not hold. Presuming that the speaker is cooperative and well informed the hearer will tend to infer that it is not the case that p and q hold, thereby interpreting the disjunction as exclusive. A strong default approach has been defended by Neo-Griceans like Levinson (2000) and to some extent by Horn (1984, page 13). More recently, Chierchia (in press) and colleagues (Chierchia, Guasti, Gualmini, Meroni, and Crain, 2001) have essentially defended the strong default view by making a syntactic distinction with respect to scalar terms: When a scalar is embedded in a downward-entailing context (e.g. negations and question forms), Chierchia and colleagues predict that one would Time course of scalars -6 not find the production of scalar inferences (also see Noveck et al., 2002). Otherwise, Chierchia and colleagues do assume that scalar inferences go through. For the sake of exposition, we focus on Levinson (2000) because he has provided the most extensive proposal for the way pragmatically enriched “default” or “preferred” meanings of weak scalar terms are put in place. Scalars are considered by Levinson to result from a Q-heuristic, dictating that “What isn’t said isn’t (the case).” It is named Q because it is directly related to Grice’s (1989) first maxim of quantity: Make your utterance as informative as is required. In other words, this proposal assumes that scalars are general and automatic. When one hears a weak scalar term like or, some, might etc. the default assumption is that the speaker knows that a stronger term from the same scale is not warranted or that she does not have enough information to know whether the stronger term is called for. Default means that relatively weak terms prompt the inference automatically or becomes not both, some becomes some but not all etc. Also, a scalar inference can be cancelled. If this happens, it occurs subsequent to the production of the scalar term. On the other hand, there are pragmatists who argue against the default view and in favor of a more contextual account. 
Such an account assumes that an utterance can be inferentially enriched in order to better appreciate the speaker’s intention, but this is not done on specific words as a first step to arrive at a default meaning. We focus on Relevance Theory because it arguably presents the most extensive contextualist view of pragmatic inferences in general and of scalar inferences in particular (see Post face of Sperber and Wilson, 1995). According to this account, a scalar is but one example of pragmatic inferences which arise when a speaker intends and expects a hearer to draw an interpretation of an utterance that is relevant enough. How far the hearer goes in processing an utterance’s meaning is governed by principles concerning effect and effort; namely, listeners try to gain as many effects as possible for the least effort. Time course of scalars -7 A non-enriched interpretation of a scalar term (the one that more closely coincides with the word’s meaning) could very well lead to a satisfying interpretation of this term in an utterance. Consider Some m", "title": "" }, { "docid": "ffb87dc7922fd1a3d2a132c923eff57d", "text": "It has been suggested that pulmonary artery pressure at the end of ejection is close to mean pulmonary artery pressure, thus contributing to the optimization of external power from the right ventricle. We tested the hypothesis that dicrotic notch and mean pulmonary artery pressures could be of similar magnitude in 15 men (50 +/- 12 yr) referred to our laboratory for diagnostic right and left heart catheterization. Beat-to-beat relationships between dicrotic notch and mean pulmonary artery pressures were studied 1) at rest over 10 consecutive beats and 2) in 5 patients during the Valsalva maneuver (178 beats studied). At rest, there was no difference between dicrotic notch and mean pulmonary artery pressures (21.8 +/- 12.0 vs. 21.9 +/- 11.1 mmHg). There was a strong linear relationship between dicrotic notch and mean pressures 1) over the 10 consecutive beats studied in each patient (mean r = 0.93), 2) over the 150 resting beats (r = 0.99), and 3) during the Valsalva maneuver in each patient (r = 0.98-0.99) and in the overall beats (r = 0.99). The difference between dicrotic notch and mean pressures was -0.1 +/- 1.7 mmHg at rest and -1.5 +/- 2.3 mmHg during the Valsalva maneuver. Substitution of the mean pulmonary artery pressure by the dicrotic notch pressure in the standard formula of the pulmonary vascular resistance (PVR) resulted in an equation relating linearly end-systolic pressure and stroke volume. The slope of this relation had the dimension of a volume elastance (in mmHg/ml), a simple estimate of volume elastance being obtained as 1.06(PVR/T), where T is duration of the cardiac cycle. In conclusion, dicrotic notch pressure was of similar magnitude as mean pulmonary artery pressure. These results confirmed our primary hypothesis and indicated that human pulmonary artery can be treated as if it is an elastic chamber with a volume elastance of 1.06(PVR/T).", "title": "" }, { "docid": "5915dd433e50ae74ebcfe50229b27e58", "text": "Ultrasound imaging of thyroid gland provides the ability to acquire valuable information for medical diagnosis. This study presents a novel scheme for the analysis of longitudinal ultrasound images aiming at efficient and effective computer-aided detection of thyroid nodules. 
The proposed scheme involves two phases: a) application of a novel algorithm for the detection of the boundaries of the thyroid gland and b) detection of thyroid nodules via classification of Local Binary Pattern feature vectors extracted only from the area between the thyroid boundaries. Extensive experiments were performed on a set of B-mode thyroid ultrasound images. The results show that the proposed scheme is a faster and more accurate alternative for thyroid ultrasound image analysis than the conventional, exhaustive feature extraction and classification scheme.", "title": "" }, { "docid": "872f556cb441d9c8976e2bf03ebd62ee", "text": "Monitoring is an issue of primary concern in current and next generation networked systems. For ex, the objective of sensor networks is to monitor their surroundings for a variety of different applications like atmospheric conditions, wildlife behavior, and troop movements among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called \"thresholded counts\" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value.In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. We find that both approaches yield significant savings over the naive approach of centralized processing.", "title": "" }, { "docid": "1ac1c6f30b0a306b7c9f643f83fb4731", "text": "As a bridge to connect vision and language, visual relations between objects in the form of relation triplet $łangle subject,predicate,object\\rangle$, such as \"person-touch-dog'' and \"cat-above-sofa'', provide a more comprehensive visual content understanding beyond objects. In this paper, we propose a novel vision task named Video Visual Relation Detection (VidVRD) to perform visual relation detection in videos instead of still images (ImgVRD). As compared to still images, videos provide a more natural set of features for detecting visual relations, such as the dynamic relations like \"A-follow-B'' and \"A-towards-B'', and temporally changing relations like \"A-chase-B'' followed by \"A-hold-B''. However, VidVRD is technically more challenging than ImgVRD due to the difficulties in accurate object tracking and diverse relation appearances in video domain. 
To this end, we propose a VidVRD method, which consists of object tracklet proposal, short-term relation prediction and greedy relational association. Moreover, we contribute the first dataset for VidVRD evaluation, which contains 1,000 videos with manually labeled visual relations, to validate our proposed method. On this dataset, our method achieves the best performance in comparison with the state-of-the-art baselines.", "title": "" }, { "docid": "d96ab69ee3d31f9e8a5b447d7dc5f5fb", "text": "Heteroepitaxy between transition-metal dichalcogenide (TMDC) monolayers can fabricate atomically thin semiconductor heterojunctions without interfacial contamination, which are essential for next-generation electronics and optoelectronics. Here we report a controllable two-step chemical vapor deposition (CVD) process for lateral and vertical heteroepitaxy between monolayer WS2 and MoS2 on a c-cut sapphire substrate. Lateral and vertical heteroepitaxy can be selectively achieved by carefully controlling the growth of MoS2 monolayers that are used as two-dimensional (2D) seed crystals. Using hydrogen as a carrier gas, we synthesize ultraclean MoS2 monolayers, which enable lateral heteroepitaxial growth of monolayer WS2 from the MoS2 edges to create atomically coherent and sharp in-plane WS2/MoS2 heterojunctions. When no hydrogen is used, we obtain MoS2 monolayers decorated with small particles along the edges, inducing vertical heteroepitaxial growth of monolayer WS2 on top of the MoS2 to form vertical WS2/MoS2 heterojunctions. Our lateral and vertical atomic layer heteroepitaxy steered by seed defect engineering opens up a new route toward atomically controlled fabrication of 2D heterojunction architectures.", "title": "" }, { "docid": "ff163abbdfa5db81f54fc42aa52ab0c3", "text": "Drawing on the self-system model, this study conceptualized school engagement as a multidimensional construct, including behavioral, emotional, and cognitive engagement, and examined whether changes in the three types of school engagement related to changes in problem behaviors from 7th through 11th grades (approximately ages 12-17). In addition, a transactional model of reciprocal relations between school engagement and problem behaviors was tested to predict school dropout. Data were collected on 1,272 youth from an ethnically and economically diverse county (58% African American, 36% European American; 51% females). Results indicated that adolescents who had declines in behavioral and emotional engagement with school tended to have increased delinquency and substance use over time. There were bidirectional associations between behavioral and emotional engagement in school and youth problem behaviors over time. Finally, lower behavioral and emotional engagement and greater problem behaviors predicted greater likelihood of dropping out of school.", "title": "" }, { "docid": "3fe2cb22ac6aa37d8f9d16dea97649c5", "text": "The term biosensors encompasses devices that have the potential to quantify physiological, immunological and behavioural responses of livestock and multiple animal species. Novel biosensing methodologies offer highly specialised monitoring devices for the specific measurement of individual and multiple parameters covering an animal's physiology as well as monitoring of an animal's environment. These devices are not only highly specific and sensitive for the parameters being analysed, but they are also reliable and easy to use, and can accelerate the monitoring process. 
Novel biosensors in livestock management provide significant benefits and applications in disease detection and isolation, health monitoring and detection of reproductive cycles, as well as monitoring physiological wellbeing of the animal via analysis of the animal's environment. With the development of integrated systems and the Internet of Things, the continuously monitoring devices are expected to become affordable. The data generated from integrated livestock monitoring is anticipated to assist farmers and the agricultural industry to improve animal productivity in the future. The data is expected to reduce the impact of the livestock industry on the environment, while at the same time driving the new wave towards the improvements of viable farming techniques. This review focusses on the emerging technological advancements in monitoring of livestock health for detailed, precise information on productivity, as well as physiology and well-being. Biosensors will contribute to the 4th revolution in agriculture by incorporating innovative technologies into cost-effective diagnostic methods that can mitigate the potentially catastrophic effects of infectious outbreaks in farmed animals.", "title": "" }, { "docid": "9311198676b2cc5ad31145c53c91134d", "text": "A novel fractal called Fractal Clover Leaf (FCL) is introduced and shown to have well miniaturization capabilities. The proposed patches are fed by L-shape probe to achieve wide bandwidth operation in PCS band. A numerical parametric study on the proposed antenna is presented. It is found that the antenna can attain more than 72% size reduction as well as 17% impedance bandwidth (VSWR<2), in cost of less gain. It is also shown that impedance matching could be reached by tuning probe parameters. The proposed antenna is suitable for handset applications and tight packed planar phased arrays to achieve lower scan angels than rectangular patches.", "title": "" }, { "docid": "40f8240220dad82a7a2da33932fb0e73", "text": "The incidence of clinically evident Curling's ulcer among 109 potentially salvageable severely burned patients was reviewed. These patients, who had greater than a 40 per cent body surface area burn, received one of these three treatment regimens: antacids hourly until autografting was complete, antacids hourly during the early postburn period followed by nutritional supplementation with Vivonex until autografting was complete or no antacids during the early postburn period but subsequent nutritional supplementation with Vivonex until autografting was complete. Clinically evident Curling's ulcer occurred in three patients. This incidence approximates the lowest reported among severely burned patients treated prophylactically with acid-reducing regimens to minimize clinically evident Curling's ulcer. In addition to its protective effect on Curling's ulcer, Vivonex, when used in combination with a high protein, high caloric diet, meets the caloric needs of the severely burned patient. Probably, Vivonex, which has a pH range of 4.5 to 5.4 protects against clinically evident Curling's ulcer by a dilutional alkalinization of gastric secretion.", "title": "" }, { "docid": "34508dac189b31c210d461682fed9f67", "text": "Life is more than cat pictures. There are tough days, heartbreak, and hugs. Under what contexts do people share these feelings online, and how do their friends respond? 
Using millions of de-identified Facebook status updates with poster-annotated feelings (e.g., “feeling thankful” or “feeling worried”), we examine the magnitude and circumstances in which people share positive or negative feelings and characterize the nature of the responses they receive. We find that people share greater proportions of both positive and negative emotions when their friend networks are smaller and denser. Consistent with social sharing theory, hearing about a friend’s troubles on Facebook causes friends to reply with more emotional and supportive comments. Friends’ comments are also more numerous and longer. Posts with positive feelings, on the other hand, receive more likes, and their comments have more positive language. Feelings that relate to the poster’s self worth, such as “feeling defeated,” “feeling unloved,” or “feeling accomplished” amplify these effects.", "title": "" }, { "docid": "5d4797cffc06cbde079bf4019dc196db", "text": "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)&#x2014;a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.", "title": "" }, { "docid": "998fe25641f4f6dc6649b02226c5e86a", "text": "We present the malicious administrator problem, in which one or more network administrators attempt to damage routing, forwarding, or network availability by misconfiguring controllers. While this threat vector has been acknowledged in previous work, most solutions have focused on enforcing specific policies for forwarding rules. We present a definition of this problem and a controller design called Fleet that makes a first step towards addressing this problem. We present two protocols that can be used with the Fleet controller, and argue that its lower layer deployed on top of switches eliminates many problems of using multiple controllers in SDNs. We then present a prototype simulation and show that as long as a majority of non-malicious administrators exists, we can usually recover from link failures within several seconds (a time dominated by failure detection speed and inter-administrator latency).", "title": "" } ]
scidocsrr
3ad0cd7dc7167ddcfee192b8d413736b
Geometric Loss Functions for Camera Pose Regression with Deep Learning
[ { "docid": "5c62f66d948f15cea55c1d2c9d10f229", "text": "This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.", "title": "" }, { "docid": "acefbbb42607f2d478a16448644bd6e6", "text": "The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http://homes.cs.washington.edu/~ccwu/vsfm/.", "title": "" } ]
[ { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "e82e4599a7734c9b0292a32f551dd411", "text": "Generating a text abstract from a set of documents remains a challenging task. The neural encoder-decoder framework has recently been exploited to summarize single documents, but its success can in part be attributed to the availability of large parallel data automatically acquired from the Web. In contrast, parallel data for multi-document summarization are scarce and costly to obtain. There is a pressing need to adapt an encoder-decoder model trained on single-document summarization data to work with multiple-document input. In this paper, we present an initial investigation into a novel adaptation method. It exploits the maximal marginal relevance method to select representative sentences from multi-document input, and leverages an abstractive encoder-decoder model to fuse disparate sentences to an abstractive summary. The adaptation method is robust and itself requires no training data. Our system compares favorably to state-of-the-art extractive and abstractive approaches judged by automatic metrics and human assessors.", "title": "" }, { "docid": "2ddc4919771402dabedd2020649d1938", "text": "Increase in energy demand has made the renewable resources more attractive. Additionally, use of renewable energy sources reduces combustion of fossil fuels and the consequent CO2 emission which is the principal cause of global warming. 
The concept of photovoltaic-Wind hybrid system is well known and currently thousands of PV-Wind based power systems are being deployed worldwide, for providing power to small, remote, grid-independent applications. This paper shows the way to design the aspects of a hybrid power system that will target remote users. It emphasizes the renewable hybrid power system to obtain a reliable autonomous system with the optimization of the components size and the improvement of the cost. The system can provide electricity for a remote located village. The main power of the hybrid system comes from the photovoltaic panels and wind generators, while the batteries are used as backup units. The optimization software used for this paper is HOMER. HOMER is a design model that determines the optimal architecture and control strategy of the hybrid system. The simulation results indicate that the proposed hybrid system would be a feasible solution for distributed generation of electric power for stand-alone applications at remote locations", "title": "" }, { "docid": "a4c17b823d325ed5f339f78cd4d1e9ab", "text": "A 34–40 GHz VCO fabricated in 65 nm digital CMOS technology is demonstrated in this paper. The VCO uses a combination of switched capacitors and varactors for tuning and has a maximum Kvco of 240 MHz/V. It exhibits a phase noise of better than −98 dBc/Hz @ 1-MHz offset across the band while consuming 12 mA from a 1.2-V supply, an FOMT of −182.1 dBc/Hz. A cascode buffer following the VCO consumes 11 mA to deliver 0 dBm LO signal to a 50Ω load.", "title": "" }, { "docid": "308effb16ccec5e315da4d02119080d0", "text": "In this paper, we describe a method to photogrammetrically estimate the intrinsic and extrinsic parameters of fish-eye cameras using the properties of equidistance perspective, particularly vanishing point estimation, with the aim of providing a rectified image for scene viewing applications. The estimated intrinsic parameters are the optical center and the fish-eye lensing parameter, and the extrinsic parameters are the rotations about the world axes relative to the checkerboard calibration diagram.", "title": "" }, { "docid": "a38eef36ae38baf83c55262fbdd26278", "text": "An electrochemical sensor based on the electrocatalytic activity of functionalized graphene for sensitive detection of paracetamol is presented. The electrochemical behaviors of paracetamol on graphene-modified glassy carbon electrodes (GCEs) were investigated by cyclic voltammetry and square-wave voltammetry. The results showed that the graphene-modified electrode exhibited excellent electrocatalytic activity to paracetamol. A quasi-reversible redox process of paracetamol at the modified electrode was obtained, and the over-potential of paracetamol decreased significantly compared with that at the bare GCE. Such electrocatalytic behavior of graphene is attributed to its unique physical and chemical properties, e.g., subtle electronic characteristics, attractive pi-pi interaction, and strong adsorptive capability. This electrochemical sensor shows an excellent performance for detecting paracetamol with a detection limit of 3.2x10(-8)M, a reproducibility of 5.2% relative standard deviation, and a satisfied recovery from 96.4% to 103.3%. 
The sensor shows great promise for simple, sensitive, and quantitative detection and screening of paracetamol.", "title": "" }, { "docid": "1fb13cda340d685289f1863bb2bfd62b", "text": "1 Assistant Professor, Department of Prosthodontics, Ibn-e-Siena Hospital and Research Institute, Multan Medical and Dental College, Multan, Pakistan 2 Assistant Professor, Department of Prosthodontics, College of Dentistry, King Saud University, Riyadh, Saudi Arabia 3 Head Department of Prosthodontics, Armed Forces Institute of Dentistry, Rawalpindi, Pakistan For Correspondence: Dr Salman Ahmad, House No 10, Street No 2, Gulshan Sakhi Sultan Colony, Surej Miani Road, Multan, Pakistan. Email: drsalman21@gmail.com. Cell: 0300–8732017 INTRODUCTION", "title": "" }, { "docid": "332bcd9b49f3551d8f07e4f21a881804", "text": "Attention plays a critical role in effective learning. By means of attention assessment, it helps learners improve and review their learning processes, and even discover Attention Deficit Hyperactivity Disorder (ADHD). Hence, this work employs modified smart glasses which have an inward facing camera for eye tracking, and an inertial measurement unit for head pose estimation. The proposed attention estimation system consists of eye movement detection, head pose estimation, and machine learning. In eye movement detection, the central point of the iris is found by the locally maximum curve via the Hough transform where the region of interest is derived by the identified left and right eye corners. The head pose estimation is based on the captured inertial data to generate physical features for machine learning. Here, the machine learning adopts Genetic Algorithm (GA)-Support Vector Machine (SVM) where the feature selection of Sequential Floating Forward Selection (SFFS) is employed to determine adequate features, and GA is to optimize the parameters of SVM. Our experiments reveal that the proposed attention estimation system can achieve the accuracy of 93.1% which is fairly good as compared to the conventional systems. Therefore, the proposed system embedded in smart glasses brings users mobile, convenient, and comfortable to assess their attention on learning or medical symptom checker.", "title": "" }, { "docid": "7ee5886ae2df12f12d65f5080561ecc6", "text": "Sliding mode control schemes of the static and dynamic types are proposed for the control of a magnetic levitation system. The proposed controllers guarantee the asymptotic regulation of the states of the system to their desired values. Simulation results of the proposed controllers are given to illustrate the effectiveness of them. Robustness of the control schemes to changes in the parameters of the system is also investigated.", "title": "" }, { "docid": "a604527951768b088fe2e40104fa78bb", "text": "In this study, the Multi-Layer Perceptron (MLP)with Back-Propagation learning algorithm are used to classify to effective diagnosis Parkinsons disease(PD).It’s a challenging problem for medical community.Typically characterized by tremor, PD occurs due to the loss of dopamine in the brains thalamic region that results in involuntary or oscillatory movement in the body. 
A feature selection algorithm along with biomedical test values to diagnose Parkinson disease.Clinical diagnosis is done mostly by doctor’s expertise and experience.But still cases are reported of wrong diagnosis and treatment.Patients are asked to take number of tests for diagnosis.In many cases,not all the tests contribute towards effective diagnosis of a disease.Our work is to classify the presence of Parkinson disease with reduced number of attributes.Original,22 attributes are involved in classify.We use Information Gain to determine the attributes which reduced the number of attributes which is need to be taken from patients.The Artificial neural networks is used to classify the diagnosis of patients.Twenty-Two attributes are reduced to sixteen attributes.The accuracy is in training data set is 82.051% and in the validation data set is 83.333%. Keywords—Data mining , classification , Parkinson disease , Artificial neural networks , Feature Selection , Information Gain", "title": "" }, { "docid": "3c80aa753cac4bebd8c6808a361973c7", "text": "We develop a computer-assisted method for the discovery of insightful conceptualizations, in the form of clusterings (i.e., partitions) of input objects. Each of the numerous fully automated methods of cluster analysis proposed in statistics, computer science, and biology optimize a different objective function. Almost all are well defined, but how to determine before the fact which one, if any, will partition a given set of objects in an \"insightful\" or \"useful\" way for a given user is unknown and difficult, if not logically impossible. We develop a metric space of partitions from all existing cluster analysis methods applied to a given dataset (along with millions of other solutions we add based on combinations of existing clusterings) and enable a user to explore and interact with it and quickly reveal or prompt useful or insightful conceptualizations. In addition, although it is uncommon to do so in unsupervised learning problems, we offer and implement evaluation designs that make our computer-assisted approach vulnerable to being proven suboptimal in specific data types. We demonstrate that our approach facilitates more efficient and insightful discovery of useful information than expert human coders or many existing fully automated methods.", "title": "" }, { "docid": "b4284204ae7d9ef39091a651583b3450", "text": "Embedding learning, a.k.a. representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs. In recent publications the embedding models were extended to also consider temporal evolutions, temporal patterns and subsymbolic representations. In this paper we map embedding models, which were developed purely as solutions to technical problems for modelling temporal knowledge graphs, to various cognitive memory functions, in particular to semantic and concept memory, episodic memory, sensory memory, short-term memory, and working memory. We discuss learning, query answering, the path from sensory input to semantic decoding, and relationships between episodic memory and semantic memory. We introduce a number of hypotheses on human memory that can be derived from the developed mathematical models. 
There are three main hypotheses. The first one is that semantic memory is described as triples and that episodic memory is described as triples in time. A second main hypothesis is that generalized entities have unique latent representations which are shared across memory functions and that are the basis for prediction, decision support and other functionalities executed by working memory. A third main hypothesis is that the latent representation for a time t, which summarizes all sensory information available at time t, is the basis for episodic memory. The proposed model includes both a recall of previous memories and the mental imagery of future events and sensory impressions.", "title": "" }, { "docid": "f42648f411cbf7e31940acf81bf107d0", "text": "Observing that the creation of certain types of artistic artifacts necessitate intelligence, we present the Lovelace 2.0 Test of creativity as an alternative to the Turing Test as a means of determining whether an agent is intelligent. The Lovelace 2.0 Test builds off prior tests of creativity and additionally provides a means of directly comparing the relative intelligence of different agents.", "title": "" }, { "docid": "bac01694d6b578b5b873d5de131cb844", "text": "The methylotrophic yeast Komagataella phaffii (Pichia pastoris) has been developed into a highly successful system for heterologous protein expression in both academia and industry. However, overexpression of recombinant protein often leads to severe burden on the physiology of K. phaffii and triggers cellular stress. To elucidate the global effect of protein overexpression, we set out to analyze the differential transcriptome of recombinant strains with 12 copies and a single copy of phospholipase A2 gene (PLA 2) from Streptomyces violaceoruber. Through GO, KEGG and heat map analysis of significantly differentially expressed genes, the results indicated that the 12-copy strain suffered heavy cellular stress. The genes involved in protein processing and stress response were significantly upregulated due to the burden of protein folding and secretion, while the genes in ribosome and DNA replication were significantly downregulated possibly contributing to the reduced cell growth rate under protein overexpression stress. Three most upregulated heat shock response genes (CPR6, FES1, and STI1) were co-overexpressed in K. phaffii and proved their positive effect on the secretion of reporter enzymes (PLA2 and prolyl endopeptidase) by increasing the production up to 1.41-fold, providing novel helper factors for rational engineering of K. phaffii.", "title": "" }, { "docid": "a608f681a3833d932bf723ca26dfe511", "text": "The purpose of the study was to explore whether personality traits moderate the association between social comparison on Facebook and subjective well-being, measured as both life satisfaction and eudaimonic well-being. Data were collected via an online questionnaire which measured Facebook use, social comparison behavior and personality traits for 337 respondents. The results showed positive associations between Facebook intensity and both measures of subjective well-being, and negative associations between Facebook social comparison and both measures of subjective well-being. Personality traits were assessed by the Reinforcement Sensitivity Theory personality questionnaire, which revealed that Reward Interest was positively associated with eudaimonic well-being, and Goal-Drive Persistence was positively associated with both measures of subjective well-being. 
Impulsivity was negatively associated with eudaimonic well-being and the Behavioral Inhibition System was negatively associated with both measures of subjective well-being. Interactions between personality traits and social comparison on Facebook indicated that for respondents with high Goal-Drive Persistence, Facebook social comparison had a positive association with eudaimonic well-being, thus confirming that some personality traits moderate the association between Facebook social comparison and subjective well-being. The results of this study highlight how individual differences in personality may impact how social comparison on Facebook affects individuals’ subjective well-being.", "title": "" }, { "docid": "787f95f8c28bfcf14eef486725a25bd2", "text": "BACKGROUND\nThere is a lack of knowledge on the primary and secondary static stabilizing functions of the posterior oblique ligament (POL), the proximal and distal divisions of the superficial medial collateral ligament (sMCL), and the meniscofemoral and meniscotibial portions of the deep medial collateral ligament (MCL).\n\n\nHYPOTHESIS\nIdentification of the primary and secondary stabilizing functions of the individual components of the main medial knee structures will provide increased knowledge of the medial knee ligamentous stability.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nTwenty-four cadaveric knees were equally divided into 3 groups with unique sequential sectioning sequences of the POL, sMCL (proximal and distal divisions), and deep MCL (meniscofemoral and meniscotibial portions). A 6 degree of freedom electromagnetic tracking system monitored motion after application of valgus loads (10 N.m) and internal and external rotation torques (5 N.m) at 0 degrees , 20 degrees , 30 degrees , 60 degrees , and 90 degrees of knee flexion.\n\n\nRESULTS\nThe primary valgus stabilizer was the proximal division of the sMCL. The primary external rotation stabilizer was the distal division of the sMCL at 30 degrees of knee flexion. The primary internal rotation stabilizers were the POL and the distal division of the sMCL at all tested knee flexion angles, the meniscofemoral portion of the deep MCL at 20 degrees , 60 degrees , and 90 degrees of knee flexion, and the meniscotibial portion of the deep MCL at 0 degrees and 30 degrees of knee flexion.\n\n\nCONCLUSION\nAn intricate relationship exists among the main medial knee structures and their individual components for static function to applied loads.\n\n\nCLINICAL SIGNIFICANCE\nInterpretation of clinical knee motion testing following medial knee injuries will improve with the information in this study. Significant increases in external rotation at 30 degrees of knee flexion were found with all medial knee structures sectioned, which indicates that a positive dial test may be found not only for posterolateral knee injuries but also for medial knee injuries.", "title": "" }, { "docid": "15fddcfa5a9cbf80fe6640c815ca89ea", "text": "Relation extraction is one of the core challenges in automated knowledge base construction. One line of approach for relation extraction is to perform multi-hop reasoning on the paths connecting an entity pair to infer new relations. While these methods have been successfully applied for knowledge base completion, they do not utilize the entity or the entity type information to make predictions. 
In this work, we incorporate selectional preferences, i.e., relations enforce constraints on the allowed entity types for the candidate entities, to multi-hop relation extraction by including entity type information. We achieve a 17.67% (relative) improvement in MAP score in a relation extraction task when compared to a method that does not use entity type information.", "title": "" }, { "docid": "6ab58e75daf299f3463be4432def87b2", "text": "Less than thirty years after the giant magnetoresistance (GMR) effect was described, GMR sensors are the preferred choice in many applications demanding the measurement of low magnetic fields in small volumes. This rapid deployment from theoretical basis to market and state-of-the-art applications can be explained by the combination of excellent inherent properties with the feasibility of fabrication, allowing the real integration with many other standard technologies. In this paper, we present a review focusing on how this capability of integration has allowed the improvement of the inherent capabilities and, therefore, the range of application of GMR sensors. After briefly describing the phenomenological basis, we deal on the benefits of low temperature deposition techniques regarding the integration of GMR sensors with flexible (plastic) substrates and pre-processed CMOS chips. In this way, the limit of detection can be improved by means of bettering the sensitivity or reducing the noise. We also report on novel fields of application of GMR sensors by the recapitulation of a number of cases of success of their integration with different heterogeneous complementary elements. We finally describe three fully functional systems, two of them in the bio-technology world, as the proof of how the integrability has been instrumental in the meteoric development of GMR sensors and their applications.", "title": "" }, { "docid": "5ae157937813e060a72ecb918d4dc5d1", "text": "Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges, the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Beyesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification. Our empirical study shows that the proposed methods have substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models.", "title": "" }, { "docid": "999c1fa41498e8a330dfbd8fdb4c6d6e", "text": "Wellness is a widely popular concept that is commonly applied to fitness and self-help products or services. Inference of personal wellness-related attributes, such as body mass index or diseases tendency, as well as understanding of global dependencies between wellness attributes and users’ behavior is of crucial importance to various applications in personal and public wellness domains. 
Meanwhile, the emergence of social media platforms and wearable sensors makes it feasible to perform wellness profiling for users from multiple perspectives. However, research efforts on wellness profiling and integration of social media and sensor data are relatively sparse, and this study represents one of the first attempts in this direction. Specifically, to infer personal wellness attributes, we proposed a multi-source individual user profile learning framework named “TweetFit”. “TweetFit” can handle data incompleteness and perform wellness attribute inference from sensor and social media data simultaneously. Our experimental results show that the integration of the data from sensors and multiple social media sources can substantially boost the wellness profiling performance.", "title": "" } ]
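The Parkinson's screening abstract near the top of this passage list describes a two-step recipe: rank the clinical attributes by Information Gain, keep only the most informative ones, and train an artificial neural network on the reduced set. As a rough illustration of that recipe only — not the cited authors' code; the stand-in dataset, the number of retained attributes, the hidden-layer size, and the use of scikit-learn's mutual-information estimator as the Information Gain proxy are all assumptions — a minimal sketch could look like this:

```python
# Illustrative sketch (not the cited authors' code): rank features by an
# information-gain style criterion (mutual information), keep the top k,
# then train a small neural network on the reduced attribute set.
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in tabular dataset
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)      # any (n_samples, n_features) table works
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Rank attributes on the training split only.
scores = mutual_info_classif(X_tr, y_tr, random_state=0)
k = 16                                           # number of attributes to keep (assumption)
top = np.argsort(scores)[::-1][:k]

# 2) Train an ANN on the reduced attribute set.
scaler = StandardScaler().fit(X_tr[:, top])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr[:, top]), y_tr)

print("selected feature indices:", top)
print("validation accuracy:", clf.score(scaler.transform(X_te[:, top]), y_te))
```

The same two steps carry over unchanged to any tabular screening dataset: rank the attributes on the training split only, then evaluate the reduced model on held-out data.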
scidocsrr
19a55f1440b08836541dcac73b34c0f6
Feature Extraction Techniques for Recognition of Malayalam Handwritten Characters : Review
[ { "docid": "b09d23c24625dc17e351d79ce88405b8", "text": "-This paper presents an overview of feature extraction methods for off-line recognition of segmented (isolated) characters. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance in character recognition systems. Different feature extraction methods are designed for different representations 6f the characters, such as solid binary characters, character contours, skeletons (thinned characters) or gray-level subimages of each individual character. The feature extraction methods are discussed in terms of invariance properties, reconstructability and expected distortions and variability of the characters. The problem of choosing the appropriate feature extraction method for a given application is also discussed. When a few promising feature extraction methods have been identified, they need to be evaluated experimentally to find the best method for the given application. Feature extraction Optical character recognition Character representation Invariance Reconstructability I. I N T R O D U C T I O N Optical character recognition (OCR) is one of the most successful applications of automatic pattern recognition. Since the mid 1950s, OCR has been a very active field for research and development, ca) Today, reasonably good OCR packages can be bought for as little as $100. However, these are only able to recognize high quality printed text documents or neatly written handprinted text. The current research in OCR is now addressing documents that are not well handled by the available systems, including severely degraded, omnifont machine-printed text and (unconstrained) handwritten text. Also, efforts are being made to achieve lower substitution error rates and reject rates even on good quality machine-printed text, since an experienced human typist still has a much lower error rate, albeit at a slower speed. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance. Our own interest in character recognition is to recognize hand-printed digits in hydrographic maps (Fig. 1), but we have tried not to emphasize this particular application in the paper. Given the large number of feature extraction methods reported in the literature, a newcomer to the field is faced with the following question: which feature ext Author to whom correspondence should be addressed. This work was done while OD. Trier was visiting Michigan State University. traction method is the best for a given application? This question led us to characterize the available feature extraction methods, so that the most promising methods could be sorted out. An experimental evaluation of these few promising methods must still be performed to select the best method for a specific application. In this process, one might find that a specific feature extraction method needs to be further developed. A full performance evaluation of each method in terms of classification accuracy and speed is not within the scope of this review paper. In order to study performance issues, we will have to implement all the feature extraction methods, which is an enormous task. In addition, the performance also depends on the type of classifier used. Different feature types may need different types of classifiers. Also, the classification results reported in the literature are not comparable because they are based on different data sets. 
Given the vast number of papers published on OCR every year, it is impossible to include all the available feature extraction methods in this survey. Instead, we have tried to make a representative selection to illustrate the different principles that can be used. Two-dimensional (2-D) object classification has several applications in addition to character recognition. These include airplane recognition, 12) recognition of mechanical parts and tools, 13l and tissue classification in medical imaging34) Several of the feature extraction techniques described in this paper for OCR have also been found to be useful in such applications.", "title": "" } ]
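The survey excerpt above frames feature extraction for isolated characters in terms of the representation used (solid binary characters, contours, skeletons, gray-level subimages). As a small illustration of the simplest family it covers — a zoning/density feature computed from a normalised binary character image — the sketch below shows the general idea; the grid size, the toy stroke image, and the choice of mean ink density per zone are assumptions, and this is not code from the survey.

```python
# Minimal sketch of a classic zoning feature: split a normalised binary character
# image into a grid and use the ink density of each zone as one feature value.
import numpy as np

def zoning_features(char_img: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """char_img: 2-D binary array (1 = ink). Returns grid[0]*grid[1] zone densities."""
    h, w = char_img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            zone = char_img[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw]
            feats.append(zone.mean() if zone.size else 0.0)
    return np.asarray(feats)

# Toy example: a fake 32x32 "character" containing a single vertical stroke.
img = np.zeros((32, 32), dtype=np.uint8)
img[4:28, 14:18] = 1
print(zoning_features(img, grid=(4, 4)).round(2))  # 16-dimensional feature vector
```

Feature vectors of this kind are what a downstream classifier (nearest neighbour, neural network, etc.) would consume; the survey's point is that the choice of representation and features is usually the decisive design decision.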
[ { "docid": "3155b09ca1e44aa4fee2bb58ebb1fa35", "text": "In this paper, we present a novel approach for identifying argumentative discourse structures in persuasive essays. The structure of argumentation consists of several components (i.e. claims and premises) that are connected with argumentative relations. We consider this task in two consecutive steps. First, we identify the components of arguments using multiclass classification. Second, we classify a pair of argument components as either support or non-support for identifying the structure of argumentative discourse. For both tasks, we evaluate several classifiers and propose novel feature sets including structural, lexical, syntactic and contextual features. In our experiments, we obtain a macro F1-score of 0.726 for identifying argument components and 0.722 for argumentative relations.", "title": "" }, { "docid": "377aec61877995ad2b677160fa43fefb", "text": "One of the major issues involved with communication is acoustic echo, which is actually a delayed version of sound reflected back to the source of sound hampering communication. Cancellation of these involve the use of acoustic echo cancellers involving adaptive filters governed by adaptive algorithms. This paper presents a review of some of the algorithms of acoustic echo cancellation covering their merits and demerits. Various algorithms like LMS, NLMS, FLMS, LLMS, RLS, AFA, LMF have been discussed. Keywords— Adaptive Filter, Acoustic Echo, LMS, NLMS, FX-LMS, AAF, LLMS, RLS.", "title": "" }, { "docid": "529c514971f88b433f594c8c6e825d76", "text": "Permanent-magnet motors with rare-earth magnets are among the best candidates for high-performance applications such as automotive applications. However, due to their cost and risks relating to the security of supply, alternative solutions such as ferrite magnets have recently become popular. In this paper, the two major design challenges of using ferrite magnets for a high-torque-density and high-speed application, i.e., their low remanent flux density and low coercivity, are addressed. It is shown that a spoke-type design utilizing a distributed winding may overcome the torque density challenge due to a simultaneous flux concentration and a reluctance torque possibility. Furthermore, the demagnetization challenge can be overcome through the careful optimization of the rotor structure, with the inclusion of nonmagnetic voids on the top and bottom of the magnets. To meet the challenges of a high-speed operation, an extensive rotor structural analysis has been undertaken, during which electromagnetics and manufacturing tolerances are taken into account. Electromagnetic studies are validated through the testing of a prototype, which is custom built for static torque and demagnetization evaluation. The disclosed motor design surpasses the state-of-the-art performance and cost, merging the theories into a multidisciplinary product.", "title": "" }, { "docid": "57d34cc782ed729e67b868b3527bf69d", "text": "We develop a new dynamic model of peer-to-peer Internet-enabled rental markets for durable goods in which consumers may also trade their durable assets in (traditional) secondary markets, transaction costs and depreciation rates may vary with usage intensity, and consumers are heterogeneous in their price sensitivity and asset utilization rates. We characterize the stationary equilibrium of the model. 
We analyze the welfare and distributional effects of introducing these rental markets by calibrating our model with US automobile industry data and 2 years of transaction-level data we have obtained from Getaround, a large peer-to-peer car rental marketplace. Our counterfactual analyses vary marketplace access levels and matching frictions, showing that peer-to-peer rental markets change the allocation of goods significantly, substituting rental for ownership and lowering used-good prices while increasing consumer surplus. Consumption shifts are significantly more pronounced for below-median income users, who also provide a majority of rental supply. Our results also suggest that these below-median income consumers will enjoy a disproportionate fraction of eventual welfare gains from this kind of ’sharing economy’ through broader inclusion, higher quality rental-based consumption, and new ownership facilitated by rental supply revenues. (JEL D4, L1, L81) ∗Fraiberger: Department of Economics, New York University (email: samuel.fraiberger@nyu.edu). Sundararajan: Leonard N. Stern School of Business, New York University (email: digitalarun@nyu.edu). We thank the executive team at Getaround (and especially Padden Murphy, Jessica Scorpio, Sam Zaid and Ranjit Chacko) for numerous helpful conversations and for providing access to their anonymized data. We thank Anmol Bandhari, Andrew Caplin, Natalie Foster, Lisa Gansky, Shane Greenstein, Anindya Ghose, John Horton, John Lazarev, Alessandro Lizzeri, Romain Ranciere, Justin Rao, David Rothschild, Shachar Reichman, Marshall Van Alstyne, and seminar participants at Carnegie-Mellon University, New York University and the 2014 MIT/BU Platform Strategy Research Symposium for helpful discussions on preliminary versions of this work. Fraiberger and Sundararajan have no current or prior commercial relationship with Getaround.", "title": "" }, { "docid": "39f51064adf460624a35fb00a730a715", "text": "For most outdoor applications, systems such as GPS provide users with accurate position estimates. However, reliable range-based localization using radio signals in indoor or urban environments can be a problem due to multipath fading and line-of-sight (LOS) blockage. The measurement bias introduced by these delays causes significant localization error, even when using additional sensors such as an inertial measurement unit (IMU) to perform outlier rejection. We describe an algorithm for accurate indoor localization of a sensor in a network of known beacons. The sensor measures the range to the beacons using an Ultra-Wideband (UWB) signal and uses statistical inference to infer and correct for the bias due to LOS blockage in the range measurements. We show that a particle filter can be used to estimate the joint distribution over both pose and beacon biases. We use the particle filter estimation technique specifically to capture the non-linearity of transitions in the beacon bias as the sensor moves. Results using real-world and simulated data are presented.", "title": "" }, { "docid": "268a0714e54e6f2e19f2159f291be7da", "text": "Neural recording systems are a central component of Brain-Machince Interfaces (BMIs). In most of these systems the emphasis is on faithful reproduction and transmission of the recorded signal to remote systems for further processing or data analysis. 
Here we follow an alternative approach: we propose a neural recording system that can be directly interfaced locally to neuromorphic spiking neural processing circuits for compressing the large amounts of data recorded, carrying out signal processing and neural computation to extract relevant information, and transmitting only the low-bandwidth outcome of the processing to remote computing or actuating modules. The fabricated system includes a low-noise amplifier, a delta-modulator analog-to-digital converter, and a low-power band-pass filter. The bio-amplifier has a programmable gain of 45-54 dB, with a Root Mean Squared (RMS) input-referred noise level of 2.1 μV, and consumes 90 μW . The band-pass filter and delta-modulator circuits include asynchronous handshaking interface logic compatible with event-based communication protocols. We describe the properties of the neural recording circuits, validating them with experimental measurements, and present system-level application examples, by interfacing these circuits to a reconfigurable neuromorphic processor comprising an array of spiking neurons with plastic and dynamic synapses. The pool of neurons within the neuromorphic processor was configured to implement a recurrent neural network, and to process the events generated by the neural recording system in order to carry out pattern recognition.", "title": "" }, { "docid": "a1b9827493928d1c53ac1be8750bf928", "text": "Image-based localization is an important problem in robotics and an integral part of visual mapping and navigation systems. An approach to robustly match images to previously recorded ones must be able to cope with seasonal changes especially when it is supposed to work reliably over long periods of time. In this paper, we present a novel approach to visual localization of mobile robots in outdoor environments, which is able to deal with substantial seasonal changes. We formulate image matching as a minimum cost flow problem in a data association graph to effectively exploit sequence information. This allows us to deal with non-matching image sequences that result from temporal occlusions or from visiting new places. We present extensive experimental evaluations under substantial seasonal changes. Our approach achieves accurate matching across seasons and outperforms existing state-of-the-art methods such as FABMAP2 and SeqSLAM.", "title": "" }, { "docid": "08dcf41de314afe40b4430132be40380", "text": "Robust speech recognition in everyday conditions requires the solution to a number of challenging problems, not least the ability to handle multiple sound sources. The specific case of speech recognition in the presence of a competing talker has been studied for several decades, resulting in a number of quite distinct algorithmic solutions whose focus ranges from modeling both target and competing speech to speech separation using auditory grouping principles. The purpose of the monaural speech separation and recognition challenge was to permit a large-scale comparison of techniques for the competing talker problem. The task was to identify keywords in sentences spoken by a target talker when mixed into a single channel with a background talker speaking similar sentences. Ten independent sets of results were contributed, alongside a baseline recognition system. Performance was evaluated using common training and test data and common metrics. Listeners’ performance in the same task was also measured. 
This paper describes the challenge problem, compares the performance of the contributed algorithms, and discusses the factors which distinguish the systems. One highlight of the comparison was the finding that several systems achieved near-human performance in some conditions, and one out-performed listeners overall. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8b195d6fdebb8786e6327f07806d5e8b", "text": "OBJECTIVE\nTo report the use of Mirtazapine in the treatment of anorexia nervosa with depression primarily regarding its propensity for weight gain.\n\n\nMETHOD\nWe present an outpatient case report of anorexia nervosa with depression. The patient's subsequent progress was recorded.\n\n\nRESULTS\nThe patient gained 2.5 kg within 3 months to eventually attain a body mass index of 15 after 5 months. Her depression achieved full remission at 6 weeks of treatment.\n\n\nCONCLUSIONS\nMirtazapine is the choice medication in this case. However, treating depression requires caution, given these patients' physical vulnerability. Controlled trials of Mirtazapine for anorexia nervosa are needed.", "title": "" }, { "docid": "8f9309ebfc87de5eb7cf715c0370da54", "text": "Hyperbolic discounting of future outcomes is widely observed to underlie choice behavior in animals. Additionally, recent studies (Kobayashi & Schultz, 2008) have reported that hyperbolic discounting is observed even in neural systems underlying choice. However, the most prevalent models of temporal discounting, such as temporal difference learning, assume that future outcomes are discounted exponentially. Exponential discounting has been preferred largely because it can be expressed recursively, whereas hyperbolic discounting has heretofore been thought not to have a recursive definition. In this letter, we define a learning algorithm, hyperbolically discounted temporal difference (HDTD) learning, which constitutes a recursive formulation of the hyperbolic model.", "title": "" }, { "docid": "2383c90591822bc0c8cec2b1b2309b7a", "text": "Apple's iPad has attracted a lot of attention since its release in 2010 and one area in which it has been adopted is the education sector. The iPad's large multi-touch screen, sleek profile and the ability to easily download and purchase a huge variety of educational applications make it attractive to educators. This paper presents a case study of the iPad's adoption in a primary school, one of the first in the world to adopt it. From interviews with teachers and IT staff, we conclude that the iPad's main strengths are the way in which it provides quick and easy access to information for students and the support it provides for collaboration. However, staff need to carefully manage both the teaching and the administrative environment in which the iPad is used, and we provide some lessons learned that can help other schools considering adopting the iPad in the classroom.", "title": "" }, { "docid": "7db5807fc15aeb8dfe4669a8208a8978", "text": "This document is an output from a project funded by the UK Department for International Development (DFID) for the benefit of developing countries. The views expressed are not necessarily those of DFID. Contents Contents i List of tables ii List of figures ii List of boxes ii Acronyms iii Acknowledgements iv Summary 1 1. Introduction: why worry about disasters? 7 Objectives of this Study 7 Global disaster trends 7 Why donors should be concerned 9 What donors can do 9 2. What makes a disaster? 
11 Characteristics of a disaster 11 Disaster risk reduction 12 The diversity of hazards 12 Vulnerability and capacity, coping and adaptation 15 Resilience 16 Poverty and vulnerability: links and differences 16 'The disaster management cycle' 17 3. Why should disasters be a development concern? 19 3.1 Disasters hold back development 19 Disasters undermine efforts to achieve the Millennium Development Goals 19 Macroeconomic impacts of disasters 21 Reallocation of resources from development to emergency assistance 22 Disaster impact on communities and livelihoods 23 3.2 Disasters are rooted in development failures 25 Dominant development models and risk 25 Development can lead to disaster 26 Poorly planned attempts to reduce risk can make matters worse 29 Disaster responses can themselves exacerbate risk 30 3.3 'Disaster-proofing' development: what are the gains? 31 From 'vicious spirals' of failed development and disaster risk… 31 … to 'virtuous spirals' of risk reduction 32 Disaster risk reduction can help achieve the Millennium Development Goals 33 … and can be cost-effective 33 4. Why does development tend to overlook disaster risk? 36 4.1 Introduction 36 4.2 Incentive, institutional and funding structures 36 Political incentives and governance in disaster prone countries 36 Government-donor relations and moral hazard 37 Donors and multilateral agencies 38 NGOs 41 4.3 Lack of exposure to and information on disaster issues 41 4.4 Assumptions about the risk-reducing capacity of development 43 ii 5. Tools for better integrating disaster risk reduction into development 45 Introduction 45 Poverty Reduction Strategy Papers (PRSPs) 45 UN Development Assistance Frameworks (UNDAFs) 47 Country assistance plans 47 National Adaptation Programmes of Action (NAPAs) 48 Partnership agreements with implementing agencies and governments 49 Programme and project appraisal guidelines 49 Early warning and information systems 49 Risk transfer mechanisms 51 International initiatives and policy forums 51 Risk reduction performance targets and indicators for donors 52 6. Conclusions and recommendations 53 6.1 Main conclusions 53 6.2 Recommendations 54 Core recommendation …", "title": "" }, { "docid": "a76071628d25db972127702b974d4849", "text": "Surveying 3D scenes is a common task in robotics. Systems can do so autonomously by iteratively obtaining measurements. This process of planning observations to improve the model of a scene is called Next Best View (NBV) planning. NBV planning approaches often use either volumetric (e.g., voxel grids) or surface (e.g., triangulated meshes) representations. Volumetric approaches generalise well between scenes as they do not depend on surface geometry but do not scale to high-resolution models of large scenes. Surface representations can obtain high-resolution models at any scale but often require tuning of unintuitive parameters or multiple survey stages. This paper presents a scene-model-free NBV planning approach with a density representation. The Surface Edge Explorer (SEE) uses the density of current measurements to detect and explore observed surface boundaries. 
This approach is shown experimentally to provide better surface coverage in lower computation time than the evaluated state-of-the-art volumetric approaches while moving equivalent distances.", "title": "" }, { "docid": "1ddfd2f44ed394318454b071124d423d", "text": "Urban growth along the middle section of the ancient silk-road of China (so called West Yellow River Corridor—He-Xi Corridor) has taken a unique path deviating from what is commonly seen in the coastal China. Urban growth here has been driven by historical heritage, transportation connection between East and West China, and mineral exploitation. However, it has been constrained by water shortage and harsh natural environment because this region is located in arid and semi-arid climate zones. This paper attempts to construct a multi-city agent-based model to explore possible trajectories of regional urban growth along the entire He-Xi Corridor under a severe environment risk, over urban growth under an extreme threat of water shortage. In contrast with current ABM approaches, our model will simulate urban growth in a large administrative region consisting of a system of cities. It simultaneously considers the spatial variations of these cities in terms of population size, development history, water resource endowment and sustainable development potential. It also explores potential impacts of exogenous inter-city interactions on future urban growth on the basis of urban gravity model. The algorithmic foundations of three types of agents, developers, conservationists and regional-planners, are discussed. Simulations with regard to three different development scenarios are presented and analyzed.", "title": "" }, { "docid": "150f27f47e9ffd6cd4bc0756bd08aed4", "text": "Sunni extremism poses a significant danger to society, yet it is relatively easy for these extremist organizations to spread jihadist propaganda and recruit new members via the Internet, Darknet, and social media. The sheer volume of these sites make them very difficult to police. This paper discusses an approach that can assist with this problem, by automatically identifying a subset of web pages and social media content (or any text) that contains extremist content. The approach utilizes machine learning, specifically neural networks and deep learning, to classify text as containing “extremist” or “benign” (i.e., not extremist) content. This method is robust and can effectively learn to classify extremist multilingual text of varying length. This study also involved the construction of a high quality dataset for training and testing, put together by a team of 40 people (some with fluency in Arabic) who expended 9,500 hours of combined effort. This dataset should facilitate future research on this topic.", "title": "" }, { "docid": "86995161610892b1198fd96b71e903cd", "text": "This paper discusses a distributed frequency modulation continuous wave radar system. This K-band radar system has high sensitivity, linearity, and flatness to detect low-radar cross section targets and measure their range and velocity. To reduce the leakage between a transmitter and a receiver, the system uses not RF cables but fiber-optic links that have low distortion characteristics and low propagation loss. The transmitter and the receiver are each mounted on a designed fixture to reduce the ground reflections. In addition, they are located on different platforms to reduce the leakage signal flowing directly from the transmitter to the receiver. 
Measurements in terms of the range and the velocity of a small drone have been carried out to evaluate the proposed distributed radar system. The results show that we can clearly detect the small drone within a 500 m range, which demonstrates the high sensitivity of the system and high isolation between the transmitter and the receiver.", "title": "" }, { "docid": "5f941adae33e1433ebaeeb2dbb69e6ca", "text": "Drawing a sample from a discrete distribution is one of the building components for Monte Carlo methods. Like other sampling algorithms, discrete sampling also suffers from high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution with a subsampling approach. We make a novel connection between the discrete sampling and Multi-Armed Bandits problems with a finite reward population and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.", "title": "" }, { "docid": "d9605c1cde4c40d69c2faaea15eb466c", "text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.", "title": "" }, { "docid": "a7317f3f1b4767f20c38394e519fa0d8", "text": "The development of the concept of burden for use in research lacks consistent conceptualization and operational definitions. The purpose of this article is to analyze the concept of burden in an effort to promote conceptual clarity. The technique advocated by Walker and Avant is used to analyze this concept. Critical attributes of burden include subjective perception, multidimensional phenomena, dynamic change, and overload. Predisposing factors are caregiver's characteristics, the demands of caregivers, and the involvement in caregiving. The consequences of burden generate problems in care-receiver, caregiver, family, and health care system. Overall, this article enables us to advance this concept, identify the different sources of burden, and provide directions for nursing intervention.", "title": "" }, { "docid": "18f530c400498658d73aba21f0ce984e", "text": "Anomaly and event detection has been studied widely for having many applications in fraud detection, network intrusion detection, detection of epidemic outbreaks, and so on. In this paper we propose an algorithm that operates on a time-varying network of agents with edges representing interactions between them and (1) spots \"anomalous\" points in time at which many agents \"change\" their behavior in a way it deviates from the norm; and (2) attributes the detected anomaly to those agents that contribute to the \"change\" the most. 
Experiments on a large mobile phone network (of 2 million anonymous customers with 50 million interactions over a period of 6 months) show that the \"change\"-points detected by our algorithm coincide with the social events and the festivals in our data.", "title": "" } ]
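Among the passages above, the adaptive-filtering review lists LMS-family algorithms (LMS, NLMS, RLS, and others) for acoustic echo cancellation. For readers unfamiliar with how such a canceller is wired up, here is a hedged, minimal NLMS sketch — the filter length, step size, and synthetic echo path are illustrative assumptions, and this is not an implementation from the cited review:

```python
# Minimal NLMS echo canceller sketch: an adaptive FIR filter estimates the echo
# path so the predicted echo can be subtracted from the microphone signal.
import numpy as np

def nlms_echo_canceller(far_end, mic, n_taps=64, mu=0.5, eps=1e-8):
    w = np.zeros(n_taps)                 # adaptive weights (echo-path estimate)
    err = np.zeros(len(mic))             # echo-cancelled output
    x_buf = np.zeros(n_taps)
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]            # newest far-end sample first
        y_hat = w @ x_buf                # predicted echo
        err[n] = mic[n] - y_hat          # residual after echo removal
        w += (mu / (eps + x_buf @ x_buf)) * err[n] * x_buf   # NLMS update
    return err, w

# Toy demo: the "mic" signal is far-end speech convolved with a short echo path.
rng = np.random.default_rng(0)
far = rng.standard_normal(8000)
echo_path = np.array([0.0, 0.4, 0.25, 0.1])
mic = np.convolve(far, echo_path)[:len(far)]
residual, w_hat = nlms_echo_canceller(far, mic)
print("residual echo power:", np.mean(residual[-1000:] ** 2))
```

The normalisation by the input power (the x·x term) is what distinguishes NLMS from plain LMS, and is what keeps the effective step size stable when the far-end signal level varies.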
scidocsrr
9543a2baec04817a0762c20d119a76e8
A 3D printed soft gripper integrated with curvature sensor for studying soft grasping
[ { "docid": "03a8635fcb64117d5a2a6f890c2b03b5", "text": "This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes.", "title": "" } ]
[ { "docid": "90f3c2ea17433ee296702cca53511b9e", "text": "This paper presents the design process, detailed analysis, and prototyping of a novel-structured line-start solid-rotor-based axial-flux permanent-magnet (AFPM) motor capable of autostarting with solid-rotor rings. The preliminary design is a slotless double-sided AFPM motor with four poles for high torque density and stable operation. Two concentric unilevel-spaced raised rings are added to the inner and outer radii of the rotor discs for smooth line-start of the motor. The design allows the motor to operate at both starting and synchronous speeds. The basic equations for the solid rings of the rotor of the proposed AFPM motor are discussed. Nonsymmetry of the designed motor led to its 3-D time-stepping finite-element analysis (FEA) via Vector Field Opera 14.0, which evaluates the design parameters and predicts the transient performance. To verify the design, a prototype 1-hp four-pole three-phase line-start AFPM synchronous motor is built and is used to test the performance in real time. There is a good agreement between experimental and FEA-based computed results. It is found that the prototype motor maintains high starting torque and good synchronization.", "title": "" }, { "docid": "242ddf57f190424f966e258ed02dd7a2", "text": "The purpose of this article is to identify and rank factors associated with sudden death of individuals requiring restraint for excited delirium. Eighteen cases of such deaths witnessed by emergency medical service (EMS) personnel are reported. The 18 cases reported were restrained with the wrists and ankles bound and attached behind the back. This restraint technique was also used for all 196 surviving excited delirium victims encountered during the study period. Unique to these data is a description of the initial cardiopulmonary arrest rhythm in 72% of the sudden death cases. Associated with all sudden death cases was struggle by the victim with forced restraint and cessation of struggling with labored or agonal breathing immediately before cardiopulmonary arrest. Also associated was stimulant drug use (78%), chronic disease (56%), and obesity (56%). The primary cardiac arrest rhythm of ventricular tachycardia was found in 1 of 13 victims with confirmed initial cardiac rhythms, with none found in ventricular fibrillation. Our findings indicate that unexpected sudden death when excited delirium victims are restrained in the out-of-hospital setting is not infrequent and can be associated with multiple predictable but usually uncontrollable factors.", "title": "" }, { "docid": "d68147bf8637543adf3053689de740c3", "text": "In this paper, we do a research on the keyword extraction method of news articles. We build a candidate keywords graph model based on the basic idea of TextRank, use Word2Vec to calculate the similarity between words as transition probability of nodes' weight, calculate the word score by iterative method and pick the top N of the candidate keywords as the final results. Experimental results show that the weighted TextRank algorithm with correlation of words can improve performance of keyword extraction generally.", "title": "" }, { "docid": "385c36442d3f8e7367d5b951154e8c67", "text": "Connectivism has been offered as a new learning theory for a digital age, with four key principles for learning: autonomy, connectedness, diversity, and openness. The testing ground for this theory has been massive open online courses (MOOCs). 
As the number of MOOC offerings increases, interest in how people interact and develop as individual learners in these complex, diverse, and distributed environments is growing. In their work in these environments the authors have observed a growing tension between the elements of connectivity believed to be necessary for effective learning and the variety of individual perspectives both revealed and concealed during interactions with these elements. In this paper we draw on personality and self-determination theories to gain insight into the dimensions of individual experience in connective environments and to further explore the meaning of autonomy, connectedness, diversity, and openness. The authors suggest that definitions of all four principles can be expanded to recognize individual and psychological diversity within connective environments. They also suggest that such expanded definitions have implications for learners’ experiences of MOOCs, recognizing that learners may vary greatly in their desire for and interpretation of connectivity, autonomy, openness, and diversity.", "title": "" }, { "docid": "d36c169c6c28e9e00c00aaee0277cc6e", "text": "The Internet of Things (IoT) system proposed in this paper is an advanced solution for monitoring the temperature at different points of location in a data centre, making this temperature data visible over internet through cloud based dashboard and sending SMS and email alerts to predefined recipients when temperature rises above the safe operating zone and reaches certain high values. This helps the datacenter management team to take immediate action to rectify this temperature deviation. Also this can be monitored from anywhere anytime over online dashboard by the senior level professionals who are not present in the data centre at any point in time. This Wireless Sensor Network (WSN) based monitoring system consists of temperature sensors, ESP8266 and Wi-Fi router. ESP8266 is a low power, highly integrated Wi-Fi solution from Espress if. The ESP8266 here, in this prototype, connects to ‘Ubidots’ cloud through its API for posting temperature data to the cloud dashboard on real time and the cloud event management system generates alerts whenever the high temperature alert event is fired. Cloud events need to be configured for different alerts beforehand through the user friendly user interface of the platform. It's to be noted that the sensor used here can be leveraged to monitor the relative humidity of the data center environment as well along with the temperature of the data center. But for this prototype solution focus is kept entirely on the temperature monitoring.", "title": "" }, { "docid": "4f6fc6635f661de7dd7081f3fd6e0a29", "text": "Wirelessly networked systems of implantable medical devices endowed with sensors and actuators will be the basis of many innovative, sometimes revolutionary therapies. The biggest obstacle in realizing this vision of networked implantable devices is posed by the dielectric nature of the human body, which strongly attenuates radio-frequency (RF) electromagnetic waves. In this paper we present the first hardware and software architecture of an Internet of Medical Things (IoMT) platform with ultrasonic connectivity for intra-body communications that can be used as a basis for building future IoT-ready medical implantable and wearable devices. 
We show that ultrasonic waves can be efficiently generated and received with low-power and mm-sized components, and that despite the conversion loss introduced by ultrasonic transducers the gap in attenuation between 2.4GHz RF and ultrasonic waves is still substantial, e.g., ultrasounds offer 70dB less attenuation over 10cm. We show that the proposed IoMT platform requires much lower transmission power compared to 2.4 GHz RF with equal reliability in tissues, e.g., 35 dBm lower over 12 cm for 10−3 Bit Error Rate (BEr) leading to lower energy per bit and longer device lifetime. Finally, we show experimentally that 2.4 GHz RF links are not functional at all above 12 cm, while ultrasonic links achieve a reliability of 10−6 up to 20 cm with less than 0 dBm transmission power.", "title": "" }, { "docid": "feeb51ad0c491c86a6018e92e728c3f0", "text": "This paper discusses why traditional reinforcement learning methods, and algorithms applied to those models, result in poor performance in situated domains characterized by multiple goals, noisy state, and inconsistent reinforcement. We propose a methodology for designing reinforcement functions that take advantage of implicit domain knowledge in order to accelerate learning in such domains. The methodology involves the use of heterogeneous reinforcement functions and progress estimators, and applies to learning in domains with a single agent or with multiple agents. The methodology is experimentally validated on a group of mobile robots learning a foraging task.", "title": "" }, { "docid": "a2ea1e604b484758ec2316aeb6b93338", "text": "Virtual customer communities enable firms to establish distributed innovation models that involve varied customer roles in new product development. In this article I use a multitheorotic lens to examine the design of such virtual customer environments, focusing on four underlying theoretical themes (interaction pattern, knowledge creation, customer motivation, and virtual customer community-new product development team integration) and deriving their implications for virtual customer environment design. I offer propositions that relate specific virtual customer environment design elements to successful customer value creation, and thereby to new product development success.", "title": "" }, { "docid": "1fb87bc370023dc3fdfd9c9097288e71", "text": "Protein is essential for living organisms, but digestibility of crude protein is poorly understood and difficult to predict. Nitrogen is used to estimate protein content because nitrogen is a component of the amino acids that comprise protein, but a substantial portion of the nitrogen in plants may be bound to fiber in an indigestible form. To estimate the amount of crude protein that is unavailable in the diets of mountain gorillas (Gorilla beringei) in Bwindi Impenetrable National Park, Uganda, foods routinely eaten were analyzed to determine the amount of nitrogen bound to the acid-detergent fiber residue. The amount of fiber-bound nitrogen varied among plant parts: herbaceous leaves 14.5+/-8.9% (reported as a percentage of crude protein on a dry matter (DM) basis), tree leaves (16.1+/-6.7% DM), pith/herbaceous peel (26.2+/-8.9% DM), fruit (34.7+/-17.8% DM), bark (43.8+/-15.6% DM), and decaying wood (85.2+/-14.6% DM). When crude protein and available protein intake of adult gorillas was estimated over a year, 15.1% of the dietary crude protein was indigestible. 
These results indicate that the proportion of fiber-bound protein in primate diets should be considered when estimating protein intake, food selection, and food/habitat quality.", "title": "" }, { "docid": "100c8fbe79112e2f7e12e85d7a1335f8", "text": "Staging and response criteria were initially developed for Hodgkin lymphoma (HL) over 60 years ago, but not until 1999 were response criteria published for non-HL (NHL). Revisions to these criteria for both NHL and HL were published in 2007 by an international working group, incorporating PET for response assessment, and were widely adopted. After years of experience with these criteria, a workshop including representatives of most major international lymphoma cooperative groups and cancer centers was held at the 11(th) International Conference on Malignant Lymphoma (ICML) in June, 2011 to determine what changes were needed. An Imaging Task Force was created to update the relevance of existing imaging for staging, reassess the role of interim PET-CT, standardize PET-CT reporting, and to evaluate the potential prognostic value of quantitative analyses using PET and CT. A clinical task force was charged with assessing the potential of PET-CT to modify initial staging. A subsequent workshop was help at ICML-12, June 2013. Conclusions included: PET-CT should now be used to stage FDG-avid lymphomas; for others, CT will define stage. Whereas Ann Arbor classification will still be used for disease localization, patients should be treated as limited disease [I (E), II (E)], or extensive disease [III-IV (E)], directed by prognostic and risk factors. Since symptom designation A and B are frequently neither recorded nor accurate, and are not prognostic in most widely used prognostic indices for HL or the various types of NHL, these designations need only be applied to the limited clinical situations where they impact treatment decisions (e.g., stage II HL). PET-CT can replace the bone marrow biopsy (BMBx) for HL. A positive PET of bone or bone marrow is adequate to designate advanced stage in DLBCL. However, BMBx can be considered in DLBCL with no PET evidence of BM involvement, if identification of discordant histology is relevant for patient management, or if the results would alter treatment. BMBx remains recommended for staging of other histologies, primarily if it will impact therapy. PET-CT will be used to assess response in FDG-avid histologies using the 5-point scale, and included in new PET-based response criteria, but CT should be used in non-avid histologies. The definition of PD can be based on a single node, but must consider the potential for flare reactions seen early in treatment with newer targeted agents which can mimic disease progression. Routine surveillance scans are strongly discouraged, and the number of scans should be minimized in practice and in clinical trials, when not a direct study question. Hopefully, these recommendations will improve the conduct of clinical trials and patient management.", "title": "" }, { "docid": "74d2d780291e9dbf2e725b55ccadd278", "text": "Organizational climate and organizational culture theory and research are reviewed. The article is first framed with definitions of the constructs, and preliminary thoughts on their interrelationships are noted. Organizational climate is briefly defined as the meanings people attach to interrelated bundles of experiences they have at work. 
Organizational culture is briefly defined as the basic assumptions about the world and the values that guide life in organizations. A brief history of climate research is presented, followed by the major accomplishments in research on the topic with regard to levels issues, the foci of climate research, and studies of climate strength. A brief overview of the more recent study of organizational culture is then introduced, followed by samples of important thinking and research on the roles of leadership and national culture in understanding organizational culture and performance and culture as a moderator variable in research in organizational behavior. The final section of the article proposes an integration of climate and culture thinking and research and concludes with practical implications for the management of effective contemporary organizations. Throughout, recommendations are made for additional thinking and research.", "title": "" }, { "docid": "ab57df7702fa8589f7d462c80d9a2598", "text": "The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.", "title": "" }, { "docid": "adb9eaaf50a43d637bf59ce38d7e8f99", "text": "In response to a stressor, physiological changes are set into motion to help an individual cope with the stressor. However, chronic activation of these stress responses, which include the hypothalamic–pituitary–adrenal axis and the sympathetic–adrenal–medullary axis, results in chronic production of glucocorticoid hormones and catecholamines. Glucocorticoid receptors expressed on a variety of immune cells bind cortisol and interfere with the function of NF-kB, which regulates the activity of cytokine-producing immune cells. Adrenergic receptors bind epinephrine and norepinephrine and activate the cAMP response element binding protein, inducing the transcription of genes encoding for a variety of cytokines. The changes in gene expression mediated by glucocorticoid hormones and catecholamines can dysregulate immune function. 
There is now good evidence (in animal and human studies) that the magnitude of stress-associated immune dysregulation is large enough to have health implications.", "title": "" }, { "docid": "7b806cbde7cd0c2682402441a578ec9c", "text": "We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to diierent classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the diierent classes of basis functions correspond to diierent classes of prior probabilities on the approximating function spaces, and therefore to diierent types of smoothness assumptions. In summary, diierent multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to diierent classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer.", "title": "" }, { "docid": "78ec5db757e26ce5cd1f594839169573", "text": "Thailand and an additional Australian study Synthesis report by Vittorio di Martino 2002-Workplace violence in the health sector.doc iii Foreword Violence at work has become an alarming phenomenon worldwide. The real size of the problem is largely unknown and recent information shows that the current knowledge is only the tip of the iceberg. The enormous cost of violence at work for the individual, the workplace and the community at large is becoming more and more apparent. Although incidents of violence are known to occur in all work environments, some employment sectors are particularly exposed to it. Violence includes both physical and non-physical violence. Violence is defined as being destructive towards another person. It finds its expression in physical assault, homicide, verbal abuse, bullying, sexual harassment and threat. Violence at work is often considered to be just a reflection of the more general and increasing phenomenon of violence in many areas of social life which has to be dealt with at the level of the whole society. Its prevalence has, however, increased at the workplace, traditionally viewed as a violence-free environment. 
Employers and workers are equally interested in the prevention of violence at the workplace. Society at large has a stake in preventing violence spreading to working life and recognizing the potential of the workplace by removing such obstacles to productivity, development and peace. Violence is common to such an extent among workers who have direct contact with people in distress, that it may be considered an inevitable part of the job. This is often the case in the health sector (violence in this sector may constitute almost a quarter of all violence at work). 1 While ambulance staff are reported to be at greatest risk, nurses are three times more likely on average to experience violence in the workplace than other occupational groups. Since the large majority of the health workforce is female, the gender dimension of the problem is very evident. Besides concern about the human right of health workers to have a decent work environment, there is concern about the consequences of violence at work. These have a significant impact on the effectiveness of health systems, particularly in developing countries. The equal access of people to primary health care is endangered if a scarce human resource, the health workers, feel under threat in certain geographical and social environments, in situations of general conflict, in work situations where transport …", "title": "" }, { "docid": "238a0364d3b15ba7cb851a4478d5605c", "text": "In this contribution, a Frequency Modulated Continuous Wave (FMCW) radar system, working in the 75 to 85 GHz frequency range, for level measurement of liquids in tanks is presented. The realized radar antenna has been specially designed to cope with the given harsh environmental conditions (high pressure, high temperature, explosion protection requirements, etc.) in industrial applications. The basic design concept of the antenna is to utilize a dielectric lens as `barrier element' between the tank and the radar electronics module, and especially for high-pressure applications, a cylindrical glass element is included into the lens system. Different configurations of dielectric lens antennas (different biconvex and plano-convex lenses with 45 mm diameter), also in versions including a glass barrier element with 20 mm thickness, have been evaluated by means of electromagnetic field simulations and measurements. A circular waveguide with 2.6 mm inner diameter equipped with a conical horn with 4.8 mm diameter and 3 mm length is used as feeding structure. Results demonstrate a range pulse width (-6 dB) of 30 mm with the full bandwidth of 10 GHz of the system. As expected, the best antenna performance is given with the biconvex lens antenna without the glass barrier element. Nevertheless, a reasonably good performance in the case of equipping the glass barrier with plano-convex dielectric lenses at its front and rear sides can be achieved.", "title": "" }, { "docid": "517916f4c62bc7b5766efa537359349d", "text": "Document-level sentiment classification aims to predict user’s overall sentiment in a document about a product. However, most of existing methods only focus on local text information and ignore the global user preference and product characteristics. Even though some works take such information into account, they usually suffer from high model complexity and only consider wordlevel preference rather than semantic levels. To address this issue, we propose a hierarchical neural network to incorporate global user and product information into sentiment classification. 
Our model first builds a hierarchical LSTM model to generate sentence and document representations. Afterwards, user and product information is considered via attentions over different semantic levels due to its ability of capturing crucial semantic components. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-theart methods. The source code of this paper can be obtained from https://github. com/thunlp/NSC.", "title": "" }, { "docid": "2b98fd7a61fd7c521758651191df74d0", "text": "Nowadays, a great effort is done to find new alternative renewable energy sources to replace part of nuclear energy production. In this context, this paper presents a new axial counter-rotating turbine for small-hydro applications which is developed to recover the energy lost in release valves of water supply. The design of the two PM-generators, their mechanical integration in a bulb placed into the water conduit and the AC-DC Vienna converter developed for these turbines are presented. The sensorless regulation of the two generators is also briefly discussed. Finally, measurements done on the 2-kW prototype are analyzed and compared with the simulation.", "title": "" }, { "docid": "0b0043590ee170957353141ef8ca42b7", "text": "The OWL Reasoner Evaluation competition is an annual competition (with an associated workshop) that pits OWL 2 compliant reasoners against each other on various standard reasoning tasks over naturally occurring problems. The 2015 competition was the third of its sort and had 14 reasoners competing in six tracks comprising three tasks (consistency, classification, and realisation) over two profiles (OWL 2 DL and EL). In this paper, we discuss the design, execution and results of the 2015 competition with particular attention to lessons learned for benchmarking, comparative experiments, and future competitions.", "title": "" }, { "docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a", "text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.", "title": "" } ]
scidocsrr
abfe426e77cc82d4a9f91c04e99a7f37
Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors
[ { "docid": "0a0f826f1a8fa52d61892632fd403502", "text": "We show that sequence information can be encoded into highdimensional fixed-width vectors using permutations of coordinates. Computational models of language often represent words with high-dimensional semantic vectors compiled from word-use statistics. A word’s semantic vector usually encodes the contexts in which the word appears in a large body of text but ignores word order. However, word order often signals a word’s grammatical role in a sentence and thus tells of the word’s meaning. Jones and Mewhort (2007) show that word order can be included in the semantic vectors using holographic reduced representation and convolution. We show here that the order information can be captured also by permuting of vector coordinates, thus providing a general and computationally light alternative to convolution.", "title": "" } ]
[ { "docid": "b1313b777c940445eb540b1e12fa559e", "text": "In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.", "title": "" }, { "docid": "498c217fb910a5b4ca6bcdc83f98c11b", "text": "Theodor Wilhelm Engelmann (1843–1909), who had a creative life in music, muscle physiology, and microbiology, developed a sensitive method for tracing the photosynthetic oxygen production of unicellular plants by means of bacterial aerotaxis (chemotaxis). He discovered the absorption spectrum of bacteriopurpurin (bacteriochlorophyll a) and the scotophobic response, photokinesis, and photosynthesis of purple bacteria.", "title": "" }, { "docid": "cf074f806c9b78947c54fb7f41167d9e", "text": "Applications of Machine Learning to Support Dementia Care through Commercially Available O↵-the-Shelf Sensing", "title": "" }, { "docid": "93e33f175a989962467a6c553affa4c8", "text": "Holoprosencephaly is a congenital abnormality of the prosencephalon associated with median facial defects. Its frequency is 1 in 250 pregnancies and 1 in 16,000 live births. The degree of facial deformity usually correlates with the severity of brain malformation. Early mortality is prevalent in severe forms. This report presents a child with lobar holoprosencephaly accompanied by median cleft lip and palate. The treatment and 9 months' follow-up are presented. This unique case shows that holoprosencephaly may present different manifestations of craniofacial malformations, which are not always parallel to the severity of brain abnormalities. Patients with mild to moderate brain abnormalities may survive into childhood and beyond.", "title": "" }, { "docid": "10f2726026dbe1deac859715f57b15b6", "text": "Monte-Carlo Tree Search, especially UCT and its POMDP version POMCP, have demonstrated excellent performance on many problems. However, to efficiently scale to large domains one should also exploit hierarchical structure if present. In such hierarchical domains, finding rewarded states typically requires to search deeply; covering enough such informative states very far from the root becomes computationally expensive in flat non-hierarchical search approaches. We propose novel, scalable MCTS methods which integrate a task hierarchy into the MCTS framework, specifically leading to hierarchical versions of both, UCT and POMCP. The new method does not need to estimate probabilistic models of each subtask, it instead computes subtask policies purely sample-based. We evaluate the hierarchical MCTS methods on various settings such as a hierarchical MDP, a Bayesian model-based hierarchical RL problem, and a large hierarchi-", "title": "" }, { "docid": "5f50c2872e381da8ef170b5d4864ec99", "text": "Gender is an important demographic attribute of people. This paper provides a survey of human gender recognition in computer vision. 
A review of approaches exploiting information from face and whole body (either from a still image or gait sequence) is presented. We highlight the challenges faced and survey the representative methods of these approaches. Based on the results, good performance have been achieved for datasets captured under controlled environments, but there is still much work that can be done to improve the robustness of gender recognition under real-life environments.", "title": "" }, { "docid": "c5fbbdc6da326b08c734ac1f5daf76d1", "text": "Sentiment classification in Chinese microblogs is more challenging than that of Twitter for numerous reasons. In this paper, two kinds of approaches are proposed to classify opinionated Chinesemicroblog posts: 1) lexicon-based approaches combining Simple Sentiment Word-Count Method with 3 Chinese sentiment lexicons, 2) machine learning models with multiple features. According to our experiment, lexicon-based approaches can yield relatively fine results and machine learning classifiers outperform both the majority baseline and lexicon-based approaches. Among all the machine learning-based approaches, Random Forests works best and the results are satisfactory.", "title": "" }, { "docid": "fe768628129dd1e7256c57f81c638cdc", "text": "With the wide deployment of face recognition systems in applications from de-duplication to mobile device unlocking, security against face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays and 3D masks of a face. We address the problem of facial spoof detection against print (photo) and replay (photo or video) attacks based on the analysis of image aliasing (e.g., surface reflection, moiré pattern, color distortion, and shape deformation) in spoof face images (or video frames). The application domain of interest is mobile phone unlock, given that growing number of phones have face unlock and mobile payment capabilities. We build a mobile spoof face database (MSU MSF) containing more than 1, 000 subjects, which is, to our knowledge, the largest spoof face database in terms of the number of subjects. Both print and replay attacks are captured using the front and rear cameras of a Nexus 5 phone. We analyze the aliasing of print and replay attacks using (i) different intensity channels (R, G, B and grayscale), (ii) different image regions (entire image, detected face, and facial component between the nose and chin), and (iii) different feature descriptors. We develop an efficient face spoof detection system on an Android smartphone. Experimental results on three public-domain face spoof databases (Idiap Print-Attack and Replay-Attack, and CASIA), and the MSU MSF show that the proposed approach is effective in face spoof detection for both cross-database and intra-database testing scenarios. User studies of our Android face spoof detection system involving 20 participants’ show that the proposed approach works very well in real application scenarios.", "title": "" }, { "docid": "2c6d36c2c7309da8bb714c50b49caf45", "text": "Although a significant amount of work has be done on the subject of IT governance there still appears to be some disjoint and confusion about what IT governance really is and how it may be realized in practice. This research-in-progress paper draws on existing research related to IT governance to provide a more in-depth understanding of the concept. 
It describes the current understanding of IT governance and argues for the extension and further development of the various dimensions of governance. An extended model of IT governance is proposed. A research agenda is outlined.", "title": "" }, { "docid": "8d070d8506d8a83ce78bde0e19f28031", "text": "Although amyotrophic lateral sclerosis and its variants are readily recognised by neurologists, about 10% of patients are misdiagnosed, and delays in diagnosis are common. Prompt diagnosis, sensitive communication of the diagnosis, the involvement of the patient and their family, and a positive care plan are prerequisites for good clinical management. A multidisciplinary, palliative approach can prolong survival and maintain quality of life. Treatment with riluzole improves survival but has a marginal effect on the rate of functional deterioration, whereas non-invasive ventilation prolongs survival and improves or maintains quality of life. In this Review, we discuss the diagnosis, management, and how to cope with impaired function and end of life on the basis of our experience, the opinions of experts, existing guidelines, and clinical trials. We highlight the need for research on the effectiveness of gastrostomy, access to non-invasive ventilation and palliative care, communication between the care team, the patient and his or her family, and recognition of the clinical and social effects of cognitive impairment. We recommend that the plethora of evidence-based guidelines should be compiled into an internationally agreed guideline of best practice.", "title": "" }, { "docid": "1bd534ac1b715bac85d681204e1ace07", "text": "Understanding the effects of saturation on the acoustic properties of porous media is paramount for using the amplitude versus offset (AVO) technique and 4-D seismic. Most laboratory research on saturation effects has been carried out in sandstones, despite the fact that about half of the world’s oil and gas reserves are in carbonates. We conducted saturation experiments in carbonates with the intention to fill this gap. These experimental data are used to test theoretical assumptions in AVO and seismic analysis in general. Earlier studies have shown that the complex pore structures of carbonates produce poorly defined porosity-velocity trends. Although porosity is the most important factor to control sonic velocity, our data document that pore type, pore fluid compressibility and variations in shear modulus due to saturation are also important factors for velocities in carbonate rocks. Complete saturation of the pore space separated our samples into two groups: one group showed decreases in shear bulk modulus of the rock by up to 2 GPa, the other group showed an increase by up to 3 GPa. This change in shear modulus questions Gassmann's assumption of constant shear modulus in dry and saturated rocks. It also explains our observation that velocities predicted with the Gassmann equation under- and overestimate the measured velocities of saturated carbonate samples. In addition, the Vp/Vs ratio shows an overall increase with saturation. In particular, rocks displaying shear weakening have distinctly higher Vp/Vs ratios.", "title": "" }, { "docid": "0d51dc0edc9c4e1c050b536c7c46d49d", "text": "MOTIVATION\nThe identification of risk-associated genetic variants in common diseases remains a challenge to the biomedical research community.
It has been suggested that common statistical approaches that exclusively measure main effects are often unable to detect interactions between some of these variants. Detecting and interpreting interactions is a challenging open problem from the statistical and computational perspectives. Methods in computing science may improve our understanding on the mechanisms of genetic disease by detecting interactions even in the presence of very low heritabilities.\n\n\nRESULTS\nWe have implemented a method using Genetic Programming that is able to induce a Decision Tree to detect interactions in genetic variants. This method has a cross-validation strategy for estimating classification and prediction errors and tests for consistencies in the results. To have better estimates, a new consistency measure that takes into account interactions and can be used in a genetic programming environment is proposed. This method detected five different interaction models with heritabilities as low as 0.008 and with prediction errors similar to the generated errors.\n\n\nAVAILABILITY\nInformation on the generated data sets and executable code is available upon request.", "title": "" }, { "docid": "1e6583ec7a290488cd8e672ab59158b9", "text": "Evidence-based guidelines for the management of patients with Lyme disease, human granulocytic anaplasmosis (formerly known as human granulocytic ehrlichiosis), and babesiosis were prepared by an expert panel of the Infectious Diseases Society of America. These updated guidelines replace the previous treatment guidelines published in 2000 (Clin Infect Dis 2000; 31[Suppl 1]:1-14). The guidelines are intended for use by health care providers who care for patients who either have these infections or may be at risk for them. For each of these Ixodes tickborne infections, information is provided about prevention, epidemiology, clinical manifestations, diagnosis, and treatment. Tables list the doses and durations of antimicrobial therapy recommended for treatment and prevention of Lyme disease and provide a partial list of therapies to be avoided. A definition of post-Lyme disease syndrome is proposed.", "title": "" }, { "docid": "d16c09bb4c082ad5b82c595fc1d17509", "text": "The neural mechanisms behind active and passive touch are not yet fully understood. Using fMRI we investigated the brain correlates of these exploratory procedures using a roughness categorization task. Participants either actively explored a surface (active touch) or the surface was moved under the participant's stationary finger (passive touch). The stimuli consisted of three different grades of sandpaper which participants were required to categorize as either coarse, medium, or fine. Exploratory procedure did not affect performance although the coarse and fine surfaces were more easily categorized than the medium surface. An initial whole brain analysis revealed activation of sensory and cognitive areas, including post-central gyrus and prefrontal cortical areas, in line with areas reported in previous studies. Our main analysis revealed greater activation during active than passive touch in the contralateral primary somatosensory region but no effect of stimulus roughness. In contrast, activation in the parietal operculum (OP) was significantly affected by stimulus roughness but not by exploration procedure. Active touch also elicited greater and more distributed brain activity compared with passive touch in areas outside the somatosensory region, possibly due to the motor component of the task. 
Our results reveal that different cortical areas may be involved in the processing of surface exploration and surface texture, with exploration procedures affecting activations in the primary somatosensory cortex and stimulus properties affecting relatively higher cortical areas within the somatosensory system.", "title": "" }, { "docid": "4c1e240af3543473e6f08beda06f8245", "text": "As the worlds of commerce, entertainment, travel, and Internet technology become more inextricably linked, new types of business data become available for creative use and formal analysis. Indeed, this paper provides a study of exploiting online travel information for personalized travel package recommendation. A critical challenge along this line is to address the unique characteristics of travel data, which distinguish travel packages from traditional items for recommendation. To this end, we first analyze the characteristics of the travel packages and develop a Tourist-Area-Season Topic (TAST) model, which can extract the topics conditioned on both the tourists and the intrinsic features (i.e. locations, travel seasons) of the landscapes. Based on this TAST model, we propose a cocktail approach on personalized travel package recommendation. Finally, we evaluate the TAST model and the cocktail approach on real-world travel package data. The experimental results show that the TAST model can effectively capture the unique characteristics of the travel data and the cocktail approach is thus much more effective than traditional recommendation methods for travel package recommendation.", "title": "" }, { "docid": "871298644bc8b7187a20a4803ec7e723", "text": "Intrinsic video decomposition refers to the fundamentally ambiguous task of separating a video stream into its constituent layers, in particular reflectance and shading layers. Such a decomposition is the basis for a variety of video manipulation applications, such as realistic recoloring or retexturing of objects. We present a novel variational approach to tackle this underconstrained inverse problem at real-time frame rates, which enables on-line processing of live video footage. The problem of finding the intrinsic decomposition is formulated as a mixed variational ℓ2-ℓp-optimization problem based on an objective function that is specifically tailored for fast optimization. To this end, we propose a novel combination of sophisticated local spatial and global spatio-temporal priors resulting in temporally coherent decompositions at real-time frame rates without the need for explicit correspondence search. We tackle the resulting high-dimensional, non-convex optimization problem via a novel data-parallel iteratively reweighted least squares solver that runs on commodity graphics hardware. Real-time performance is obtained by combining a local-global solution strategy with hierarchical coarse-to-fine optimization. Compelling real-time augmented reality applications, such as recoloring, material editing and retexturing, are demonstrated in a live setup. Our qualitative and quantitative evaluation shows that we obtain high-quality real-time decompositions even for challenging sequences. Our method is able to outperform state-of-the-art approaches in terms of runtime and result quality -- even without user guidance such as scribbles.", "title": "" }, { "docid": "719783be7139d384d24202688f7fc555", "text": "Big sensing data is prevalent in both industry and scientific research applications where the data is generated with high volume and velocity. 
Cloud computing provides a promising platform for big sensing data processing and storage as it provides a flexible stack of massive computing, storage, and software services in a scalable manner. Current big sensing data processing on Cloud have adopted some data compression techniques. However, due to the high volume and velocity of big sensing data, traditional data compression techniques lack sufficient efficiency and scalability for data processing. Based on specific on-Cloud data compression requirements, we propose a novel scalable data compression approach based on calculating similarity among the partitioned data chunks. Instead of compressing basic data units, the compression will be conducted over partitioned data chunks. To restore original data sets, some restoration functions and predictions will be designed. MapReduce is used for algorithm implementation to achieve extra scalability on Cloud. With real world meteorological big sensing data experiments on U-Cloud platform, we demonstrate that the proposed scalable compression approach based on data chunk similarity can significantly improve data compression efficiency with affordable data accuracy loss.", "title": "" }, { "docid": "87949c3616f14711fe0eb6f7cc9f95b3", "text": "Three hydroponic systems (aeroponics, aerohydroponics, and deep-water culture) were compared for the production of potato (Solanum tuberosum) seed tubers. Aerohydroponics was designed to improve the root zone environment of aeroponics by maintaining root contact with nutrient solution in the lower part of the beds, while intermittently spraying roots in the upper part. Root vitality, shoot fresh and dry weight, and total leaf area were significantly highest when cv. Superior, a medium early-maturing cultivar, was grown in the aeroponic system. This better plant growth in the aeroponic system was accompanied by rapid changes of solution pH and EC, and early tuberization. However, with cv. Atlantic, a mid-late maturing cultivar, there were no significant differences in shoot weight and leaf area among the hydroponic systems. The first tuberization was observed in aeroponics on 26–30 and 43–53 days after transplanting for cvs Superior and Atlantic, respectively. Tuberization in aerohydroponics and deep-water culture system occurred about 3–4 and 6–8 days later, respectively. The number of tubers produced was greatest in the deep-water culture system, but the total tuber weight per plant was the least in this system. For cv. Atlantic, the number of tubers <30 g weight was higher in aerohydroponics than in aeroponics, whereas there was no difference in the number of tubers >30 g between aerohydroponics and aeroponics. For cv. Superior, there was no difference in the size distribution of tubers between the two aeroponic systems. It could be concluded that deep-water culture system could be used to produce many small tubers (1–5 g) for plant propagation. However, the reduced number of large tubers above 5 g weight in the deep-water culture system, may favor use of either aeroponics or aerohydroponics. These two systems produced a similar number of tubers in each size group for the medium-early season cv. Superior, whereas aerohydroponics produced more tubers than aeroponics for the mid-late cultivar Atlantic.", "title": "" }, { "docid": "742fef70793920d2b96c0877a2a7f371", "text": "Cloud computing is an emerging technology and it allows users to pay as you need and has the high performance. 
Cloud computing is a heterogeneous system as well and it holds large amount of application data. In the process of scheduling some intensive data or computing an intensive application, it is acknowledged that optimizing the transferring and processing time is crucial to an application program. In this paper in order to minimize the cost of the processing we formulate a model for task scheduling and propose a particle swarm optimization (PSO) algorithm which is based on small position value rule. By virtue of comparing PSO algorithm with the PSO algorithm embedded in crossover and mutation and in the local research, the experiment results show the PSO algorithm not only converges faster but also runs faster than the other two algorithms in a large scale. The experiment results prove that the PSO algorithm is more suitable to cloud computing.", "title": "" }, { "docid": "488b0adfe43fc4dbd9412d57284fc856", "text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. ∗This work was supported by the Advanced Research Project Agency and the Office of Naval Research under Arpa Order 8888, Contract N00014-92-C-0153.", "title": "" } ]
scidocsrr
3686a0986312799f8cd3d2675d46027a
Low-Rank Common Subspace for Multi-view Learning
[ { "docid": "50c3e7855f8a654571a62a094a86c4eb", "text": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.", "title": "" }, { "docid": "3a7dca2e379251bd08b32f2331329f00", "text": "Canonical correlation analysis (CCA) is a method for finding linear relations between two multidimensional random variables. This paper presents a generalization of the method to more than two variables. The approach is highly scalable, since it scales linearly with respect to the number of training examples and number of views (standard CCA implementations yield cubic complexity). The method is also extended to handle nonlinear relations via kernel trick (this increases the complexity to quadratic complexity). The scalability is demonstrated on a large scale cross-lingual information retrieval task.", "title": "" }, { "docid": "14a2a003117d2bca8cb5034e09e8ea05", "text": "The regularization principals [31] lead approximation schemes to deal with various learning problems, e.g., the regularization of the norm in a reproducing kernel Hilbert space for the ill-posed problem. In this paper, we present a family of subspace learning algorithms based on a new form of regularization, which transfers the knowledge gained in training samples to testing samples. In particular, the new regularization minimizes the Bregman divergence between the distribution of training samples and that of testing samples in the selected subspace, so it boosts the performance when training and testing samples are not independent and identically distributed. To test the effectiveness of the proposed regularization, we introduce it to popular subspace learning algorithms, e.g., principal components analysis (PCA) for cross-domain face modeling; and Fisher's linear discriminant analysis (FLDA), locality preserving projections (LPP), marginal Fisher's analysis (MFA), and discriminative locality alignment (DLA) for cross-domain face recognition and text categorization. Finally, we present experimental evidence on both face image data sets and text data sets, suggesting that the proposed Bregman divergence-based regularization is effective to deal with cross-domain learning problems.", "title": "" }, { "docid": "7655df3f32e6cf7a5545ae2231f71e7c", "text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). 
These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.", "title": "" } ]
[ { "docid": "18e2871b4414b3d002f079916958cb97", "text": "By appending some bits to the original signature, a trapdoor hash function converts any signature scheme into secure signature scheme with very efficient online computation. Many of them have been proposed. The paper presents three new trapdoor hash functions with promotion in offline computation, online computation, appended bits, and the size of trapdoor key (but with longer public key). While the appended bits is shortened half (or more), the efficiency of online computation is doubled or more. These features make it suitable for mobile environment and real-time application.", "title": "" }, { "docid": "6e850eb5664a633ac711b32de5baf01c", "text": "The privacy of users must be considered as the utmost priority in distributed networks. To protect the identities of users, attribute-based encryption (ABE) was presented by Sahai et al. ABE has been widely used in many scenarios, particularly in cloud computing. In this paper, public key encryption with equality test is concatenated with key-policy ABE (KP-ABE) to present KP-ABE with equality test (KP-ABEwET). The proposed scheme not only offers fine-grained authorization of ciphertexts but also protects the identities of users. In contrast to ABE with keyword search, KP-ABEwET can test whether the ciphertexts encrypted by different public keys contain the same information. Moreover, the authorization process of the presented scheme is more flexible than that of Ma et al.’s scheme. Furthermore, the proposed scheme achieves one-way against chosen-ciphertext attack based on the bilinear Diffie-Hellman (BDH) assumption. In addition, a new computational problem called the twin-decision BDH problem (tDBDH) is proposed in this paper. tDBDH is proved to be as hard as the decisional BDH problem. Finally, for the first time, the security model of authorization is provided, and the security of authorization based on the tDBDH assumption is proven in the random oracle model.", "title": "" }, { "docid": "13f869e3e0cd604465c0226583320791", "text": "An emerging consensus exists in the school reform literature about what conditions contribute to student success.'** Conditions include high standards for academic learning and conduct, meaningful and engaging pedagogy and curriculum, professional learning communities among staff, and personalized learning environments. Schools providing such supports are more likely to have students who are engaged in and connected to school. Professionals and parents readily understand the need for high standards and quality curriculum and pedagogy in school. Similarly, the concept of teachers working together as professionals to ensure student success is not an issue. But the urgency to provide a personalized learning environment for students especially with schools struggling to provide textbooks to all students, hot meals, security, and janitorial services is not as great in many quarters. While parents would prefer their children experience a caring school environment, does such an environment influence student academic performance? Research suggests it does. 
For students to take advantage of high expectations and more advanced curricula, they need support from the people with whom they interact in school.^''", "title": "" }, { "docid": "97dfc2b23b527a05f7de443f10a89543", "text": "Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users’ quality of experience (QoE). Developing models that can accurately predict users’ QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer’s recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events factors that interact in a complex way to affect a user’s QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.", "title": "" }, { "docid": "c2c056ae22c22e2a87b9eca39d125cc2", "text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, Control/Treatment tests, MultiVariable Tests (MVT) and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person’s Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. 
Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.", "title": "" }, { "docid": "875c0b5832eaf827e04f5d253133a912", "text": "Firefly algorithm (FA) is a new optimization technique based on swarm intelligence. It simulates the social behavior of fireflies. The search pattern of FA is determined by the attractions among fireflies, whereby a less bright firefly moves toward a brighter firefly. In FA, each firefly can be attracted by all other brighter fireflies in the population. However, too many attractions may result in oscillations during the search process and high computational time complexity. To overcome these problems, we propose a new FA variant called FA with neighborhood attraction (NaFA). In NaFA, each firefly is attracted by other brighter fireflies selected from a predefined neighborhood rather than those from the entire population. Experiments are conducted using several well-known benchmark functions. The results show that the proposed strategy can efficiently improve the accuracy of solutions and reduce the computational time complexity.", "title": "" }, { "docid": "c2900b230b0202fc5138ed4c662e0185", "text": "We have designed an intelligent emergency response system to detect falls in the home. It uses image-based sensors. A pilot study was conducted using 21 subjects to evaluate the efficacy and performance of the fall-detection component of the system. Trials were conducted in a mock-up bedroom setting, with a bed, a chair and other typical bedroom furnishings. A small digital videocamera was installed in the ceiling at a height of approximately 2.6 m. The digital camera covered an area of approximately 5.0 m x 3.8 m. The subjects were asked to assume a series of postures, namely walking/standing, sitting/lying down in an inactive zone, stooping, lying down in a 'stretched' position, and lying down in a 'tucked' position. These five scenarios were repeated three times by each subject in a random order. These test positions totalled 315 tasks with 126 fall-simulated tasks and 189 non-fall-simulated tasks. The system detected a fall on 77% of occasions and missed a fall on 23%. False alarms occurred on only 5% of occasions. The results encourage the potential use of a vision-based system to provide safety and security in the homes of the elderly.", "title": "" }, { "docid": "2bef3728a0444da6e108c8908d5b5c04", "text": "An important problem in Question Answering over Knowledge Bases is to interpret a question into a database query. This problem can be formulated as an instance of semantic parsing where a natural language utterance is analyzed into a (possibly executable) meaning representation. Most semantic parsing strategies for Question Answering use models with limited expressiveness because it is difficult to characterize it and systematically control it. In this work we use tree-to-tree transducers which are very general and solid models to transform the syntactic tree of a question into the executable semantic tree of a database query. When designing these tree transducers, we identify two parameters that influence the construction cost and their expressive capabilities, namely the tree fragment depth and number of variables of the rules. 
We characterize the search space of tree transducer construction in terms of these parameters and show considerable improvements in accuracy as we increase the expressive power.", "title": "" }, { "docid": "2a2bc3ccb5217c16f278011cbe7dcf2a", "text": "The problem of Big Data in cyber security (i.e., too much network data to analyze) compounds itself every day. Our approach is based on a fundamental characteristic of Big Data: an overwhelming majority of the network traffic in a traditionally secured enterprise (i.e., using defense-in-depth) is non-malicious. Therefore, one way of eliminating the Big Data problem in cyber security is to ignore the overwhelming majority of an enterprise's non-malicious network traffic and focus only on the smaller amounts of suspicious or malicious network traffic. Our approach uses simple clustering along with a dataset enriched with known malicious domains (i.e., anchors) to accurately and quickly filter out the non-suspicious network traffic. Our algorithm has demonstrated the predictive ability to accurately filter out approximately 97% (depending on the algorithm used) of the non-malicious data in millions of Domain Name Service (DNS) queries in minutes and identify the small percentage of unseen suspicious network traffic. We demonstrate that the resulting network traffic can be analyzed with traditional reputation systems, blacklists, or in-house threat tracking sources (we used virustotal.com) to identify harmful domains that are being accessed from within the enterprise network. Specifically, our results show that the method can reduce a dataset of 400k query-answer domains (with complete malicious domain ground truth) down to only 3% containing 99% of all malicious domains. Further, we demonstrate that this capability scales to 10 million query-answer pairs, which it can reduce by 97% in less than an hour.", "title": "" }, { "docid": "c7048e00cdb56e2f1085d23b9317c147", "text": "\"Design-for-Assembly (DFA)\" is an engineering concept concerned with improving product designs for easier and less costly assembly operations. Much of the academic and industrial effort in this area has been devoted to the development of analysis tools for measuring the \"assemblability\" of a design. On the other hand, little attention has been paid to the actual redesign process. The goal of this paper is to develop a computer-aided tool for assisting designers in redesigning a product for DFA. One method of redesign, known as the \"replay and modify\" paradigm, is to replay a previous design plan, and modify the plan wherever necessary and possible, in accordance with the original design intention, for newly specified design goals [24]. The \"replay and modify\" paradigm is an effective redesign method because it offers a more global solution than simple local patch-ups. For such a paradigm, design information, such as the design plan and design rationale, must be recorded during design. Unfortunately, such design information is not usually available in practice. To handle the potential absence of the required design information and support the \"replay and modify\" paradigm, the redesign process is modeled as a reverse engineering activity. Reverse engineering roughly refers to an activity of inferring the process, e.g. the design plan, used in creating a given design, and using the inferred knowledge for design recreation or redesign.
In this paper, the development of an interactive computer-aided redesign tool for Design-for-Assembly, called REVENGE (REVerse ENGineering), is presented. The architecture of REVENGE is composed mainly of four activities: design analysis, knowledge acquisition, design plan reconstruction, and case-based design modification. First, a DFA analysis is performed to uncover any undesirable aspects of the design with respect to its assemblability. REVENGE then interactively solicits designers for useful design information that might not be available from standard design documents, such as design rationale. Then, a heuristic algorithm reconstructs a default design plan. A default design plan is a sequence of probable design actions that might have led to the original design. DFA problems identified during the analysis stage are mapped to the portion of the design plan from which they might have originated. Problems that originate from the earlier portion of the design plan are attacked first. A case-based approach is used to solve each problem by retrieving a similar redesign case and adapting it to the current situation. REVENGE has been implemented, and has been tested …", "title": "" }, { "docid": "1cc5ab9bd552e6399c6cf5a06e0ca235", "text": "Fake identities and Sybil accounts are pervasive in today’s online communities. They are responsible for a growing number of threats, including fake product reviews, malware and spam on social networks, and astroturf political campaigns. Unfortunately, studies show that existing tools such as CAPTCHAs and graph-based Sybil detectors have not proven to be effective defenses. In this paper, we describe our work on building a practical system for detecting fake identities using server-side clickstream models. We develop a detection approach that groups “similar” user clickstreams into behavioral clusters, by partitioning a similarity graph that captures distances between clickstream sequences. We validate our clickstream models using ground-truth traces of 16,000 real and Sybil users from Renren, a large Chinese social network with 220M users. We propose a practical detection system based on these models, and show that it provides very high detection accuracy on our clickstream traces. Finally, we worked with collaborators at Renren and LinkedIn to test our prototype on their server-side data. Following positive results, both companies have expressed strong interest in further experimentation and possible internal deployment.", "title": "" }, { "docid": "feb34f36aed8e030f93c0adfbe49ee8b", "text": "Complex queries containing outer joins are, for the most part, executed by commercial DBMS products in an \"as written\" manner. Only a very few reorderings of the operations are considered and the benefits of considering comprehensive reordering schemes are not exploited. This is largely due to the fact that there are no readily usable results for reordering such operations for relations with duplicates and/or outer join predicates that are other than \"simple.\" Most previous approaches have ignored duplicates and complex predicates; the very few that have considered these aspects have suggested approaches that lead to a possibly exponential number of, and redundant, intermediate joins. Since traditional query graph models are inadequate for modeling outer join queries with complex predicates, we present the needed hypergraph abstraction and algorithms for reordering such queries with joins and outer joins.
As a result, the query optimizer can explore a significantly larger space of execution plans, and choose one with a low cost. Further, these algorithms are easily incorporated into well known and widely used enumeration methods such as dynamic programming.", "title": "" }, { "docid": "4ed598ad44ca86cb9af8067605adfbb8", "text": "Face detection has witnessed immense progress in the last few years, with new milestones being surpassed every year. While many challenges such as large variations in scale, pose, appearance are successfully addressed, there still exist several issues which are not specifically captured by existing methods or datasets. In this work, we identify the next set of challenges that requires attention from the research community and collect a new dataset of face images that involve these issues such as weather-based degradations, motion blur, focus blur and several others. We demonstrate that there is a considerable gap in the performance of state-of-the-art detectors and real-world requirements. Hence, in an attempt to fuel further research in unconstrained face detection, we present a new annotated Unconstrained Face Detection Dataset (UFDD) with several challenges and benchmark recent methods. Additionally, we provide an in-depth analysis of the results and failure cases of these methods. The UFDD dataset as well as baseline results, evaluation code and image source are available at: www.ufdd.info/", "title": "" }, { "docid": "3f5097b33aab695678caca712b649a8f", "text": "I quantitatively measure the nature of the media’s interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets. ∗Tetlock is at the McCombs School of Business, University of Texas at Austin. I am indebted to Robert Stambaugh (the editor), an anonymous associate editor and an anonymous referee for their suggestions. I am grateful to Aydogan Alti, John Campbell, Lorenzo Garlappi, Xavier Gabaix, Matthew Gentzkow, John Griffin, Seema Jayachandran, David Laibson, Terry Murray, Alvin Roth, Laura Starks, Jeremy Stein, Philip Tetlock, Sheridan Titman and Roberto Wessels for their comments. I thank Philip Stone for providing the General Inquirer software and Nathan Tefft for his technical expertise. I appreciate Robert O’Brien’s help in providing information about the Wall Street Journal. I also acknowledge the National Science Foundation, Harvard University and the University of Texas at Austin for their financial support. All mistakes in this article are my own.", "title": "" }, { "docid": "ca64effff681149682be21b512f0e3c9", "text": "In this paper, a grip-force control of an elastic object is proposed based on a visual slip margin feedback. When an elastic object is pressed and slid slightly on a rigid plate, a partial slip, called \"incipient slip\" occurs on the contact surface. The slip margin between an elastic object and a rigid plate is estimated based on the analytic solution of Hertzian contact model. 
A 1-DOF gripper consisting of a camera and a force sensor is developed. The slip margin can be estimated from the tangential force measured by the force sensor, and the deformation of the elastic object and the radius of the contact area, both measured by the camera. In the proposed method, the friction coefficient is not explicitly needed. The grip force is controlled by direct feedback of the estimated slip margin, whose stability is analytically guaranteed. As a result, the slip margin is maintained at a desired value without gross slip occurring against a disturbance load force applied to the object.", "title": "" }, { "docid": "b27862cd75c2dd58ccca1826122e89f2", "text": "Smart grids consist of suppliers, consumers, and other parts. The main suppliers are normally supervised by industrial control systems. These systems rely on programmable logic controllers (PLCs) to control industrial processes and communicate with the supervisory system. Until recently, industrial operators relied on the assumption that these PLCs are isolated from the online world and hence cannot be the target of attacks. Recent events, such as the infamous Stuxnet attack [15], directed the attention of the security and control system community to the vulnerabilities of control system elements, such as PLCs. In this paper, we design and implement the Crysys PLC honeypot (CryPLH) system to detect targeted attacks against industrial control systems. This PLC honeypot can be implemented as part of a larger security monitoring system. Our honeypot implementation improves upon existing solutions in several aspects: most importantly in level of interaction and ease of configuration. Results of an evaluation show that our honeypot is largely indistinguishable from a real device from the attacker’s perspective. As a collateral of our analysis, we were able to identify some security issues in the real PLC device we tested and implemented specific firewall rules to protect the device from targeted attacks.", "title": "" }, { "docid": "191ee9ac8934ed6430a0425ba6bc1550", "text": "Health care has become a major expenditure in the US since 1980. Both the size of the health care sector and the enormous volume of money involved make it an attractive fraud target. Therefore, effective fraud detection is important for reducing the cost of health care services. In order to achieve more effective fraud detection, many researchers have attempted to develop sophisticated antifraud approaches incorporating data mining, machine learning or other methods. This paper introduces some preliminary knowledge of the U.S. health care system and its fraudulent behaviors, analyzes the characteristics of health care data, and reviews and compares currently proposed fraud detection approaches using health care data in the literature as well as their corresponding data preprocessing methods. Also, a novel health care fraud detection method incorporating geo-location information is proposed.", "title": "" }, { "docid": "a90f865e053b9339052a4d00281dbd03", "text": "Generation of 3D data by deep neural networks has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collections of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues.
In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.", "title": "" }, { "docid": "028be19d9b8baab4f5982688e41bfec8", "text": "The activation function for neurons is a prominent element in the deep learning architecture for obtaining high performance. Inspired by neuroscience findings, we introduce and define two types of neurons with different activation functions for artificial neural networks: excitatory and inhibitory neurons, which can be adaptively selected by self-learning. Based on the definition of neurons, in the paper we not only unify the mainstream activation functions, but also discuss the complementariness among these types of neurons. In addition, through the cooperation of excitatory and inhibitory neurons, we present a compositional activation function that leads to new state-of-the-art performance comparing to rectifier linear units. Finally, we hope that our framework not only gives a basic unified framework of the existing activation neurons to provide guidance for future design, but also contributes neurobiological explanations which can be treated as a window to bridge the gap between biology and computer science.", "title": "" }, { "docid": "b1c0351af515090e418d59a4b553b866", "text": "BACKGROUND\nThe dermatoscopic examination of the nail plate has been recently introduced for the evaluation of pigmented nail lesions. There is, however, no evidence that this technique improves diagnostic accuracy of in situ melanoma.\n\n\nOBJECTIVE\nTo establish and validate patterns for intraoperative dermatoscopy of the nail matrix.\n\n\nMETHODS\nIntraoperative nail matrix dermatoscopy was performed in 100 consecutive bands of longitudinal melanonychia that were excised and submitted to histopathologic examination.\n\n\nRESULTS\nWe identified 4 dermatoscopic patterns: regular gray pattern (hypermelanosis), regular brown pattern (benign melanocytic hyperplasia), regular brown pattern with globules or blotch (melanocytic nevi), and irregular pattern (melanoma).\n\n\nLIMITATIONS\nNail matrix dermatoscopy is an invasive procedure that can not routinely be performed in all cases of melanonychia.\n\n\nCONCLUSION\nThe patterns described present high sensitivity and specificity for intraoperative differential diagnosis of pigmented nail lesions.", "title": "" } ]
scidocsrr
95713ad4aa91dc8f91f691b76c1eb1ca
Practical Dynamic Searchable Encryption with Small Leakage
[ { "docid": "c0a05cad5021b1e779682b50a53f25fd", "text": "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area. ∗Supported by NSF, MURI, and the Packard foundation. †Supported by NSF CNS-0716199, CNS-0915361, and CNS-0952692, Air Force Office of Scientific Research (AFO SR) under the MURI award for “Collaborative policies and assured information sharing” (Project PRESIDIO), Department of Homeland Security Grant 2006-CS-001-000001-02 (subaward 641), and the Alfred P. Sloan Foundation.", "title": "" } ]
[ { "docid": "69d42340c09303b69eafb19de7170159", "text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.", "title": "" }, { "docid": "92d271da0c5dff6e130e55168c64d2b0", "text": "New therapeutic targets for noncognitive reductions in energy intake, absorption, or storage are crucial given the worldwide epidemic of obesity. The gut microbial community (microbiota) is essential for processing dietary polysaccharides. We found that conventionalization of adult germ-free (GF) C57BL/6 mice with a normal microbiota harvested from the distal intestine (cecum) of conventionally raised animals produces a 60% increase in body fat content and insulin resistance within 14 days despite reduced food intake. Studies of GF and conventionalized mice revealed that the microbiota promotes absorption of monosaccharides from the gut lumen, with resulting induction of de novo hepatic lipogenesis. Fasting-induced adipocyte factor (Fiaf), a member of the angiopoietin-like family of proteins, is selectively suppressed in the intestinal epithelium of normal mice by conventionalization. Analysis of GF and conventionalized, normal and Fiaf knockout mice established that Fiaf is a circulating lipoprotein lipase inhibitor and that its suppression is essential for the microbiota-induced deposition of triglycerides in adipocytes. Studies of Rag1-/- animals indicate that these host responses do not require mature lymphocytes. Our findings suggest that the gut microbiota is an important environmental factor that affects energy harvest from the diet and energy storage in the host. Data deposition: The sequences reported in this paper have been deposited in the GenBank database (accession nos. AY 667702--AY 668946).", "title": "" }, { "docid": "32a2bfb7a26631f435f9cb5d825d8da2", "text": "An important aspect for the task of grammatical error correction (GEC) that has not yet been adequately explored is adaptation based on the native language (L1) of writers, despite the marked influences of L1 on second language (L2) writing. In this paper, we adapt a neural network joint model (NNJM) using L1-specific learner text and integrate it into a statistical machine translation (SMT) based GEC system. Specifically, we train an NNJM on general learner text (not L1-specific) and subsequently train on L1-specific data using a Kullback-Leibler divergence regularized objective function in order to preserve generalization of the model. 
We incorporate this adapted NNJM as a feature in an SMT-based English GEC system and show that adaptation achieves significant F0.5 score gains on English texts written by L1 Chinese, Russian, and Spanish writers.", "title": "" }, { "docid": "a1d9742feb9f2a5dcf2322b00daf4151", "text": "We tackle the problem of predicting the future popularity level of micro-reviews, focusing on Foursquare tips, whose high degree of informality and briefness offer extra difficulties to the design of effective popularity prediction methods. Such predictions can greatly benefit the future design of content filtering and recommendation methods. Towards our goal, we first propose a rich set of features related to the user who posted the tip, the venue where it was posted, and the tip’s content to capture factors that may impact popularity of a tip. We evaluate different regression and classification based models using this rich set of proposed features as predictors in various scenarios. As fas as we know, this is the first work to investigate the predictability of micro-review popularity (or helpfulness) exploiting spatial, temporal, topical and, social aspects that are rarely exploited conjointly in this domain. © 2015 Published by Elsevier Inc.", "title": "" }, { "docid": "8c07232e73849116c070ffa2367e3c6f", "text": "Childhood apraxia of speech (CAS) is a motor speech disorder characterized by distorted phonemes, segmentation (increased segment and intersegment durations), and impaired production of lexical stress. This study investigated the efficacy of Treatment for Establishing Motor Program Organization (TEMPO) in nine participants (ages 5 to 8) using a delayed treatment group design. Children received four weeks of intervention for four days each week. Experimental probes were administered at baseline and posttreatment—both immediately and one month after treatment—for treated and untreated stimuli. Significant improvements in specific acoustic measures of segmentation and lexical stress were demonstrated following treatment for both the immediate and delayed treatment groups. Treatment effects for all variables were maintained at one-month post-treatment. These results support the demonstrated efficacy of TEMPO in improving the speech of children with CAS.", "title": "" }, { "docid": "45e1a424ad0807ce49cd4e755bdd9351", "text": "Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. 
Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend towards deeper packet inspection to construct more relevant features through targeted content parsing. These context sensitive features are required to detect current attacks.", "title": "" }, { "docid": "08a1da753730a8c39ef6e98277939f9c", "text": "One of the most important issues in the operation of a photovoltaic (PV) system is extracting maximum power from the PV array, especially in partial shading condition (PSC). Under PSC, P-V characteristic of PV arrays will have multiple peak points, only one of which is global maximum. Conventional maximum power point tracking (MPPT) methods are not able to extract maximum power in this condition. In this paper, a novel two-stage MPPT method is presented to overcome this drawback. In the first stage, a method is proposed to determine the occurrence of PSC, and in the second stage, using a new algorithm that is based on ramp change of the duty cycle and continuous sampling from the P-V characteristic of the array, global maximum power point (MPP) of array is reached. Perturb and observe algorithm is then re-activated to trace small changes of the new MPP. Open-loop operation of the proposed method makes its implementation cheap and simple. The method is robust in the face of changing environmental conditions and array characteristics, and has minimum negative impact on the connected power system. Simulations in Matlab/Simulink and experimental results validate the performance of the proposed methods.", "title": "" }, { "docid": "2d6d33cbbf69cc864c2a65c30f60e5ec", "text": "This article provides a framework for actuaries to think about cyber risk. We propose a differentiated view on cyber versus conventional risk by separating the nature of risk arrival from the target exposed to risk. Our review synthesizes the literature on cyber risk analysis from various disciplines, including computer and network engineering, economics, and actuarial sciences. As a result, we identify possible ways forward to improve rigorous modeling of cyber risk, including its driving factors. This is a prerequisite for establishing a deep and stable market for cyber risk insurance.", "title": "" }, { "docid": "64828addebd6e9b1773e5d8e2e1668af", "text": "Named entity typing is the task of detecting the types of a named entity in context. For instance, given “Eric is giving a presentation”, our goal is to infer that ‘Eric’ is a speaker or a presenter and a person. Existing approaches to named entity typing cannot work with a growing type set and fails to recognize entity mentions of unseen types. In this paper, we present a label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings. In addition, we adapt a zero-shot framework that can predict both seen and previously unseen entity types. We perform evaluation on three benchmark datasets with two settings: 1) few-shots recognition where all types are covered by the training set; and 2) zero-shot recognition where fine-grained types are assumed absent from training set. 
Results show that prior knowledge encoded using our label embedding methods can significantly boost the performance of classification for both cases.", "title": "" }, { "docid": "91e574a20ad41b1725da02d125977fd3", "text": "We propose a framework based on Generative Adversarial Networks to disentangle the identity and attributes of faces, such that we can conveniently recombine different identities and attributes for identity preserving face synthesis in open domains. Previous identity preserving face synthesis processes are largely confined to synthesizing faces with known identities that are already in the training dataset. To synthesize a face with identity outside the training dataset, our framework requires one input image of that subject to produce an identity vector, and any other input face image to extract an attribute vector capturing, e.g., pose, emotion, illumination, and even the background. We then recombine the identity vector and the attribute vector to synthesize a new face of the subject with the extracted attribute. Our proposed framework does not need to annotate the attributes of faces in any way. It is trained with an asymmetric loss function to better preserve the identity and stabilize the training process. It can also effectively leverage large amounts of unlabeled training face images to further improve the fidelity of the synthesized faces for subjects that are not presented in the labeled training face dataset. Our experiments demonstrate the efficacy of the proposed framework. We also present its usage in a much broader set of applications including face frontalization, face attribute morphing, and face adversarial example detection.", "title": "" }, { "docid": "dc53e2bf9576fd3fb7670b0860eae754", "text": "In the field of ADAS and self-driving car, lane and drivable road detection play an essential role in reliably accomplishing other tasks, such as objects detection. For monocular vision based semantic segmentation of lane and road, we propose a dilated feature pyramid network (FPN) with feature aggregation, called DFFA, where feature aggregation is employed to combine multi-level features enhanced with dilated convolution operations and FPN under the framework of ResNet. Experimental results validate effectiveness and efficiency of the proposed deep learning model for semantic segmentation of lane and drivable road. Our DFFA achieves the best performance both on Lane Estimation Evaluation and Behavior Evaluation tasks in KITTI-ROAD and take the second place on UU ROAD task.", "title": "" }, { "docid": "5aef75aead029333a2e47a5d1ba52f2e", "text": "Although we appreciate Kinney and Atwal’s interest in equitability and maximal information coefficient (MIC), we believe they misrepresent our work. We highlight a few of our main objections below. Regarding our original paper (1), Kinney and Atwal (2) state “MIC is said to satisfy not just the heuristic notion of equitability, but also the mathematical criterion of R equitability,” the latter being their formalization of the heuristic notion that we introduced. This statement is simply false. We were explicit in our paper that our claims regarding MIC’s performance were based on large-scale simulations: “We tested MIC’s equitability through simulations. . 
..[These] show that, for a large collection of test functions with varied sample sizes, noise levels, and noise models, MIC roughly equals the coefficient of determination R relative to each respective noiseless function.” Although we mathematically proved several things about MIC, none of our claims imply that it satisfies Kinney and Atwal’s R equitability, which would require that MIC exactly equal R in the infinite data limit. Thus, their proof that no dependence measure can satisfy R equitability, although interesting, does not uncover any error in our work, and their suggestion that it does is a gross misrepresentation. Kinney and Atwal seem ready to toss out equitability as a useful criterion based on their theoretical result. We argue, however, that regardless of whether “perfect” equitability is possible, approximate notions of equitability remain the right goal for many data exploration settings. Just as the theory of NP completeness does not suggest we stop thinking about NP complete problems, but instead that we look for approximations and solutions in restricted cases, an impossibility result about perfect equitability provides focus for further research, but does not mean that useful solutions are unattainable. Similarly, as others have noted (3), Kinney and Atwal’s proof requires a highly permissive noise model, and so the attainability of R equitability under more limited noise models such as those in our work remains an open question. Finally, the authors argue that mutual information is more equitable than MIC. However, they provide as justification only a single noise model, only at limiting sample sizes ðn≥ 5;000Þ. As we’ve shown in followup work (4), which they themselves cite but fail to address, MIC is more equitable than mutual information estimation under many other realistic noise models even at a sample size of 5,000. Kinney and Atwal have stated, “. . .it matters how one defines noise” (5), and a useful statistic must indeed be robust to a wide range of noise models. Equally importantly, we’ve established in both our original and follow-up work that at sample size regimes less than 5,000, MIC is more equitable than mutual information estimates across all noise models tested. MIC’s superior equitability in these settings is not an “artifact” we neglected—as Kinney and Atwal suggest—but rather a weakness of mutual information estimation and an important consideration for practitioners. We expect that the understanding of equitability and MIC will improve over time and that better methods may arise. However, accurate representations of the work thus far will allow researchers in the area to most productively and collectively move forward.", "title": "" }, { "docid": "333fd7802029f38bda35cd2077e7de59", "text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. 
Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.", "title": "" }, { "docid": "134cde769a3faeeac80746b85313bd0b", "text": "Adrenocortical carcinoma (ACC) in pediatric and adolescent patients is rare, and it is associated with various clinical symptoms. We introduce the case of an 8-year-old boy with ACC who presented with peripheral precocious puberty at his first visit. He displayed penis enlargement with pubic hair and facial acne. His serum adrenal androgen levels were elevated, and abdominal computed tomography revealed a right suprarenal mass. After complete surgical resection, the histological diagnosis was ACC. Two months after surgical removal of the mass, he subsequently developed central precocious puberty. He was treated with a gonadotropin-releasing hormone agonist to delay further pubertal progression. In patients with functioning ACC and surgical removal, clinical follow-up and hormonal marker examination for the secondary effects of excessive hormone secretion may be a useful option at least every 2 or 3 months after surgery.", "title": "" }, { "docid": "69c8cd29d23d64ba36df376cc7a0c174", "text": "In recent years, due to its strong nonlinear mapping and research capacities, the convolutional neural network (CNN) has been widely used in the field of hyperspectral image (HSI) processing. Recently, pixel pair features (PPFs) and spatial PPFs (SPPFs) for HSI classification have served as the new tools for feature extraction. In this paper, on top of PPF, improved subtraction pixel pair features (subtraction-PPFs) are applied for HSI target detection. Unlike original PPF and SPPF, the subtraction-PPF considers target classes to afford the CNN, a target detection function. Using subtraction-PPF, a sufficiently large number of samples are obtained to ensure the excellent performance of the multilayer CNN. For a testing pixel, the input of the trained CNN is the spectral difference between the central pixel and its adjacent pixels. When a test pixel belongs to the target, the output score will be close to the target label. To verify the effectiveness of the proposed method, aircrafts and vehicles are used as targets of interest, while another 27 objects are chosen as background classes (e.g., vegetation and runways). Our experimental results on four images indicate that the proposed detector outperforms classic hyperspectral target detection algorithms.", "title": "" }, { "docid": "55f677c0f55d5ba93507e3b4113c09f3", "text": "In modern power electronic systems, DC-DC converter is one of the main controlled power sources for driving DC systems. But the inherent nonlinear and time-varying characteristics often result in some difficulties mostly related to the control issue. This paper presents a robust nonlinear adaptive controller design with a recursive methodology based on the pulse width modulation (PWM) to drive a DC-DC buck converter. The proposed controller is designed based on the dynamical model of the buck converter where all parameters within the model are assumed as unknown. 
These unknown parameters are estimated through the adaptation laws and the stability of these laws are ensured by formulating suitable control Lyapunov functions (CLFs) at different stages. The proposed control scheme also provides robustness against external disturbances as these disturbances are considered within the model. One of the main features of the proposed scheme is that it overcomes the over-parameterization problems of unknown parameters which usually appear in some conventional adaptive methods. Finally, the effectiveness of the proposed control scheme is verified through the simulation results and compared to that of an existing adaptive backstepping controller. Simulation results clearly indicate the performance improvement in terms of a faster output voltage tracking response.", "title": "" }, { "docid": "796dc233bbf4e9e063485f26ab7b5b64", "text": "Anomaly detection refers to identifying the patterns in data that deviate from expected behavior. These non-conforming patterns are often termed as outliers, malwares, anomalies or exceptions in different application domains. This paper presents a novel, generic real-time distributed anomaly detection framework for multi-source stream data. As a case study, we have decided to detect anomaly for multi-source VMware-based cloud data center. The framework monitors VMware performance stream data (e.g., CPU load, memory usage, etc.) continuously. It collects these data simultaneously from all the VMwares connected to the network. It notifies the resource manager to reschedule its resources dynamically when it identifies any abnormal behavior of its collected data. We have used Apache Spark, a distributed framework for processing performance stream data and making prediction without any delay. Spark is chosen over a traditional distributed framework (e.g., Hadoop and MapReduce, Mahout, etc.) that is not ideal for stream data processing. We have implemented a flat incremental clustering algorithm to model the benign characteristics in our distributed Spark based framework. We have compared the average processing latency of a tuple during clustering and prediction in Spark with Storm, another distributed framework for stream data processing. We experimentally find that Spark processes a tuple much quicker than Storm on average.", "title": "" }, { "docid": "db3c5c93daf97619ad927532266b3347", "text": "Car9, a dodecapeptide identified by cell surface display for its ability to bind to the edge of carbonaceous materials, also binds to silica with high affinity. The interaction can be disrupted with l-lysine or l-arginine, enabling a broad range of technological applications. Previously, we reported that C-terminal Car9 extensions support efficient protein purification on underivatized silica. Here, we show that the Car9 tag is functional and TEV protease-excisable when fused to the N-termini of target proteins, and that it supports affinity purification under denaturing conditions, albeit with reduced yields. We further demonstrate that capture of Car9-tagged proteins is enhanced on small particle size silica gels with large pores, that the concomitant problem of nonspecific protein adsorption can be solved by lysing cells in the presence of 0.3% Tween 20, and that efficient elution is achieved at reduced l-lysine concentrations under alkaline conditions. An optimized small-scale purification kit incorporating the above features allows Car9-tagged proteins to be inexpensively recovered in minutes with better than 90% purity. 
The Car9 affinity purification technology should prove valuable for laboratory-scale applications requiring rapid access to milligram-quantities of proteins, and for preparative scale purification schemes where cost and productivity are important factors.", "title": "" }, { "docid": "51a2d48f43efdd8f190fd2b6c9a68b3c", "text": "Textual passwords are often the only mechanism used to authenticate users of a networked system. Unfortunately, many passwords are easily guessed or cracked. In an attempt to strengthen passwords, some systems instruct users to create mnemonic phrase-based passwords. A mnemonic password is one where a user chooses a memorable phrase and uses a character (often the first letter) to represent each word in the phrase.In this paper, we hypothesize that users will select mnemonic phrases that are commonly available on the Internet, and that it is possible to build a dictionary to crack mnemonic phrase-based passwords. We conduct a survey to gather user-generated passwords. We show the majority of survey respondents based their mnemonic passwords on phrases that can be found on the Internet, and we generate a mnemonic password dictionary as a proof of concept. Our 400,000-entry dictionary cracked 4% of mnemonic passwords; in comparison, a standard dictionary with 1.2 million entries cracked 11% of control passwords. The user-generated mnemonic passwords were also slightly more resistant to brute force attacks than control passwords. These results suggest that mnemonic passwords may be appropriate for some uses today. However, mnemonic passwords could become more vulnerable in the future and should not be treated as a panacea.", "title": "" }, { "docid": "263488a376e419cbbd6cd7c4ecc70a4f", "text": "This paper discusses the ethical issues related to hemicorporectomy surgery, a radical procedure that removes the lower half of the body in order to prolong life. The literature on hemicorporectomy (HC), also called translumbar amputation, has been nearly silent on the ethical considerations relevant to this rare procedure. We explore five aspects of the complex landscape of hemicorporectomy to illustrate the broader ethical questions related to this extraordinary procedure: benefits, risks, informed consent, resource allocation and justice, and loss and the lived body.", "title": "" } ]
scidocsrr
5b2b84da54d3f4ac0b3e1dcb10c87d4c
Learning interaction for collaborative tasks with probabilistic movement primitives
[ { "docid": "4b886b3ee8774a1e3110c12bdbdcbcdf", "text": "To engage in cooperative activities with human partners, robots have to possess basic interactive abilities and skills. However, programming such interactive skills is a challenging task, as each interaction partner can have different timing or an alternative way of executing movements. In this paper, we propose to learn interaction skills by observing how two humans engage in a similar task. To this end, we introduce a new representation called Interaction Primitives. Interaction primitives build on the framework of dynamic motor primitives (DMPs) by maintaining a distribution over the parameters of the DMP. With this distribution, we can learn the inherent correlations of cooperative activities which allow us to infer the behavior of the partner and to participate in the cooperation. We will provide algorithms for synchronizing and adapting the behavior of humans and robots during joint physical activities.", "title": "" } ]
[ { "docid": "93347ca2b0e76b442b39ea518eebf551", "text": "For tackling thewell known cold-start user problem inmodel-based recommender systems, one approach is to recommend a few items to a cold-start user and use the feedback to learn a pro€le. Œe learned pro€le can then be used to make good recommendations to the cold user. In the absence of a good initial pro€le, the recommendations are like random probes, but if not chosen judiciously, both bad recommendations and too many recommendations may turn o‚ a user. We formalize the cold-start user problem by asking what are the b best items we should recommend to a cold-start user, in order to learn her pro€le most accurately, where b , a given budget, is typically a small number. We formalize the problem as an optimization problem and present multiple non-trivial results, including NP-hardness as well as hardness of approximation. We furthermore show that the objective function, i.e., the least square error of the learned pro€lew.r.t. the true user pro€le, is neither submodular nor supermodular, suggesting ecient approximations are unlikely to exist. Finally, we discuss several scalable heuristic approaches for identifying the b best items to recommend to the user and experimentally evaluate their performance on 4 real datasets. Our experiments show that our proposed accelerated algorithms signi€cantly outperform the prior art in runnning time, while achieving similar error in the learned user pro€le as well as in the rating predictions. ACM Reference format: Sampoorna Biswas, Laks V.S. Lakshmanan, and Senjuti Basu Ray. 2016. Combating the Cold Start User Problem in Model Based Collaborative Filtering. In Proceedings of ACM Conference, Washington, DC, USA, July 2017 (Conference’17), 11 pages. DOI: 10.1145/nnnnnnn.nnnnnnn", "title": "" }, { "docid": "70d4545496bfd3b68e092d0ce11be299", "text": "This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.", "title": "" }, { "docid": "fbc53c95275a92b19cca4be0aaa7e7fd", "text": "Musicologists and linguists have often suggested that the prosody of a culture's spoken language can influence the structure of its instrumental music. However, empirical data supporting this idea have been lacking. 
This has been partly due to the difficulty of developing and applying comparable quantitative measures to melody and rhythm in speech and music. This study uses a recently-developed measure for the study of speech rhythm to compare rhythmic patterns in English and French language and classical music. We find that English and French musical themes are significantly different in this measure of rhythm, which also differentiates the rhythm of spoken English and French. Thus, there is an empirical basis for the claim that spoken prosody leaves an imprint on the music of a culture.", "title": "" }, { "docid": "45233b0580decd90135922ee8991791c", "text": "In this paper, we present an object recognition and pose estimation framework consisting of a novel global object descriptor, so called Viewpoint oriented Color-Shape Histogram (VCSH), which combines object's color and shape information. During the phase of object modeling and feature extraction, the whole object's color point cloud model is built by registration from multi-view color point clouds. VCSH is trained using partial-view object color point clouds generated from different synthetic viewpoints. During the recognition phase, the object is identified and the closest viewpoint is extracted using the built feature database and object's features from real scene. The estimated closest viewpoint provides a good initialization for object pose estimation optimization using the iterative closest point strategy. Finally, objects in real scene are recognized and their accurate poses are retrieved. A set of experiments is realized where our proposed approach is proven to outperform other existing methods by guaranteeing highly accurate object recognition, fast and accurate pose estimation as well as exhibiting the capability of dealing with environmental illumination changes.", "title": "" }, { "docid": "9a86609ecefc5780a49ca638be4de64c", "text": "In this paper, we propose an end-to-end capsule network for pixel level localization of actors and actions present in a video. The localization is performed based on a natural language query through which an actor and action are specified. We propose to encode both the video as well as textual input in the form of capsules, which provide more effective representation in comparison with standard convolution based features. We introduce a novel capsule based attention mechanism for fusion of video and text capsules for text selected video segmentation. The attention mechanism is performed via joint EM routing over video and text capsules for text selected actor and action localization. The existing works on actor-action localization are mainly focused on localization in a single frame instead of the full video. Different from existing works, we propose to perform the localization on all frames of the video. To validate the potential of the proposed network for actor and action localization on all the frames of a video, we extend an existing actor-action dataset (A2D) with annotations for all the frames. The experimental evaluation demonstrates the effectiveness of the proposed capsule network for text selective actor and action localization in videos, and it also improves upon the performance of the existing state-of-the art works on single frame-based localization. Figure 1: Overview of the proposed approach. For a given video, we want to localize the actor and action which are described by an input textual query. 
Capsules are extracted from both the video and the textual query, and a joint EM routing algorithm creates high level capsules, which are further used for localization of selected actors and actions.", "title": "" }, { "docid": "ce5cedb2341294105d41614a2aa80ca1", "text": "This dissertation addresses the topic of manufacturing strategy, especially the manufacturing capabilities and operational performance of manufacturing plants. Manufacturing strategy research aims at providing a structured decision making approach to improve the economics of manufacturing and to make companies more competitive. The overall objective of this thesis is to investigate how manufacturing companies make use of different manufacturing practices or bundles of manufacturing practices to develop certain sets of capabilities, with the ultimate goal of supporting the market requirements. The thesis aims to increase the understanding of the role of operations management and its immediate impact on manufacturing performance. Following the overall research objective three areas are identified to be of particular interest; to investigate (i) the relationship among different dimensions of operational performance, (ii) the way different performance dimensions are affected by manufacturing practices or bundles of manufacturing practices, (iii) whether there are contingencies that may help explain the relationships between dimensions of manufacturing capabilities or the effects of manufacturing practices or bundles of manufacturing practices on operational performance. The empirical elements in this thesis use data from the High Performance Manufacturing (HPM) project. The HPM project is an international study of manufacturing plants involving seven countries and three industries. The research contributes to several insights to the research area of manufacturing strategy and to practitioners in manufacturing operations. The thesis develops measurements for and tests the effects of several manufacturing practices on operational performance. The results are aimed at providing guidance for decision making in manufacturing companies. The most prominent implication for researchers is the manifestation of the customer order decoupling point as an important contingency variable to consider when studying manufacturing operations.", "title": "" }, { "docid": "25793a93fec7a1ccea0869252a8a0141", "text": "Condition monitoring of induction motors is a fast emerging technology for online detection of incipient faults. It avoids unexpected failure of a critical system. Approximately 30-40% of faults of induction motors are stator faults. This work presents a comprehensive review of various stator faults, their causes, detection parameters/techniques, and latest trends in the condition monitoring technology. It is aimed at providing a broad perspective on the status of stator fault monitoring to researchers and application engineers using induction motors. A list of 183 research publications on the subject is appended for quick reference.", "title": "" }, { "docid": "2798217f6e2d9194a9a30834ed9af47a", "text": "The main obstacle to transmit images in wireless sensor networks is the lack of an appropriate strategy for processing the large volume of data such as images. The high rate packets errors because of what numbers very high packets carrying the data of the captured images and the need for retransmission in case of errors, and more, the energy reserve and band bandwidth is insufficient to accomplish these tasks. 
This paper presents new effective technique called “Background subtraction” to compress, process and transmit the images in a wireless sensor network. The practical results show the effectiveness of this approach to make the image compression in the networks of wireless sensors achievable, reliable and efficient in terms of energy and the minimization of amount of image data.", "title": "" }, { "docid": "c7160e93c9cce017adc1200dc7d597f2", "text": "The transcription factor, nuclear factor erythroid 2 p45-related factor 2 (Nrf2), acts as a sensor of oxidative or electrophilic stresses and plays a pivotal role in redox homeostasis. Oxidative or electrophilic agents cause a conformational change in the Nrf2 inhibitory protein Keap1 inducing the nuclear translocation of the transcription factor which, through its binding to the antioxidant/electrophilic response element (ARE/EpRE), regulates the expression of antioxidant and detoxifying genes such as heme oxygenase 1 (HO-1). Nrf2 and HO-1 are frequently upregulated in different types of tumours and correlate with tumour progression, aggressiveness, resistance to therapy, and poor prognosis. This review focuses on the Nrf2/HO-1 stress response mechanism as a promising target for anticancer treatment which is able to overcome resistance to therapies.", "title": "" }, { "docid": "172216abbcb7acb25d5cdb8d65c2becf", "text": "In this paper, design of a planar wideband waveguide to microstrip transition for the 60 GHz frequency band is presented. The designed transition is fabricated using standard high frequency multilayer printed circuit board technology RO4003C. The waveguide to microstrip transition provides low production cost and allows for simple integration of the WR-15 rectangular waveguide without any modifications in the waveguide structure. Results of electromagnetic simulation and experimental investigation of the designed waveguide to microstrip transition are presented. The transmission bandwidth of the transition is equal to the full bandwidth of the WR-15 waveguide (50–75 GHz) for the −3 dB level of the insertion loss that was achieved by special modifications in the general aperture coupled transition structure. The transition loss is lower than 1 dB at the central frequency of 60 GHz.", "title": "" }, { "docid": "d9830a56f7d743cdf0f148cc551a7dcf", "text": "Transcranial focused ultrasound (FUS) is capable of modulating the neural activity of specific brain regions, with a potential role as a non-invasive computer-to-brain interface (CBI). In conjunction with the use of brain-to-computer interface (BCI) techniques that translate brain function to generate computer commands, we investigated the feasibility of using the FUS-based CBI to non-invasively establish a functional link between the brains of different species (i.e. human and Sprague-Dawley rat), thus creating a brain-to-brain interface (BBI). The implementation was aimed to non-invasively translate the human volunteer's intention to stimulate a rat's brain motor area that is responsible for the tail movement. The volunteer initiated the intention by looking at a strobe light flicker on a computer display, and the degree of synchronization in the electroencephalographic steady-state-visual-evoked-potentials (SSVEP) with respect to the strobe frequency was analyzed using a computer. 
Increased signal amplitude in the SSVEP, indicating the volunteer's intention, triggered the delivery of a burst-mode FUS (350 kHz ultrasound frequency, tone burst duration of 0.5 ms, pulse repetition frequency of 1 kHz, given for 300 msec duration) to excite the motor area of an anesthetized rat transcranially. The successful excitation subsequently elicited the tail movement, which was detected by a motion sensor. The interface was achieved at 94.0±3.0% accuracy, with a time delay of 1.59±1.07 sec from the thought-initiation to the creation of the tail movement. Our results demonstrate the feasibility of a computer-mediated BBI that links central neural functions between two biological entities, which may confer unexplored opportunities in the study of neuroscience with potential implications for therapeutic applications.", "title": "" }, { "docid": "494b25495ac467d3f57e171345ab2f6d", "text": "We propose a semantic segmentation model for histopathology that exploits rotation and reflection symmetries inherent in histopathology images. We demonstrate significant performance gains due to increased weight sharing, as well as improvements in predictive stability. The group-equivariant CNN framework is extended for segmentation by introducing a new (G → Z)-convolution that transforms feature maps on a group to planar feature maps. Also, equivariant transposed convolution is formulated for up-sampling in an encoder-decoder network. We further show the importance of exploiting more symmetries by varying the size of the group.", "title": "" }, { "docid": "1495ed50a24703566b2bda35d7ec4931", "text": "This paper examines the passive dynamics of quadrupedal bounding. First, an unexpected difference between local and global behavior of the forward speed versus touchdown angle in the self-stabilized Spring Loaded Inverted Pendulum (SLIP) model is exposed and discussed. Next, the stability properties of a simplified sagittal plane model of our Scout II quadrupedal robot are investigated. Despite its simplicity, this model captures the targeted steady state behavior of Scout II without dependence on the fine details of the robot structure. Two variations of the bounding gait, which are observed experimentally in Scout II, are considered. Surprisingly, numerical return map studies reveal that passive generation of a large variety of cyclic bounding motion is possible. Most strikingly, local stability analysis shows that the dynamics of the open loop passive system alone can confer stability to the motion! These results can be used in developing a general control methodology for legged robots, resulting from the synthesis of feedforward and feedback models that take advantage of the mechanical system, and might explain the success of simple, open loop bounding controllers on our experimental robot. Portions of this paper have previously appeared in conference publications Poulakakis, Papadopoulos, and Buehler (2003) and Poulakakis, Smith, and Buehler (2005b). The first and third authors were with the Centre for Intelligent Machines at McGill University when this work was performed. Address all correspondence related to this paper to the first author. The International Journal of Robotics Research Vol. 25, No. 7, July 2006, pp. 669-687 DOI: 10.1177/0278364906066768 ©2006 SAGE Publications Figures appear in color online: http://ijr.sagepub.com
KEY WORDS—passive dynamics, bounding gait, dynamic running, quadrupedal robot", "title": "" }, { "docid": "6a6063c05941c026b083bfcc573520f8", "text": "This paper describes how semantic indexing can help to generate a contextual overview of topics and visually compare clusters of articles. The method was originally developed for an innovative information exploration tool, called Ariadne, which operates on bibliographic databases with tens of millions of records (Koopman et al. in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/2702613.2732781 , 2015b). In this paper, the method behind Ariadne is further developed and applied to the research question of the special issue “Same data, different results”—the better understanding of topic (re-)construction by different bibliometric approaches. For the case of the Astro dataset of 111,616 articles in astronomy and astrophysics, a new instantiation of the interactive exploring tool, LittleAriadne, has been created. This paper contributes to the overall challenge to delineate and define topics in two different ways. First, we produce two clustering solutions based on vector representations of articles in a lexical space. These vectors are built on semantic indexing of entities associated with those articles. Second, we discuss how LittleAriadne can be used to browse through the network of topical terms, authors, journals, citations and various cluster solutions of the Astro dataset. More specifically, we treat the assignment of an article to the different clustering solutions as an additional element of its bibliographic record. Keeping the principle of semantic indexing on the level of such an extended list of entities of the bibliographic record, LittleAriadne in turn provides a visualization of the context of a specific clustering solution. It also conveys the similarity of article clusters produced by different algorithms, hence representing a complementary approach to other possible means of comparison.", "title": "" }, { "docid": "695a0e8ba9556afde6b22f29399616ba", "text": "Microstrip lines (MSL) are widely used in microwave systems because of its low cost, light weight, and easy integration with other components. Substrate integrated waveguides (SIW), which inherit the advantages from traditional rectangular waveguides without their bulky configuration, aroused recently in low loss and high power planar applications. This chapter proposed the design and modeling of transitions between these two common structures. Research motives will be described firstly in the next subsection, followed by a literature survey on the proposed MSL to SIW transition structures. Outlines of the following sections in this chapter will also be given in the end of this section.", "title": "" }, { "docid": "7e5d59b658893a36903ae4e65d9c1c4e", "text": "The paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations. 
The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs that are required to tune proposal densities for Metropolis– Hastings or indeed Hamiltonian Monte Carlo and Metropolis adjusted Langevin algorithms.This allows for highly efficient sampling even in very high dimensions where different scalings may be required for the transient and stationary phases of the Markov chain.The methodology proposed exploits the Riemann geometry of the parameter space of statistical models and thus automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density. The performance of these Riemann manifold Monte Carlo methods is rigorously assessed by performing inference on logistic regression models, log-Gaussian Cox point processes, stochastic volatility models and Bayesian estimation of dynamic systems described by non-linear differential equations. Substantial improvements in the time-normalized effective sample size are reported when compared with alternative sampling approaches. MATLAB code that is available from the authors allows replication of all the results reported.", "title": "" }, { "docid": "b0ea0b7e3900b440cb4e1d5162c6830b", "text": "Product Lifecycle Management (PLM) solutions have been serving as the basis for collaborative product definition, manufacturing, and service management in many industries. They capture and provide access to product and process information and preserve integrity of information throughout the lifecycle of a product. Efficient growth in the role of Building Information Modeling (BIM) can benefit vastly from unifying solutions to acquire, manage and make use of information and processes from various project and enterprise level systems, selectively adapting functionality from PLM systems. However, there are important differences between PLM’s target industries and the Architecture, Engineering, and Construction (AEC) industry characteristics that require modification and tailoring of some aspects of current PLM technology. In this study we examine the fundamental PLM functionalities that create synergy with the BIM-enabled AEC industry. We propose a conceptual model for the information flow and integration between BIM and PLM systems. Finally, we explore the differences between the AEC industry and traditional scope of service for PLM solutions.", "title": "" }, { "docid": "f95f77f81f5a4838f9f3fa2538e9d132", "text": "Learning analytics tools should be useful, i.e., they should be usable and provide the functionality for reaching the goals attributed to learning analytics. This paper seeks to unite learning analytics and action research. Based on this, we investigate how the multitude of questions that arise during technology-enhanced teaching and learning systematically can be mapped to sets of indicators. We examine, which questions are not yet supported and propose concepts of indicators that have a high potential of positively influencing teachers' didactical considerations. Our investigation shows that many questions of teachers cannot be answered with currently available research tools. Furthermore, few learning analytics studies report about measuring impact. 
We describe which effects learning analytics should have on teaching and discuss how this could be evaluated.", "title": "" }, { "docid": "2c4a2d41653f05060ff69f1c9ad7e1a6", "text": "Until recently the information technology (IT)-centricity was the prevailing paradigm in cyber security that was organized around confidentiality, integrity and availability of IT assets. Despite of its widespread usage, the weakness of IT-centric cyber security became increasingly obvious with the deployment of very large IT infrastructures and introduction of highly mobile tactical missions where the IT-centric cyber security was not able to take into account the dynamics of time and space bound behavior of missions and changes in their operational context. In this paper we will show that the move from IT-centricity towards to the notion of cyber attack resilient missions opens new opportunities in achieving the completion of mission goals even if the IT assets and services that are supporting the missions are under cyber attacks. The paper discusses several fundamental architectural principles of achieving cyber attack resilience of missions, including mission-centricity, survivability through adaptation, synergistic mission C2 and mission cyber security management, and the real-time temporal execution of the mission tasks. In order to achieve the overall system resilience and survivability under a cyber attack, both, the missions and the IT infrastructure are considered as two interacting adaptable multi-agent systems. While the paper is mostly concerned with the architectural principles of achieving cyber attack resilient missions, several models and algorithms that support resilience of missions are discussed in fairly detailed manner.", "title": "" }, { "docid": "99ffaa3f845db7b71a6d1cbc62894861", "text": "There is a huge amount of historical documents in libraries and in various National Archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines), automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade and dedicated to documents of historical interest.", "title": "" } ]
scidocsrr
a81d7a3273fdc528f879a8a00f52ddfd
Policy Optimization as Wasserstein Gradient Flows
[ { "docid": "5d05addd1cac2ea4ca5008950a21bd06", "text": "We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein’s identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.", "title": "" } ]
[ { "docid": "8d6a33661e281516433df5caa1f35c3a", "text": "The main contribution of this work is the comparison of three user modeling strategies based on job titles, educational fields and skills in LinkedIn profiles, for personalized MOOC recommendations in a cold start situation. Results show that the skill-based user modeling strategy performs best, followed by the job- and edu-based strategies.", "title": "" }, { "docid": "47578ab46497d9cf2da9efd8e8a75b85", "text": "In this paper, a single-layer compensated Marchand balun with arbitrarily chosen connecting line segment has been presented. It has been shown that the utilization of connecting line compensation technique together with the method of capacitive and inductive coupling coefficients' equalization in coupled-lines allows for reduction of the required number of compensating elements. The theoretical investigation has been verified by electromagnetic simulations and measurements of Marchand balun designed in a microstrip technique, operating at f0 = 1.3 GHz and terminated at the input and output with 70 Ω and 100 Ω, respectively.", "title": "" }, { "docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06", "text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.", "title": "" }, { "docid": "6852f8ac938bcccd457436658e3a9dd8", "text": "Research on knowledge management success often focuses on aggregate concepts of knowledge management capabilities when assessing their impact on organizational effectiveness. As such, little is known about the role of the individual resources that make up an organization's knowledge management capability and their impact on organizational effectiveness. To better understand these relationships, this study investigates a component model of knowledge management capabilities. Data collected from 189 managers and structural equation modeling are used to assess the research model. The results show that individual resources are differentially related to organizational effectiveness, with only some resources (e.g. organizational structure) having significant relationships vis-à-vis organizational effectiveness and others exhibiting null effects. Implications for practice and future research are discussed.", "title": "" }, { "docid": "26df3de10cdfe22eacec7b49be959790", "text": "INTRODUCTION\nCurrently, only topical minoxidil (MNX) and oral finasteride (FNS) are approved by the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of androgenetic alopecia. Although FNS is efficacious for hair regrowth, its systemic use is associated with side effects limiting long-term utilization. Exploring topical FNS as an alternative treatment regimen may prove promising.\n\n\nMETHODS\nA search was conducted to identify studies regarding human in vivo topical FNS treatment efficacy including clinically relevant case reports, randomized controlled trials (RCTs), and prospective studies.\n\n\nRESULTS\nSeven articles were included in this systematic review. In all studies, there was significant decrease in the rate of hair loss, increase in total and terminal hair counts, and positive hair growth assessment with topical FNS. 
Both scalp and plasma DHT significantly decreased with application of topical FNS; no changes in serum testosterone were noted.\n\n\nCONCLUSION\nPreliminary results on the use of topical FNS are limited, but safe and promising. Continued research into drug-delivery, ideal topical concentration and application frequency, side effects, and use for other alopecias will help to elucidate the full extent of topical FNS' use. J Drugs Dermatol. 2018;17(4):457-463..", "title": "" }, { "docid": "82e1fa35686183ebd9ad4592d6ba599e", "text": "We propose a method for model-based control of building air conditioning systems that minimizes energy costs while maintaining occupant comfort. The method uses a building thermal model in the form of a thermal circuit identified from collected sensor data, and reduces the building thermal dynamics to a Markov decision process (MDP) whose decision variables are the sequence of temperature set-points over a suitable horizon, for example one day. The main advantage of the resulting MDP model is that it is completely discrete, which allows for a very fast computation of the optimal sequence of temperature set-points. Experiments on thermal models demonstrate savings that can exceed 50% with respect to usual control strategies in buildings such as night setup.", "title": "" }, { "docid": "c8d9ec6aa63b783e4c591dccdbececcf", "text": "The use of context is critical for scene understanding in computer vision, where the recognition of an object is driven by both local appearance and the object’s relationship to other elements of the scene (context).
Most current approaches rely on modeling the relationships between object categories as a source of context. In this paper we seek to move beyond categories to provide a richer appearancebased model of context. We present an exemplar-based model of objects and their relationships, the Visual Memex, that encodes both local appearance and 2D spatial context between object instances. We evaluate our model on Torralba’s proposed Context Challenge against a baseline category-based system. Our experiments suggest that moving beyond categories for context modeling appears to be quite beneficial, and may be the critical missing ingredient in scene understanding systems.", "title": "" }, { "docid": "f32cfe5e4f781f3ef0da302506f4d65a", "text": "In this work, we estimate the deterioration of NLP processing given an estimate of the amount and nature of grammatical errors in a text. From a corpus of essays written by English-language learners, we extract ungrammatical sentences, controlling the number and types of errors in each sentence. We focus on six categories of errors that are commonly made by English-language learners, and consider sentences containing one or more of these errors. To evaluate the effect of grammatical errors, we measure the deterioration of ungrammatical dependency parses using the labeled F-score, an adaptation of the labeled attachment score. We find notable differences between the influence of individual error types on the dependency parse, as well as interactions between multiple errors.", "title": "" }, { "docid": "b02c718acfab40a33840eec013a09bda", "text": "Smartphones today are ubiquitous source of sensitive information. Information leakage instances on the smartphones are on the rise because of exponential growth in smartphone market. Android is the most widely used operating system on smartphones. Many information flow tracking and information leakage detection techniques are developed on Android operating system. Taint analysis is commonly used data flow analysis technique which tracks the flow of sensitive information and its leakage. This paper provides an overview of existing Information flow tracking techniques based on the Taint analysis for android applications. It is observed that static analysis techniques look at the complete program code and all possible paths of execution before its run, whereas dynamic analysis looks at the instructions executed in the program-run in the real time. We provide in depth analysis of both static and dynamic taint analysis approaches.", "title": "" }, { "docid": "2f32cdb4c5622ace69979e90fda0b6d9", "text": "Mobile cloud computing is a new paradigm that uses cloud computing resources to overcome the limitations of mobile computing. Due to its complexity, dependability and performance studies of mobile clouds may require composite modeling techniques, using distinct models for each subsystem and combining state-based and non–state-based formalisms. This paper uses hierarchical modeling and four different sensitivity analysis techniques to determine the parameters that cause the greatest impact on the availability of a mobile cloud. The results show that distinct approaches provide similar results regarding the sensitivity ranking, with specific exceptions. A combined evaluation indicates that system availability may be improved effectively by focusing on a reduced set of factors that produce large variation on the measure of interest. 
The time needed to replace a fully discharged battery in the mobile device is a parameter with high impact on steady-state availability, as well as the coverage factor for the failures of some cloud servers. This paper also shows that a sensitivity analysis through partial derivatives may not capture the real level of impact for some parameters in a discrete domain, such as the number of active servers. The analysis through percentage differences, or the factorial design of experiments, fulfills such a gap.", "title": "" }, { "docid": "3255b89b7234595e7078a012d4e62fa7", "text": "Virtual assistants such as IFTTT and Almond support complex tasks that combine open web APIs for devices and web services. In this work, we explore semantic parsing to understand natural language commands for these tasks and their compositions. We present the ThingTalk dataset, which consists of 22,362 commands, corresponding to 2,681 distinct programs in ThingTalk, a language for compound virtual assistant tasks. To improve compositionality of multiple APIs, we propose SEQ2TT, a Seq2Seq extension using a bottom-up encoding of grammar productions for programs and a maxmargin loss. On the ThingTalk dataset, SEQ2TT obtains 84% accuracy on trained programs and 67% on unseen combinations, an improvement of 12% over a basic sequence-to-sequence model with attention.", "title": "" }, { "docid": "6fcfbe651d6c4f3a47bf07ee7d38eee2", "text": "\"People-nearby applications\" (PNAs) are a form of ubiquitous computing that connect users based on their physical location data. One example is Grindr, a popular PNA that facilitates connections among gay and bisexual men. Adopting a uses and gratifications approach, we conducted two studies. In study one, 63 users reported motivations for Grindr use through open-ended descriptions. In study two, those descriptions were coded into 26 items that were completed by 525 Grindr users. Factor analysis revealed six uses and gratifications: social inclusion, sex, friendship, entertainment, romantic relationships, and location-based search. Two additional analyses examine (1) the effects of geographic location (e.g., urban vs. suburban/rural) on men's use of Grindr and (2) how Grindr use is related to self-disclosure of information. Results highlight how the mixed-mode nature of PNA technology may change the boundaries of online and offline space, and how gay and bisexual men navigate physical environments.", "title": "" }, { "docid": "cf707baa5b30bcdb75d2f3a6e01862e8", "text": "Engagement in the arts1 is an important component of participation in cultural activities, but remains a largely unaddressed challenge for people with sensory disabilities. Visual arts are generally inaccessible to people with visual impairments due to their inherently visual nature. To address this, we present Eyes-Free Art, a design probe to explore the use of proxemic audio for interactive sonic experiences with 2D art work. The proxemic audio interface allows a user to move closer and further away from a painting to experience background music, a novel sonification, sound effects, and a detailed verbal description. We conducted a lab study by creating interpretations of five paintings with 13 people with visual impairments and found that participants enjoyed interacting with the artwork. We then created a live installation with a visually impaired artist to iterate on this concept to account for multiple users and paintings. 
We learned that a proxemic audio interface allows for people to feel immersed in the artwork. Proxemic audio interfaces are similar to visual because they increase in detail with closer proximity, but are different because they need a descriptive verbal overview to give context. We present future research directions in the space of proxemic audio interactions.", "title": "" }, { "docid": "b1d34115a1da9cfb349fc4690f54a82e", "text": "There are several theories available to describe how managers choose a medium for communication. However, current technology can affect not only how we communicate but also what we communicate. As a result, the issue for designers of communication support systems has become broader: how should technology be designed to make communication more effective by changing the medium and the attributes of the message itself? The answer to this question requires a shift from current preoccupations with the medium of communication to a view that assesses the balance between medium and message form. There is also a need to look more closely at the process of communication in order to identify more precisely any potential areas of computer", "title": "" }, { "docid": "430adc54605031a5dcc2658bd6a24462", "text": "Using the example of a failed software implementation, we discuss the role of artifacts in shaping organizational routines. We argue that artifact-centered assumptions about design are not well suited to designing organizational routines, which are generative systems that produce recognizable, repetitive patterns of interdependent actions, carried out by multiple actors. Artifact-centered assumptions about design not only reinforce a widespread misunderstanding of routines as things, they implicitly embody a rather strong form of technological determinism. As an alternative perspective, we discuss the use of narrative networks as a way to conceptualize the role of human and non-human actants, and to represent the variable patterns of action that are characteristic of ‘‘live” routines. Using this perspective, we conclude with some suggestions on how to design organizational routines that are more consistent with their nature as generative systems. 2008 Published by Elsevier Ltd.", "title": "" }, { "docid": "b678ca4c649a2e69637b84c3e35f88f5", "text": "Induced expression of the Flock House virus in the soma of C. elegans results in the RNAi-dependent production of virus-derived, small-interfering RNAs (viRNAs), which in turn silence the viral genome. We show here that the viRNA-mediated viral silencing effect is transmitted in a non-Mendelian manner to many ensuing generations. We show that the viral silencing agents, viRNAs, are transgenerationally transmitted in a template-independent manner and work in trans to silence viral genomes present in animals that are deficient in producing their own viRNAs. These results provide evidence for the transgenerational inheritance of an acquired trait, induced by the exposure of animals to a specific, biologically relevant physiological challenge.
The ability to inherit such extragenic information may provide adaptive benefits to an animal.", "title": "" }, { "docid": "7998670588bee1965fd5a18be9ccb0d9", "text": "In this letter, a hybrid visual servoing with a hierarchical task-composition control framework is described for aerial manipulation, i.e., for the control of an aerial vehicle endowed with a robot arm. The proposed approach suitably combines into a unique hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle has been explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.", "title": "" }, { "docid": "15fa73633d6ec7539afc91bb1f45098f", "text": "Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.", "title": "" }, { "docid": "03fa5f5f6b6f307fc968a2b543e331a1", "text": "In recent years, several noteworthy large, cross-domain, and openly available knowledge graphs (KGs) have been created. These include DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Although extensively in use, these KGs have not been subject to an in-depth comparison so far. In this survey, we provide data quality criteria according to which KGs can be analyzed and analyze and compare the above mentioned KGs. 
Furthermore, we propose a framework for finding the most suitable KG for a given setting.", "title": "" }, { "docid": "dd2cb96ed215b5ee050ca4c16d61e1bc", "text": "The goal of this chapter is to give fundamental knowledge on solving multi-objective optimization problems. The focus is on the intelligent metaheuristic approaches (evolutionary algorithms or swarm-based techniques). The focus is on techniques for efficient generation of the Pareto frontier. A general formulation of MO optimization is given in this chapter, the Pareto optimality concepts introduced, and solution approaches with examples of MO problems in the power systems field are given", "title": "" } ]
scidocsrr
f6e429126158fa743b162448c8428514
Group-wise Deep Co-saliency Detection
[ { "docid": "94160496e0a470dc278f71c67508ae21", "text": "In this paper, we tackle the problem of co-localization in real-world images. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images. Although similar problems such as co-segmentation and weakly supervised localization have been previously studied, we focus on being able to perform co-localization in real-world settings, which are typically characterized by large amounts of intra-class variation, inter-class diversity, and annotation noise. To address these issues, we present a joint image-box formulation for solving the co-localization problem, and show how it can be relaxed to a convex quadratic program which can be efficiently solved. We perform an extensive evaluation of our method compared to previous state-of-the-art approaches on the challenging PASCAL VOC 2007 and Object Discovery datasets. In addition, we also present a large-scale study of co-localization on ImageNet, involving ground-truth annotations for 3, 624 classes and approximately 1 million images.", "title": "" }, { "docid": "9fc47eca91c72afbc6875ef71f22de30", "text": "We propose a principled probabilistic formulation of object saliency as a sampling problem. This novel formulation allows us to learn, from a large corpus of unlabelled images, which patches of an image are of the greatest interest and most likely to correspond to an object. We then sample the object saliency map to propose object locations. We show that using only a single object location proposal per image, we are able to correctly select an object in over 42% of the images in the Pascal VOC 2007 dataset, substantially outperforming existing approaches. Furthermore, we show that our object proposal can be used as a simple unsupervised approach to the weakly supervised annotation problem. Our simple unsupervised approach to annotating objects of interest in images achieves a higher annotation accuracy than most weakly supervised approaches.", "title": "" }, { "docid": "789202b969866d2f8bdbcfc3bcf4bfbb", "text": "As an interesting and emerging topic, co-saliency detection aims at simultaneously extracting common salient objects in a group of images. Traditional co-saliency detection approaches rely heavily on human knowledge for designing hand-crafted metrics to explore the intrinsic patterns underlying co-salient objects. Such strategies, however, always suffer from poor generalization capability to flexibly adapt various scenarios in real applications, especially due to their lack of insightful understanding of the biological mechanisms of human visual co-attention. To alleviate this problem, we propose a novel framework for this task, by naturally reformulating it as a multiple-instance learning (MIL) problem and further integrating it into a self-paced learning (SPL) regime. The proposed framework on one hand is capable of fitting insightful metric measurements and discovering common patterns under co-salient regions in a self-learning way by MIL, and on the other hand tends to promise the learning reliability and stability by simulating the human learning process through SPL. Experiments on benchmark datasets have demonstrated the effectiveness of the proposed framework as compared with the state-of-the-arts.", "title": "" } ]
[ { "docid": "76efa42a492d8eb36b82397e09159c30", "text": "attempt to foster AI and intelligent robotics research by providing a standard problem where a wide range of technologies can be integrated and examined. The first RoboCup competition will be held at the Fifteenth International Joint Conference on Artificial Intelligence in Nagoya, Japan. A robot team must actually perform a soccer game, incorporating various technologies, including design principles of autonomous agents, multiagent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup’s final target is a world cup with real robots, RoboCup offers a software platform for research on the software aspects of RoboCup. This article describes technical challenges involved in RoboCup, rules, and the simulation environment.", "title": "" }, { "docid": "f67e221a12e0d8ebb531a1e7c80ff2ff", "text": "Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: 1) relying on object or part annotations which are heavily labor consuming; and 2) ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification and the main novelties are: 1) object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotion; and 2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Compared with more than ten state-of-the-art methods on four widely-used datasets, our OPAM approach achieves the best performance.", "title": "" }, { "docid": "588fcb80381f75efca073438c3eda7fb", "text": "Nowadays, parents are perturbed about school going children because of the increasing number of cases of missing students. On occasion, students need to wait a much longer time for arrival of their school bus. There exist some communication technologies that are used to ensure the safety of students. But these are incapable of providing efficient services to parents. This paper presents the development of a school bus monitoring system, capable of providing productive services through emerging technologies like Internet of Things (Iota). The proposed IoT based system tracks students in a school bus using a combination of RFID/GPS/GSM/GPRS technologies. 
In addition to the tracking, a prediction algorithm is implemented for computation of the arrival time of a school-bus. Through an Android application, parents can continuously monitor the bus route and forecast arrival time of the bus.", "title": "" }, { "docid": "a008e9f817c6c4658c9c739d0d7fb6a4", "text": "BI (Business Intelligence) is an important discipline for companies and the challenges it faces are strategic. A central concept in BI is the data warehouse, which is a set of consolidated data from heterogeneous sources (usually databases in 3NF). To model the data warehouse, the Inmon and Kimball approaches are the most used. Both solutions monopolize the BI market However, a third modeling approach called “Data Vault” of its creator Linstedt, is gaining ground from year to year. It allows building a data warehouse of raw (unprocessed) data from heterogeneous sources. The purpose of this paper is to present a comparative study of the three precedent approaches. First, we study each approach separately and then we draw a comparison between them. Finally, we include recommendations for selecting the best approach before concluding this paper.", "title": "" }, { "docid": "324dc3f410eb89f096dd72bffe9616bc", "text": "The use of the Internet by older adults is growing at a substantial rate. They are becoming an increasingly important potential market for electronic commerce. However, previous researchers and practitioners have focused mainly on the youth market and paid less attention to issues related to the online behaviors of older consumers. To bridge the gap, the purpose of this study is to increase a better understanding of the drivers and barriers affecting older consumers’ intention to shop online. To this end, this study is developed by integrating the Unified Theory of Acceptance and Use of Technology (UTAUT) and innovation resistance theory. By comparing younger consumers with their older counterparts, in terms of gender the findings indicate that the major factors driving older adults toward online shopping are performance expectation and social influence which is the same with younger. On the other hand, the major barriers include value, risk, and tradition which is different from younger. Consequently, it is notable that older adults show no gender differences in regards to the drivers and barriers. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a72c9cd8bdf4aec0d265dd4a5fff2826", "text": "We propose a robust quantization-based image watermarking scheme, called the gradient direction watermarking (GDWM), based on the uniform quantization of the direction of gradient vectors. In GDWM, the watermark bits are embedded by quantizing the angles of significant gradient vectors at multiple wavelet scales. The proposed scheme has the following advantages: 1) increased invisibility of the embedded watermark because the watermark is embedded in significant gradient vectors, 2) robustness to amplitude scaling attacks because the watermark is embedded in the angles of the gradient vectors, and 3) increased watermarking capacity as the scheme uses multiple-scale embedding. The gradient vector at a pixel is expressed in terms of the discrete wavelet transform (DWT) coefficients. To quantize the gradient direction, the DWT coefficients are modified based on the derived relationship between the changes in the coefficients and the change in the gradient direction. 
Experimental results show that the proposed GDWM outperforms other watermarking methods and is robust to a wide range of attacks, e.g., Gaussian filtering, amplitude scaling, median filtering, sharpening, JPEG compression, Gaussian noise, salt & pepper noise, and scaling.", "title": "" }, { "docid": "54442326e3007cf276f505b0da9a149a", "text": "We introduce a visual analytics method to analyze eye movement data recorded for dynamic stimuli such as video or animated graphics. The focus lies on the analysis of data of several viewers to identify trends in the general viewing behavior, including time sequences of attentional synchrony and objects with strong attentional focus. By using a space-time cube visualization in combination with clustering, the dynamic stimuli and associated eye gazes can be analyzed in a static 3D representation. Shot-based, spatiotemporal clustering of the data generates potential areas of interest that can be filtered interactively. We also facilitate data drill-down: the gaze points are shown with density-based color mapping and individual scan paths as lines in the space-time cube. The analytical process is supported by multiple coordinated views that allow the user to focus on different aspects of spatial and temporal information in eye gaze data. Common eye-tracking visualization techniques are extended to incorporate the spatiotemporal characteristics of the data. For example, heat maps are extended to motion-compensated heat maps and trajectories of scan paths are included in the space-time visualization. Our visual analytics approach is assessed in a qualitative users study with expert users, which showed the usefulness of the approach and uncovered that the experts applied different analysis strategies supported by the system.", "title": "" }, { "docid": "806ae85b278c98a9107adeb1f55b8808", "text": "The present studies report the effects on neonatal rats of oral exposure to genistein during the period from birth to postnatal day (PND) 21 to generate data for use in assessing human risk following oral ingestion of genistein. Failure to demonstrate significant exposure of the newborn pups via the mothers milk led us to subcutaneously inject genistein into the pups over the period PND 1-7, followed by daily gavage dosing to PND 21. The targeted doses throughout were 4 mg/kg/day genistein (equivalent to the average exposure of infants to total isoflavones in soy milk) and a dose 10 times higher than this (40 mg/kg genistein). The dose used during the injection phase of the experiment was based on plasma determinations of genistein and its major metabolites. Diethylstilbestrol (DES) at 10 micro g/kg was used as a positive control agent for assessment of changes in the sexually dimorphic nucleus of the preoptic area (SDN-POA). Administration of 40 mg/kg genistein increased uterus weights at day 22, advanced the mean day of vaginal opening, and induced permanent estrus in the developing female pups. Progesterone concentrations were also decreased in the mature females. There were no effects in females dosed with 4 mg/kg genistein, the predicted exposure level for infants drinking soy-based infant formulas. There were no consistent effects on male offspring at either dose level of genistein. 
Although genistein is estrogenic at 40 mg/kg/day, as illustrated by the effects described above, this dose does not have the same repercussions as DES in terms of the organizational effects on the SDN-POA.", "title": "" }, { "docid": "e64caf71b75ac93f0426b199844f319b", "text": "INTRODUCTION\nVaginismus is mostly unknown among clinicians and women. Vaginismus causes women to have fear, anxiety, and pain with penetration attempts.\n\n\nAIM\nTo present a large cohort of patients based on prior published studies approved by an institutional review board and the Food and Drug Administration using a comprehensive multimodal vaginismus treatment program to treat the physical and psychologic manifestations of women with vaginismus and to record successes, failures, and untoward effects of this treatment approach.\n\n\nMETHODS\nAssessment of vaginismus included a comprehensive pretreatment questionnaire, the Female Sexual Function Index (FSFI), and consultation. All patients signed a detailed informed consent. Treatment consisted of a multimodal approach including intravaginal injections of onabotulinumtoxinA (Botox) and bupivacaine, progressive dilation under conscious sedation, indwelling dilator, follow-up and support with office visits, phone calls, e-mails, dilation logs, and FSFI reports.\n\n\nMAIN OUTCOME MEASURES\nLogs noting dilation progression, pain and anxiety scores, time to achieve intercourse, setbacks, and untoward effects. Post-treatment FSFI scores were compared with preprocedure scores.\n\n\nRESULTS\nOne hundred seventy-one patients (71%) reported having pain-free intercourse at a mean of 5.1 weeks (median = 2.5). Six patients (2.5%) were unable to achieve intercourse within a 1-year period after treatment and 64 patients (26.6%) were lost to follow-up. The change in the overall FSFI score measured at baseline, 3 months, 6 months, and 1 year was statistically significant at the 0.05 level. Three patients developed mild temporary stress incontinence, two patients developed a short period of temporary blurred vision, and one patient developed temporary excessive vaginal dryness. All adverse events resolved by approximately 4 months. One patient required retreatment followed by successful coitus.\n\n\nCONCLUSION\nA multimodal program that treated the physical and psychologic aspects of vaginismus enabled women to achieve pain-free intercourse as noted by patient communications and serial female sexual function studies. Further studies are indicated to better understand the individual components of this multimodal treatment program. Pacik PT, Geletta S. Vaginismus Treatment: Clinical Trials Follow Up 241 Patients. Sex Med 2017;5:e114-e123.", "title": "" }, { "docid": "0a3eaf68a3f1f2587f2456cbb29e1f06", "text": "OBJECTIVE\nTo develop a single trial motor imagery (MI) classification strategy for the brain-computer interface (BCI) applications by using time-frequency synthesis approach to accommodate the individual difference, and using the spatial patterns derived from electroencephalogram (EEG) rhythmic components as the feature description.\n\n\nMETHODS\nThe EEGs are decomposed into a series of frequency bands, and the instantaneous power is represented by the envelop of oscillatory activity, which forms the spatial patterns for a given electrode montage at a time-frequency grid. 
Time-frequency weights determined by training process are used to synthesize the contributions from the time-frequency domains.\n\n\nRESULTS\nThe present method was tested in nine human subjects performing left or right hand movement imagery tasks. The overall classification accuracies for nine human subjects were about 80% in the 10-fold cross-validation, without rejecting any trials from the dataset. The loci of MI activity were shown in the spatial topography of differential-mode patterns over the sensorimotor area.\n\n\nCONCLUSIONS\nThe present method does not contain a priori subject-dependent parameters, and is computationally efficient. The testing results are promising considering the fact that no trials are excluded due to noise or artifact.\n\n\nSIGNIFICANCE\nThe present method promises to provide a useful alternative as a general purpose classification procedure for MI classification.", "title": "" }, { "docid": "31abfd6e4f6d9e56bc134ffd7c7b7ffc", "text": "Online social networks like Facebook recommend new friends to users based on an explicit social network that users build by adding each other as friends. The majority of earlier work in link prediction infers new interactions between users by mainly focusing on a single network type. However, users also form several implicit social networks through their daily interactions like commenting on people’s posts or rating similarly the same products. Prior work primarily exploited both explicit and implicit social networks to tackle the group/item recommendation problem that recommends to users groups to join or items to buy. In this paper, we show that auxiliary information from the useritem network fruitfully combines with the friendship network to enhance friend recommendations. We transform the well-known Katz algorithm to utilize a multi-modal network and provide friend recommendations. We experimentally show that the proposed method is more accurate in recommending friends when compared with two single source path-based algorithms using both synthetic and real data sets.", "title": "" }, { "docid": "06104f7f43133230eb79b86c195e4206", "text": "This paper describes the WiLI-2018 benchmark dataset for monolingual written natural language identification. WiLI-2018 is a publicly available,1 free of charge dataset of short text extracts from Wikipedia. It contains 1000 paragraphs of 235 languages, totaling in 235 000 paragraphs. WiLI is a classification dataset: Given an unknown paragraph written in one dominant language, it has to be decided which language it is.", "title": "" }, { "docid": "7374e16190e680669f76fc7972dc3975", "text": "Open-plan office layout is commonly assumed to facilitate communication and interaction between co-workers, promoting workplace satisfaction and team-work effectiveness. On the other hand, open-plan layouts are widely acknowledged to be more disruptive due to uncontrollable noise and loss of privacy. Based on the occupant survey database from Center for the Built Environment (CBE), empirical analyses indicated that occupants assessed Indoor Environmental Quality (IEQ) issues in different ways depending on the spatial configuration (classified by the degree of enclosure) of their workspace. Enclosed private offices clearly outperformed open-plan layouts in most aspects of IEQ, particularly in acoustics, privacy and the proxemics issues. 
Benefits of enhanced ‘ease of interaction’ were smaller than the penalties of increased noise level and decreased privacy resulting from open-plan office configuration.", "title": "" }, { "docid": "517a88f2aeb4d2884edfb6a9a64b1e8b", "text": "Metformin (dimethylbiguanide) features as a current first-line pharmacological treatment for type 2 diabetes (T2D) in almost all guidelines and recommendations worldwide. It has been known that the antihyperglycemic effect of metformin is mainly due to the inhibition of hepatic glucose output, and therefore, the liver is presumably the primary site of metformin function. However, in this issue of Diabetes Care, Fineman and colleagues (1) demonstrate surprising results from their clinical trials that suggest the primary effect of metformin resides in the human gut. Metformin is an orally administered drug used for lowering blood glucose concentrations in patients with T2D, particularly in those overweight and obese as well as those with normal renal function. Pharmacologically, metformin belongs to the biguanide class of antidiabetes drugs. The history of biguanides can be traced from the use of Galega officinalis (commonly known as galega) for treating diabetes in medieval Europe (2). Guanidine, the active component of galega, is the parent compound used to synthesize the biguanides. Among three main biguanides introduced for diabetes therapy in the late 1950s, metformin (Fig. 1A) has a superior safety profile and is well tolerated. The other two biguanides, phenformin and buformin, were withdrawn in the early 1970s due to the risk of lactic acidosis and increased cardiac mortality. The incidence of lactic acidosis with metformin at therapeutic doses is rare (less than three cases per 100,000 patient-years) and is not greater than with nonmetformin therapies (3). Major clinical advantages of metformin include specific reduction of hepatic glucose output, with subsequent improvement of peripheral insulin sensitivity, and remarkable cardiovascular safety, but without increasing islet insulin secretion, inducing weight gain, or posing a risk of hypoglycemia. Moreover, metformin has also shown benefits in reducing cancer risk and improving cancer prognosis (4,5), as well as counteracting the cardiovascular complications associated with diabetes (6). Although metformin has been widely prescribed to patients with T2D for over 50 years and has been found to be safe and efficacious both as monotherapy and in combination with other oral antidiabetes agents and insulin, the mechanism of metformin action is only partially explored and remains controversial. In mammals, oral bioavailability of metformin is ~50% and is absorbed through the upper small intestine (duodenum and jejunum) (7) and then is delivered to the liver, circulates unbound essentially, and finally is eliminated by the kidneys. Note that metformin is not metabolized and so is unchanged throughout the journey in the body. The concentration of metformin in the liver is three- to fivefold higher than that in the portal vein (40–70 μmol/L) after a single therapeutic dose (20 mg/kg/day in humans or 250 mg/kg/day in mice) (3,8), and metformin in general circulation is 10–40 μmol/L (8). As the antihyperglycemic effect of metformin is mainly due to the inhibition of hepatic glucose output and the concentration of metformin in the hepatocytes is much higher than in the blood, the liver is therefore presumed to be the primary site of metformin function.
Indeed, the liver has been the focus of the majority of metformin research by far, and hepatic mechanisms of metformin that have been suggested include the activation of AMPK through liver kinase B1 and decreased energy charge (9,10), the inhibition of glucagon-induced cAMP production by blocking adenylyl cyclase (11), the increase of the AMP/ATP ratio by restricting NADH:coenzyme Q oxidoreductase (complex I) in the mitochondrial electron transport chain (12) (albeit at high metformin concentrations, ~5 mmol/L), and, more recently, the reduction of lactate and glycerol metabolism to glucose through a redox change by inhibiting mitochondrial glycerophosphate dehydrogenase (13). It is noteworthy that the remaining ~50% of metformin, which is unabsorbed, accumulates in the gut mucosa of the distal small intestine at concentrations 30to", "title": "" }, { "docid": "853ef57bfa4af5edf4ee3c8a46e4b4f4", "text": "Hidden properties of social media users, such as their ethnicity, gender, and location, are often reflected in their observed attributes, such as their first and last names. Furthermore, users who communicate with each other often have similar hidden properties. We propose an algorithm that exploits these insights to cluster the observed attributes of hundreds of millions of Twitter users. Attributes such as user names are grouped together if users with those names communicate with other similar users. We separately cluster millions of unique first names, last names, and user-provided locations. The efficacy of these clusters is then evaluated on a diverse set of classification tasks that predict hidden users properties such as ethnicity, geographic location, gender, language, and race, using only profile names and locations when appropriate. Our readily-replicable approach and publicly-released clusters are shown to be remarkably effective and versatile, substantially outperforming state-of-the-art approaches and human accuracy on each of the tasks studied.", "title": "" }, { "docid": "f7c73ca2b6cd6da6fec42076910ed3ec", "text": "The goal of rating-based recommender systems is to make personalized predictions and recommendations for individual users by leveraging the preferences of a community of users with respect to a collection of items like songs or movies. Recommender systems are often based on intricate statistical models that are estimated from data sets containing a very high proportion of missing ratings. This work describes evidence of a basic incompatibility between the properties of recommender system data sets and the assumptions required for valid estimation and evaluation of statistical models in the presence of missing data. We discuss the implications of this problem and describe extended modelling and evaluation frameworks that attempt to circumvent it. We present prediction and ranking results showing that models developed and tested under these extended frameworks can significantly outperform standard models.", "title": "" }, { "docid": "a134708edc1879699a4643933f3b0f9f", "text": "Embodied Cognition is an approach to cognition that departs from traditional cognitive science in its reluctance to conceive of cognition as computational and in its emphasis on the significance of an organism’s body in how and what the organism thinks. Three lines of embodied cognition research are described and some thoughts on the future of embodied cognition offered.
The embodied cognition research programme, hereafter EC, departs from more traditional cognitive science in the emphasis it places on the role the body plays in an organism’s cognitive processes. Saying more beyond this vague claim is difficult, but this is perhaps not surprising given the diversity of fields – phenomenology, robotics, ecological psychology, artificial life, ethology – from which EC has emerged. Indeed, the point of labelling EC a research programme, rather than a theory, is to indicate that the commitments and subject matters of EC remain fairly nebulous. Yet, much of the flavour of EC becomes evident when considering three prominent directions that researchers in this programme have taken. Before turning to these lines of research, it pays to have in sight the traditional view of cognitive science against which EC positions itself. I.Traditional Cognitive Science Unifying traditional cognitive science is the idea that thinking is a process of symbol manipulation, where symbols lead both a syntactic and a semantic life (Haugeland, ‘Semantic Engines’). The syntax of a symbol comprises those properties in virtue of which the symbol undergoes rule-dictated transformations. The semantics of a symbol constitute the symbols’ meaning or representational content. Thought consists in the syntactically determined manipulation of symbols, but in a way that respects their semantics. Thus, for instance, a calculating computer sensitive only to the shape of symbols might produce the symbol ‘5’ in response to the inputs ‘2’, ‘+’, and ‘3’. As far as the computer is concerned, these symbols have no meaning, but because of its programme it will produce outputs that, to the user, ‘make sense’ given the meanings the user attributes to the symbols.", "title": "" }, { "docid": "78fecd65b909fbdfeb4b3090b2dadc01", "text": "Advances in antenna technologies for cellular hand-held devices have been synchronous with the evolution of mobile phones over nearly 40 years. Having gone through four major wireless evolutions [1], [2], starting with the analog-based first generation to the current fourth-generation (4G) mobile broadband, technologies from manufacturers and their wireless network capacities today are advancing at unprecedented rates to meet our unrelenting service demands. These ever-growing demands, driven by exponential growth in wireless data usage around the globe [3], have gone hand in hand with major technological milestones achieved by the antenna design community. For instance, realizing the theory regarding the physical limitation of antennas [4]-[6] was paramount to the elimination of external antennas for mobile phones in the 1990s. This achievement triggered a variety of revolutionary mobile phone designs and the creation of new wireless services, establishing the current cycle of cellular advances and advances in mobile antenna technologies.", "title": "" }, { "docid": "159610206e175126fa07f87a5fb28ab2", "text": "BACKGROUND\nThe aim of this review was to further define the clinical condition triquetrohamate (TH) impaction syndrome (THIS), an entity underreported and missed often. Its presentation, physical findings, and treatment are presented.\n\n\nMETHODS\nBetween 2009 and 2014, 18 patients were diagnosed with THIS. The age, sex, hand involved, activity responsible for symptoms, and defining characteristics were recorded. The physical findings, along with ancillary studies, were reviewed. Delay in diagnosis and misdiagnoses were assessed. 
Treatment, either conservative or surgical, is presented. Follow-up outcomes are presented.\n\n\nRESULTS\nThere were 15 male and 3 females, average age of 42 years. Two-handed sports such as golf and baseball accounted for more than 60% of the cases, and these cases were the only ones that involved the lead nondominant hand, pain predominantly at impact. Delay in diagnosis averaged greater than 7 months, with triangular fibrocartilage (TFCC) and extensor carpi ulnaris (ECU) accounting for more than 50% of misdiagnoses. Physical findings of note included pain over the TH joint, worse with passive dorsiflexion and ulnar deviation. Radiographic findings are described. Instillation of lidocaine with the wrist in radial deviation under fluoroscopic imaging with relief of pain helped to confirm the diagnosis. Conservative treatment was successful in 9 of 18 patients (50%), whereas in the remaining, surgical intervention allowed approximately 80% return to full activities without limitation.\n\n\nCONCLUSION\nTriquetrohamate impaction syndrome remains an underreported and often unrecognized cause of ulnar-sided wrist pain. In this report, the largest series to date, its presentation, defining characteristics, and treatment options are further elucidated.", "title": "" }, { "docid": "a316280dc7f50689015d54021101eb34", "text": "Taddy (2013) proposed multinomial inverse regression (MNIR) as a new model of annotated text based on the influence of metadata and response variables on the distribution of words in a document. While effective, MNIR has no way to exploit structure in the corpus to improve its predictions or facilitate exploratory data analysis. On the other hand, traditional probabilistic topic models (like latent Dirichlet allocation) capture natural heterogeneity in a collection but do not account for external variables. In this paper, we introduce the inverse regression topic model (IRTM), a mixed-membership extension of MNIR that combines the strengths of both methodologies. We present two inference algorithms for the IRTM: an efficient batch estimation algorithm and an online variant, which is suitable for large corpora. We apply these methods to a corpus of 73K Congressional press releases and another of 150K Yelp reviews, demonstrating that the IRTM outperforms both MNIR and supervised topic models on the prediction task. Further, we give examples showing that the IRTM enables systematic discovery of in-topic lexical variation, which is not possible with previous supervised topic models.", "title": "" } ]
scidocsrr
5dae0e0462d67f7ef9cdd106fe45cda7
Capturing braided hairstyles
[ { "docid": "bdfb3a761d7d9dbb96fa4f07bc2c1f89", "text": "We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 3D indoor reconstruction is particularly challenging due to object interferences, occlusions and overlapping which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities. The resulting reconstruction is an approximation which captures the general scene arrangement. Our results demonstrate successful classification and reconstruction of cluttered indoor scenes, captured in just few minutes.", "title": "" }, { "docid": "4f61e9cd234a5f6e6b9886cf4ab1cc22", "text": "We introduce a data-driven hair capture framework based on example strands generated through hair simulation. Our method can robustly reconstruct faithful 3D hair models from unprocessed input point clouds with large amounts of outliers. Current state-of-the-art techniques use geometrically-inspired heuristics to derive global hair strand structures, which can yield implausible hair strands for hairstyles involving large occlusions, multiple layers, or wisps of varying lengths. We address this problem using a voting-based fitting algorithm to discover structurally plausible configurations among the locally grown hair segments from a database of simulated examples. To generate these examples, we exhaustively sample the simulation configurations within the feasible parameter space constrained by the current input hairstyle. The number of necessary simulations can be further reduced by leveraging symmetry and constrained initial conditions. The final hairstyle can then be structurally represented by a limited number of examples. To handle constrained hairstyles such as a ponytail of which realistic simulations are more difficult, we allow the user to sketch a few strokes to generate strand examples through an intuitive interface. Our approach focuses on robustness and generality. Since our method is structurally plausible by construction, we ensure an improved control during hair digitization and avoid implausible hair synthesis for a wide range of hairstyles.", "title": "" } ]
[ { "docid": "36ed684e39877873407efb809f3cd1dc", "text": "A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry and arranged in a periodic lattice characterized by a repetition period larger than one wavelength which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of FSS geometry is performed through a genetic algorithm in conjunction with periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need of a high-resolution printing process.", "title": "" }, { "docid": "80d920f1f886b81e167d33d5059b8afe", "text": "Agriculture is one of the most important aspects of human civilization. The usages of information and communication technologies (ICT) have significantly contributed in the area in last two decades. Internet of things (IOT) is a technology, where real life physical objects (e.g. sensor nodes) can work collaboratively to create an information based and technology driven system to maximize the benefits (e.g. improved agricultural production) with minimized risks (e.g. environmental impact). Implementation of IOT based solutions, at each phase of the area, could be a game changer for whole agricultural landscape, i.e. from seeding to selling and beyond. This article presents a technical review of IOT based application scenarios for agriculture sector. The article presents a brief introduction to IOT, IOT framework for agricultural applications and discusses various agriculture specific application scenarios, e.g. farming resource optimization, decision support system, environment monitoring and control systems. The article concludes with the future research directions in this area.", "title": "" }, { "docid": "8dceabb16ef38fdd5f9669fc6a5457d7", "text": "0164-1212/$ see front matter 2008 Elsevier Inc. A doi:10.1016/j.jss.2008.05.009 q Work funded by the Ministerio de Educación y C research project TSI2007-61599, by the Consellería Universitaria (Xunta de Galicia) incentives file 2007/00 de Promoción Xeral da Investigación de Consellería Comercio (Xunta de Galicia) PGIDIT05PXIC32204PN. * Corresponding author. E-mail address: yolanda@det.uvigo.es (Y. Blanco-F Current recommender systems attempt to identify appealing items for a user by applying syntactic matching techniques, which suffer from significant limitations that reduce the quality of the offered suggestions. To overcome this drawback, we have developed a domain-independent personalization strategy that borrows reasoning techniques from the Semantic Web, elaborating recommendations based on the semantic relationships inferred between the user’s preferences and the available items. Our reasoningbased approach improves the quality of the suggestions offered by the current personalization approaches, and greatly reduces their most severe limitations. To validate these claims, we have carried out a case study in the Digital TV field, in which our strategy selects TV programs interesting for the viewers from among the myriad of contents available in the digital streams. 
Our experimental evaluation compares the traditional approaches with our proposal in terms of both the number of TV programs suggested, and the users’ perception of the recommendations. Finally, we discuss concerns related to computational feasibility and scalability of our approach. 2008 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "e6c1747e859f64517e7dddb6c1fd900e", "text": "More and more mobile objects are now equipped with sensors allowing real time monitoring of their movements. Nowadays, the data produced by these sensors can be stored in spatio-temporal databases. The main goal of this article is to perform a data mining on a huge quantity of mobile object’s positions moving in an open space in order to deduce its behaviour. New tools must be defined to ease the detection of outliers. First of all, a zone graph is set up in order to define itineraries. Then, trajectories of mobile objects following the same itinerary are extracted from the spatio-temporal database and clustered. A statistical analysis on this set of trajectories lead to spatio-temporal patterns such as the main route and spatio-temporal channel followed by most of trajectories of the set. Using these patterns, unusual situations can be detected. Furthermore, a mobile object’s behaviour can be defined by comparing its positions with these spatio-temporal patterns. In this article, this technique is applied to ships’ movements in an open maritime area. Unusual behaviours such as being ahead of schedule or delayed or veering to the left or to the right of the main route are detected. A case study illustrates these processes based on ships’ positions recorded during two years around the Brest area. This method can be extended to almost all kinds of mobile objects (pedestrians, aircrafts, hurricanes, ...) moving in an open area.", "title": "" }, { "docid": "656cea13943b061673eaa46656221354", "text": "Internet of things is an accumulation of physical objects which are able to send information over the network. Nowaday, an association utilizes IoT devices to gather real time and continuous data from sensors. This information can be utilized to enhance the customer satisfaction and to make better business decisions. The cloud has various advantages over on-premises storage for storing IoT information. But there are apprehensions with using the cloud for IoT data storage. The real one is security. To exchange the information over the cloud IoT devices utilizes WiFi technology. However, it has a few limits that confine the potential outcomes of the Internet of Things. On the off chance that more devices or clients that will be associated with the internet utilizing WiFi, the transfer speed gets separated among the clients hence the outcome will be slower network. Consequently, there is a necessity of a speedier and a solid internet administration to the Internet of Things to be completely operational. This paper shows the strategy which permits exchanging gathered loT information over the cloud safely utilizing LiFi innovation by applying role based access control approaches and the cryptography techniques.", "title": "" }, { "docid": "4c12d10fd9c2a12e56b56f62f99333f3", "text": "The science of large-scale brain networks offers a powerful paradigm for investigating cognitive and affective dysfunction in psychiatric and neurological disorders. This review examines recent conceptual and methodological developments which are contributing to a paradigm shift in the study of psychopathology. 
I summarize methods for characterizing aberrant brain networks and demonstrate how network analysis provides novel insights into dysfunctional brain architecture. Deficits in access, engagement and disengagement of large-scale neurocognitive networks are shown to play a prominent role in several disorders including schizophrenia, depression, anxiety, dementia and autism. Synthesizing recent research, I propose a triple network model of aberrant saliency mapping and cognitive dysfunction in psychopathology, emphasizing the surprising parallels that are beginning to emerge across psychiatric and neurological disorders.", "title": "" }, { "docid": "ab769c707f7690fd28a60f58abd56b35", "text": "Buildings are the focus of European (EU) policies aimed at a sustainable and competitive low-carbon economy by 2020. Reducing energy consumption of existing buildings and achieving nearly zero energy buildings (NZEBs) are the core of the Energy Efficiency Directive (EED) and the recast of the Energy Performance of Building Directive (EPBD). To comply with these requirements, Member States have to adopt actions to exploit energy savings from the building sector. This paper describes the differences between deep, major and NZEB renovation and then it provides an overview of best practice policies and measures to target retrofit and investment related to non-residential buildings. Energy requirements defined by Member States for NZEB levels are reported comparing both new and existing residential and non-residential buildings. The paper shows how the attention given to refurbishment of NZEBs increased over the last decade, but the achievement of a comprehensive implementation of retrofit remains one of main challenges that Europe is facing.", "title": "" }, { "docid": "64d4776be8e2dbb0fa3b30d6efe5876c", "text": "This paper presents a novel method for hierarchically organizing large face databases, with application to efficient identity-based face retrieval. The method relies on metric learning with local binary pattern (LBP) features. On one hand, LBP features have proved to be highly resilient to various appearance changes due to illumination and contrast variations while being extremely efficient to calculate. On the other hand, metric learning (ML) approaches have been proved very successful for face verification ‘in the wild’, i.e. in uncontrolled face images with large amounts of variations in pose, expression, appearances, lighting, etc. While such ML based approaches compress high dimensional features into low dimensional spaces using discriminatively learned projections, the complexity of retrieval is still significant for large scale databases (with millions of faces). The present paper shows that learning such discriminative projections locally while organizing the database hierarchically leads to a more accurate and efficient system. The proposed method is validated on the standard Labeled Faces in the Wild (LFW) benchmark dataset with millions of additional distracting face images collected from photos on the internet.", "title": "" }, { "docid": "d69b8c991e66ff274af63198dba2ee01", "text": "Nowadays, there are two significant tendencies, how to process the enormous amount of data, big data, and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. 
To answer this question, this paper firstly makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then this paper studies the relevance between big data and green metrics and proposes two new metrics, effective energy efficiency and effective resource efficiency in order to bring new views and potentials of green metrics for the future times of big data.", "title": "" }, { "docid": "719c945e9f45371f8422648e0e81178f", "text": "As technology in the cloud increases, there has been a lot of improvements in the maturity and firmness of cloud storage technologies. Many end-users and IT managers are getting very excited about the potential benefits of cloud storage, such as being able to store and retrieve data in the cloud and capitalizing on the promise of higher-performance, more scalable and cut-price storage. In this thesis, we present a typical Cloud Storage system architecture, a referral Cloud Storage model and Multi-Tenancy Cloud Storage model, value the past and the state-ofthe-art of Cloud Storage, and examine the Edge and problems that must be addressed to implement Cloud Storage. Use cases in diverse Cloud Storage offerings were also abridged. KEYWORDS—Cloud Storage, Cloud Computing, referral model, Multi-Tenancy, survey", "title": "" }, { "docid": "0012f70ed83e001aa074a9c4d1a41a61", "text": "In this paper, instead of multilayered notch antenna, the ridged tapered slot antenna (RTSA) is chosen as an element of wideband phased array antenna (PAA) since it has rigid body and can be easily manufactured by mechanical wire-cutting. In addition, because the RTSA is made of conductor, it doesn't need via-holes which are required to avoid the blind angles out of the operation frequency band. Theses blind angles come from the self resonance of the dielectric material of notch antenna. We developed wide band/wide scan PAA which has a bandwidth of 3:1 and scan volume of plusmn45deg. In order to determine the shape of the RTSA, the active VSWR (AVSWR) of the RTSA was optimized in the numerical waveguide simulator. And then using the E-plane/H-plane simulator, the AVSWR with beam scan angles in E-plane/H-plane are calculated respectively. On the basis of optimized design, numerical analysis of finite arrays was performed by commercial time domain solver. Through the simulation of 10 times 6 quad-element RTSA arrays, the AVSWR at the center element was computed and compared with the measured result. The active element pattern (AEP) of 10 times 6 quad-element RTSA arrays was also computed and had a good agreement with the measured AEP. From the result of the AEP, we can easily predict that 10 times 6 quad-element RTSA arrays have a good beam scanning capabilities", "title": "" }, { "docid": "32ce76009016ba30ce38524b7e9071c9", "text": "Error in medicine is a subject of continuing interest among physicians, patients, policymakers, and the general public. This article examines the issue of disclosure of medical errors in the context of emergency medicine. 
It reviews the concept of medical error; proposes the professional duty of truthfulness as a justification for error disclosure; examines barriers to error disclosure posed by health care systems, patients, physicians, and the law; suggests system changes to address the issue of medical error; offers practical guidelines to promote the practice of error disclosure; and discusses the issue of disclosure of errors made by another physician.", "title": "" }, { "docid": "d484c24551191360bc05b768e2fa9957", "text": "The paper aims to develop and design a cloud-based Quran portal using Drupal technology and make it available in multiple services. The portal can be hosted on cloud and users around the world can access it using any Internet enabled device. The proposed portal includes different features to become a center of learning resources for various users. The portal is further designed to promote research and development of new tools and applications includes Application Programming Interface (API) and Search API, which exposes the search to public, and make the searching Quran efficient and easy. The cloud application can request various surah or ayah using the API and by passing filter.", "title": "" }, { "docid": "b0815caebe9373220195ac3b143abeca", "text": "This paper presents the motivation, basis and a prototype implementation of an ethical adaptor capable of using a moral affective function, guilt, as a basis for altering a robot's ongoing behavior. While the research is illustrated in the context of the battlefield, the methods described are believed generalizable to other domains such as eldercare and are potentially extensible to a broader class of moral emotions, including compassion and empathy.", "title": "" }, { "docid": "2afca4f0497e57366af901eb393c93a4", "text": "On page 66 of this article, there were several errors in figure 1. The tract labelled ‘Temporoammonic path’ should have been labelled ‘Perforant path to CA1’, and the tract labelled ‘Perforant path’ should have been labelled ‘Perforant path to dentate gyrus’. The fibres from layer III cells in the lateral entorhinal cortex that were shown projecting to proximal CA1 cells should have been depicted projecting to the distal CA1 cells, and the fibres from layer III cells in the medial entorhinal cortex that were shown projecting to distal CA1 cells should have been depicted projecting to proximal CA1 cells. The layer II cells in the medial and lateral entorhinal cortex that project to the granule cells should have been depicted as continuing to the stratum lacunosum moleculare, where they contact CA3 pyramidal cells. These errors have been corrected in the online version. The authors thank J. Z. Young for pointing out the errors and M. Witter for advice in making the alterations to the figure.", "title": "" }, { "docid": "f116348b63bac101bfd5dde498eccc6f", "text": "Machine learning domain has grown quickly the last few years, in particular in the mobile eHealth domain. In the context of the DINAMO project, we aimed to detect hypoglycemia on Type 1 diabetes patients by using their ECG, recorded with a sport-like chest belt. In order to know if the data contain enough information for this classification task, we needed to apply and evaluate machine learning algorithms on several kinds of features. We have built a Python toolbox for this reason. 
It is built on top of the scikit-learn toolbox and it allows evaluating a defined set of machine learning algorithms on a defined set of features extractors, taking care of applying good machine learning techniques such as cross-validation or parameters grid-search. The resulting framework can be used as a first analysis toolbox to investigate the potential of the data. It can also be used to fine-tune parameters of machine learning algorithms or parameters of features extractors. In this paper we explain the motivation of such a framework, we present its structure and we show a case study presenting negative results that we could quickly spot using our toolbox.", "title": "" }, { "docid": "8165132bed6f74274c7a9aa3ba91767b", "text": "Pattern detection over streams of events is gaining more and more attention, especially in the field of eCommerce. Our industrial partner Cdiscount, which is one of the largest eCommerce companies in France, wants to use pattern detection for real-time customer behavior analysis. The main challenges to consider are efficiency and scalability, as the detection of customer behavior must be achieved within a few seconds, while millions of unique customers visit the website every day, each performing hundreds of actions. In this paper, we present our approach to large-scale and efficient pattern detection for eCommerce. It relies on a domain-specific language to define behavior patterns. Patterns are then compiled into deterministic finite automata, which are run on a Big Data streaming platform to carry out the detection work. Our evaluation shows that our approach is efficient and scalable, and fits the requirements of Cdiscount.", "title": "" }, { "docid": "dc549576475892f76f7ca4cd0b257d4e", "text": "This paper presents privileged multi-label learning (PrML) to explore and exploit the relationship between labels in multi-label learning problems. We suggest that for each individual label, it cannot only be implicitly connected with other labels via the low-rank constraint over label predictors, but also its performance on examples can receive the explicit comments from other labels together acting as an Oracle teacher. We generate privileged label feature for each example and its individual label, and then integrate it into the framework of low-rank based multi-label learning. The proposed algorithm can therefore comprehensively explore and exploit label relationships by inheriting all the merits of privileged information and low-rank constraints. We show that PrML can be efficiently solved by dual coordinate descent algorithm using iterative optimization strategy with cheap updates. Experiments on benchmark datasets show that through privileged label features, the performance can be significantly improved and PrML is superior to several competing methods in most cases.", "title": "" }, { "docid": "eaa2ed7e15a3b0a3ada381a8149a8214", "text": "This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. 
The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.", "title": "" }, { "docid": "35404fbbf92e7a995cdd6de044f2ec0d", "text": "The ball on plate system is the extension of traditional ball on beam balancing problem in control theory. In this paper the implementation of a proportional-integral-derivative controller (PID controller) to balance a ball on a plate has been demonstrated. To increase the system response time and accuracy multiple controllers are piped through a simple custom serial protocol to boost the processing power, and overall performance. A single HD camera module is used as a sensor to detect the ball's position and two RC servo motors are used to tilt the plate to balance the ball. The result shows that by implementing multiple PUs (Processing Units) redundancy and high resolution can be achieved in real-time control systems.", "title": "" } ]
scidocsrr
762f524812260d6affbb9f9efb4f24e3
Forcing Neurocontrollers to Exploit Sensory Symmetry Through Hard-wired Modularity in the Game of Cellz
[ { "docid": "bef6c1e237e52d9a40c78856126a9be8", "text": "An approach to robotics called layered evolution and merging features from the subsumption architecture into evolutionary robotics is presented, and its advantages are discussed. This approach is used to construct a layered controller for a simulated robot that learns which light source to approach in an environment with obstacles. The evolvability and performance of layered evolution on this task is compared to (standard) monolithic evolution, incremental and modularised evolution. To corroborate the hypothesis that a layered controller performs at least as well as an integrated one, the evolved layers are merged back into a single network. On the grounds of the test results, it is argued that layered evolution provides a superior approach for many tasks, and it is suggested that this approach may be the key to scaling up evolutionary robotics.", "title": "" }, { "docid": "8e1a65dd8bf9d8a4b67c46a0067ca42d", "text": "Reading Genetic Programming IE Automatic Discovery ofReusable Programs (GPII) in its entirety is not a task for the weak-willed because the book without appendices is about 650 pages. An entire previous book by the same author [1] is devoted to describing Genetic Programming (GP), while this book is a sequel extolling an extension called Automatically Defined Functions (ADFs). The author, John R. Koza, argues that ADFs can be used in conjunction with GP to improve its efficacy on large problems. \"An automatically defined function (ADF) is a function (i.e., subroutine, procedure, module) that is dynamically evolved during a run of genetic programming and which may be called by a calling program (e.g., a main program) that is simultaneously being evolved\" (p. 1). Dr. Koza recommends adding the ADF technique to the \"GP toolkit.\" The book presents evidence that it is possible to interpret GP with ADFs as performing either a top-down process of problem decomposition or a bottom-up process of representational change to exploit identified regularities. This is stated as Main Point 1. Main Point 2 states that ADFs work by exploiting inherent regularities, symmetries, patterns, modularities, and homogeneities within a problem, though perhaps in ways that are very different from the style of programmers. Main Points 3 to 7 are appropriately qualified statements to the effect that, with a variety of problems, ADFs pay off be-", "title": "" } ]
[ { "docid": "69f4e9818cc5b37f0ce6410cc970944c", "text": "In this paper, we investigate efficient recognition of human gestures / movements from multimedia and multimodal data, including the Microsoft Kinect and translational and rotational acceleration and velocity from wearable inertial sensors. We firstly present a system that automatically classifies a large range of activities (17 different gestures) using a random forest decision tree. Our system can achieve near real time recognition by appropriately selecting the sensors that led to the greatest contributing factor for a particular task. Features extracted from multimodal sensor data were used to train and evaluate a customized classifier. This novel technique is capable of successfully classifying various gestures with up to 91 % overall accuracy on a publicly available data set. Secondly we investigate a wide range of different motion capture modalities and compare their results in terms of gesture recognition accuracy using our proposed approach. We conclude that gesture recognition can be effectively performed by considering an approach that overcomes many of the limitations associated with the Kinect and potentially paves the way for low-cost gesture recognition in unconstrained environments.", "title": "" }, { "docid": "a6213ad508c996c0e62f71e6619654f0", "text": "Angiogenesis is essential for normal tissue and even more so for solid malignancies. At present, inhibition of tumor angiogenesis is a major focus of anticancer drug development. Bevacizumab, a humanized antibody against VEGF, was the first antiangiogenic agent to be approved for advanced non-small cell lung cancer, breast cancer and colorectal cancer. The most commonly observed adverse events are hypertension, proteinuria, bleeding and thrombosis. Sunitinib, a small molecule blocking intracellular VEGF, KIT, Flt3 and PDGF receptors, which regulate angiogenesis and cell growth, is approved for the treatment of advanced renal cell cancer (RCC) and malignant gastrointestinal stromal tumor. The most frequent adverse events include hand-foot syndrome, stomatitis, diarrhea, fatigue, hypothyroidism and hypertension. Sorafenib, an oral multikinase inhibitor, is approved for the second-line treatment of advanced RCC and upfront treatment of advanced hepatocellular carcinoma. Most common adverse events with sorafenib are dermatologic (hand-foot skin reaction, rash, desquamation), fatigue, diarrhea, nausea, hypothyroidism and hypertension. More recently, cardiovascular toxicity has increasingly been recognized as a potential adverse event associated with sunitinib and sorafenib treatment. Elderly patients are at increased risk of thromboembolic events when receiving bevacizumab, and potentially for cardiac dysfunction when receiving sunitinib or sorafenib. The safety of antiangiogenic drugs is of special concern when taking these agents for longer-term adjuvant or maintenance treatment. Furthermore, newer investigational antiangiogenic drugs are briefly reviewed.", "title": "" }, { "docid": "8e742ad9ccaac623fd4c09c87f4df30e", "text": "Pain research has uncovered important neuronal mechanisms that underlie clinically relevant pain states such as inflammatory and neuropathic pain. Importantly, both the peripheral and the central nociceptive system contribute significantly to the generation of pain upon inflammation and nerve injury. Peripheral nociceptors are sensitized during inflammation, and peripheral nerve fibres develop ectopic discharges upon nerve injury or disease. 
As a consequence a complex neuronal response is evoked in the spinal cord where neurons become hyperexcitable, and a new balance is set between excitation and inhibition. The spinal processes are significantly influenced by brain stem circuits that inhibit or facilitate spinal nociceptive processing. Numerous mechanisms are involved in peripheral and central nociceptive processes including rapid functional changes of signalling and long-term regulatory changes such as up-regulation of mediator/receptor systems. Conscious pain is generated by thalamocortical networks that produce both sensory discriminative and affective components of the pain response.", "title": "" }, { "docid": "287873a6428cfbf8fc9066c24d977d50", "text": "Deployment of embedded technologies is increasingly being examined in industrial supply chains as a means for improving efficiency through greater control over purchase orders, inventory and product related information. Central to this development has been the advent of technologies such as bar codes, Radio Frequency Identification (RFID) systems, and wireless sensors which when attached to a product, form part of the product’s embedded systems infrastructure. The increasing integration of these technologies dramatically contributes to the evolving notion of a “smart product”, a product which is capable of incorporating itself into both physical and information environments. The future of this revolution in objects equipped with smart embedded technologies is one in which objects can not only identify themselves, but can also sense and store their condition, communicate This work was partly funded as part of the BRIDGE project by the European Commission within the Sixth Framework Programme (2002-2006) IP Nr. IST-FP6-033546. T. Sánchez López (B) · B. Patkai · D. McFarlane Engineering Department, Institute for Manufacturing, University of Cambridge, 16 Mill Lane, Cambridge CB2 1RX, UK e-mail: tsl26@cam.ac.uk B. Patkai e-mail: bp282@cam.ac.uk D. McFarlane e-mail: dcm@cam.ac.uk D. C. Ranasinghe The School of Computer Science, The University of Adelaide, Adelaide, South Australia, 5005, Australia e-mail: damith@cs.adelaide.edu.au with other objects and distributed infrastructures, and take decisions related to managing their life cycle. The object can essentially “plug” itself into a compatible systems infrastructure owned by different partners in a supply chain. However, as in any development process that will involve more than one end user, the establishment of a common foundation and understanding is essential for interoperability, efficient communication among involved parties and for developing novel applications. In this paper, we contribute to creating that common ground by providing a characterization to aid the specification and construction of “smart objects” and their underlying technologies. Furthermore, our work provides an extensive set of examples and potential applications of different categories of smart objects.", "title": "" }, { "docid": "858acbd02250ff2f8325786475b4f3f3", "text": "One of the most important aspects of Grice’s theory of conversation is the drawing of a borderline between what is said and what is implicated. Grice’s views concerning this borderline have been strongly and influentially criticised by relevance theorists. In particular, it has become increasingly widely accepted that Grice’s notion of what is said is too limited, and that pragmatics has a far larger role to play in determining what is said than Grice would have allowed. 
(See for example Bezuidenhuit 1996; Blakemore 1987; Carston 1991; Recanati 1991, 1993, 2001; Sperber and Wilson 1986; Wilson and Sperber 1981.) In this paper, I argue that the rejection of Grice has moved too swiftly, as a key line of objection which has led to this rejection is flawed. The flaw, we will see, is that relevance theorists rely on a misunderstanding of Grice’s project in his theory of conversation. I am not arguing that Grice’s versions of saying and implicating are right in all details, but simply that certain widespread reasons for rejecting his theory are based on misconceptions.1 Relevance theorists, I will suggest, systematically misunderstand Grice by taking him to be engaged in the same project that they are: making sense of the psychological processes by which we interpret utterances. Notions involved with this project will need to be ones that are relevant to the psychology of utterance interpretation. Thus, it is only reasonable that relevance theorists will require that what is said and what is implicated should be psychologically real to the audience. (We will see that this requirement plays a crucial role in their arguments against Grice.) Grice, I will argue, was not pursuing this project. Rather, I will suggest that he was trying to make sense of quite a different notion of what is said: one on which both speaker and audience may be wrong about what is said. On this sort of notion, psychological reality is not a requirement. So objections to Grice based on a requirement of psychological reality will fail.", "title": "" }, { "docid": "73e169bb6ae0de166518ef55a997bfe6", "text": "Glycogen metabolism has important implications for the functioning of the brain, especially the cooperation between astrocytes and neurons. According to various research data, in a glycogen deficiency (for example during hypoglycemia) glycogen supplies are used to generate lactate, which is then transported to neighboring neurons. Likewise, during periods of intense activity of the nervous system, when the energy demand exceeds supply, astrocyte glycogen is immediately converted to lactate, some of which is transported to the neurons. Thus, glycogen from astrocytes functions as a kind of protection against hypoglycemia, ensuring preservation of neuronal function. The neuroprotective effect of lactate during hypoglycemia or cerebral ischemia has been reported in literature. This review goes on to emphasize that while neurons and astrocytes differ in metabolic profile, they interact to form a common metabolic cooperation.", "title": "" }, { "docid": "f35d0784dc7ae4140754b3d0ab2b9c8c", "text": "The future 5G wireless is triggered by the higher demand on wireless capacity. With Software Defined Network (SDN), the data layer can be separated from the control layer. The development of relevant studies about Network Function Virtualization (NFV) and cloud computing has the potential of offering a quicker and more reliable network access for growing data traffic. Under such circumstances, Software Defined Mobile Network (SDMN) is presented as a promising solution for meeting the wireless data demands. This paper provides a survey of SDMN and its related security problems. As SDMN integrates cloud computing, SDN, and NFV, and works on improving network functions, performance, flexibility, energy efficiency, and scalability, it is an important component of the next generation telecommunication networks. 
However, the SDMN concept also raises new security concerns. We explore relevant security threats and their corresponding countermeasures with respect to the data layer, control layer, application layer, and communication protocols. We also adopt the STRIDE method to classify various security threats to better reveal them in the context of SDMN. This survey is concluded with a list of open security challenges in SDMN.", "title": "" }, { "docid": "fc50b185323c45e3d562d24835e99803", "text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.", "title": "" }, { "docid": "97b4de3dc73e0a6d7e17f94dff75d7ac", "text": "Evolution in cloud services and infrastructure has been constantly reshaping the way we conduct business and provide services in our day to day lives. Tools and technologies created to improve such cloud services can also be used to impair them. By using generic tools like nmap, hping and wget, one can estimate the placement of virtual machines in a cloud infrastructure with a high likelihood. Moreover, such knowledge and tools can also be used by adversaries to further launch various kinds of attacks. In this paper we focus on one such specific kind of attack, namely a denial of service (DoS), where an attacker congests a bottleneck network channel shared among virtual machines (VMs) coresident on the same physical node in the cloud infrastructure. We evaluate the behavior of this shared network channel using Click modular router on DETER testbed. We illustrate that game theoretic concepts can be used to model this attack as a two-player game and recommend strategies for defending against such attacks.", "title": "" }, { "docid": "63fe2b3c0dc9dfe4368ad328fd031de0", "text": "Client fingerprinting techniques enhance classical cookie-based user tracking to increase the robustness of tracking techniques. A unique identifier is created based on characteristic attributes of the client device, and then used for deployment of personalized advertisements or similar use cases. 
Whereas fingerprinting performs well for highly customized devices (especially desktop computers), these methods often lack in precision for highly standardized devices like mobile phones.\n In this paper, we show that widely used techniques do not perform well for mobile devices yet, but that it is possible to build a fingerprinting system for precise recognition and identification. We evaluate our proposed system in an online study and verify its robustness against misclassification.\n Fingerprinting of web clients is often seen as an offence to web users' privacy as it usually takes place without the users' knowledge, awareness, and consent. Thus, we also analyze whether it is possible to outrun fingerprinting of mobile devices. We investigate different scenarios in which users are able to circumvent a fingerprinting system and evade our newly created methods.", "title": "" }, { "docid": "dd62fd669d40571cc11d64789314dba1", "text": "It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable – that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and three (of his ten) books. Much misunderstanding about the VSM and its use seems to exist; especially its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the VSM, without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.", "title": "" }, { "docid": "691fcf418d6073f7681846b30a1753a8", "text": "Cognitive evaluation theory, which explains the effects of extrinsic motivators on intrinsic motivation, received some initial attention in the organizational literature. However, the simple dichotomy between intrinsic and extrinsic motivation made the theory difficult to apply to work settings. Differentiating extrinsic motivation into types that differ in their degree of autonomy led to self-determination theory, which has received widespread attention in the education, health care, and sport domains. This article describes self-determination theory as a theory of work motivation and shows its relevance to theories of organizational behavior. Copyright # 2005 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "aa2e16e6ed5d2610a567e358807834d4", "text": "As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of this type of schemes have wave upon wave been proposed. In most of these studies, there is no comprehensive and systematical metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of the superior aspects over previous ones, while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory—either are found short of important security goals or lack of critical properties, especially being stuck with the security-usability tension. 
To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model so far. In particular, by integrating “honeywords”, traditionally the purview of system security, with a “fuzzy-verifier”, our scheme hits “two birds”: it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.", "title": "" }, { "docid": "5016ab74ebd9c1359e8dec80ee220bcf", "text": "The possibility of communication between plants was proposed nearly 20 years ago, although previous demonstrations have suffered from methodological problems and have not been widely accepted. Here we report the first rigorous, experimental evidence demonstrating that undamaged plants respond to cues released by neighbors to induce higher levels of resistance against herbivores in nature. Sagebrush plants that were clipped in the field released a pulse of an epimer of methyl jasmonate that has been shown to be a volatile signal capable of inducing resistance in wild tobacco. Wild tobacco plants with clipped sagebrush neighbors had increased levels of the putative defensive oxidative enzyme, polyphenol oxidase, relative to control tobacco plants with unclipped sagebrush neighbors. Tobacco plants near clipped sagebrush experienced greatly reduced levels of leaf damage by grasshoppers and cutworms during three field seasons compared to unclipped controls. This result was not caused by an altered light regime experienced by tobacco near clipped neighbors. Barriers to soil contact between tobacco and sagebrush did not reduce the difference in leaf damage although barriers that blocked air contact negated the effect.", "title": "" }, { "docid": "a759ddc24cebbbf0ac71686b179962df", "text": "Most proteins must fold into defined three-dimensional structures to gain functional activity. But in the cellular environment, newly synthesized proteins are at great risk of aberrant folding and aggregation, potentially forming toxic species. To avoid these dangers, cells invest in a complex network of molecular chaperones, which use ingenious mechanisms to prevent aggregation and promote efficient folding. Because protein molecules are highly dynamic, constant chaperone surveillance is required to ensure protein homeostasis (proteostasis). Recent advances suggest that an age-related decline in proteostasis capacity allows the manifestation of various protein-aggregation diseases, including Alzheimer's disease and Parkinson's disease. Interventions in these and numerous other pathological states may spring from a detailed understanding of the pathways underlying proteome maintenance.", "title": "" }, { "docid": "b0de8371b0f5bfcecd8370bb0fdac174", "text": "We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis. 
We show that both hinge on the question of understanding the complexity of the following problem, which we call PosSLP; given a division-free straight-line program producing an integer N, decide whether N > 0. We show that PosSLP lies in the counting hierarchy, and combining our results with work of Tiwari, we show that the Euclidean traveling salesman problem lies in the counting hierarchy - the previous best upper bound for this important problem (in terms of classical complexity classes) being PSPACE", "title": "" }, { "docid": "4f43cd8225c70c0328ea4a971abc0e2f", "text": "Home security system is needed for convenience and safety. This system invented to keep home safe from intruder. In this work, we present the design and implementation of a GSM based wireless home security system. which take a very less power. The system is a wireless home network which contains a GSM modem and magnet with relay which are door security nodes. The system can response rapidly as intruder detect and GSM module will do alert home owner. This security system for alerting a house owner wherever he will. In this system a relay and magnet installed at entry point to a precedence produce a signal through a public telecom network and sends a message or redirect a call that that tells about your home update or predefined message which is embedded in microcontroller. Suspected activities are conveyed to remote user through SMS or Call using GSM technology.", "title": "" }, { "docid": "296705d6bfc09f58c8e732a469b17871", "text": "Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to which extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured in depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response.", "title": "" }, { "docid": "4dca240e5073db9f09e6fdc3b022a29a", "text": "We describe an evolutionary approach to the control problem of bipedal walking. Using a full rigid-body simulation of a biped, it was possible to evolve recurrent neural networks that controlled stable straight-line walking on a planar surface. No proprioceptive information was necessary to achieve this task. Furthermore, simple sensory input to locate a sound source was integrated to achieve directional walking. 
To our knowledge, this is the first work that demonstrates the application of evolutionary optimization to three-dimensional physically simulated biped locomotion.", "title": "" } ]
scidocsrr
8f4730fcbf53c86911727aa2fc486187
Large-scale direct SLAM for omnidirectional cameras
[ { "docid": "229288405fbbc0779c42fb311754ca1d", "text": "We present a system for monocular simultaneous localization and mapping (mono-SLAM) relying solely on video input. Our algorithm makes it possible to precisely estimate the camera trajectory without relying on any motion model. The estimation is completely incremental: at a given time frame, only the current location is estimated while the previous camera positions are never modified. In particular, we do not perform any simultaneous iterative optimization of the camera positions and estimated 3D structure (local bundle adjustment). The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint. We show that the latter leads to a much more stable estimation of the camera trajectory than the conventional approach. We perform high precision camera trajectory estimation in urban scenes with a large amount of clutter. Using an omnidirectional camera placed on a vehicle, we cover one of the longest distance ever reported, up to 2.5 kilometers.", "title": "" } ]
[ { "docid": "7758fd29e1b59ef3edae06c00a33bad2", "text": "We demonstrate that state-of-the-art optical character recognition (OCR) based on deep learning is vulnerable to adversarial images. Minor modifications to images of printed text, which do not change the meaning of the text to a human reader, cause the OCR system to “recognize” a different text where certain words chosen by the adversary are replaced by their semantic opposites. This completely changes the meaning of the output produced by the OCR system and by the NLP applications that use OCR for preprocessing their inputs.", "title": "" }, { "docid": "45f75c8d642be90e45abff69b4c6fbcf", "text": "We describe a method for identifying the speakers of quoted speech in natural-language textual stories. We have assembled a corpus of more than 3,000 quotations, whose speakers (if any) are manually identified, from a collection of 19th and 20th century literature by six authors. Using rule-based and statistical learning, our method identifies candidate characters, determines their genders, and attributes each quote to the most likely speaker. We divide the quotes into syntactic classes in order to leverage common discourse patterns, which enable rapid attribution for many quotes. We apply learning algorithms to the remainder and achieve an overall accuracy of 83%.", "title": "" }, { "docid": "5634acd7da03dbce0e0df28917a9af69", "text": "In this paper, multi-port UHF RFID tag-based sensor for wireless identification and sensing applications is presented. Two RFID chips, one with attached sensor and the other without, are incorporated in a single tag antenna with two excitation ports. The chip with the integrated sensor (sensor port) transmits a signal impacted by the sensed temperature or humidity, while the other RFID chip serves as the reference signal (reference port) transmitter in the sensing process. The proposed tag-based sensor is fabricated and experimentally evaluated. The measured results demonstrate that the sensed data can be extracted using a commercial RFID reader by recording and comparing the difference in the reader output power required to power up the reference port and the power required to power the sensor ports. To improve the reading range of the proposed sensor, a dual-port solar powered RFID sensor is also presented. The reading range of the sensor is increased by two times compared to a similar prototype without solar energy harvesting. The experimental evaluation demonstrates that the proposed tag-based sensor can be easily integrated with a resistive humidity or temperature sensor for a low-cost solution to detect the heat or humidity exposure of sensitive items for several applications such as supply chains and construction structures.", "title": "" }, { "docid": "d0c8e58e06037d065944fc59b0bd7a74", "text": "We propose a new discrete choice model that generalizes the random utility model (RUM). We show that this model, called the Generalized Stochastic Preference (GSP) model can explain several choice phenomena that can’t be represented by a RUM. In particular, the model can easily (and also exactly) replicate some well known examples that are not RUM, as well as controlled choice experiments carried out since 1980’s that possess strong regularity violations. One of such regularity violation is the decoy effect in which the probability of choosing a product increases when a similar, but inferior product is added to the choice set. 
An appealing feature of the GSP is that it is non-parametric and therefore it has very high flexibility. The model has also a simple description and interpretation: it builds upon the well known representation of RUM as a stochastic preference, by allowing some additional consumer types to be non-rational.", "title": "" }, { "docid": "6d907d6d8729a8993da82bc5b9664c27", "text": "This paper investigates the joint Doppler and DOA (Direction-of-Arrival) estimation with a wideband phased array in presence of phase residual due to range-Doppler coupling appearing in pulse-Doppler radars. 2D MUSIC algorithm is applied and a compensation approach is developed to eliminate the influence of phase residual. Simulation data validate the improvement of joint Doppler and DOA estimation performance using the proposed method.", "title": "" }, { "docid": "0add9f22db24859da50e1a64d14017b9", "text": "Light field imaging offers powerful new capabilities through sophisticated digital processing techniques that are tightly merged with unconventional optical designs. This combination of imaging technology and computation necessitates a fundamentally different view of the optical properties of imaging systems and poses new challenges for the traditional signal and image processing domains. In this article, we aim to provide a comprehensive review of the considerations involved and the difficulties encountered in working with light field data.", "title": "" }, { "docid": "e2f6434cf7acfa6bd722f893c9bd1851", "text": "Image Synthesis for Self-Supervised Visual Representation Learning", "title": "" }, { "docid": "6825c5294da2dfe7a26b6ac89ba8f515", "text": "Restoring natural walking for amputees has been increasingly investigated because of demographic evolution, leading to increased number of amputations, and increasing demand for independence. The energetic disadvantages of passive pros-theses are clear, and active prostheses are limited in autonomy. This paper presents the simulation, design and development of an actuated knee-ankle prosthesis based on a variable stiffness actuator with energy transfer from the knee to the ankle. This approach allows a good approximation of the joint torques and the kinematics of the human gait cycle while maintaining compliant joints and reducing energy consumption during level walking. This first prototype consists of a passive knee and an active ankle, which are energetically coupled to reduce the power consumption.", "title": "" }, { "docid": "71757d1cee002bb235a591cf0d5aafd5", "text": "There is an old Wall Street adage goes, ‘‘It takes volume to make price move”. The contemporaneous relation between trading volume and stock returns has been studied since stock markets were first opened. Recent researchers such as Wang and Chin [Wang, C. Y., & Chin S. T. (2004). Profitability of return and volume-based investment strategies in China’s stock market. Pacific-Basin Finace Journal, 12, 541–564], Hodgson et al. [Hodgson, A., Masih, A. M. M., & Masih, R. (2006). Futures trading volume as a determinant of prices in different momentum phases. International Review of Financial Analysis, 15, 68–85], and Ting [Ting, J. J. L. (2003). Causalities of the Taiwan stock market. Physica A, 324, 285–295] have found the correlation between stock volume and price in stock markets. To verify this saying, in this paper, we propose a dual-factor modified fuzzy time-series model, which take stock index and trading volume as forecasting factors to predict stock index. 
In empirical analysis, we employ the TAIEX (Taiwan stock exchange capitalization weighted stock index) and NASDAQ (National Association of Securities Dealers Automated Quotations) as experimental datasets and two multiple-factor models, Chen’s [Chen, S. M. (2000). Temperature prediction using fuzzy time-series. IEEE Transactions on Cybernetics, 30 (2), 263–275] and Huarng and Yu’s [Huarng, K. H., & Yu, H. K. (2005). A type 2 fuzzy time-series model for stock index forecasting. Physica A, 353, 445–462], as comparison models. The experimental results indicate that the proposed model outperforms the listing models and the employed factors, stock index and the volume technical indicator, VR(t), are effective in stock index forecasting. © 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0cc86e894165216fda1ff82c636272a1", "text": "In the era of globalization, concepts such as individualization and personalization become more and more important in virtual systems. With the goal of creating a more familiar interaction between human and machines, it makes sense to create a consistent and believable model of personality. This paper presents an explicit model of personality, based in the Five Factor Model, which aims at the creation of distinguishable personalities by using the personality traits to automatically influence cognitive processes: appraisal, planning, coping, and bodily expression.", "title": "" }, { "docid": "ad9b28c4f7b0d7e60296f20d54786559", "text": "An exact algorithm to compute an optimal 3D oriented bounding box was published in 1985 by Joseph O'Rourke, but it is slow and extremely hard to implement. In this article we propose a new approach, where the computation of the minimal-volume OBB is formulated as an unconstrained optimization problem on the rotation group SO(3,ℝ). It is solved using a hybrid method combining the genetic and Nelder-Mead algorithms. This method is analyzed and then compared to the current state-of-the-art techniques. It is shown to be either faster or more reliable for any accuracy.", "title": "" }, { "docid": "72778e59443066c01142cd0d48400490", "text": "Optimal load shedding (LS) design as an emergency plan is one of the main control challenges posed by emerging new uncertainties and numerous distributed generators including renewable energy sources in a modern power system. This paper presents an overview of the key issues and new challenges on optimal LS synthesis concerning the integration of wind turbine units into the power systems. Following a brief survey on the existing LS methods, the impact of power fluctuation produced by wind powers on system frequency and voltage performance is presented. The most LS schemas proposed so far used voltage or frequency parameter via under-frequency or under-voltage LS schemes. Here, the necessity of considering both voltage and frequency indices to achieve a more effective and comprehensive LS strategy is emphasized. Then it is clarified that this problem will be more dominated in the presence of wind turbines. Keywords— Load shedding, emergency control, voltage, frequency, wind turbine.", "title": "" }, { "docid": "9bd1b2a3c121b076786121231f779a27", "text": "Increasing world population and limited food resources, has made it inevitable to apply the benefits of modern technology to improve the efficiency of agricultural fields. Automatic plant type identification process is crucial not only to industries related to food production but also to environmentalists and related authorities. 
It increases productivity, contributes to a better understanding of the relationship between environmental factors and healthy crops. It is expected to reduce the labor costs for farmers and increase the operation speed and accuracy. In this paper, we propose a method to classify the type of plants in a video sequence. Our approach utilizes feature fusion together with color and texture features and support vector machine is used for classification. A variety of feature extraction techniques are employed in W-B, R-G and B-Y color spaces to extract color and textural features. Principal component analysis and t-distributed stochastic neighbor embedding methods are employed for dimension reduction. The performance of the approach is tested on dataset collected through a government supported project, TARBIL, for which over 1200 agro-stations are placed throughout Turkey. 5-fold cross validation technique as well as random test samples are used to test the accuracy of the system.", "title": "" }, { "docid": "526ac4f1148cc479556b8c1d4ddb0d26", "text": "Rating prediction is a key task of e-commerce recommendation mechanisms. Recent studies in social recommendation enhance the performance of rating predictors by taking advantage of user relationships. However, these prediction approaches mostly rely on user personal information which is a privacy threat. In this paper, we present dTrust, a simple social recommendation approach that avoids using user personal information. It relies uniquely on the topology of an anonymized trust-user-item network that combines user trust relations with user rating scores. This topology is fed into a deep feed-forward neural network. Experiments on real-world data sets showed that dTrust outperforms state-of-the-art in terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) scores for both warm-start and cold-start problems.", "title": "" }, { "docid": "073a2c6743b95913b090dfc17204f880", "text": "Recent work has explored the problem of autonomous navigation by imitating a teacher and learning an end-toend policy, which directly predicts controls from raw images. However, these approaches tend to be sensitive to mistakes by the teacher and do not scale well to other environments or vehicles. To this end, we propose Observational Imitation Learning (OIL), a novel imitation learning variant that supports online training and automatic selection of optimal behavior by observing multiple imperfect teachers. We apply our proposed methodology to the challenging problems of autonomous driving and UAV racing. For both tasks, we utilize the Sim4CV simulator [18] that enables the generation of large amounts of synthetic training data and also allows for online learning and evaluation. We train a perception network to predict waypoints from raw image data and use OIL to train another network to predict controls from these waypoints. Extensive experiments demonstrate that our trained network outperforms its teachers, conventional imitation learning (IL) and reinforcement learning (RL) baselines and even humans in simulation.", "title": "" }, { "docid": "6660393ab06434a6f9831cfbbdefbb5b", "text": "Advocates of critical approaches to second language teaching are interested in relationships between language learning and social change. 
From this perspective, language is not simply a means of expression or communication; rather, it is a practice that constructs, and is constructed by, the ways language learners understand themselves, their social surroundings, their histories, and their possibilities for the future. This collection assembles the work of a variety of scholars interested in critical perspectives on language education in globally diverse sites of practice. All are interested in investigating the ways that social relationships are lived out in language and how issues of power, while often obscured in language research and educational practice (Kubota, this volume), are centrally important in developing critical language education pedagogies. Indeed, as Morgan (this volume) suggests, “politically engaged critiques of power in everyday life, communities, and institutions” are precisely what are needed to develop critical pedagogies in language education. The chapters have varying foci, seeking to better understand the relationships between writers and readers, teachers and students, test makers and test takers, teacher–educators and student teachers, and researchers and researched. The term critical pedagogy is often associated with the work of scholars such as Freire (1968/1970), Giroux (1992), Luke (1988), Luke and Gore (1992), McLaren (1989), and Simon (1992) in the field of education. Aware of myriad political and economic inequities in contemporary societies, advocates have explored the “social visions” that pedagogical practices support (Simon, 1992), and critiques of classroom practices in terms of their social visions have been common and longstanding in critical educational literature.1 Feminist critiques have also considered classroom practice and have identified ways in which the relationships and activities of classrooms contribute to patriarchal, hierarchical, and dominating practices in wider societies (e.g., Davies, 1989; Ellsworth, 1989; Gaskell, 1992; Spender, 1982; Walkerdine, 1989). In second language education, critiques of classroom practices in terms of the social visions such practices support are relatively recent but are increasingly being published in major venues.2", "title": "" }, { "docid": "10634117fd51d94f9b12b9f0ed034f65", "text": "Our corpus of descriptive text contains a significant number of long-distance pronominal references (8.4% of the total). In order to account for how these pronouns are interpreted, we re-examine Grosz and Sidner’s theory of the attentional state, and in particular the use of the global focus to supplement centering theory. Our corpus evidence concerning these long-distance pronominal references, as well as studies of the use of descriptions, proper names and ambiguous uses of pronouns, lead us to conclude that a discourse focus stack mechanism of the type proposed by Sidner is essential to account for the use of these referring expressions. We suggest revising the Grosz & Sidner framework by allowing for the possibility that an entity in a focus space may have special status.", "title": "" }, { "docid": "fa5c27d91feb3b392e2dba2b2121e184", "text": "Planned experiments are the gold standard in reliably comparing the causal effect of switching from a baseline policy to a new policy. One critical shortcoming of classical experimental methods, however, is that they typically do not take into account the dynamic nature of response to policy changes. 
For instance, in an experiment where we seek to understand the effects of a new ad pricing policy on auction revenue, agents may adapt their bidding in response to the experimental pricing changes. Thus, causal effects of the new pricing policy after such adaptation period, the long-term causal effects, are not captured by the classical methodology even though they clearly are more indicative of the value of the new policy. Here, we formalize a framework to define and estimate long-term causal effects of policy changes in multiagent economies. Central to our approach is behavioral game theory, which we leverage to formulate the ignorability assumptions that are necessary for causal inference. Under such assumptions we estimate long-term causal effects through a latent space approach, where a behavioral model of how agents act conditional on their latent behaviors is combined with a temporal model of how behaviors evolve over time.", "title": "" }, { "docid": "4b8f59d1b416d4869ae38dbca0eaca41", "text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.", "title": "" }, { "docid": "04ca679e58e1fed644d0bfafce930076", "text": "Music has always been used to elevate the mood in movies and poetry, adding emotions which might not have been without the music. Unfortunately only the most musical people are capable of creating music, let alone the appropriate music. This paper proposes a system that takes as input a piece of text, the representation of that text is consequently transformed into the latent space of a VAE capable of generating music. The latent space of the VAE contains representations of songs and the transformed vector can be decoded from it as a song. An experiment was performed to test this system by presenting a text to seven experts, along with two pieces of music from which one was created from the text. On average the music generated from the text was only recognized in half of the examples, but the poems gave significant results in their recognition, showing a relation between the poems and the generated music.", "title": "" } ]
scidocsrr
7b8f525c5d3cce9138a472cadfa0403a
Automatic Generation of Raven's Progressive Matrices
[ { "docid": "4b0eec16de82592d1f7c715ad25905a9", "text": "We present a computational model for solving Raven’s Progressive Matrices. This model combines qualitative spatial representations with analogical comparison via structuremapping. All representations are automatically computed by the model. We show that it achieves a level of performance on the Standard Progressive Matrices that is above that of most adults, and that the problems it fails on are also the hardest for people.", "title": "" } ]
[ { "docid": "fb7f079d104e81db41b01afe67cdf3b0", "text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.", "title": "" }, { "docid": "840919760f5cc4839fe027d3a744dbd3", "text": "This paper deals with the development and implementation of an on-line stator resistance and permanent magnet flux linkage identification approach devoted to three-phase and open-end winding permanent magnet synchronous motor drives. In particular, the stator resistance and the permanent magnet flux linkage variations are independently determined by exploiting a current vector control strategy, in which one of the phase currents is continuously maintained to zero while the others are suitably modified in order to establish the same rotating magnetomotive force. Moreover, other motor parameters can be evaluated after re-establishing the normal operation of the drive, under the same operating conditions. As will be demonstrated, neither additional sensors nor special tests are required in the proposed method; Motor electrical parameters can be “on-line” estimated in a wide operating range, avoiding any detrimental impact on the torque capability of the PMSM drive.", "title": "" }, { "docid": "d922dbcdd2fb86e7582a4fb78990990e", "text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.", "title": "" }, { "docid": "8ab53b0100ce36ace61660c9c8e208b4", "text": "A novel current-pumped battery charger (CPBC) is proposed in this paper to increase the Li-ion battery charging performance. A complete charging process, consisting of three subprocesses, namely: 1) the bulk current charging process; 2) the pulsed current charging process; and 3) the pulsed float charging process, can be automatically implemented by using the inherent characteristics of current-pumped phase-locked loop (CPLL). A design example for a 700-mA ldr h Li-ion battery is built to assess the CPBC's performance. In comparison with the conventional phase-locked battery charger, the battery available capacity and charging efficiency of the proposed CPBC are improved by about 6.9% and 1.5%, respectively. 
The results of the experiment show that a CPLL is really suitable for carrying out a Li-ion battery pulse charger.", "title": "" }, { "docid": "52786e9ad3d055a83cae13f422aefcdd", "text": "The lack of reliable sensory feedback has been one of the barriers in prosthetic hand development. Restoring sensory function from prosthetic hand to amputee remains a great challenge to neural engineering. In this paper, we present the development of a sensory feedback system based on the phenomenon of evoked tactile sensation (ETS) at the stump skin of residual limb induced by transcutaneous electrical nerve stimulation (TENS). The system could map a dynamic pattern of stimuli to an electrode placed on the corresponding projected finger areas on the stump skin. A pressure transducer placed at the tip of prosthetic fingers was used to sense contact pressure, and a high performance DSP processor sampled pressure signals, and calculated the amplitude of feedback stimulation in real-time. Biphasic and charge-balanced current pulses with amplitude modulation generated by a multi-channel laboratory stimulator were delivered to activate sensory nerves beneath the skin. We tested this sensory feedback system in amputee subjects. Preliminary results showed that the subjects could perceive different levels of pressure at the tip of prosthetic finger through evoked tactile sensation (ETS) with distinct grades and modalities. We demonstrated the feasibility to restore the perceptual sensation from prosthetic fingers to amputee based on the phenomenon of evoked tactile sensation (ETS) with TENS.", "title": "" }, { "docid": "3ff13bb873dd9a8deada0a7837c5eca4", "text": "This work investigates the use of deep fully convolutional neural networks (DFCNN) for pixel-wise scene labeling of Earth Observation images. Especially, we train a variant of the SegNet architecture on remote sensing data over an urban area and study different strategies for performing accurate semantic segmentation. Our contributions are the following: 1) we transfer efficiently a DFCNN from generic everyday images to remote sensing images; 2) we introduce a multi-kernel convolutional layer for fast aggregation of predictions at multiple scales; 3) we perform data fusion from heterogeneous sensors (optical and laser) using residual correction. Our framework improves state-of-the-art accuracy on the ISPRS Vaihingen 2D Semantic Labeling dataset.", "title": "" }, { "docid": "aa58c15e8b1a6f240c875739f3cd9a36", "text": "STATEMENT OF PROBLEM\nOutcomes of oral implant therapy have been described primarily in terms of implant survival rates and the durability of implant superstructures. Reports of patient-based outcomes of implant therapy have been sparse, and none of these studies have used oral-specific health status measures.\n\n\nPURPOSE\nThis study assessed the impact of implant-stabilized prostheses on the health status of complete denture wearers using patient-based, oral-specific health status measures. It also assessed the influence of preoperative expectations on outcome.\n\n\nMATERIAL AND METHODS\nThree experimental groups requesting replacement of their conventional complete dentures completed an Oral Health Impact Profile (OHIP) and a validated denture satisfaction scale before treatment. One group received an implant-stabilized prosthesis (IG), and 2 groups received new conventional complete dentures (CDG1 and CDG2). 
After treatment, all subjects completed the health status measures again; preoperative data were compared with postoperative data.\n\n\nRESULTS\nBefore treatment, satisfaction with complete dentures was low in all 3 groups. Subjects requesting implants (IG and CDG1) had high expectations for implant-stabilized prostheses. Improvement in denture satisfaction and OHIP scores was reported by all 3 groups after treatment. Subjects who received their preferred treatment (IG and CDG2 subjects) reported a much greater improvement than CDG1 subjects. Preoperative expectation levels did not appear to influence satisfaction with the outcomes of implant therapy in IG subjects.\n\n\nCONCLUSION\nSubjects who received implants (IG) that replaced conventional complete dentures reported significant improvement after treatment, as did subjects who requested conventional replacement dentures (CDG2). The OHIP appears useful in identifying patients likely to benefit from implant-stabilized prostheses.", "title": "" }, { "docid": "0d51dc0edc9c4e1c050b536c7c46d49d", "text": "MOTIVATION\nThe identification of risk-associated genetic variants in common diseases remains a challenge to the biomedical research community. It has been suggested that common statistical approaches that exclusively measure main effects are often unable to detect interactions between some of these variants. Detecting and interpreting interactions is a challenging open problem from the statistical and computational perspectives. Methods in computing science may improve our understanding on the mechanisms of genetic disease by detecting interactions even in the presence of very low heritabilities.\n\n\nRESULTS\nWe have implemented a method using Genetic Programming that is able to induce a Decision Tree to detect interactions in genetic variants. This method has a cross-validation strategy for estimating classification and prediction errors and tests for consistencies in the results. To have better estimates, a new consistency measure that takes into account interactions and can be used in a genetic programming environment is proposed. This method detected five different interaction models with heritabilities as low as 0.008 and with prediction errors similar to the generated errors.\n\n\nAVAILABILITY\nInformation on the generated data sets and executable code is available upon request.", "title": "" }, { "docid": "359d3e06c221e262be268a7f5b326627", "text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.", "title": "" }, { "docid": "717d1c31ac6766fcebb4ee04ca8aa40f", "text": "We present an incremental maintenance algorithm for leapfrog triejoin. 
The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.", "title": "" }, { "docid": "ea04dad2ac1de160f78fa79b33a93b6a", "text": "OBJECTIVE\nTo construct new size charts for all fetal limb bones.\n\n\nDESIGN\nA prospective, cross sectional study.\n\n\nSETTING\nUltrasound department of a large hospital.\n\n\nSAMPLE\n663 fetuses scanned once only for the purpose of the study at gestations between 12 and 42 weeks.\n\n\nMETHODS\nCentiles were estimated by combining separate regression models fitted to the mean and standard deviation, assuming that the measurements have a normal distribution at each gestational age.\n\n\nMAIN OUTCOME MEASURES\nDetermination of fetal limb lengths from 12 to 42 weeks of gestation.\n\n\nRESULTS\nSize charts for fetal bones (radius, ulna, humerus, tibia, fibula, femur and foot) are presented and compared with previously published data.\n\n\nCONCLUSIONS\nWe present new size charts for fetal limb bones which take into consideration the increasing variability with gestational age. We have compared these charts with other published data; the differences seen may be largely due to methodological differences. As standards for fetal head and abdominal measurements have been published from the same population, we suggest that the use of the new charts may facilitate prenatal diagnosis of skeletal dysplasias.", "title": "" }, { "docid": "e4b9dc5b34863144d80bb48e1ab992a7", "text": "As developmental scientists seek to index the strengths of adolescents and adopt the positive youth development (PYD) perspective, psychometrically sound measurement tools will be needed to assess adolescents’ positive attributes. Using a series of exploratory factor analyses and CFA models, this research creates short and very short versions of the scale used to measure the Five Cs of PYD in the 4-H Study of Positive Youth Development. We created separate forms for earlier versus later adolescence and ensured that items displayed sufficient conceptual overlap across forms to support tests of factorial invariance. We discuss implications for further scale development and advocate for the use of these convenient tools, especially in research and applications pertinent to the Five Cs model of PYD.", "title": "" }, { "docid": "040329beb0f4688ced46d87a51dac169", "text": "We present a characterization methodology for fast direct measurement of the charge accumulated on Floating Gate (FG) transistors of Flash EEPROM cells. 
Using a Scanning Electron Microscope (SEM) in Passive Voltage Contrast (PVC) mode we were able to distinguish between '0' and '1' bit values stored in each memory cell. Moreover, it was possible to characterize the remaining charge on the FG; thus making this technique valuable for Failure Analysis applications for data retention measurements in Flash EEPROM. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Only a relatively simple backside sample preparation is necessary for accessing the FG of memory transistors. The technique presented was successfully implemented on a 0.35 μm technology node microcontroller and a 0.21 μm smart card integrated circuit. We also show the ease of such technique to cover all cells of a memory (using intrinsic features of SEM) and to automate memory cells characterization using standard image processing technique.", "title": "" }, { "docid": "8e0b16179aabf850c09633df600e6a4a", "text": "Impacts of Informal Caregiving on Caregiver Employment, Health, and Family As the aging population increases, the demand for informal caregiving is becoming an ever more important concern for researchers and policy-makers alike. To shed light on the implications of informal caregiving, this paper reviews current research on its impact on three areas of caregivers’ lives: employment, health, and family. Because the literature is inherently interdisciplinary, the research designs, sampling procedures, and statistical methods used are heterogeneous. Nevertheless, we are still able to draw several conclusions: first, despite the prevalence of informal caregiving and its primary association with lower levels of employment, the affected labor force is seemingly small. Second, such caregiving tends to lower the quality of the caregiver’s psychological health, which also has a negative impact on physical health outcomes. Third, the implications for family life remain under investigated. The research findings also differ strongly among subgroups, although they do suggest that female, spousal, and intense caregivers tend to be the most affected by caregiving. JEL Classification: E26, J14, J46", "title": "" }, { "docid": "279d6de6ed6ade25d5ac0ff3d1ecde49", "text": "This paper explores the relationship between TV viewership ratings for Scandinavian's most popular talk show, Skavlan and public opinions expressed on its Facebook page. The research aim is to examine whether the activity on social media affects the number of viewers per episode of Skavlan, how the viewers are affected by discussions on the Talk Show, and whether this creates debate on social media afterwards. By analyzing TV viewer ratings of Skavlan talk show, Facebook activity and text classification of Facebook posts and comments with respect to type of emotions and brand sentiment, this paper identifes patterns in the users' real-world and digital world behaviour.", "title": "" }, { "docid": "b8ed09081032a790b1c5c4bb3afebfff", "text": "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. There are two components: i) The face proposal component computes face proposals via estimating facial key-points and the 3D transformation parameters for each predicted keypoint w.r.t. the 3D mean face model. 
ii) The face verification component computes detection results by refining proposals based on configuration pooling.", "title": "" }, { "docid": "42fa545010569b71d1211c413326f869", "text": "Occupational therapists working with Mexican and Mexican American populations may encounter traditional healing practices associated with curanderismo within a variety of practice settings. Curanderismo is a term referring to the practice of traditional healing in Latin American (Hispanic) cultures. This article reviews from the literature the different types of traditional healers (curanderos/as), the remedies recommended by traditional healers and common traditional illnesses treated. Traditional healing practices among Mexican and Mexican Americans may be as high as 50-75% in some parts of the United States. Further research is needed to investigate the effectiveness of curanderismo and its impact on quality of life, activities of daily living and overall social participation.", "title": "" }, { "docid": "4b22eaf527842e0fa41a1cd740ad9b40", "text": "Music transcription is the process of creating a written score of music from an audio recording. Musicians and musicologists use transcription to better understand music that may not have a written form, from improvised jazz solos to traditional folk music. Automatic music transcription introduces signal-processing algorithms to extract pitch and rhythm information from recordings. This speeds up and automates the process of music transcription, which requires musical training and is very time consuming even for experts. This thesis explores the still unsolved problem of automatic music transcription through an in-depth analysis of the problem itself and an overview of different techniques to solve the hardest subtask of music transcription, multiple pitch estimation. It concludes with a close study of a typical multiple pitch estimation algorithm and highlights the challenges that remain unsolved.", "title": "" }, { "docid": "18f95e8a2251e7bd582536c841070961", "text": "This paper proposes and implements the concept of flexible induction heating based on the magnetic resonant coupling (MRC) mechanism. In conventional induction heating systems, the variation of the relative position between the heater and workpiece significantly deteriorates the heating performance. In particular, the heating effect dramatically reduces with the increase of vertical displacement or horizontal misalignment. This paper utilizes the MRC mechanism to effectuate flexible induction heating; thus, handling the requirements of varying vertical displacement and horizontal misalignment for various cooking styles. Differing from a conventional induction heating, the proposed induction heating adopts one resonant coil in the heater and one resonant coil in the workpiece, which can significantly strengthen the coupling effect, and, hence, the heating effect. Both the simulation and experimental results are given to validate the feasibility and flexibility of the proposed induction heating.", "title": "" }, { "docid": "5f2b4caef605ab07ca070552e308d6e6", "text": "The objective of CLEF is to promote research in the field of multilingual system development. This is done through the organisation of annual evaluation campaigns in which a series of tracks designed to test different aspects of monoand cross-language information retrieval (IR) are offered. 
The intention is to encourage experimentation with all kinds of multilingual information access – from the development of systems for monolingual retrieval operating on many languages to the implementation of complete multilingual multimedia search services. This has been achieved by offering an increasingly complex and varied set of evaluation tasks over the years. The aim is not only to meet but also to anticipate the emerging needs of the R&D community and to encourage the development of next generation multilingual IR systems. These Working Notes contain descriptions of the experiments conducted within CLEF 2006 – the sixth in a series of annual system evaluation campaigns. The results of the experiments will be presented and discussed in the CLEF 2006 Workshop, 20-22 September, Alicante, Spain. The final papers revised and extended as a result of the discussions at the Workshop together with a comparative analysis of the results will appear in the CLEF 2006 Proceedings, to be published by Springer in their Lecture Notes for Computer Science series. As from CLEF 2005, the Working Notes are published in electronic format only and are distributed to participants at the Workshop on CD-ROM together with the Book of Abstracts in printed form. All reports included in the Working Notes will also be inserted in the DELOS Digital Library, accessible at http://delos-dl.isti.cnr.it. Both Working Notes and Book of Abstracts are divided into eight sections, corresponding to the CLEF 2006 evaluation tracks. In addition appendices are included containing run statistics for the Ad Hoc, Domain-Specific, GeoCLEF and QA tracks, plus a list of all participating groups showing in which track they took part. The main features of the 2006 campaign are briefly outlined here below in order to provide the necessary background to the experiments reported in the rest of the Working Notes.", "title": "" } ]
scidocsrr
4215d45fc6656d2d735b2ec2866a549b
Cross-Domain Deep Learning Approach For Multiple Financial Market Prediction
[ { "docid": "be692c1251cb1dc73b06951c54037701", "text": "Can we train the computer to beat experienced traders for financial assert trading? In this paper, we try to address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model is inspired by two biological-related learning concepts of deep learning (DL) and reinforcement learning (RL). In the framework, the DL part automatically senses the dynamic market condition for informative feature learning. Then, the RL module interacts with deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both the deep and recurrent structures. Hence, we propose a task-aware backpropagation through time method to cope with the gradient vanishing issue in deep training. The robustness of the neural system is verified on both the stock and the commodity future markets under broad testing conditions.", "title": "" } ]
[ { "docid": "5d318e2df97f539e227f0aef60d0732b", "text": "The concept of intuition has, until recently, received scant scholarly attention within and beyond the psychological sciences, despite its potential to unify a number of lines of inquiry. Presently, the literature on intuition is conceptually underdeveloped and dispersed across a range of domains of application, from education, to management, to health. In this article, we clarify and distinguish intuition from related constructs, such as insight, and review a number of theoretical models that attempt to unify cognition and affect. Intuition's place within a broader conceptual framework that distinguishes between two fundamental types of human information processing is explored. We examine recent evidence from the field of social cognitive neuroscience that identifies the potential neural correlates of these separate systems and conclude by identifying a number of theoretical and methodological challenges associated with the valid and reliable assessment of intuition as a basis for future research in this burgeoning field of inquiry.", "title": "" }, { "docid": "cd4d874d0428a61c27bdcadc752c7d68", "text": "Recent advances in genome technologies and the ensuing outpouring of genomic information related to cancer have accelerated the convergence of discovery science and clinical medicine. Successful examples of translating cancer genomics into therapeutics and diagnostics reinforce its potential to make possible personalized cancer medicine. However, the bottlenecks along the path of converting a genome discovery into a tangible clinical endpoint are numerous and formidable. In this Perspective, we emphasize the importance of establishing the biological relevance of a cancer genomic discovery in realizing its clinical potential and discuss some of the major obstacles to moving from the bench to the bedside.", "title": "" }, { "docid": "72f7c13f21c047e4dcdf256fbbbe1b74", "text": "Programming by Examples (PBE) has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create small scripts for automating repetitive tasks. However, examples, though often easy to provide, are an ambiguous specification of the user's intent. Because of that, a key impedance in adoption of PBE systems is the lack of user confidence in the correctness of the program that was synthesized by the system. We present two novel user interaction models that communicate actionable information to the user to help resolve ambiguity in the examples. One of these models allows the user to effectively navigate between the huge set of programs that are consistent with the examples provided by the user. The other model uses active learning to ask directed example-based questions to the user on the test input data over which the user intends to run the synthesized program. Our user studies show that each of these models significantly reduces the number of errors in the performed task without any difference in completion time. Moreover, both models are perceived as useful, and the proactive active-learning based model has a slightly higher preference regarding the users' confidence in the result.", "title": "" }, { "docid": "fda9db396d7c35ba64a7a5453aaa80dc", "text": "A novel dynamic latched comparator with offset voltage compensation is presented. The proposed comparator uses one phase clock signal for its operation and can drive a larger capacitive load with complementary version of the regenerative output latch stage. 
As it provides a larger voltage gain up to 22 V/V to the regenerative latch, the input-referred offset voltage of the latch is reduced and metastability is improved. The proposed comparator is designed using 90 nm PTM technology and 1 V power supply voltage. It demonstrates up to 24.6% less offset voltage and 30.0% less sensitivity of delay to decreasing input voltage difference (17 ps/decade) than the conventional double-tail latched comparator at approximately the same area and power consumption. In addition, with a digitally controlled capacitive offset calibration technique, the offset voltage of the proposed comparator is further reduced from 6.03 to 1.10 mV at 1-sigma at the operating clock frequency of 3 GHz, and it consumes 54 μW/GHz after calibration.", "title": "" }, { "docid": "61b02ae1994637115e3baec128f05bd8", "text": "Ensuring reliability as the electrical grid morphs into the “smart grid” will require innovations in how we assess the state of the grid, for the purpose of proactive maintenance, rather than reactive maintenance – in the future, we will not only react to failures, but also try to anticipate and avoid them using predictive modeling (machine learning) techniques. To help in meeting this challenge, we present the Neutral Online Visualization-aided Autonomic evaluation framework (NOVA) for evaluating machine learning algorithms for preventive maintenance on the electrical grid. NOVA has three stages provided through a unified user interface: evaluation of input data quality, evaluation of machine learning results, and evaluation of the reliability improvement of the power grid. A prototype version of NOVA has been deployed for the power grid in New York City, and it is able to evaluate machine learning systems effectively and efficiently.", "title": "" }, { "docid": "d2f6b3fee7f40eb580451d9cc29b8aa6", "text": "Compositional Distributional Semantic methods model the distributional behavior of a compound word by exploiting the distributional behavior of its constituent words. In this setting, a constituent word is typically represented by a feature vector conflating all the senses of that word. However, not all the senses of a constituent word are relevant when composing the semantics of the compound. In this paper, we present two different methods for selecting the relevant senses of constituent words. The first one is based on Word Sense Induction and creates a static multi prototype vectors representing the senses of a constituent word. The second creates a single dynamic prototype vector for each constituent word based on the distributional properties of the other constituents in the compound. We use these prototype vectors for composing the semantics of noun-noun compounds and evaluate on a compositionality-based similarity task. Our results show that: (1) selecting relevant senses of the constituent words leads to a better semantic composition of the compound, and (2) dynamic prototypes perform better than static prototypes.", "title": "" }, { "docid": "7897f052c891e330988296e3d6306c39", "text": "Sleep quality is an important factor for human physical and mental health, day-time performance, and safety. Sufficient sleep quality can reduce risk of chronic disease and mental depression. Sleep helps brain to work properly that can improve productivity and prevent accident because of falling asleep. 
In order to analyze the sleep quality, a reliable continuous monitoring system is required. The emergence of internet-of-things technology has provided a promising opportunity to build a reliable sleep quality monitoring system by leveraging the rapid improvement of sensor and mobile technology. This paper presents a literature study about internet of things for sleep quality monitoring systems. The study starts from a review of the sleep quality problem, the importance of sleep quality monitoring, the enabling internet of things technology, and the open issues in this field. Finally, our future research plan for sleep apnea monitoring is presented.", "title": "" }, { "docid": "3e0741fb69ee9bdd3cc455577aab4409", "text": "Recurrent neural network architectures have been shown to efficiently model long term temporal dependencies between acoustic events. However the training time of recurrent networks is higher than feedforward networks due to the sequential nature of the learning algorithm. In this paper we propose a time delay neural network architecture which models long term temporal dependencies with training times comparable to standard feed-forward DNNs. The network uses sub-sampling to reduce computation during training. On the Switchboard task we show a relative improvement of 6% over the baseline DNN model. We present results on several LVCSR tasks with training data ranging from 3 to 1800 hours to show the effectiveness of the TDNN architecture in learning wider temporal dependencies in both small and large data scenarios.", "title": "" }, { "docid": "21c1be0458cc6908c3f7feb6591af841", "text": "Initial work on automatic emotion recognition concentrates mainly on audio-based emotion classification. Speech is the most important channel for the communication between humans and it may be expected that emotional states are transferred through content, prosody or paralinguistic cues. Besides the audio modality, with the rapidly developing computer hardware and video-processing devices researchers have started exploring the video modality. Visual-based emotion recognition works focus mainly on the extraction and recognition of emotional information from the facial expressions. There are also attempts to classify emotional states from body or head gestures and to combine different visual modalities, for instance facial expressions and body gesture captured by two separate cameras [3]. Emotion recognition from psycho-physiological measurements, such as skin conductance, respiration, electro-cardiogram (ECG), electromyography (EMG), electroencephalography (EEG) is another attempt. In contrast to speech, gestures or facial expressions these biopotentials are the result of the autonomic nervous system and cannot be imitated [4]. Research activities in facial expression and speech based emotion recognition [6] are usually performed independently from each other. But in almost all practical applications people speak and exhibit facial expressions at the same time, and consequently both modalities should be used in order to perform robust affect recognition. Therefore, multimodal, and in particular audiovisual emotion recognition has been emerging in recent times [11], for example multiple classifier systems have been widely investigated for the classification of human emotions [1, 9, 12, 14]. Combining classifiers is a promising approach to improve the overall classifier performance [13, 8]. 
In multiple classifier systems (MCS) it is assumed that the raw data X originates from an underlying source, but each classifier receives different subsets of the same raw input data X. Feature vectors F_j(X) are used as the input to the j-th classifier computing an estimate y_j of the class membership of F_j(X). This output y_j might be a crisp class label or a vector of class memberships, e.g. estimates of a posteriori probabilities. Based on the multiple classifier outputs y_1, ..., y_N the combiner produces the final decision y. Combiners used in this study are fixed transformations of the multiple classifier outputs y_1, ..., y_N. Examples of such combining rules are Voting, (weighted) Averaging, and Multiplying, just to mention the most popular types. In addition to a priori fixed combination rules the combiner can be a …", "title": "" }, { "docid": "aaf6ed732f2cb5ceff714f1d84dac9ed", "text": "Video caption refers to generating a descriptive sentence for a specific short video clip automatically, which has achieved remarkable success recently. However, most of the existing methods focus more on visual information while ignoring the synchronized audio cues. We propose three multimodal deep fusion strategies to maximize the benefits of visual-audio resonance information. The first one explores the impact on cross-modalities feature fusion from low to high order. The second establishes the visual-audio short-term dependency by sharing weights of corresponding front-end networks. The third extends the temporal dependency to long-term through sharing multimodal memory across visual and audio modalities. Extensive experiments have validated the effectiveness of our three cross-modalities fusion strategies on two benchmark datasets, including Microsoft Research Video to Text (MSRVTT) and Microsoft Video Description (MSVD). It is worth mentioning that sharing weight can coordinate visual-audio feature fusion effectively and achieve the state-of-the-art performance on both BLEU and METEOR metrics. Furthermore, we first propose a dynamic multimodal feature fusion framework to deal with the part modalities missing case. Experimental results demonstrate that even in the audio absence mode, we can still obtain comparable results with the aid of the additional audio modality inference module.", "title": "" }, { "docid": "0c4f09c41c35690de71f106403d14223", "text": "This paper views Islamist radicals as self-interested political revolutionaries and builds on a general model of political extremism developed in a previous paper (Ferrero, 2002), where extremism is modelled as a production factor whose effect on expected revenue is initially positive and then turns negative, and whose level is optimally chosen by a revolutionary organization. The organization is bound by a free-access constraint and hence uses the degree of extremism as a means of indirectly controlling its level of membership with the aim of maximizing expected per capita income of its members, like a producer co-operative. 
The gist of the argument is that radicalization may be an optimal reaction to perceived failure (a widespread perception in the Muslim world) when political activists are, at the margin, relatively strongly averse to effort but not so averse to extremism, a configuration that is at odds with secular, Western-style revolutionary politics but seems to capture well the essence of Islamic revolutionary politics, embedded as it is in a doctrinal framework.", "title": "" }, { "docid": "af4518476ae2cadd264f7288768c99a7", "text": "In multivariate pattern analysis of neuroimaging data, 'second-level' inference is often performed by entering classification accuracies into a t-test vs chance level across subjects. We argue that while the random-effects analysis implemented by the t-test does provide population inference if applied to activation differences, it fails to do so in the case of classification accuracy or other 'information-like' measures, because the true value of such measures can never be below chance level. This constraint changes the meaning of the population-level null hypothesis being tested, which becomes equivalent to the global null hypothesis that there is no effect in any subject in the population. Consequently, rejecting it only allows to infer that there are some subjects in which there is an information effect, but not that it generalizes, rendering it effectively equivalent to fixed-effects analysis. This statement is supported by theoretical arguments as well as simulations. We review possible alternative approaches to population inference for information-based imaging, converging on the idea that it should not target the mean, but the prevalence of the effect in the population. One method to do so, 'permutation-based information prevalence inference using the minimum statistic', is described in detail and applied to empirical data.", "title": "" }, { "docid": "e917b6af07821cb834555fa7a19fca0c", "text": "Conversational interfaces recently gained a lot of attention. One of the reasons for the current hype is the fact that chatbots (one particularly popular form of conversational interfaces) nowadays can be created without any programming knowledge, thanks to different toolkits and socalled Natural Language Understanding (NLU) services. While these NLU services are already widely used in both, industry and science, so far, they have not been analysed systematically. In this paper, we present a method to evaluate the classification performance of NLU services. Moreover, we present two new corpora, one consisting of annotated questions and one consisting of annotated questions with the corresponding answers. Based on these corpora, we conduct an evaluation of some of the most popular NLU services. Thereby we want to enable both, researchers and companies to make more educated decisions about which service they should use.", "title": "" }, { "docid": "d7958df069d911c1431c0b7461fb0268", "text": "Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers, while disregarding the explanations. We argue that the explanation for an answer is of the same or even more importance compared with the answer itself, since it makes the question answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the models are required to generate an explanation with the predicted answer. 
We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We also conduct a user study to validate the quality of the synthesized explanations . We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers, but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.", "title": "" }, { "docid": "79c9f10c5e6fb163b09e9b773af14a3e", "text": "Small RTTs (~tens of microseconds), bursty flow arrivals, and a large number of concurrent flows (thousands) in datacenters bring fundamental challenges to congestion control as they either force a flow to send at most one packet per RTT or induce a large queue build-up. The widespread use of shallow buffered switches also makes the problem more challenging with hosts generating many flows in bursts. In addition, as link speeds increase, algorithms that gradually probe for bandwidth take a long time to reach the fair-share. An ideal datacenter congestion control must provide 1) zero data loss, 2) fast convergence, 3) low buffer occupancy, and 4) high utilization. However, these requirements present conflicting goals.\n This paper presents a new radical approach, called ExpressPass, an end-to-end credit-scheduled, delay-bounded congestion control for datacenters. ExpressPass uses credit packets to control congestion even before sending data packets, which enables us to achieve bounded delay and fast convergence. It gracefully handles bursty flow arrivals. We implement ExpressPass using commodity switches and provide evaluations using testbed experiments and simulations. ExpressPass converges up to 80 times faster than DCTCP in 10 Gbps links, and the gap increases as link speeds become faster. It greatly improves performance under heavy incast workloads and significantly reduces the flow completion times, especially, for small and medium size flows compared to RCP, DCTCP, HULL, and DX under realistic workloads.", "title": "" }, { "docid": "cbf32934e275e8d95a584762b270a5c2", "text": "Online telemedicine systems are useful due to the possibility of timely and efficient healthcare services. These systems are based on advanced wireless and wearable sensor technologies. The rapid growth in technology has remarkably enhanced the scope of remote health monitoring systems. In this paper, a real-time heart monitoring system is developed considering the cost, ease of application, accuracy, and data security. The system is conceptualized to provide an interface between the doctor and the patients for two-way communication. The main purpose of this study is to facilitate the remote cardiac patients in getting latest healthcare services which might not be possible otherwise due to low doctor-to-patient ratio. The developed monitoring system is then evaluated for 40 individuals (aged between 18 and 66 years) using wearable sensors while holding an Android device (i.e., smartphone under supervision of the experts). The performance analysis shows that the proposed system is reliable and helpful due to high speed. The analyses showed that the proposed system is convenient and reliable and ensures data security at low cost. 
In addition, the developed system is equipped to generate warning messages to the doctor and patient under critical circumstances.", "title": "" }, { "docid": "01fbdd81917cba76851cd566d6f0b1da", "text": "Flexible hybrid electronics (FHE), designed in wearable and implantable configurations, have enormous applications in advanced healthcare, rapid disease diagnostics, and persistent human-machine interfaces. Soft, contoured geometries and time-dynamic deformation of the targeted tissues require high flexibility and stretchability of the integrated bioelectronics. Recent progress in developing and engineering soft materials has provided a unique opportunity to design various types of mechanically compliant and deformable systems. Here, we summarize the required properties of soft materials and their characteristics for configuring sensing and substrate components in wearable and implantable devices and systems. Details of functionality and sensitivity of the recently developed FHE are discussed with the application areas in medicine, healthcare, and machine interactions. This review concludes with a discussion on limitations of current materials, key requirements for next generation materials, and new application areas.", "title": "" }, { "docid": "0df1a15c02c29d9462356641fbe78b43", "text": "Localization is an essential and important research issue in wireless sensor networks (WSNs). Most localization schemes focus on static sensor networks. However, mobile sensors are required in some applications such that the sensed area can be enlarged. As such, a localization scheme designed for mobile sensor networks is necessary. In this paper, we propose a localization scheme to improve the localization accuracy of previous work. In this proposed scheme, the normal nodes without location information can estimate their own locations by gathering the positions of location-aware nodes (anchor nodes) and the one-hop normal nodes whose locations are estimated from the anchor nodes. In addition, we propose a scheme that predicts the moving direction of sensor nodes to increase localization accuracy. Simulation results show that the localization error in our proposed scheme is lower than the previous schemes in various mobility models and moving speeds.", "title": "" }, { "docid": "6c0901414287be50ca19985f2c5403cb", "text": "Information technology has made possible the capture and accessing of a large number of data and knowledge bases, which in turn has brought about the problem of information overload. Text mining to turn textual information into knowledge has become a very active research area, but much of the research remains restricted to the English language. Due to the differences in linguistic characteristics and methods of natural language processing, many existing text analysis approaches have yet to be shown to be useful for the Chinese language. This research focuses on the automatic generation of a hierarchical knowledge map NewsMap, based on online Chinese news, particularly the finance and health sections. Whether in print or online, news still represents one important knowledge source that people produce and consume on a daily basis. The hierarchical knowledge map can be used as a tool for browsing business intelligence and medical knowledge hidden in news articles. 
In order to assess the quality of the map, an empirical study was conducted which shows that the categories of the hierarchical knowledge map generated by NewsMap are better than those generated by regular news readers, both in terms of recall and precision, on the sub-level categories but not on the top-level categories. NewsMap employs an improved interface combining a 1D alphabetical hierarchical list and a 2D Self-Organizing Map (SOM) island display. Another empirical study compared the two visualization displays and found that users’ performances can be improved by taking advantage of the visual cues of the 2D SOM display. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fb0fabb99d446e1edbb3fd581d16693b", "text": "Facial caricature is an art form of drawing faces in an exaggerated way to convey humor or sarcasm. In this paper, we propose the first Generative Adversarial Network (GAN) for unpaired photo-to-caricature translation, which we call \"CariGANs\". It explicitly models geometric exaggeration and appearance stylization using two components: CariGeoGAN, which only models the geometry-to-geometry transformation from face photos to caricatures, and CariStyGAN, which transfers the style appearance from caricatures to face photos without any geometry deformation. In this way, a difficult cross-domain translation problem is decoupled into two easier tasks. The perceptual study shows that caricatures generated by our CariGANs are closer to the hand-drawn ones, and at the same time better preserve the identity, compared to state-of-the-art methods. Moreover, our CariGANs allow users to control the shape exaggeration degree and change the color/texture style by tuning the parameters or giving an example caricature.", "title": "" } ]
scidocsrr
514bc8d2deaaaba5b1c654bbeb293a36
American Economic Association Why Have Americans Become More Obese ?
[ { "docid": "08823059d089c1e553af85d5768332ca", "text": "Hyperbolic discount functions induce dynamically inconsistent preferences, implying a motive for consumers to constrain their own future choices. This paper analyzes the decisions of a hyperbolic consumer who has access to an imperfect commitment technology: an illiquid asset whose sale must be initiated one period before the sale proceeds are received. The model predicts that consumption tracks income, and the model explains why consumers have asset-specific marginal propensities to consume. The model suggests that financial innovation may have caused the ongoing decline in U. S. savings rates, since financial innovation increases liquidity, eliminating commitment opportunities. Finally, the model implies that financial market innovation may reduce welfare by providing “too much” liquidity.", "title": "" } ]
[ { "docid": "f30caea55cb1800a569a2649d1f8e388", "text": "Naive Bayes (NB) is a popular machine learning tool for classification, due to its simplicity, high computational efficiency, and good classification accuracy, especially for high dimensional data such as texts. In reality, the pronounced advantage of NB is often challenged by the strong conditional independence assumption between attributes, which may deteriorate the classification performance. Accordingly, numerous efforts have been made to improve NB, by using approaches such as structure extension, attribute selection, attribute weighting, instance weighting, local learning and so on. In this paper, we propose a new Artificial Immune System (AIS) based self-adaptive attribute weighting method for Naive Bayes classification. The proposed method, namely AISWNB, uses immunity theory in artificial immune systems to search optimal attribute weight values, where self-adjusted weight values will alleviate the conditional independence assumption and help calculate the conditional probability in an accurate way. One noticeable advantage of AISWNB is that the unique immune system based evolutionary computation process, including initialization, clone, section, and mutation, ensures that AISWNB can adjust itself to the data without explicit specification of functional or distributional forms of the underlying model. As a result, AISWNB can obtain good attribute weight values during the learning process. Experiments and comparisons on 36 machine learning benchmark data sets and six image classification data sets demonstrate that AISWNB significantly outperforms its peers in classification accuracy, class probability estimation, and class ranking performance.", "title": "" }, { "docid": "6a1d534737dcbe75ff7a7ac975bcc5ec", "text": "Crime is one of the most important social problems in the country, affecting public safety, children development, and adult socioeconomic status. Understanding what factors cause higher crime is critical for policy makers in their efforts to reduce crime and increase citizens' life quality. We tackle a fundamental problem in our paper: crime rate inference at the neighborhood level. Traditional approaches have used demographics and geographical influences to estimate crime rates in a region. With the fast development of positioning technology and prevalence of mobile devices, a large amount of modern urban data have been collected and such big data can provide new perspectives for understanding crime. In this paper, we used large-scale Point-Of-Interest data and taxi flow data in the city of Chicago, IL in the USA. We observed significantly improved performance in crime rate inference compared to using traditional features. Such an improvement is consistent over multiple years. We also show that these new features are significant in the feature importance analysis.", "title": "" }, { "docid": "a61441a2e0a6100e1b91ea08ff312509", "text": "We discuss the evolution and state-of-the-art of the use of Building Information Modelling (BIM) in the field of culture heritage documentation. BIM is a hot theme involving different characteristics including principles, technology, even privacy rights for the cultural heritage objects. Modern documentation needs identified the potential of BIM in the recent years. Many architects, archaeologists, conservationists, engineers regard BIM as a disruptive force, changing the way professionals can document and manage a cultural heritage structure. 
In recent years there have been many developments in the BIM field, and the developed technology and methods have challenged the cultural heritage community within the documentation framework. In this review article, following a brief historic background for the BIM, we review the recent developments focusing on the cultural heritage documentation perspective.", "title": "" }, { "docid": "5e453defd762bb4ecfae5dcd13182b4a", "text": "We present a comprehensive lifetime prediction methodology for both intrinsic and extrinsic Time-Dependent Dielectric Breakdown (TDDB) failures to provide adequate Design-for-Reliability. For intrinsic failures, we propose applying the √E model and estimating the Weibull slope using dedicated single-via test structures. This effectively prevents lifetime underestimation, and thus relaxes design restrictions. For extrinsic failures, we propose applying the thinning model and Critical Area Analysis (CAA). In the thinning model, random defects reduce effective spaces between interconnects, causing TDDB failures. We can quantify the failure probabilities by using CAA for any design layouts of various LSI products.", "title": "" }, { "docid": "53007a9a03b7db2d64dd03973717dc0f", "text": "We present two children with hypoplasia of the left trapezius muscle and a history of ipsilateral transient neonatal brachial plexus palsy without documented trapezius weakness. Magnetic resonance imaging in these patients with unilateral left hypoplasia of the trapezius revealed decreased muscles in the left side of the neck and left supraclavicular region on coronal views, decreased muscle mass between the left splenius capitis muscle and the subcutaneous tissue at the level of the neck on axial views, and decreased size of the left paraspinal region on sagittal views. Three possibilities can explain the association of hypoplasia of the trapezius and obstetric brachial plexus palsy: increased vulnerability of the brachial plexus to stretch injury during delivery because of intrauterine trapezius weakness, a casual association of these two conditions, or an erroneous diagnosis of brachial plexus palsy in patients with trapezial weakness. Careful documentation of neck and shoulder movements can distinguish among shoulder weakness because of trapezius hypoplasia, brachial plexus palsy, or brachial plexus palsy with trapezius hypoplasia. Hence, we recommend precise documentation of neck movements in the initial description of patients with suspected neonatal brachial plexus palsy.", "title": "" }, { "docid": "3992975b4f218b7025e28e4ba52d0c14", "text": "We present ConfErr, a tool for testing and quantifying the resilience of software systems to human-induced configuration errors. ConfErr uses human error models rooted in psychology and linguistics to generate realistic configuration mistakes; it then injects these mistakes and measures their effects, producing a resilience profile of the system under test. The resilience profile, capturing succinctly how sensitive the target software is to different classes of configuration errors, can be used for improving the software or to compare systems to each other. ConfErr is highly portable, because all mutations are performed on abstract representations of the configuration files. 
Using ConfErr, we found several serious flaws in the MySQL and Postgres databases, Apache web server, and BIND and djbdns name servers; we were also able to directly compare the resilience of functionally-equivalent systems, such as MySQL and Postgres.", "title": "" }, { "docid": "20f8a5daa211a5461eaa166452aa1f89", "text": "Radio frequency identification (RFID) technology is considered as one of the most applicable wireless technologies in the present era. Readers and tags are two main components of this technology. Several adjacent readers are used in most cases of implementing RFID systems for commercial, industrial and medicinal applications. Collisions which come from readers’ simultaneous activities lead to a decrease in the performance of RFID systems. Therefore, a suitable solution to avoid collisions and minimize them in order to enhance the performance of these systems is necessary. Nowadays, several researches are done in this field, but most of them do not follow the rules and standards of RFID systems; and don’t use network resources proficiently. In this paper, a solution is provided to avoid collisions and readers’ simultaneous activities in dense passive RFID networks through the use of time division, CSMA techniques and measuring received signal power. The new anti-collision protocol provides higher throughput than other protocols without extra hardware in dense reader environment; in addition, the suggested method conforms to the European standards and rules.", "title": "" }, { "docid": "e3f870997517ba7e1754da6355bbc11d", "text": "Research suggests that contact with nature can be beneficial, for example leading to improvements in mood, cognition, and health. A distinct but related idea is the personality construct of subjective nature connectedness, a stable individual difference in cognitive, affective, and experiential connection with the natural environment. Subjective nature connectedness is a strong predictor of pro-environmental attitudes and behaviors that may also be positively associated with subjective well-being. This meta-analysis was conducted to examine the relationship between nature connectedness and happiness. Based on 30 samples (n = 8523), a fixed-effect meta-analysis found a small but significant effect size (r = 0.19). Those who are more connected to nature tended to experience more positive affect, vitality, and life satisfaction compared to those less connected to nature. Publication status, year, average age, and percentage of females in the sample were not significant moderators. Vitality had the strongest relationship with nature connectedness (r = 0.24), followed by positive affect (r = 0.22) and life satisfaction (r = 0.17). In terms of specific nature connectedness measures, associations were the strongest between happiness and inclusion of nature in self (r = 0.27), compared to nature relatedness (r = 0.18) and connectedness to nature (r = 0.18). This research highlights the importance of considering personality when examining the psychological benefits of nature. The results suggest that closer human-nature relationships do not have to come at the expense of happiness. 
Rather, this meta-analysis shows that being connected to nature and feeling happy are, in fact, connected.", "title": "" }, { "docid": "61e1f8bc251d40e4ebe4a3a8f2a20075", "text": "Approaches for measuring blood glucose levels using noninvasive techniques also known as non-invasive glucose monitoring and minimally-invasive glucose monitoring techniques can help support easier and more frequent measurement of blood glucose levels and also lend themselves to support continuous glucose monitoring. This paper focuses on reviewing the emerging technologies for such monitoring, and their interrelationship with data analytics. The paper describes how these two areas of development together are contributing to the field of diabetes informatics, a more data-rich approach to understanding and managing diabetes.", "title": "" }, { "docid": "d1475e197b300489acedf8c0cbe8f182", "text": "—The publication of IEC 61850-90-1 \" Use of IEC 61850 for the communication between substations \" and the draft of IEC 61850-90-5 \" Use of IEC 61850 to transmit synchrophasor information \" opened the possibility to study IEC 61850 GOOSE Message over WAN not only in the layer 2 (link layer) but also in the layer 3 (network layer) in the OSI model. In this paper we examine different possibilities to make feasible teleprotection in the network layer over WAN sharing the communication channel with automation, management and maintenance convergence services among electrical energy substations.", "title": "" }, { "docid": "17ff47bb9d2aae9c70906af5a22e5e1b", "text": "Machine learning has proven to be a powerful technique during the past decades. Artificial neural network (ANN), as one of the most popular machine learning algorithms, has been widely applied to various areas. However, their applications for catalysis were not well-studied until recent decades. In this review, we aim to summarize the applications of ANNs for catalysis research reported in the literature. We show how this powerful technique helps people address the highly complicated problems and accelerate the progress of the catalysis community. From the perspectives of both experiment and theory, this review shows how ANNs can be effectively applied for catalysis prediction, the design of new catalysts, and the understanding of catalytic structures.", "title": "" }, { "docid": "90316f6b23e4feec08be1783fa61826c", "text": "Mouse visual cortex is subdivided into multiple distinct, hierarchically organized areas that are interconnected through feedforward (FF) and feedback (FB) pathways. The principal synaptic targets of FF and FB axons that reciprocally interconnect primary visual cortex (V1) with the higher lateromedial extrastriate area (LM) are pyramidal cells (Pyr) and parvalbumin (PV)-expressing GABAergic interneurons. Recordings in slices of mouse visual cortex have shown that layer 2/3 Pyr cells receive excitatory monosynaptic FF and FB inputs, which are opposed by disynaptic inhibition. Most notably, inhibition is stronger in the FF than FB pathway, suggesting pathway-specific organization of feedforward inhibition (FFI). To explore the hypothesis that this difference is due to diverse pathway-specific strengths of the inputs to PV neurons we have performed subcellular Channelrhodopsin-2-assisted circuit mapping in slices of mouse visual cortex. Whole-cell patch-clamp recordings were obtained from retrobead-labeled FF(V1→LM)- and FB(LM→V1)-projecting Pyr cells, as well as from tdTomato-expressing PV neurons. 
The results show that the FF(V1→LM) pathway provides on average 3.7-fold stronger depolarizing input to layer 2/3 inhibitory PV neurons than to neighboring excitatory Pyr cells. In the FB(LM→V1) pathway, depolarizing inputs to layer 2/3 PV neurons and Pyr cells were balanced. Balanced inputs were also found in the FF(V1→LM) pathway to layer 5 PV neurons and Pyr cells, whereas FB(LM→V1) inputs to layer 5 were biased toward Pyr cells. The findings indicate that FFI in FF(V1→LM) and FB(LM→V1) circuits are organized in a pathway- and lamina-specific fashion.", "title": "" }, { "docid": "43398874a34c7346f41ca7a18261e878", "text": "This article investigates transitions at the level of societal functions (e.g., transport, communication, housing). Societal functions are fulfilled by sociotechnical systems, which consist of a cluster of aligned elements, e.g., artifacts, knowledge, markets, regulation, cultural meaning, infrastructure, maintenance networks and supply networks. Transitions are conceptualised as system innovations, i.e., a change from one sociotechnical system to another. The article describes a co-evolutionary multi-level perspective to understand how system innovations come about through the interplay between technology and society. The article makes a new step as it further refines the multi-level perspective by distinguishing characteristic patterns: (a) two transition routes, (b) fit–stretch pattern, and (c) patterns in breakthrough. © 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "ae5497a11458851438d6cc86daec189a", "text": "Automated activity recognition enables a wide variety of applications related to child and elderly care, disease diagnosis and treatment, personal health or sports training, for which it is key to seamlessly determine and log the user’s motion. This work focuses on exploring the use of smartphones to perform activity recognition without interfering in the user’s lifestyle. Thus, we study how to build an activity recognition system to be continuously executed in a mobile device in background mode. The system relies on device’s sensing, processing and storing capabilities to estimate significant movements/postures (walking at different paces—slow, normal, rush, running, sitting, standing). In order to evaluate the combinations of sensors, features and algorithms, an activity dataset of 16 individuals has been gathered. The performance of a set of lightweight classifiers (Naïve Bayes, Decision Table and Decision Tree) working on different sensor data has been fully evaluated and optimized in terms of accuracy, computational cost and memory fingerprint. Results have pointed out that a priori information on the relative position of the mobile device with respect to the user’s body enhances the estimation accuracy. Results show that computational low-cost Decision Tables using the best set of features among mean and variance and considering all the sensors (acceleration, gravity, linear acceleration, magnetometer, gyroscope) may be enough to get an activity estimation accuracy of around 88 % (78 % is the accuracy of the Naïve Bayes algorithm with the same characteristics used as a baseline). To demonstrate its applicability, the activity recognition system has been used to enable a mobile application to promote active lifestyles.", "title": "" }, { "docid": "b8e193262d1a70ab3b28d45b480dc1ca", "text": "Artificial Neural networks have been part of an attempt to emulate the learning curve of the human nervous system. 
However, the vital difference persists: the nervous system is highly parallel, while computer processing units remain largely sequential. Here an attempt is made to bridge that gap with the help of Graphics Processing Units (GPUs), which are designed to be highly parallel. In particular, back-propagation networks, which use supervised learning, are considered. Back-propagation algorithms, with no data dependencies, are embarrassingly parallel, and hence only a totally parallel system can exploit them fully. However, it has also been observed that GPUs underperform when either significant overhead in calculations is incurred or the algorithm is not sufficiently parallel.", "title": "" }, { "docid": "facceffffe1ad0406b509b8d33f21c2e", "text": "Since the early 1960's, researchers have built a number of programming languages and environments with the intention of making programming accessible to a larger number of people. This article presents a taxonomy of languages and environments designed to make programming more accessible to novice programmers of all ages. The systems are organized by their primary goal, either to teach programming or to use programming to empower their users, and then, by each system's authors' approach, to making learning to program easier for novice programmers. The article explains all categories in the taxonomy, provides a brief description of the systems in each category, and suggests some avenues for future work in novice programming environments and languages.", "title": "" }, { "docid": "0bf150f6cd566c31ec840a57d8d2fa55", "text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of query-like programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.", "title": "" }, { "docid": "c4ab0af91f664aa6d7674f986608ab06", "text": "Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones. 
In this work, we extend this framework by (i) forcing the learned feature extractor to be domain-invariant, and (ii) training it through data augmentation in the feature space, namely performing feature augmentation. While data augmentation in the image space is a well established technique in deep learning, feature augmentation has not yet received the same level of attention. We accomplish it by means of a feature generator trained by playing the GAN minimax game against source features. Results show that both enforcing domain-invariance and performing feature augmentation lead to superior or comparable performance to state-of-the-art results in several unsupervised domain adaptation benchmarks.", "title": "" }, { "docid": "e3819f2537f1249e0ceb637f6f086c1f", "text": "Applying robots in narrow and cluttered disaster environments such as oil refineries requires a slim body and a wide range of motion. It is also necessary to have abilities to absorb unexpected contact with the environment and to walk on scattered debris. In this paper we propose new compact and high performance torque-controlled actuators for legged robots to satisfy the above mentioned requirements. For axial compactness, torque sensors are designed as ring-shaped thin cylinders surrounding motors or gears with strain gauges for sensing. To achieve broad bandwidth of torque control, we introduced an analog differentiator circuit into an analog digital converter (ADC) board in order to suppress noise in the differential control of joint torque. We also propose methods to reduce torque ripple caused by the deformation of the harmonic drive gear and electromagnetic interference (EMI) from a motor and a motor driver. Finally, experiments of a collision with objects and movement on scattered debris were executed with a fully torque-controlled legged robot built with the proposed actuators.", "title": "" }, { "docid": "024cbb734053b256fd7b20b1a757d780", "text": "The IETF is currently working on service differentiation in the Internet. However, in wireless environments where bandwidth is scarce and channel conditions are variable, IP differentiated services are suboptimal without lower layers’ support. In this paper we present three service differentiation schemes for IEEE 802.11. The first one is based on scaling the contention window according to the priority of each flow or user. The second one assigns different inter frame spacings to different users. Finally, the last one uses different maximum frame lengths for different users. We simulate and analyze the performance of each scheme with TCP and UDP flows. Keywords—QoS, DiffServ, TCP, UDP, CBR, Wireless communications.", "title": "" } ]
scidocsrr
21187186d9301299b2b94767a8f0bb34
PhraseRNN: Phrase Recursive Neural Network for Aspect-based Sentiment Analysis
[ { "docid": "8d29cf5303d9c94741a8d41ca6c71da9", "text": "Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "4ccff0d008f87fce72ed12c0cdeb4211", "text": "Title of dissertation: DEEP NEURAL NETWORKS AND REGRESSION MODELS FOR OBJECT DETECTION AND POSE ESTIMATION Kota Hara, Doctor of Philosophy, 2016 Dissertation directed by: Professor Rama Chellappa Department of Electrical and Computer Engineering Estimating the pose, orientation and the location of objects has been a central problem addressed by the computer vision community for decades. In this dissertation, we propose new approaches for these important problems using deep neural networks as well as tree-based regression models. For the first topic, we look at the human body pose estimation problem and propose a novel regression-based approach. The goal of human body pose estimation is to predict the locations of body joints, given an image of a person. Due to significant variations introduced by pose, clothing and body styles, it is extremely difficult to address this task by a standard application of the regression method. Thus, we address this task by dividing the whole body pose estimation problem into a set of local pose estimation problems by introducing a dependency graph which describes the dependency among different body joints. For each local pose estimation problem, we train a boosted regression tree model and estimate the pose by progressively applying the regression along the paths in a dependency graph starting from the root node. Our next work is on improving the traditional regression tree method and demonstrate its effectiveness for pose/orientation estimation tasks. The main issues of the traditional regression training are, 1) the node splitting is limited to binary splitting, 2) the form of the splitting function is limited to thresholding on a single dimension of the input vector and 3) the best splitting function is found by exhaustive search. We propose a novel node splitting algorithm for regression tree training which does not have the issues mentioned above. The algorithm proceeds by first applying k-means clustering in the output space, conducting multi-class classification by support vector machine (SVM) and determining the constant estimate at each leaf node. We apply the regression forest that includes our regression tree models to head pose estimation, car orientation estimation and pedestrian orientation estimation tasks and demonstrate its superiority over various standard regression methods. Next, we turn our attention to the role of pose information for the object detection task. In particular, we focus on the detection of fashion items a person is wearing or carrying. It is clear that the locations of these items are strongly correlated with the pose of the person. To address this task, we first generate a set of candidate bounding boxes by using an object proposal algorithm. For each candidate bounding box, image features are extracted by a deep convolutional neural network pre-trained on a large image dataset and the detection scores are generated by SVMs. We introduce a pose-dependent prior on the geometry of the bounding boxes and combine it with the SVM scores. We demonstrate that the proposed algorithm achieves significant improvement in the detection performance. Lastly, we address the object detection task by exploring a way to incorporate an attention mechanism into the detection algorithm. Humans have the capability of allocating multiple fixation points, each of which attends to different locations and scales of the scene. 
However, such a mechanism is missing in the current state-of-the-art object detection methods. Inspired by the human vision system, we propose a novel deep network architecture that imitates this attention mechanism. For detecting objects in an image, the network adaptively places a sequence of glimpses at different locations in the image. Evidences of the presence of an object and its location are extracted from these glimpses, which are then fused for estimating the object class and bounding box coordinates. Due to the lack of ground truth annotations for the visual attention mechanism, we train our network using a reinforcement learning algorithm. Experiment results on standard object detection benchmarks show that the proposed network consistently outperforms the baseline networks that do not employ the attention mechanism. DEEP NEURAL NETWORKS AND REGRESSION MODELS FOR OBJECT DETECTION AND POSE ESTIMATION", "title": "" }, { "docid": "c8ba40dd66f57f6d192a73be94440d07", "text": "PURPOSE\nWound infection after an ileostomy reversal is a common problem. To reduce wound-related complications, purse-string skin closure was introduced as an alternative to conventional linear skin closure. This study is designed to compare wound infection rates and operative outcomes between linear and purse-string skin closure after a loop ileostomy reversal.\n\n\nMETHODS\nBetween December 2002 and October 2010, a total of 48 consecutive patients undergoing a loop ileostomy reversal were enrolled. Outcomes were compared between linear skin closure (group L, n = 30) and purse string closure (group P, n = 18). The operative technique for linear skin closure consisted of an elliptical incision around the stoma, with mobilization, and anastomosis of the ileum. The rectus fascia was repaired with interrupted sutures. Skin closure was performed with vertical mattress interrupted sutures. Purse-string skin closure consisted of a circumstomal incision around the ileostomy using the same procedures as used for the ileum. Fascial closure was identical to linear closure, but the circumstomal skin incision was approximated using a purse-string subcuticular suture (2-0 Polysorb).\n\n\nRESULTS\nBetween group L and P, there were no differences of age, gender, body mass index, and American Society of Anesthesiologists (ASA) scores. Original indication for ileostomy was 23 cases of malignancy (76.7%) in group L, and 13 cases of malignancy (77.2%) in group P. The median time duration from ileostomy to reversal was 4.0 months (range, 0.6 to 55.7 months) in group L and 4.1 months (range, 2.2 to 43.9 months) in group P. The median operative time was 103 minutes (range, 45 to 260 minutes) in group L and 100 minutes (range, 30 to 185 minutes) in group P. The median hospital stay was 11 days (range, 5 to 4 days) in group L and 7 days (range, 4 to 14 days) in group P (P < 0.001). Wound infection was found in 5 cases (16.7%) in group L and in one case (5.6%) in group L (P = 0.26).\n\n\nCONCLUSION\nBased on this study, purse-string skin closure after a loop ileostomy reversal showed comparable outcomes, in terms of wound infection rates, to those of linear skin closure. Thus, purse-string skin closure could be a good alternative to the conventional linear closure.", "title": "" }, { "docid": "ca072e97f8a5486347040aeaa7909d60", "text": "Camera-based stereo-vision provides cost-efficient vision capabilities for robotic systems. 
The objective of this paper is to examine the performance of stereo-vision as means to enable a robotic inspection cell for haptic quality testing with the ability to detect relevant information related to the inspection task. This information comprises the location and 3D representation of a complex object under inspection as well as the location and type of quality features which are subject to the inspection task. Among the challenges is the low-distinctiveness of features in neighboring area, inconsistent lighting, similar colors as well as low intra-class variances impeding the retrieval of quality characteristics. The paper presents the general outline of the vision chain as well as performance analysis of various algorithms for relevant steps in the machine vision chain thus indicating the capabilities and drawbacks of a camera-based stereo-vision for flexible use in complex machine vision tasks.", "title": "" }, { "docid": "02bc71435bd53d8331e3ad2b30588c6d", "text": "Voting with cryptographic auditing, sometimes called open-audit voting, has remained, for the most part, a theoretical endeavor. In spite of dozens of fascinating protocols and recent ground-breaking advances in the field, there exist only a handful of specialized implementations that few people have experienced directly. As a result, the benefits of cryptographically audited elections have remained elusive. We present Helios, the first web-based, open-audit voting system. Helios is publicly accessible today: anyone can create and run an election, and any willing observer can audit the entire process. Helios is ideal for online software communities, local clubs, student government, and other environments where trustworthy, secretballot elections are required but coercion is not a serious concern. With Helios, we hope to expose many to the power of open-audit elections.", "title": "" }, { "docid": "af610e7f74aa6784442f8d8535132ade", "text": "This paper characterises the use of activity trackers as \"lived informatics\". This characterisation is contrasted with other discussions of personal informatics and the quantified self. The paper reports an interview study with activity tracker users. The study found: people do not logically organise, but interweave various activity trackers, sometimes with ostensibly the same functionality; that tracking is often social and collaborative rather than personal; that there are different styles of tracking, including goal driven tracking and documentary tracking; and that tracking information is often used and interpreted with reference to daily or short term goals and decision making. We suggest there will be difficulties in personal informatics if we ignore the way that personal tracking is enmeshed with everyday life and people's outlook on their future.", "title": "" }, { "docid": "174bce522f96f0206fb3aae6613cf821", "text": "Fake news and alternative facts have dominated the news cycle of late. In this paper, we present a prototype system that uses social argumentation to verify the validity of proposed alternative facts and help in the detection of fake news. We utilize fundamental argumentation ideas in a graph-theoretic framework that also incorporates semantic web and linked data principles. The argumentation structure is crowdsourced and mediated by expert moderators in a virtual community.", "title": "" }, { "docid": "c8527b75bef0c67a8efd60a91a9fcbde", "text": "These lecture notes were written for an M.A. 
level course in labor economics with focus on empirical identification strategies.", "title": "" }, { "docid": "3105a48f0b8e45857e8d48e26b258e04", "text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.", "title": "" }, { "docid": "dd144f12a70a37160007f2b7f04b4d77", "text": "This research examines the role of trait empathy in emotional contagion through non-social targets-art objects. Studies 1a and 1b showed that high- (compared to low-) empathy individuals are more likely to infer an artist's emotions based on the emotional valence of the artwork and, as a result, are more likely to experience the respective emotions themselves. Studies 2a and 2b experimentally manipulated artists' emotions via revealing details about their personal life. Study 3 experimentally induced positive vs. negative emotions in individuals who then wrote literary texts. These texts were shown to another sample of participants. High- (compared to low-) empathy participants were more likely to accurately identify and take on the emotions ostensibly (Studies 2a and 2b) or actually (Study 3) experienced by the \"artists\". High-empathy individuals' enhanced sensitivity to others' emotions is not restricted to social targets, such as faces, but extends to products of the human mind, such as objects of art.", "title": "" }, { "docid": "4244749f6f7fe48daf0c293f8dd439b2", "text": "With the rapid advancement in the field of networking and communications, no aspect of human life is untouched. Commercial activities are also affected by the new advancements. Traditional commercial activities are changed and modified with the passage of time. Firstly, the age of E-Commerce arrived, then M-Commerce. Now, after E-Commerce and M-Commerce, the age of Ultimate Commerce has arrived. Any time/ Always/ Anywhere service providing is the key to this ultimate or ubiquitous commerce. This paper studies the concept of ubiquitous computing and its adaption to commerce with new issues associated. Keywords: Ubiquitous computing, Ubiquitous commerce.", "title": "" }, { "docid": "49d2d46a16571524e94b22997d1b585c", "text": "In this paper, we discuss the development of the sprawling-type quadruped robot named “TITAN-XIII” and its dynamic walking algorithm. We develop an experimental quadruped robot especially designed for dynamic walking. Unlike dog-like robots, the prototype robot looks like a four-legged spider. As an experimental robot, we focus on the three basic concepts: lightweight, wide range of motion and ease of maintenance. 
To achieve these goals, we introduce a wire-driven mechanism using a synthetic fiber to transmit power to each axis. Making use of this wire-driven mechanism, we can locate the motors at the base of the leg, consequently reducing its inertia. Additionally, each part of the robot is unitized, and can be easily disassembled. As a dynamic walking algorithm, we proposed what we call “longitudinal acceleration trajectory”. This trajectory was applied to intermittent trot gait. The algorithm was tested with the developed robot, and its performance was confirmed through experiments.", "title": "" }, { "docid": "d56ff4b194c123b19a335e00b38ea761", "text": "As the automobile industry evolves, a number of in-vehicle communication protocols are developed for different in-vehicle applications. With the emerging new applications towards Internet of Things (IoT), a more integral solution is needed to enable the pervasiveness of intra- and inter-vehicle communications. In this survey, we first introduce different classifications of automobile applications with focus on their bandwidth and latency. Then we survey different in-vehicle communication bus protocols including both legacy protocols and emerging Ethernet. In addition, we highlight our contribution in the field to employ power line as the in-vehicle communication medium. We believe power line communication will play an important part in future automobiles, as it can potentially reduce the amount of wiring, simplify design and reduce cost. Based on these technologies, we also introduce some promising applications in future automobiles enabled by the development of the in-vehicle network. Finally, we will share our view on how the in-vehicle network can be merged into the future IoT.", "title": "" }, { "docid": "0bd3beaad8cd6d6f19603eca9320718d", "text": "For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought. Vercellis, Carlo. Business intelligence : data mining and optimization for decision making / Carlo Vercellis. p. cm. Includes bibliographical references and index.", "title": "" }, { "docid": "89ca22c24d3b6fc397e8098e62d8d4a7", "text": "This paper introduces the design and development of a novel pressure-sensitive foot insole for real-time monitoring of plantar pressure distribution during walking. The device consists of a flexible insole with 64 pressure-sensitive elements and an integrated electronic board for high-frequency data acquisition, pre-filtering, and wireless transmission to a remote data computing/storing unit. 
The pressure-sensitive technology is based on an optoelectronic technology developed at Scuola Superiore Sant'Anna. The insole is a low-cost and low-power battery-powered device. The design and development of the device is presented along with its experimental characterization and validation with healthy subjects performing a task of walking at different speeds, and benchmarked against an instrumented force platform.", "title": "" }, { "docid": "a9ab91d8da943340db4181ac55ea1cd1", "text": "An architecture for the scalable speed and multiple supply voltage range of 1.6 V to 3.6 V, low voltage to high voltage level shifter has been proposed. The buffer containing the level shifter is fabricated in 40 nm CMOS process by thin oxide (32 Å thick) devices whose stress limit is 1.98 V (max). The technique generates a set of dynamic differential bias signals as a function of input data sequence, output state of the level shifter and the supply voltage for a given process and temperature to ensure the reliable operation of the level shift stage. The measurement results confirmed successful operation at 40 Mbps with 10 pF load on IO pad, with multiple supplies, 1.8 V - 2.7 V - 3.6 V.", "title": "" }, { "docid": "12d564ad22b33ee38078f18a95ed670f", "text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER.", "title": "" }, { "docid": "2759e52ca38436b7f07bd64e6092884f", "text": "This paper proposes a method of eye-model-based gaze estimation by RGB-D camera, Kinect sensor. Different from other methods, our method sets up a model to calibrate the eyeball center by gazing at a target in 3D space, not predefined. And then by detecting the pupil center, we can estimate the gaze direction. To achieve this algorithm, we first build a head model relying on Kinect sensor, then obtaining the 3D information of pupil center. As we need to know the eyeball center position in head model, we do a calibration by designing a target to gaze. Because the ray from eyeball center to target and the ray from eyeball center to pupil center should meet a relationship, we can have an equation to solve the real eyeball center position. After calibration, we can have a gaze estimation automatically at any time. Our method allows free head motion and it only needs a simple device, finally it also can run automatically in real-time. 
Experiments show that our method performs well and still has a room for improvement.", "title": "" }, { "docid": "97ac42e91c4f9fa9e20e6b6e8d3f8421", "text": "Wrist worn wearable computing devices are ideally suited for presenting notifications through haptic stimuli as they are always in direct contact with the user's skin. While prior work has explored the feasibility of haptic notifications, we highlight a lack of empirical studies on thermal and pressure feedback in the context of wearable devices. This paper introduces prototypes for thermal and pressure (squeeze) feedback on the wrist. It then presents a study characterizing recognition performance with thermal and pressure cues against baseline performance with vibrations.", "title": "" }, { "docid": "51bdd73d559644d6cb3967f1cd157843", "text": "Evolving business models, computing paradigms, and management practices are rapidly re-shaping the usage models of ICT infrastructures, and demanding for more flexibility and dynamicity in enterprise security, beyond the traditional “security perimeter” approach. Since valuable ICT assets cannot be easily enclosed within a trusted physical sandbox any more, there is an increasing need for a new generation of pervasive and capillary cybersecurity paradigms over distributed and geographically-scattered systems. Following the generalized trend towards virtualization, automation, software-definition, and hardware/software disaggregation, in this paper we elaborate on a multi-tier architecture made of a common, programmable, and pervasive data-plane and a powerful set of multi-vendor detection and analysis algorithms. Our approach leverages the growing level of programmability of ICT infrastructures to create a common and unified framework that could be used to monitor and protect distributed heterogeneous environments, including legacy enterprise networks, IoT installations, and virtual resources deployed in the cloud.", "title": "" } ]
scidocsrr
d09ea9dfa72ab402d587c3e80b2879b7
A literature review on Software-Defined Networking (SDN) research topics, challenges and solutions
[ { "docid": "7b5d610a7e7ff3f889b77a9a012d1bd2", "text": "Our paper deals with the Software Defined Networking which is in extensive use in present times due to its programmability that helps in initializing, controlling and managing the network dynamics. It allows the network administrators to work on centralized network configuration and improve data center network efficiency. SDN is basically becoming popular for replacing the static architecture of traditional networks and limited computing and storage of the modern computing environments like data centers. Operations are performed by the controllers with the static switches. Due to imbalance caused due to dynamic traffic controllers are underutilized. On the other hand controllers which are overloaded may cause switches to suffer time delays. Wireless networks involve no cabling, therefore it is cost-effective, efficient, easy-installable, manageable and adaptable. We present how SDN makes it easy to achieve end point security by checking the device's status. Local agents collect device information and send to cloud service to check for vulnerabilities. The results of those checks are sent to the SDN Controller through published Application Program Interfaces (APIs). The SDN Controller instructs Open Flow switches to direct vulnerable devices to a Quarantine Network, thus detecting suspicious traffic. The implementation is done using the data network mathematical model.", "title": "" } ]
[ { "docid": "b35922663b4728c409528675be15d586", "text": "High-resolution screen printing of pristine graphene is introduced for the rapid fabrication of conductive lines on flexible substrates. Well-defined silicon stencils and viscosity-controlled inks facilitate the preparation of high-quality graphene patterns as narrow as 40 μm. This strategy provides an efficient method to produce highly flexible graphene electrodes for printed electronics.", "title": "" }, { "docid": "3b4fec89137f9d4690bff6470b285192", "text": "The poor contrast and the overlapping of cervical cell cytoplasm are the major issues in the accurate segmentation of cervical cell cytoplasm. This paper presents an automated unsupervised cytoplasm segmentation approach which can effectively find the cytoplasm boundaries in overlapping cells. The proposed approach first segments the cell clumps from the cervical smear image and detects the nuclei in each cell clump. A modified Otsu method with prior class probability is proposed for accurate segmentation of nuclei from the cell clumps. Using distance regularized level set evolution, the contour around each nucleus is evolved until it reaches the cytoplasm boundaries. Promising results were obtained by experimenting on ISBI 2015 challenge dataset.", "title": "" }, { "docid": "5025766e66589289ccc31e60ca363842", "text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.", "title": "" }, { "docid": "a86c79f52fc8399ab00430459d4f0737", "text": "Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and which are their possible biases that may impair their effectiveness. 
Many popular ranking algorithms (such as Google’s PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks.We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dec96ec8501dd055a2550d7b71d7d4b7", "text": "Traditionally, environmental monitoring is achieved by a small number of expensive and high precision sensing unities. Collected data are retrieved directly from the equipment at the end of the experiment and after the unit is recovered. The implementation of a wireless sensor network provides an alternative solution by deploying a larger number of disposable sensor nodes. Nodes are equipped with sensors with less precision, however, the network as a whole provides better spatial resolution of the area and the users can have access to the data immediately. This paper surveys a comprehensive review of the available solutions to support wireless sensor network environmental monitoring applications.", "title": "" }, { "docid": "f574d3880ce63f95d8f14a53c5176c7c", "text": "Human dance was investigated with positron emission tomography to identify its systems-level organization. Three core aspects of dance were examined: entrainment, meter and patterned movement. Amateur dancers performed small-scale, cyclically repeated tango steps on an inclined surface to the beat of tango music, without visual guidance. Entrainment of dance steps to music, compared to self-pacing of movement, was supported by anterior cerebellar vermis. Movement to a regular, metric rhythm, compared to movement to an irregular rhythm, implicated the right putamen in the voluntary control of metric motion. Spatial navigation of leg movement during dance, when controlling for muscle contraction, activated the medial superior parietal lobule, reflecting proprioceptive and somatosensory contributions to spatial cognition in dance. Finally, additional cortical, subcortical and cerebellar regions were active at the systems level. Consistent with recent work on simpler, rhythmic, motor-sensory behaviors, these data reveal the interacting network of brain areas active during spatially patterned, bipedal, rhythmic movements that are integrated in dance.", "title": "" }, { "docid": "95e3da5f05e2ec86cb4e3ce23da15de1", "text": "The aim of this paper is to show a methodology to perform the mechanical design of a 6-DOF lightweight manipulator for assembling bar structures using a rotary-wing UAV. The architecture of the aerial manipulator is based on a comprehensive performance analysis, a manipulability study of the different options and a previous evaluation of the required motorization. The manipulator design consists of a base attached to the UAV landing gear, a robotic arm that supports 6-DOF, and a gripper-style end effector specifically developed for grasping bars as a result of this study. 
An analytical expression of the manipulator kinematic model is obtained.", "title": "" }, { "docid": "22c54c73daa3b5f93930e8cea5cb2fa1", "text": "Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. This, however, cannot be easily obtained with current stereoscopic production pipelines. We provide a practical solution that takes a stereoscopic video as an input and converts it to multi-view and filtered video streams that can be used to drive multi-view autostereoscopic displays. The method combines a phase-based video magnification and an interperspective antialiasing into a single filtering process. The whole algorithm is simple and can be efficiently implemented on current GPUs to yield a near real-time performance. Furthermore, the ability to retarget disparity is naturally supported. Our method is robust and works well for challenging video scenes with defocus blur, motion blur, transparent materials, and specularities. We show that our results are superior when compared to the state-of-the-art depth-based rendering methods. Finally, we showcase the method in the context of a real-time 3D videoconferencing system that requires only two cameras.", "title": "" }, { "docid": "653a2299cd8bc5cfb48e660390632911", "text": "Recent studies indicate that several Toll-like receptors (TLRs) are implicated in recognizing viral structures and instigating immune responses against viral infections. The aim of this study is to examine the expression of TLRs and proinflammatory cytokines in viral skin diseases such as verruca vulgaris (VV) and molluscum contagiosum (MC). Reverse transcription-polymerase chain reaction and immunostaining of skin samples were performed to determine the expression of specific antiviral and proinflammatory cytokines as well as 5 TLRs (TLR2, 3, 4, 7, and 9). In normal human skin, TLR2, 4, and 7 mRNA was constitutively expressed, whereas little TLR3 and 9 mRNA was detected. Compared to normal skin (NS), TLR3 and 9 mRNA was clearly expressed in VV and MC specimens. Likewise, immunohistochemistry indicated that keratinocytes in NS constitutively expressed TLR2, 4, and 7; however, TLR3 was rarely detected and TLR9 was only weakly expressed, whereas 5 TLRs were all strongly expressed on the epidermal keratinocytes of VV and MC lesions. In addition, the mRNA expression of IFN-beta and TNF-alpha was upregulated in the VV and MC samples. Immunohistochemistry indicated that IFN-beta and TNF-alpha were predominantly localized in the granular layer in the VV lesions and adjacent to the MC bodies. Our results indicated that VV and MC skin lesions expressed TLR3 and 9 in addition to IFN-beta and TNF-alpha. These viral-induced proinflammatory cytokines may play a pivotal role in cutaneous innate immune responses.", "title": "" }, { "docid": "cbe70e9372d1588f075d2037164b3077", "text": "Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. 
This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.", "title": "" }, { "docid": "9a6de540169834992134eb02927d889d", "text": "In this paper we argue why it is necessary to associate linguistic information with ontologies and why more expressive models, beyond RDFS, OWL and SKOS, are needed to capture the relation between natural language constructs on the one hand and ontological entities on the other. We argue that in the light of tasks such as ontology-based information extraction, ontology learning and population from text and natural language generation from ontologies, currently available datamodels are not sufficient as they only allow to associate atomic terms without linguistic grounding or structure to ontology elements. Towards realizing a more expressive model for associating linguistic information to ontology elements, we base our work presented here on previously developed models (LingInfo, LexOnto, LMF) and present a new joint model for linguistic grounding of ontologies called LexInfo. LexInfo combines essential design aspects of LingInfo and LexOnto and builds on a sound model for representing computational lexica called LMF which has been recently approved as a standard under ISO.", "title": "" }, { "docid": "e740e5ff2989ce414836c422c45570a9", "text": "Many organizations desired to operate their businesses, works and services in a mobile (i.e. just in time and anywhere), dynamic, and knowledge-oriented fashion. Activities like e-learning, environmental learning, remote inspection, health-care, home security and safety mechanisms etc. requires a special infrastructure that might provide continuous, secured, reliable and mobile data with proper information/ knowledge management system in context to their confined environment and its users. An indefinite number of sensor networks for numerous healthcare applications has been designed and implemented but they all lacking extensibility, fault-tolerance, mobility, reliability and openness. Thus, an open, flexible and rearrangeable infrastructure is proposed for healthcare monitoring applications. Where physical sensors are virtualized as virtual sensors on cloud computing by this infrastructure and virtual sensors are provisioned automatically to end users whenever they required. In this paper we reviewed some approaches to hasten the service creations in field of healthcare and other applications with Cloud-Sensor architecture. This architecture provides services to end users without being worried about its implementation details. The architecture allows the service requesters to use the virtual sensors by themselves or they may create other new services by extending virtual sensors.", "title": "" }, { "docid": "e1ab21ad8cdc14a73eb4fc3e7d1df1bf", "text": "There is a growing need for in-memory database analytic services, especially in cloud settings. Concurrent query execution is common in such environments. A crucial deployment requirement is to employ a concurrent query execution scheduling framework that is flexible, precise, and adaptive to meet specified deployment goals. In addition, the framework must also aim to use all the underlying hardware resources effectively (for high performance and high cost efficiency). This paper focuses on the design and evaluation of such a scheduler framework. 
Our scheduler framework incorporates a design in which the scheduling policies are cleanly separated from the scheduling mechanisms, allowing the scheduler to support a variety of policies, such as fair and priority scheduling. The scheduler also contains a novel learning component to monitor and quickly adapt to changing resource requirements of concurrent queries. In addition, the scheduler easily incorporates a load controller to protect the system from thrashing in situations when resources are scarce/oversubscribed. We have implemented our scheduling framework in an in-memory database engine, and using this implementation we also demonstrate the effectiveness of our approach. Collectively, we present the design and implementation of a scheduling framework for in-memory database services on contemporary hardware in modern deployment settings.", "title": "" }, { "docid": "1d354f59b9659785bd1548c756611647", "text": "Phishing email is one of the major problems of today's Internet, resulting in financial losses for organizations and annoying individual users. Numerous approaches have been developed to filter phishing emails, yet the problem still lacks a complete solution. In this paper, we present a survey of the state of the art research on such attacks. This is the first comprehensive survey to discuss methods of protection against phishing email attacks in detail. We present an overview of the various techniques presently used to detect phishing email, at the different stages of attack, mostly focusing on machine-learning techniques. A comparative study and evaluation of these filtering methods is carried out. This provides an understanding of the problem, its current solution space, and the future research directions anticipated.", "title": "" }, { "docid": "cac9a8490e3d33b49a08fe14684bf256", "text": "A hand-held probe combining high-resolution reflectance confocal microscopy (RCM) and optical coherence tomography (OCT) within the same optical path was developed and preliminarily tested for assessing skin burns. The preliminary results show that these two optical technologies have complementary capabilities that can help the clinician to more rapidly identify the dermal-epidermal junction, determine the integrity of the epidermal layer, and determine tissue perfusion status.", "title": "" }, { "docid": "fc167904e713a2b4c48fd50b7efa5332", "text": "Correlated topic modeling has been limited to small model and problem sizes due to their high computational cost and poor scaling. In this paper, we propose a new model which learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing previous cubic or quadratic time complexity to linear w.r.t the topic size. We further speedup variational inference with a fast sampler to exploit sparsity of topic occurrence. 
Extensive experiments show that our approach is capable of handling model and data scales which are several orders of magnitude larger than existing correlation results, without sacrificing modeling quality by providing competitive or superior performance in document classification and retrieval.", "title": "" }, { "docid": "f4535d47191caaa1e830e5d8fae6e1ba", "text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards -100% sensitivity at the cost of high FP levels (-40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.", "title": "" }, { "docid": "bc3c1d59e062199eef0d27562b8278ce", "text": "A 45° linearly polarized hollow-waveguide 16 × 16- slot array antenna is proposed for point-to-point wireless communication in the 71-76 GHz and 81-86 GHz bands. The antenna is composed of an equally-split corporate-feed circuit and 2 × 2-element sub-arrays which radiate the 45 ° linear polarization. Low sidelobe characteristics are obtained in the E-plane by diagonal placement of the square antenna rotating by 45 degrees. To suppress cross polarization, a radiating narrow-slot pair is adopted. The sub-array is designed by a genetic algorithm and 25.7% reflection bandwidth for is obtained by decreasing the Q of two eigenmodes of the radiating element. A 16 × 16-element array is fabricated by diffusion bonding of thin copper plates. The total thickness is 3.2 mm which is less than 1 free space wavelength. A high gain of 32.9 dBi, high antenna efficiency of 86.6% and low sidelobe characteristics of -27.1-dB first sidelobe levels are measured at the center frequency of 78.5 GHz. Further, the broadband characteristics of reflection , show a gain of more than 31.4 dBi, and low cross polarization of less than -30 dB are achieved over the 71-86 GHz band.", "title": "" }, { "docid": "02bd18358ac5cb5539a99d4c2babd2ea", "text": "This tutorial provides an overview of the key research results in the area of entity resolution that are relevant to addressing the new challenges in entity resolution posed by the Web of data, in which real world entities are described by interlinked data rather than documents. 
Since such descriptions are usually partial, overlapping and sometimes evolving, entity resolution emerges as a central problem both to increase dataset linking and to search the Web of data for entities and their relations.", "title": "" } ]
scidocsrr
410ef8bfcdc5d9a8ce4b22c8a8ccc622
Biosorption: a solution to pollution?
[ { "docid": "e6d6d3b41a0914036a77d5d151d745a8", "text": "Only within the past decade has the potential of metal biosorption by biomass materials been well established. For economic reasons, of particular interest are abundant biomass types generated as a waste byproduct of large-scale industrial fermentations or certain metal-binding algae found in large quantities in the sea. These biomass types serve as a basis for newly developed metal biosorption processes foreseen particularly as a very competitive means for the detoxification of metal-bearing industrial effluents. The assessment of the metal-binding capacity of some new biosorbents is discussed. Lead and cadmium, for instance, have been effectively removed from very dilute solutions by the dried biomass of some ubiquitous species of brown marine algae such as Ascophyllum and Sargassum, which accumulate more than 30% of biomass dry weight in the metal. Mycelia of the industrial steroid-transforming fungi Rhizopus and Absidia are excellent biosorbents for lead, cadmium, copper, zinc, and uranium and also bind other heavy metals up to 25% of the biomass dry weight. Biosorption isotherm curves, derived from equilibrium batch sorption experiments, are used in the evaluation of metal uptake by different biosorbents. Further studies are focusing on the assessment of biosorbent performance in dynamic continuous-flow sorption systems. In the course of this work, new methodologies are being developed that are aimed at mathematical modeling of biosorption systems and their effective optimization. Elucidation of mechanisms active in metal biosorption is essential for successful exploitation of the phenomenon and for regeneration of biosorbent materials in multiple reuse cycles. The complex nature of biosorbent materials makes this task particularly challenging. Discussion focuses on the composition of marine algae polysaccharide structures, which seem instrumental in metal uptake and binding. The state of the art in the field of biosorption is reviewed in this article, with many references to recent reviews and key individual contributions.", "title": "" } ]
[ { "docid": "226fdcdd185b2686e11732998dca31a2", "text": "Blockchain has received much attention in recent years. This immense popularity has raised a number of concerns, scalability of blockchain systems being a common one. In this paper, we seek to understand how Ethereum, a well-established blockchain system, would respond to sharding. Sharding is a prevalent technique to increase the scalability of distributed systems. To understand how sharding would affect Ethereum, we model Ethereum blockchain as a graph and evaluate five methods to partition the graph. We assess methods using three metrics: the balance among shards, the number of transactions that would involve multiple shards, and the amount of data that would be relocated across shards upon repartitioning of the graph.", "title": "" }, { "docid": "e9940668ce12749d7b6ee82ea1e1e2e4", "text": "Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into “task-specific” and “robot-specific” modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel approach to train modular neural networks, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.", "title": "" }, { "docid": "31118ada9270facdc97465bfb28a3571", "text": "Transimpedance amplifiers using voltage feedback operational amplifiers are widely used for current to voltage conversion in applications when a moderatehigh bandwidth and a high sensitivity are required, such as photodiodes, photomultipliers, electron multipliers and capacitive sensors. The conventional circuit presents a virtual earth to the input and at low frequencies, the input capacitance is usually not a significant concem. However, at high frequencies and especially for high sensitivity circuits, the total input capacitance can severely limit the available bandwidth from the circuit [1,6]. The input capacitance in effect constitutes part of the feedback network of the op-amp and hence reduces the available loop gain at high frequencies. In some cases a high input capacitance can cause the circuit to have a lightly damped or unstable dynamic response. 
Lag compensation by simply adding feedback capacitance is generally used to guarantee stability, however this approach does not permit the full gain-bandwidth characteristic of the op-amp to be fully exploited.", "title": "" }, { "docid": "40e73596d477cf9282e9142785c71066", "text": "The broaden-and-build theory of positive emotions predicts that positive emotions broaden the scopes of attention and cognition, and, by consequence, initiate upward spirals toward increasing emotional well-being. The present study assessed this prediction by testing whether positive affect and broad-minded coping reciprocally and prospectively predict one another. One hundred thirty-eight college students completed self-report measures of affect and coping at two assessment periods 5 weeks apart. As hypothesized, regression analyses showed that initial positive affect, but not negative affect, predicted improved broad-minded coping, and initial broad-minded coping predicted increased positive affect, but not reductions in negative affect. Further mediational analyses showed that positive affect and broad-minded coping serially enhanced one another. These findings provide prospective evidence to support the prediction that positive emotions initiate upward spirals toward enhanced emotional wellbeing. Implications for clinical practice and health promotion are discussed.", "title": "" }, { "docid": "c7b92058dd9aee5217725a55ca1b56ff", "text": "For the autonomous navigation of mobile robots, robust and fast visual localization is a challenging task. Although some end-to-end deep neural networks for 6-DoF Visual Odometry (VO) have been reported with promising results, they are still unable to solve the drift problem in long-range navigation. In this paper, we propose the deep global-relative networks (DGRNets), which is a novel global and relative fusion framework based on Recurrent Convolutional Neural Networks (RCNNs). It is designed to jointly estimate global pose and relative localization from consecutive monocular images. DGRNets include feature extraction sub-networks for discriminative feature selection, RCNNs-type relative pose estimation subnetworks for smoothing the VO trajectory and RCNNs-type global pose regression sub-networks for avoiding the accumulation of pose errors. We also propose two loss functions: the first one consists of Cross Transformation Constraints (CTC) that utilize geometric consistency of the adjacent frames to train a more accurate relative sub-networks, and the second one is composed of CTC and Mean Square Error (MSE) between the predicted pose and ground truth used to train the end-to-end DGRNets. The competitive experiments on indoor Microsoft 7-Scenes and outdoor KITTI dataset show that our DGRNets outperform other learning-based monocular VO methods in terms of pose accuracy.", "title": "" }, { "docid": "3a032dc19fc6dc19a2d0cde0ec3fa248", "text": "PURPOSE\nVideo match analysis is used for the assessment of physical performances of professional soccer players, particularly for the identification of \"high intensities\" considered as \"high running speeds.\" However, accelerations are also essential elements setting metabolic loads, even when speed is low. 
We propose a more detailed assessment of soccer players' metabolic demands by video match analysis with the aim of also taking into account accelerations.\n\n\nMETHODS\nA recent study showed that accelerated running on a flat terrain is equivalent to running uphill at constant speed, the incline being dictated by the acceleration. Because the energy cost of running uphill is known, this makes it possible to estimate the instantaneous energy cost of accelerated running, the corresponding instantaneous metabolic power, and the overall energy expenditure, provided that the speed (and acceleration) is known. Furthermore, the introduction of individual parameters makes it possible to customize performance profiles, especially as it concerns energy expenditure derived from anaerobic sources. Data from 399 \"Serie-A\" players (mean +/- SD; age = 27 +/- 4 yr, mass = 75.8 +/- 5.0 kg, stature = 1.80 +/- 0.06 m) were collected during the 2007-2008 season.\n\n\nRESULTS\nMean match distance was 10,950 +/- 1044 m, and average energy expenditure was 61.12 +/- 6.57 kJ x kg(-1). Total distance covered at high power (>20 W x kg(-1)) amounted to 26% and corresponding energy expenditure to approximately 42% of the total. \"High intensities\" expressed as high-power output are two to three times larger than those based only on running speed.\n\n\nCONCLUSIONS\nThe present approach for the assessment of top-level soccer players match performance through video analysis allowed us to assess instantaneous metabolic power, thus redefining the concept of \"high intensity\" on the basis of actual metabolic power rather than on speed alone.", "title": "" }, { "docid": "36162ebd7d7c5418e4c78bad5bbba8ab", "text": "In this paper we discuss the design of human-robot interaction focussing especially on social robot communication and multimodal information presentation. As a starting point we use the WikiTalk application, an open-domain conversational system which has been previously developed using a robotics simulator. We describe how it can be implemented on the Nao robot platform, enabling Nao to make informative spoken contributions on a wide range of topics during conversation. Spoken interaction is further combined with gesturing in order to support Nao’s presentation by natural multimodal capabilities, and to enhance and explore natural communication between human users and robots.", "title": "" }, { "docid": "aa234355d0b0493e1d8c7a04e7020781", "text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. 
This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.", "title": "" }, { "docid": "0860b29f52d403a0ff728a3e356ec071", "text": "Neuroanatomy has entered a new era, culminating in the search for the connectome, otherwise known as the brain's wiring diagram. While this approach has led to landmark discoveries in neuroscience, potential neurosurgical applications and collaborations have been lagging. In this article, the authors describe the ideas and concepts behind the connectome and its analysis with graph theory. Following this they then describe how to form a connectome using resting state functional MRI data as an example. Next they highlight selected insights into healthy brain function that have been derived from connectome analysis and illustrate how studies into normal development, cognitive function, and the effects of synthetic lesioning can be relevant to neurosurgery. Finally, they provide a précis of early applications of the connectome and related techniques to traumatic brain injury, functional neurosurgery, and neurooncology.", "title": "" }, { "docid": "b0d11ab83aa6ae18d1a2be7c8e8803b5", "text": "Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response-as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic--strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.", "title": "" }, { "docid": "8a3d56fe9db0cde24b68ee796dd0ad42", "text": "Yes, they do. This paper provides the first empirical demonstration that deep convolutional models really need to be both deep and convolutional, even when trained with methods such as distillation that allow small or shallow models of high accuracy to be trained. Although previous research showed that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using the same number of parameters as the deep models they mimic, in this paper we demonstrate that the same methods cannot be used to train accurate models on CIFAR-10 unless the student models contain multiple layers of convolution. 
Although the student models do not have to be as deep as the teacher model they mimic, the students need multiple convolutional layers to learn functions of comparable accuracy as the deep convolutional teacher.", "title": "" }, { "docid": "ff40eca4b4a27573e102b40c9f70aea4", "text": "This paper is concerned with the question of how to online combine an ensemble of active learners so as to expedite the learning progress during a pool-based active learning session. We develop a powerful active learning master algorithm, based a known competitive algorithm for the multi-armed bandit problem and a novel semi-supervised performance evaluation statistic. Taking an ensemble containing two of the best known active learning algorithms and a new algorithm, the resulting new active learning master algorithm is empirically shown to consistently perform almost as well as and sometimes outperform the best algorithm in the ensemble on a range of classification problems.", "title": "" }, { "docid": "01dc6744b32251a80adad50dac21b1de", "text": "Recommender systems have been widely studied in the literature as they have real world impacts in many E-commerce platforms and social networks. Most previous systems are based on the user-item recommendation matrix, which contains users’ history recommendation activities on items. In this paper, we propose a novel predictive collaborative filtering approach that exploits both the partially observed user-item recommendation matrix and the item-based side information to produce top-N recommender systems. The proposed approach automatically identifies the most interesting items for each user from his or her non-recommended item pool by aggregating over his or her recommended items via a low-rank coefficient matrix. Moreover, it also simultaneously builds linear regression models from the item-based side information such as item reviews to predict the item recommendation scores for the users. The proposed approach is formulated as a rank constrained joint minimization problem with integrated least squares losses, for which an efficient analytical solution can be derived. To evaluate the proposed learning technique, empirical evaluations on five recommendation tasks are conducted. The experimental results demonstrate the efficacy of the proposed approach comparing to the competing methods.", "title": "" }, { "docid": "956df6118923176f5826f3b1fa0ff5b0", "text": "Over the past decade, Korean popular culture has spread infectiously throughout the world. The term, “Korean wave,” has been used to describe this rising popularity of Korean popular culture. The Korean wave exploded in the media across the world generating a ripple effect. The Korean government took full advantage of this national phenomenon and began aiding Korean media industries in exporting Korean pop culture. This global expansion has contributed to enhancing South Korea’s national image and its economy and has been seen as a tool for public diplomacy. This paper analyzed the Korean wave and its implications for cultural influence on neighboring countries. Furthermore, this study explored how national identity impacts framing processes related to media coverage and public response.", "title": "" }, { "docid": "9ca12c5f314d077093753dc0f3ff9cd5", "text": "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. 
We show that FiLM layers are highly effective for visual reasoning — answering image-related questions which require a multi-step, high-level process — a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-theart error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.", "title": "" }, { "docid": "3519172a7bf6d4183484c613dcc65b0a", "text": "There has been minimal attention paid in the literature to the aesthetics of the perioral area, either in youth or in senescence. Aging around the lips traditionally was thought to result from a combination of thinning skin surrounding the area, ptosis, and loss of volume in the lips. The atrophy of senescence was treated by adding volume to the lips and filling the deep nasolabial creases. There is now a growing appreciation for the role of volume enhancement in the perioral region and the sunken midface, as well as for dentition, in the resting and dynamic appearance of the perioral area (particularly in youth). In this article, the authors describe the senior author's (BG) preferred methods for aesthetic enhancement of the perioral region and his rejuvenative techniques developed over the past 28 years. The article describes the etiologies behind the dysmorphologies in this area and presents a problem-oriented algorithm for treating them.", "title": "" }, { "docid": "f29a57d3d39c665cdd3ea364c80cd5e4", "text": "Low-rank representation (LRR) has received considerable attention in subspace segmentation due to its effectiveness in exploring low-dimensional subspace structures embedded in data. To preserve the intrinsic geometrical structure of data, a graph regularizer has been introduced into LRR framework for learning the locality and similarity information within data. However, it is often the case that not only the high-dimensional data reside on a non-linear low-dimensional manifold in the ambient space, but also their features lie on a manifold in feature space. In this paper, we propose a dual graph regularized LRR model (DGLRR) by enforcing preservation of geometric information in both the ambient space and the feature space. The proposed method aims for simultaneously considering the geometric structures of the data manifold and the feature manifold. Furthermore, we extend the DGLRR model to include non-negative constraint, leading to a parts-based representation of data. Experiments are conducted on several image data sets to demonstrate that the proposed method outperforms the state-of-the-art approaches in image clustering.", "title": "" }, { "docid": "5ffb3e630e5f020365e471e94d678cbb", "text": "This paper presents one perspective on recent developments related to software engineering in the industrial automation sector that spans from manufacturing factory automation to process control systems and energy automation systems. The survey's methodology is based on the classic SWEBOK reference document that comprehensively defines the taxonomy of software engineering domain. This is mixed with classic automation artefacts, such as the set of the most influential international standards and dominating industrial practices. 
The survey focuses mainly on research publications which are believed to be representative of advanced industrial practices as well.", "title": "" }, { "docid": "3c530cf20819fe98a1fb2d1ab44dd705", "text": "This paper presents a novel representation for three-dimensional objects in terms of affine-invariant image patches and their spatial relationships. Multi-view constraints associated with groups of patches are combined with a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true three-dimensional affine and Euclidean models from multiple images and their recognition in a single photograph taken from an arbitrary viewpoint. The proposed approach does not require a separate segmentation stage and is applicable to cluttered scenes. Preliminary modeling and recognition results are presented.", "title": "" }, { "docid": "9ffb4220530a4758ea6272edf6e7e531", "text": "Process mining allows analysts to exploit logs of historical executions of business processes to extract insights regarding the actual performance of these processes. One of the most widely studied process mining operations is automated process discovery. An automated process discovery method takes as input an event log, and produces as output a business process model that captures the control-flow relations between tasks that are observed in or implied by the event log. Various automated process discovery methods have been proposed in the past two decades, striking different tradeoffs between scalability, accuracy, and complexity of the resulting models. However, these methods have been evaluated in an ad-hoc manner, employing different datasets, experimental setups, evaluation measures, and baselines, often leading to incomparable conclusions and sometimes unreproducible results due to the use of closed datasets. This article provides a systematic review and comparative evaluation of automated process discovery methods, using an open-source benchmark and covering 12 publicly-available real-life event logs, 12 proprietary real-life event logs, and nine quality metrics. The results highlight gaps and unexplored tradeoffs in the field, including the lack of scalability of some methods and a strong divergence in their performance with respect to the different quality metrics used.", "title": "" } ]
scidocsrr
b33ae5fe69c0d79b5b31b0ad19d33c46
Warped Convolutions: Efficient Invariance to Spatial Transformations
[ { "docid": "59e3e0099e215000b34e32d90b0bd650", "text": "We present a method for learning discriminative filters using a shallow Convolutional Neural Network (CNN). We encode rotation invariance directly in the model by tying the weights of groups of filters to several rotated versions of the canonical filter in the group. These filters can be used to extract rotation invariant features well-suited for image classification. We test this learning procedure on a texture classification benchmark, where the orientations of the training images differ from those of the test images. We obtain results comparable to the state-of-the-art. Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing by an order of magnitude the number of parameters to be learned.", "title": "" } ]
[ { "docid": "bc262b5366f1bf14e5120f68df8f5254", "text": "BACKGROUND\nThe aim of this study was to compare the results of laparoscopy-assisted total gastrectomy with those of open total gastrectomy for early gastric cancer.\n\n\nMETHODS\nPatients with gastric cancer who underwent total gastrectomy with curative intent in three Korean tertiary hospitals between January 2003 and December 2010 were included in this multicentre, retrospective, propensity score-matched cohort study. Cox proportional hazards regression models were used to evaluate the association between operation method and survival.\n\n\nRESULTS\nA total of 753 patients with early gastric cancer were included in the study. There were no significant differences in the matched cohort for overall survival (hazard ratio (HR) for laparoscopy-assisted versus open total gastrectomy 0.96, 95 per cent c.i. 0.57 to 1.65) or recurrence-free survival (HR 2.20, 0.51 to 9.52). The patterns of recurrence were no different between the two groups. The severity of complications, according to the Clavien-Dindo classification, was similar in both groups. The most common complications were anastomosis-related in the laparoscopy-assisted group (8.0 per cent versus 4.2 per cent in the open group; P = 0.015) and wound-related in the open group (1.6 versus 5.6 per cent respectively; P = 0.003). Postoperative death was more common in the laparoscopy-assisted group (1.6 versus 0.2 per cent; P = 0.045).\n\n\nCONCLUSION\nLaparoscopy-assisted total gastrectomy for early gastric cancer is feasible in terms of long-term results, including survival and recurrence. However, a higher postoperative mortality rate and an increased risk of anastomotic leakage after laparoscopic-assisted total gastrectomy are of concern.", "title": "" }, { "docid": "8910a81438e6487da3856ea6b43dcc0e", "text": "This paper describes a computer architecture, Spatial Computation (SC), which is based on the translation of high-level language programs directly into hardware structures. SC program implementations are completely distributed, with no centralized control. SC circuits are optimized for wires at the expense of computation units.In this paper we investigate a particular implementation of SC: ASH (Application-Specific Hardware). Under the assumption that computation is cheaper than communication, ASH replicates computation units to simplify interconnect, building a system which uses very simple, completely dedicated communication channels. As a consequence, communication on the datapath never requires arbitration; the only arbitration required is for accessing memory. ASH relies on very simple hardware primitives, using no associative structures, no multiported register files, no scheduling logic, no broadcast, and no clocks. As a consequence, ASH hardware is fast and extremely power efficient.In this work we demonstrate three features of ASH: (1) that such architectures can be built by automatic compilation of C programs; (2) that distributed computation is in some respects fundamentally different from monolithic superscalar processors; and (3) that ASIC implementations of ASH use three orders of magnitude less energy compared to high-end superscalar processors, while being on average only 33% slower in performance (3.5x worst-case).", "title": "" }, { "docid": "57991cdfd00786c929d1a909ba22cbee", "text": "This system description explains how to use several bilingual dictionaries and aligned corpora in order to create translation candidates for novel language pairs. 
It proposes (1) a graph-based approach which does not depend on cyclical translations and (2) a combination of this method with a collocation-based model using the multilingually aligned Europarl corpus.", "title": "" }, { "docid": "beddbd22bbeb636d8e5aeb56c1863d9a", "text": "In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects. Following a historical introduction, and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both of recent as well as of future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion, and a forward-looking conclusion. I. INTRODUCTION: HISTORICAL OVERVIEW While the first modern-day industrial robot, Unimate, began work on the General Motors assembly line in 1961, and was conceived in 1954 by George Devol [1], [2], the concept of a robot has a very long history, starting in mythology and folklore, and the first mechanical predecessors (automata) having been constructed in Ancient Times. For example, in Greek mythology, the God Hephaestus is reputed to have made mechanical servants from gold ([3] in p.114, and [4] verse 18.419). Furthermore, a rich tradition of designing and building mechanical, pneumatic or hydraulic automata also exists: from the automata of Ancient Egyptian temples, to the mechanical pigeon of the Pythagorean Archytas of Tarantum circa 400BC [5], to the accounts of earlier automata found in the Lie Zi text in China in 300BC [6], to the devices of Heron of Alexandria [7] in the 1st century. The Islamic world also plays an important role in the development of automata; Al-Jazari, an Arab inventor, designed and constructed numerous automatic machines, and is even reputed to have devised the first programmable humanoid robot in 1206AD [8]. The word “robot”, a Slavic word meaning servitude, was first used in this context by the Czech author Karel Capek in 1921 [9]. However, regarding robots with natural-language conversational abilities, it wasn't until the 1990's that the first pioneering systems started to appear. Despite the long history of mythology and automata, and the fact that even the mythological handmaidens of Hephaestus were reputed to have been given a voice [3], and despite the fact that the first general-purpose electronic speech synthesizer was developed by Noriko Omeda in Japan in 1968 [10], it wasn't until the early 1990's that conversational robots such as MAIA [11], RHINO [12], and AESOP [13] appeared. These robots cover a range of intended application domains; for example, MAIA was intended to carry objects and deliver them, while RHINO is a museum guide robot, and AESOP a surgical robot. In more detail, the early systems include Polly, a robotic guide that could give tours in offices [14], [15]. Polly had very simple interaction capacities; it could perceive human feet waving a “tour wanted” signal, and then it would just use pre-determined phrases during the tour itself. A slightly more advanced system was TJ [16]. TJ could verbally respond to simple commands, such as “go left”, albeit through a keyboard. RHINO, on the other hand [12], could respond to tour-start commands, but then, again, just offered a pre-programmed tour with fixed programmer-defined verbal descriptions. 

Regarding mobile assistant robots with conversational capabilities in the 1990s, a classic system is MAIA [11], [17], obeying simple commands, and carrying objects around places, as well as the mobile office assistant which could not only deliver parcels but guide visitors described in [18], and the similar in functionality Japanese-language robot Jijo-2 [19], [20], [21]. Finally, an important book from the period is [22], which is characteristic of the traditional natural-language semantics-inspired theoretical approaches to the problem of human-robot communication, and also of the great gap between the theoretical proposals and the actual implemented systems of this early decade. What is common to all the above early systems is that they share a number of limitations. First, all of them only accept a fixed and small number of simple canned commands, and they respond with a set of canned answers. Second, the only speech acts (in the sense of Searle [23]) that they can handle are requests. Third, the dialogue they support is clearly not flexibly mixed initiative; in most cases it is just human-initiative. Four, they don't really support situated language, i.e. language about their physical situations and events that are happening around them; except for a fixed number of canned location names in a few cases. Five, they are not able to handle affective speech; i.e. emotion-carrying prosody is neither recognized nor generated. Six, their non-verbal communication [24] capabilities are almost non-existent; for example, gestures, gait, facial expressions, and head nods are neither recognized nor produced. And seventh, their dialogue systems are usually effectively stimulus-response or stimulus-state-response systems; i.e. no real speech planning or purposeful dialogue generation is taking place, and certainly not in conjunction with the motor planning subsystems of the robot. Last but quite importantly, no real learning, off-line or on-the-fly, is taking place in these systems; verbal behaviors have to be prescribed. All of these shortcomings of the early systems of the 1990s effectively have become desiderata for the next two decades of research: the 2000s and 2010s, which we are in at the moment. Thus, in this paper, we will start by providing a discussion giving motivation to the need for existence of interactive robots with natural human-robot communication capabilities, and then we will enlist a number of desiderata for such systems, which have also effectively become areas of active research in the last decade. Then, we will examine these desiderata one by one, and discuss the research that has taken place towards their fulfillment. Special consideration will be given to the so-called “symbol grounding problem” [25], which is central to most endeavors towards natural language communication with physically embodied agents, such as robots. Finally, after a discussion of the most important open problems for the future, we will provide a concise conclusion. II. MOTIVATION: INTERACTIVE ROBOTS WITH NATURAL LANGUAGE CAPABILITIES BUT WHY? There are at least two avenues towards answering this fundamental question, and both will be attempted here. The first avenue will attempt to start from first principles and derive a rationale towards equipping robots with natural language. The second, more traditional and safe avenue, will start from a concrete, yet partially transient, base: application domains existing or potential. 

In more detail: Traditionally, there used to be clear separation between design and deployment phases for robots. Application-specific robots (for example, manufacturing robots, such as [26]) were: (a) designed by expert designers, (b) possibly tailor programmed and occasionally reprogrammed by specialist engineers at their installation site, and (c) interacted with their environment as well as with specialized operators during actual operation. However, the phenomenal simplicity but also the accompanying inflexibility and cost of this traditional setting is often changing nowadays. For example, one might want to have broader-domain and less application-specific robots, necessitating more generic designs, as well as less effort by the programmer-engineers on site, in order to cover the various contexts of operation. Even better, one might want to rely less on specialized operators, and to have robots interact and collaborate with non-expert humans with little if any prior training. Ideally, even the actual traditional programming and re-programming might also be transferred over to non-expert humans; and instead of programming in a technical language, to be replaced by intuitive tuition by demonstration, imitation and explanation [27], [28], [29]. Learning by demonstration and imitation for robots already has quite some active research; but most examples only cover motor and aspects of learning, and language and communication is not involved deeply. And this is exactly where natural language and other forms of fluid and natural human-robot communication enter the picture: Unspecialized non-expert humans are used to (and quite good at) teaching and interacting with other humans through a mixture of natural language as well as nonverbal signs. Thus, it makes sense to capitalize on this existing ability of non-expert humans by building robots that do not require humans to adapt to them in a special way, and which can fluidly collaborate with other humans, interacting with them and being taught by them in a natural manner, almost as if they were other humans themselves. Thus, based on the above observations, the following is one classic line of motivation towards justifying efforts for equipping robots with natural language capabilities: Why not build robots that can comprehend and generate human-like interactive behaviors, so that they can cooperate with and be taught by non-expert humans, so that they can be applied in a wide range of contexts with ease? And of course, as natural language plays a very important role within these behaviors, why not build robots that can fluidly converse with humans in natural language, also supporting crucial non-verbal communication aspects, in order to maximize communication effectiveness, and enable their quick and effective application? Thus, having presented the classical line of reasoning arriving towards the utility of equipping robots with natural language capabilities, and having discussed a space of possibilities regarding role assignment between human and robot, let us now move to the second, more concrete, albeit less general avenue towards justifying conversational robots: namely, specific applications, existing or potential. 

Such applications, where natural human-robot interaction capabilities with verbal and non-verbal aspects would be desirable", "title": "" }, { "docid": "d310779b1006f90719a0ece3cf2583b2", "text": "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model’s decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.", "title": "" }, { "docid": "5efac2dbb6d407ab5a30758e8c493a3f", "text": "Big Data are becoming a new technology focus both in science and in industry and motivate technology shift to data centric architecture and operational models. There is a vital need to define the basic information/semantic models, architecture components and operational models that together comprise a so-called Big Data Ecosystem. This paper discusses a nature of Big Data that may originate from different scientific, industry and social activity domains and proposes improved Big Data definition that includes the following parts: Big Data properties (also called Big Data 5V: Volume, Velocity, Variety, Value and Veracity), data models and structures, data analytics, infrastructure and security. The paper discusses paradigm change from traditional host or service based to data centric architecture and operational models in Big Data. The Big Data Architecture Framework (BDAF) is proposed to address all aspects of the Big Data Ecosystem and includes the following components: Big Data Infrastructure, Big Data Analytics, Data structures and models, Big Data Lifecycle Management, Big Data Security. The paper analyses requirements to and provides suggestions how the mentioned above components can address the main Big Data challenges. The presented work intends to provide a consolidated view of the Big Data phenomena and related challenges to modern technologies, and initiate wide discussion.", "title": "" }, { "docid": "7f929d87c65a551577868d5b24ac4d6c", "text": "This paper is concerned with providing radio access network (RAN) elements (supply) for flash crowd traffic demands. The concept of multi-tier cells [heterogeneous networks (HetNets)] has been introduced in 5G network proposals to alleviate the erratic supply–demand mismatch. However, since the locations of the RAN elements are determined mainly based on the long-term traffic behavior in 5G networks, even the HetNet architecture will have difficulty in coping up with the cell overload induced by flash crowd traffic. In this paper, we propose a proactive drone-cell deployment framework to alleviate overload conditions caused by flash crowd traffic in 5G networks. First, a hybrid distribution and three kinds of flash crowd traffic are developed in this framework. 

Second, we propose a prediction scheme and an operation control scheme to solve the deployment problem of drone cells according to the information collected from the sensor network. Third, the software-defined networking technology is employed to seamlessly integrate and disintegrate drone cells by reconfiguring the network. Our experimental results have shown that the proposed framework can effectively address the overload caused by flash crowd traffic.", "title": "" }, { "docid": "bb9dbc6b86f45787c03a146cdcfdf5c4", "text": "AIM\nThe purpose of this study was to analyze tooth loss after root fractures and to assess the influence of the type of healing and the location of the root fracture. Furthermore, the actual cause of tooth loss was analyzed.\n\n\nMATERIAL AND METHODS\nLong-term survival rates were calculated using data from 492 root-fractured teeth in 432 patients. The cause of tooth loss was assessed as being the result of either pulp necrosis (including endodontic failures), new traumas or excessive mobility. The statistics used were Kaplan-Meier and the log rank method.\n\n\nRESULTS AND CONCLUSIONS\nThe location of the root fracture had a strong significant effect on tooth survival (P = 0.0001). The 10-year tooth survival of apical root fractures was 89% [95% confidence interval (CI), 78-99%], of mid-root fractures 78% (CI, 64-92%), of cervical-mid-root fractures 67% (CI, 50-85%), and of cervical fractures 33% (CI, 17-49%). The fracture-healing type offered further prognostic information. No tooth loss was observed in teeth with hard tissue fracture healing regardless of the position of the fracture. For teeth with interposition of connective tissue, the location of the fracture had a significant influence on tooth loss (P = 0.0001). For teeth with connective tissue healing, the estimated 8-year survival of apical, mid-root, and cervical-mid-root fractures were all more than 80%, whereas the estimated 8-year survival of cervical fractures was 25% (CI, 7-43%). For teeth with non-healing with interposition of granulation tissue, the location of the fracture showed a significant influence on tooth loss (P = 0.0001). The cause of tooth loss was found to be very dependent upon the location of the fracture. In conclusion, the long-term tooth survival of root fractures was strongly influenced by the type of healing and the location of the fracture.", "title": "" }, { "docid": "4468a8d7f01c1b3e6adcf316bdc34f81", "text": "Hyper-connected and digitized governments are increasingly advancing a vision of data-driven government as producers and consumers of big data in the big data ecosystem. Despite the growing interests in the potential power of big data, we found paucity of empirical research on big data use in government. This paper explores organizational capability challenges in transforming government through big data use. Using systematic literature review approach we developed initial framework for examining impacts of socio-political, strategic change, analytical, and technical capability challenges in enhancing public policy and service through big data. We then applied the framework to conduct case study research on two large-size city governments’ big data use. The findings indicate the framework’s usefulness, shedding new insights into the unique government context. 
Consequently, the framework was revised by adding big data public policy, political leadership structure, and organizational culture to further explain impacts of organizational capability challenges in transforming government.", "title": "" }, { "docid": "309a5105be37cbbae67619eac6874f12", "text": "PURPOSE\nTo conduct a systematic review of prospective studies assessing the association of vitamin D intake or blood levels of 25-hydroxyvitamin D [25(OH)D] with the risk of colorectal cancer using meta-analysis.\n\n\nMETHODS\nRelevant studies were identified by a search of MEDLINE and EMBASE databases before October 2010 with no restrictions. We included prospective studies that reported relative risk (RR) estimates with 95% CIs for the association between vitamin D intake or blood 25(OH)D levels and the risk of colorectal, colon, or rectal cancer. Approximately 1,000,000 participants from several countries were included in this analysis.\n\n\nRESULTS\nNine studies on vitamin D intake and nine studies on blood 25(OH)D levels were included in the meta-analysis. The pooled RRs of colorectal cancer for the highest versus lowest categories of vitamin D intake and blood 25(OH)D levels were 0.88 (95% CI, 0.80 to 0.96) and 0.67 (95% CI, 0.54 to 0.80), respectively. There was no heterogeneity among studies of vitamin D intake (P = .19) or among studies of blood 25(OH)D levels (P = .96). A 10 ng/mL increment in blood 25(OH)D level conferred an RR of 0.74 (95% CI, 0.63 to 0.89).\n\n\nCONCLUSION\nVitamin D intake and blood 25(OH)D levels were inversely associated with the risk of colorectal cancer in this meta-analysis.", "title": "" }, { "docid": "b3eefd1fa34f0eb02541b598881396f9", "text": "We present a complete scalable system for 6 d.o.f. camera tracking based on natural features. Crucially, the calculation is based only on pre-captured reference images and previous estimates of the camera pose and is hence suitable for online applications. We match natural features in the current frame to two spatially separated reference images. We overcome the wide baseline matching problem by matching to the previous frame and transferring point positions to the reference images. We then minimize deviations from the two-view and three-view constraints between the reference images and the current frame as a function of the camera position parameters. We stabilize this calculation using a recursive form of temporal regularization that is similar in spirit to the Kalman filter. We can track camera pose over hundreds of frames and realistically integrate virtual objects with only slight jitter.", "title": "" }, { "docid": "6278090a0206b812a31f5eb60f6d9381", "text": "The “Mozart effect” reported by Rauscher, Shaw, and Ky (1993, 1995) indicates that spatial-temporal abilities are enhanced after listening to music composed by Mozart. We replicated and extended the effect in Experiment 1: Performance on a spatial-temporal task was better after participants listened to a piece composed by Mozart or by Schubert than after they sat in silence. In Experiment 2, the advantage for the music condition disappeared when the control condition consisted of a narrated story instead of silence. Rather, performance was a function of listeners’preference (music or story), with better performance following the preferred condition. Claims that exposure to music composed by Mozart improves spatial-temporal abilities (Rauscher, Shaw, & Ky, 1993, 1995) have received widespread attention in the news media. 
Based on these findings, Georgia Governor Zell Miller recently budgeted for a compact disc or cassette for each infant born in state. Reports published in Science(Holden, 1994), the APA Monitor(Martin, 1994), and the popular press indicate that scientists and the general public are giving serious consideration to the possibility that music listening and music lessons improve other abilities. If these types of associations can be confirmed, the implications would be considerable. For example, listening to music could improve the performance of pilots and structural engineers. Such associations would also provide evidence against contemporary theories of modularity (Fodor, 1983) and multiple intelligences (Gardner, 1993), which argue for independence of functioning across domains. Although facilitation in spatial-temporal performance following exposure to music (Rauscher et al., 1993, 1995) is temporary (10 to 15 min), long-term improvements in spatial-temporal reasoning as a consequence of music lessons have also been reported (Gardiner, Fox, Knowles, & Jeffrey, 1996; Rauscher et al., 1997). Unfortunately, the media have not been careful to distinguish these disparate findings. The purpose of the present study was to provide a more complete explanation of the short-term phenomenon. Rauscher and her colleagues have proposed that the so-called Mozart effect can be explained by the trion model (Leng & Shaw, 1991), which posits that exposure to complex musical compositions excites cortical firing patterns similar to those used in spatial-temporal reasoning, so that performance on spatial-temporal tasks is positively affected by exposure to music. On the surface, the Mozart effect is similar to robust psychological phenomena such as transfer or priming. For example, the effect could be considered an instance of positive, nonspecific transfer across domains and modalities (i.e., music listening and visual-spatial performance) that do not have a well-documented association. Transfer is said to occur when knowledge or skill acquired in one situation influences performance in another (Postman, 1971). In the case of the Mozart effect, however, passive listening to music—rather than overt learning—influences spatial-temporal performance. The Mozart effect also bears similarities to associative priming effects and spreading activation (Collins & Loftus, 1975). But priming effects tend to disappear when the prime and the target have few features in common (Klimesch, 1994, pp. 163–165), and cross-modal priming effects are typically weak (Roediger & McDermott, 1993). Moreover, it is far from obvious which features are shared by stimuli as diverse as a Mozart sonata and a spatial-temporal task. In short, the Mozart effect described by Rauscher et al. (1993, 1995) is difficult to situate in a context of known cognitive phenomena. Stough, Kerkin, Bates, and Mangan (1994) failed to replicate the findings of Rauscher et al., although their use of Raven’s Advanced Progressive Matrices rather than spatial tasks from the Stanford-Binet Intelligence Scale (Rauscher et al., 1993, 1995) to assess spatial abilities may account for the discrepancies. Whereas tasks measuring spatial recognition (such as the Raven’s test) require a search for physical similarities among visually presented stimuli, spatial-temporal tasks (e.g., the Paper Folding and Cutting, PF&C, subtest of the StanfordBinet; mental rotation tasks; jigsaw puzzles) require mental transformation of the stimuli (Rauscher & Shaw, 1998). 
In their review of previous successes and failures at replicating the Mozart effect, Rauscher and Shaw (1998) concluded that the effect is obtainable only with spatial-temporal tasks. Our goal in Experiment 1 was to replicate and extend the basic findings of Rauscher et al. (1993, 1995). A completely computercontrolled procedure was used to test adults’ performance on a PF&C task immediately after they listened to music or sat in silence. Half of the participants listened to Mozart during the music condition; the other half listened to Schubert. The purpose of Experiment 2 was to test the hypothesis that the Mozart effect is actually a consequence of participants’ preference for one testing condition over another, the assumption being that better performance would follow the preferred condition. Control conditions in Rauscher et al. (1993) included a period of silence or listening to a relaxation tape, both of which might have been less interesting or arousing than listening to a Mozart sonata. Consequently, if the participants in that study preferred the Mozart condition, this factor might account for the differential performance on the spatial-temporal task that followed. In a subsequent experiment (Rauscher et al., 1995), comparison conditions involved silence or a combination of minimalist music (Philip Glass), a taped short story, and repetitive dance music. Minimalist and repetitive music might also induce boredom or low levels of arousal, much like silence, and the design precluded direct comparison of the short-story and music conditions. Indeed, in all other instances in which the Mozart effect has been successfully replicated (see Rauscher & Shaw, 1998), control conditions consisted of sitting in silence or listening to relaxation tapes or repetitive music. In Experiment 2, our control condition involved simply listening to a short story. Address correspondence to E. Glenn Schellenberg, Department of Psychology, University of Toronto at Mississauga, Mississauga, Ontario, Canada L5L 1C6; e-mail: g.schellenberg@utoronto.ca. PSYCHOLOGICAL SCIENCE Kristin M. Nantais and E. Glenn Schellenberg", "title": "" }, { "docid": "95e89119b672d76a9cd5ada7b2ae7362", "text": "The aim of this project is to optimize an Arithmetic Logical Unit with BIST capability. Arithmetic Logical Unit is used in many processing and computing devices, due to rapid development of technology the faster arithmetic and logical unit which consume less power and area required.Due to the increasing integration complexities of IC‘s the Optimized Arithmetic Logical Unit implement sometimes may mal-function, so testing capability must be provide and this is accomplished by Built-In-Self-Test (BIST).So this project has been done with the help of Verilog Hardware Description Language, Simulated by Xilinx10.1 Software and is synthesized by cadence tool. After synthesis Area and power are reduced by 31% and 42% respectively. KeywordsOptimized ALU, Ripple Carry Adder, Vedic Multiplier, Built-In-Self-Test.", "title": "" }, { "docid": "279d6de6ed6ade25d5ac0ff3d1ecde49", "text": "This paper explores the relationship between TV viewership ratings for Scandinavian's most popular talk show, Skavlan and public opinions expressed on its Facebook page. The research aim is to examine whether the activity on social media affects the number of viewers per episode of Skavlan, how the viewers are affected by discussions on the Talk Show, and whether this creates debate on social media afterwards. 
By analyzing TV viewer ratings of Skavlan talk show, Facebook activity and text classification of Facebook posts and comments with respect to type of emotions and brand sentiment, this paper identifes patterns in the users' real-world and digital world behaviour.", "title": "" }, { "docid": "2f3e10724dca50927bd1a39cfd1f45e5", "text": "Many recommendation systems suggest items to users by utilizing the techniques of collaborative filtering (CF) based on historical records of items that the users have viewed, purchased, or rated. Two major problems that most CF approaches have to resolve are scalability and sparseness of the user profiles. In this paper, we describe Alternating-Least-Squares with Weighted-λ-Regularization (ALS-WR), a parallel algorithm that we designed for the Netflix Prize, a large-scale collaborative filtering challenge. We use parallel Matlab on a Linux cluster as the experimental platform. We show empirically that the performance of ALS-WR monotonically increases with both the number of features and the number of ALS iterations. Our ALS-WR applied to the Netflix dataset with 1000 hidden features obtained a RMSE score of 0.8985, which is one of the best results based on a pure method. Combined with the parallel version of other known methods, we achieved a performance improvement of 5.91% over Netflix’s own CineMatch recommendation system. Our method is simple and scales well to very large datasets.", "title": "" }, { "docid": "08b5bff9f96619083c16607090311345", "text": "This demo presents a prototype mobile app that provides out-of-the-box personalised content recommendations to its users by leveraging and combining the user's location, their Facebook and/or Twitter feed and their in-app actions to automatically infer their interests. We build individual models for each user and each location. At retrieval time we construct the user's personalised feed by mixing different sources of content-based recommendations with content directly from their Facebook/Twitter feeds, locally trending articles and content propagated through their in-app social network. Both explicit and implicit feedback signals from the users' interactions with their recommendations are used to update their interests models and to learn their preferences over the different content sources.", "title": "" }, { "docid": "bba4d637cf40e81ea89e61e875d3c425", "text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. 
Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.", "title": "" }, { "docid": "8aeead40ab3112b0ef69c77c73885d46", "text": "We provide a new understanding of the fundamental nature of adversarially robust classifiers and how they differ from standard models. In particular, we show that there provably exists a trade-off between the standard accuracy of a model and its robustness to adversarial perturbations. We demonstrate an intriguing phenomenon at the root of this tension: a certain dichotomy between “robust” and “non-robust” features. We show that while robustness comes at a price, it also has some surprising benefits. Robust models turn out to have interpretable gradients and feature representations that align unusually well with salient data characteristics. In fact, they yield striking feature interpolations that have thus far been possible to obtain only using generative models such as GANs.", "title": "" }, { "docid": "7a77d8d381ec543033626be54119358a", "text": "The advent of continuous glucose monitoring (CGM) is a significant stride forward in our ability to better understand the glycemic status of our patients. Current clinical practice employs two forms of CGM: professional (retrospective or \"masked\") and personal (real-time) to evaluate and/or monitor glycemic control. Most studies using professional and personal CGM have been done in those with type 1 diabetes (T1D). However, this technology is agnostic to the type of diabetes and can also be used in those with type 2 diabetes (T2D). The value of professional CGM in T2D for physicians, patients, and researchers is derived from its ability to: (1) to discover previously unknown hyper- and hypoglycemia (silent and symptomatic); (2) measure glycemic control directly rather than through the surrogate metric of hemoglobin A1C (HbA1C) permitting the observation of a wide variety of metrics that include glycemic variability, the percent of time within, below and above target glucose levels, the severity of hypo- and hyperglycemia throughout the day and night; (3) provide actionable information for healthcare providers derived by the CGM report; (4) better manage patients on hemodialysis; and (5) effectively and efficiently analyze glycemic effects of new interventions whether they be pharmaceuticals (duration of action, pharmacodynamics, safety, and efficacy), devices, or psycho-educational. Personal CGM has also been successfully used in a small number of studies as a behavior modification tool in those with T2D. This comprehensive review describes the differences between professional and personal CGM and the evidence for the use of each form of CGM in T2D. Finally, the opinions of key professional societies on the use of CGM in T2D are presented.", "title": "" } ]
scidocsrr
b72babe4bd9f883b21d78ed3b85770e2
FedX: Optimization Techniques for Federated Query Processing on Linked Data
[ { "docid": "a9b159f9048c1dadb941e1462ba5826f", "text": "Distributed data processing is becoming a reality. Businesses want to do it for many reasons, and they often must do it in order to stay competitive. While much of the infrastructure for distributed data processing is already there (e.g., modern network technology), a number of issues make distributed data processing still a complex undertaking: (1) distributed systems can become very large, involving thousands of heterogeneous sites including PCs and mainframe server machines; (2) the state of a distributed system changes rapidly because the load of sites varies over time and new sites are added to the system; (3) legacy systems need to be integrated—such legacy systems usually have not been designed for distributed data processing and now need to interact with other (modern) systems in a distributed environment. This paper presents the state of the art of query processing for distributed database and information systems. The paper presents the “textbook” architecture for distributed query processing and a series of techniques that are particularly useful for distributed database systems. These techniques include special join techniques, techniques to exploit intraquery paralleli sm, techniques to reduce communication costs, and techniques to exploit caching and replication of data. Furthermore, the paper discusses different kinds of distributed systems such as client-server, middleware (multitier), and heterogeneous database systems, and shows how query processing works in these systems.", "title": "" }, { "docid": "9de44948e28892190f461199a1d33935", "text": "As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples. 1 RDF in centralized relational databases The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiencly evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes: 1. 
Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics, 2. vertically partitioned tables that maintain one table for each property, and 3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented. In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how are RDF triples mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. In addition to these purely relational solutions, a number of specialized RDF systems has been proposed that built on nonrelational technologies, we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually done in an additional layer on top. We will explain especially the different storage variants with the running example from Figure 1, some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph. Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1. <Katja,teaches,Databases> <Katja,works_for,MPI Informatics> <Katja,PhD_from,TU Ilmenau> <Martin,teaches,Databases> <Martin,works_for,MPI Informatics> <Martin,PhD_from,Saarland University> <Ralf,teaches,Information Retrieval> <Ralf,PhD_from,Saarland University> <Ralf,works_for,Saarland University> <Saarland University,located_in,Germany> <MPI Informatics,located_in,Germany> Fig. 1. Running example for RDF data", "title": "" } ]
[ { "docid": "ca990b1b43ca024366a2fe73e2a21dae", "text": "Guanabenz (2,6-dichlorobenzylidene-amino-guanidine) is a centrally acting antihypertensive drug whose mechanism of action is via alpha2 adrenoceptors or, more likely, imidazoline receptors. Guanabenz is marketed as an antihypertensive agent in human medicine (Wytensin tablets, Wyeth Pharmaceuticals). Guanabenz has reportedly been administered to racing horses and is classified by the Association of Racing Commissioners International as a class 3 foreign substance. As such, its identification in a postrace sample may result in significant sanctions against the trainer of the horse. The present study examined liquid chromatographic/tandem quadrupole mass spectrometric (LC-MS/MS) detection of guanabenz in serum samples from horses treated with guanabenz by rapid i.v. injection at 0.04 and 0.2 mg/kg. Using a method adapted from previous work with clenbuterol, the parent compound was detected in serum with an apparent limit of detection of approximately 0.03 ng/ml and the limit of quantitation was 0.2 ng/ml. Serum concentrations of guanabenz peaked at approximately 100 ng/ml after the 0.2 mg/kg dose, and the parent compound was detected for up to 8 hours after the 0.04 mg/kg dose. Urine samples tested after administration of guanabenz at these dosages yielded evidence of at least one glucuronide metabolite, with the glucuronide ring apparently linked to a ring hydroxyl group or a guanidinium hydroxylamine. The LC-MS/MS results presented here form the basis of a confirmatory test for guanabenz in racing horses.", "title": "" }, { "docid": "4cf05216efd9f075024d4a3e63cdd511", "text": "BACKGROUND\nSecondary failure of oral hypoglycemic agents is common in patients with type 2 diabetes mellitus (T2DM); thus, patients often need insulin therapy. The most common complication of insulin treatment is lipohypertrophy (LH).\n\n\nOBJECTIVES\nThis study was conducted to estimate the prevalence of LH among insulin-treated patients with Patients with T2DM, to identify the risk factors for the development of LH, and to examine the association between LH and glycemic control.\n\n\nPATIENTS AND METHODS\nA total of 1090 patients with T2DM aged 20 to 89 years, who attended the diabetes clinics at the National Center for Diabetes, Endocrinology, and Genetics (NCDEG, Amman, Jordan) between October 2011 and January 2012, were enrolled. The presence of LH was examined by inspection and palpation of insulin injection sites at the time of the visit as relevant clinical and laboratory data were obtained. The LH was defined as a local tumor-like swelling of subcutaneous fatty tissue at the site of repeated insulin injections.\n\n\nRESULTS\nThe overall prevalence of LH was 37.3% (27.4% grade 1, 9.7% grade 2, and 0.2% grade 3). The LH was significantly associated with the duration of diabetes, needle length, duration of insulin therapy, lack of systematic rotation of insulin injection sites, and poor glycemic control.\n\n\nCONCLUSIONS\nThe LH is a common problem in insulin-treated Jordanian patients with T2DM. More efforts are needed to educate patients and health workers on simple interventions such as using shorter needles and frequent rotation of the insulin injection sites to avoid LH and improve glycemic control.", "title": "" }, { "docid": "d1cde8ce9934723224ecf21c3cab6615", "text": "Deep Neural Networks (DNNs) denote multilayer artificial neural networks with more than one hidden layer and millions of free parameters. 
We propose a Generalized Discriminant Analysis (GerDA) based on DNNs to learn discriminative features of low dimension optimized with respect to a fast classification from a large set of acoustic features for emotion recognition. On nine frequently used emotional speech corpora, we compare the performance of GerDA features and their subsequent linear classification with previously reported benchmarks obtained using the same set of acoustic features classified by Support Vector Machines (SVMs). Our results impressively show that low-dimensional GerDA features capture hidden information from the acoustic features leading to a significantly raised unweighted average recall and considerably raised weighted average recall.", "title": "" }, { "docid": "4627d8e86bec798979962847523cc7e0", "text": "Consuming news over online media has witnessed rapid growth in recent years, especially with the increasing popularity of social media. However, the ease and speed with which users can access and share information online facilitated the dissemination of false or unverified information. One way of assessing the credibility of online news stories is by examining the attached images. These images could be fake, manipulated or not belonging to the context of the accompanying news story. Previous attempts to news verification provided the user with a set of related images for manual inspection. In this work, we present a semi-automatic approach to assist news-consumers in instantaneously assessing the credibility of information in hypertext news articles by means of meta-data and feature analysis of images in the articles. In the first phase, we use a hybrid approach including image and text clustering techniques for checking the authenticity of an image. In the second phase, we use a hierarchical feature analysis technique for checking the alteration in an image, where different sets of features, such as edges and SURF, are used. In contrast to recently reported manual news verification, our presented work shows a quantitative measurement on a custom dataset. Results revealed an accuracy of 72.7% for checking the authenticity of attached images with a dataset of 55 articles. Finding alterations in images resulted in an accuracy of 88% for a dataset of 50 images.", "title": "" }, { "docid": "54368ada8cc316af20995b5096764bd1", "text": "Effective managing and sharing of knowledge has the power to improve individual’s lives and society. However, research has shown that people are reluctant to share. Knowledge sharing (KS) involve not only our knowledge, but a process of giving and receiving of knowledge with others. Knowledge sharing capabilities (KSC) is an individual’s capability to share experience, expertise and know-how with other employees in the organization. Previous studies identified many factors affecting KSC either in public or private sectors. Upon a critical review on factors affecting KS and factors affecting KSC, this paper attempts to examine the factors that have been cited as significant in influencing employees KSC within Electronic Government (EG) agencies in Malaysia. Two capable factors that are considered in this study are technical factor and non-technical factor. 
This paper proposes an integrated conceptual framework of employees KSC which can be used for research enhancement.", "title": "" }, { "docid": "e55b84112fdb179faa8affbf9fed8c72", "text": "A polynomial threshold function (PTF) of degree <i>d</i> is a boolean function of the form <i>f</i>=<i>sgn</i>(<i>p</i>), where <i>p</i> is a degree-<i>d</i> polynomial, and <i>sgn</i> is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function <i>g</i> is called δ-close to constant if, for some <i>v</i>∈{1,−1}, we have <i>g</i>(<i>x</i>)=<i>v</i> for all but at most δ fraction of inputs. We show for every PTF <i>f</i> of degree <i>d</i>≥ 1, and parameters 0<δ, <i>r</i>≤ 1/16, that \n<table class=\"display dcenter\"><tr style=\"vertical-align:middle\"><td class=\"dcell\"><i>Pr</i><sub>ρ∼ <i>R</i><sub><i>r</i></sub></sub> [<i>f</i><sub>ρ</sub> is not  δ -close to constant] ≤ </td><td class=\"dcell\">√</td><td class=\"dcell\"><table style=\"border:0;border-spacing:1;border-collapse:separate;\" class=\"cellpadding0\"><tr><td class=\"hbar\"></td></tr><tr><td style=\"text-align:center;white-space:nowrap\" ><i>r</i></td></tr></table></td><td class=\"dcell\">· (log<i>r</i><sup>−1</sup> · logδ<sup>−1</sup>)<sup><i>O</i>(<i>d</i><sup>2</sup>)</sup>,  </td></tr></table> where ρ∼ <i>R</i><sub><i>r</i></sub> is a random restriction leaving each variable, independently, free with probability <i>r</i>, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random <em>block</em> restrictions: given an arbitrary partitioning of input variables into <i>m</i> blocks, a random block restriction picks a uniformly random block ℓ∈ [<i>m</i>] and assigns 1 or −1, uniformly at random, to all variable outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF <i>f</i> of degree <i>d</i> becomes δ-close to constant when hit with a random block restriction, except with probability at most <i>m</i><sup>−1/2</sup> · (log<i>m</i>· logδ<sup>−1</sup>)<sup><i>O</i>(<i>d</i><sup>2</sup>)</sup>. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1≤ <i>d</i>≪ √log<i>n</i>/loglog<i>n</i>, generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (<em>STOC</em>, 2016) and Chen, Santhanam, and Srinivasan (<em>CCC</em>, 2016). In particular, we show that there is an <i>n</i>-variate boolean function <i>F</i><sub><i>n</i></sub> ∈ <i>P</i> such that every depth-2 circuit with PTF gates of degree <i>d</i>≥ 1 that computes <i>F</i><sub><i>n</i></sub> must have at least (<i>n</i><sup>3/2+1/<i>d</i></sup>)· (log<i>n</i>)<sup>−<i>O</i>(<i>d</i><sup>2</sup>)</sup> wires. For constant depths greater than 2, we also show average-case lower bounds for such circuits with super-linear number of wires. These are the first super-linear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-<i>d</i> PTFs due to Kane (<em>Computational Complexity</em>, 2014), and the Littlewood-Offord type anticoncentration bound for degree-<i>d</i> multilinear polynomials due to Meka, Nguyen, and Vu (<em>Theory of Computing</em>, 2016). 
Finally, we give <em>derandomized</em> versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (<em>SICOMP</em>, 2013).", "title": "" }, { "docid": "83c81ecb870e84d4e8ab490da6caeae2", "text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.", "title": "" }, { "docid": "4c542a4b5a948a037a4c49bce238d04a", "text": "Agar-based nanocomposite films with different types of nanoclays, such as Cloisite Na+, Cloisite 30B, and Cloisite 20A, were prepared using a solvent casting method, and their tensile, water vapor barrier, and antimicrobial properties were tested. Tensile strength (TS), elongation at break (E), and water vapor permeability (WVP) of control agar film were 29.7±1.7 MPa, 45.3±9.6%, and (2.22±0.19)×10(-9) g·m/m2·s·Pa, respectively. All the film properties tested, including transmittance, tensile properties, WVP, and X-ray diffraction patterns, indicated that Cloisite Na+ was the most compatible with agar matrix. TS of the nanocomposite films prepared with 5% Cloisite Na+ increased by 18%, while WVP of the nanocomposite films decreased by 24% through nanoclay compounding. Among the agar/clay nanocomposite films tested, only agar/Cloisite 30B nanocomposite film showed a bacteriostatic function against Listeria monocytogenes.", "title": "" }, { "docid": "13abacabef42365ac61be64597698f78", "text": "Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation. As it can be edited by anyone, entries frequently get vandalized, leading to the possibility that it might spread of falsified information if such posts are not detected. The WSDM 2017 Wiki Vandalism Detection Challenge requires us to solve this problem by computing a vandalism score denoting the likelihood that a revision corresponds to an act of vandalism and performance is measured using the ROC-AUC obtained on a held-out test set. This paper provides the details of our submission that obtained an ROC-AUC score of 0.91976 in the final evaluation.", "title": "" }, { "docid": "94fc516df0c0a5f0ebaf671befe10982", "text": "In this paper, an 8th-order cavity filter with two symmetrical transmission zeros in stopband is designedwith the method of generalized Chebyshev synthesis so as to satisfy the IMT-Advanced system demands. To shorten the development cycle of the filter from two or three days to several hours, a co-simulation with Ansoft HFSS and Designer is presented. The effectiveness of the co-simulation method is validated by the excellent consistency between the simulation and the experiment results.", "title": "" }, { "docid": "c3e46c3317d81b2d8b8c53f7e5cd37b9", "text": "A novel rainfall prediction method has been proposed. In the present work rainfall prediction in Southern part of West Bengal (India) has been conducted. 
A two-step method has been employed. Greedy forward selection algorithm is used to reduce the feature set and to find the most promising features for rainfall prediction. First, in the training phase the data is clustered by applying k-means algorithm, then for each cluster a separate Neural Network (NN) is trained. The proposed two step prediction model (Hybrid Neural Network or HNN) has been compared with MLP-FFN classifier in terms of several statistical performance measuring metrics. The data for experimental purpose is collected by Dumdum meteorological station (West Bengal, India) over the period from 1989 to 1995. The experimental results have suggested a reasonable improvement over traditional methods in predicting rainfall. The proposed HNN model outperformed the compared models by achieving 84.26% accuracy without feature selection and 89.54% accuracy with feature selection.", "title": "" }, { "docid": "e84174b539588b969f7d2230063b30c4", "text": "STUDY DESIGN\nThis was a biomechanical push-out testing study using a porcine model.\n\n\nOBJECTIVE\nThe purpose was to evaluate the strength of implant-bone interface of a porous titanium scaffold by comparing it to polyetheretherketone (PEEK) and allograft.\n\n\nSUMMARY OF BACKGROUND DATA\nOsseointegration is important for achieving maximal stability of spinal fusion implants and it is desirable to achieve as quickly as possible. Common PEEK interbody fusion implants appear to have limited osseointegration potential because of the formation of fibrous tissue along the implant-bone interface. Porous, three-dimensional titanium materials may be an option to enhance osseointegration.\n\n\nMETHODS\nUsing the skulls of two swine, in the region of the os frontale, 16 identical holes (4 mm diameter) were drilled to 10 mm depth in each skull. Porous titanium, PEEK, and allograft pins were press fit into the holes. After 5 weeks, animals were euthanized and the skull sections with the implants were cut into sections with each pin centered within a section. Push-out testing was performed using an MTS machine with a push rate of 6 mm/min. Load-deformation curves were used to compute the extrinsic material properties of the bone samples. Maximum force (N) and shear strength (MPa) were extracted from the output to record the bonding strength between the implant and surrounding bone. When calculating shear strength, maximum force was normalized by the actual implant surface area in contact with surrounding bone.\n\n\nRESULTS\nMean push-out shear strength was significantly greater in the porous titanium scaffold group than in the PEEK or allograft groups (10.2 vs. 1.5 vs. 3.1 MPa, respectively; P < 0.05).\n\n\nCONCLUSION\nThe push-out strength was significantly greater for the implants with porous titanium coating compared with the PEEK or allograft. These results suggest that the material has promise for facilitating osseointegration for implants, including interbody devices for spinal fusion.\n\n\nLEVEL OF EVIDENCE\nN/A.", "title": "" }, { "docid": "1a747f8474841b6b99184487994ad6a2", "text": "This paper discusses the effects of multivariate correlation analysis on the DDoS detection and proposes an example, a covariance analysis model for detecting SYN flooding attacks. The simulation results show that this method is highly accurate in detecting malicious network traffic in DDoS attacks of different intensities. This method can effectively differentiate between normal and attack traffic. 
Indeed, this method can detect even very subtle attacks only slightly different from the normal behaviors. The linear complexity of the method makes its real time detection practical. The covariance model in this paper to some extent verifies the effectiveness of multivariate correlation analysis for DDoS detection. Some open issues still exist in this model for further research.", "title": "" }, { "docid": "b5b8553b1f50a48af88f9902eab74254", "text": "In this paper we introduce the Fourier tag, a synthetic fiducial marker used to visually encode information and provide controllable positioning. The Fourier tag is a synthetic target akin to a bar-code that specifies multi-bit information which can be efficiently and robustly detected in an image. Moreover, the Fourier tag has the beneficial property that the bit string it encodes has variable length as a function of the distance between the camera and the target. This follows from the fact that the effective resolution decreases as an effect of perspective. This paper introduces the Fourier tag, describes its design, and illustrates its properties experimentally.", "title": "" }, { "docid": "22c72f94040cd65dde8e00a7221d2432", "text": "Research on “How to create a fair, convenient attendance management system”, is being pursued by academics and government departments fervently. This study is based on the biometric recognition technology. The hand geometry machine captures the personal hand geometry data as the biometric code and applies this data in the attendance management system as the attendance record. The attendance records that use this technology is difficult to replicate by others. It can improve the reliability of the attendance records and avoid fraudulent issues that happen when you use a register. This research uses the social survey method-questionnaire to evaluate the theory and practice of introducing biometric recognition technology-hand geometry capturing into the attendance management system.", "title": "" }, { "docid": "fba7801d0b187a9a5fbb00c9d4690944", "text": "Acute pulmonary embolism (PE) poses a significant burden on health and survival. Its severity ranges from asymptomatic, incidentally discovered subsegmental thrombi to massive, pressor-dependent PE complicated by cardiogenic shock and multisystem organ failure. Rapid and accurate risk stratification is therefore of paramount importance to ensure the highest quality of care. This article critically reviews currently available and emerging tools for risk-stratifying acute PE, and particularly for distinguishing between elevated (intermediate) and low risk among normotensive patients. We focus on the potential value of risk assessment strategies for optimizing severity-adjusted management. Apart from reviewing the current evidence on advanced early therapy of acute PE (thrombolysis, surgery, catheter interventions, vena cava filters), we discuss recent advances in oral anticoagulation with vitamin K antagonists, and with new direct inhibitors of factor Xa and thrombin, which may contribute to profound changes in the treatment and secondary prophylaxis of venous thrombo-embolism in the near future.", "title": "" }, { "docid": "17bf75156f1ffe0daffd3dbc5dec5eb9", "text": "Celebrities are admired, appreciated and imitated all over the world. As a natural result of this, today many brands choose to work with celebrities for their advertisements. 
It can be said that the more the brands include celebrities in their marketing communication strategies, the tougher the competition in this field becomes and they allocate a large portion of their marketing budget to this. Brands invest in celebrities who will represent them in order to build the image they want to create. This study aimed to bring under spotlight the perceptions of Turkish customers regarding the use of celebrities in advertisements and marketing communication and try to understand their possible effects on subsequent purchasing decisions. In addition, consumers’ reactions and perceptions were investigated in the context of the product-celebrity match, to what extent the celebrity conforms to the concept of the advertisement and the celebrity-target audience match. In order to achieve this purpose, a quantitative research was conducted as a case study concerning Mavi Jeans (textile company). Information was obtained through survey. The results from this case study are supported by relevant theories concerning the main subject. The most valuable result would be that instead of creating an advertisement around a celebrity in demand at the time, using a celebrity that fits the concept of the advertisement and feeds the concept rather than replaces it, that is celebrity endorsement, will lead to more striking and positive results. Keywords—Celebrity endorsement, product-celebrity match, advertising.", "title": "" }, { "docid": "7adbcbcf5d458087d6f261d060e6c12b", "text": "Operation of MOS devices in the strong, moderate, and weak inversion regions is considered. The advantages of designing the input differential stage of a CMOS op amp to operate in the weak or moderate inversion region are presented. These advantages include higher voltage gain, less distortion, and ease of compensation. Specific design guidelines are presented to optimize amplifier performance. Simulations that demonstrate the expected improvements are given.", "title": "" }, { "docid": "7d7db3f70ba6bcb5f9bf615bd8110eba", "text": "Freshwater and energy are essential commodities for well being of mankind. Due to increasing population growth on the one hand, and rapid industrialization on the other, today’s world is facing unprecedented challenge of meeting the current needs for these two commodities as well as ensuring the needs of future generations. One approach to this global crisis of water and energy supply is to utilize renewable energy sources to produce freshwater from impaired water sources by desalination. Sustainable practices and innovative desalination technologies for water reuse and energy recovery (staging, waste heat utilization, hybridization) have the potential to reduce the stress on the existing water and energy sources with a minimal impact to the environment. This paper discusses existing and emerging desalination technologies and possible combinations of renewable energy sources to drive them and associated desalination costs. It is suggested that a holistic approach of coupling renewable energy sources with technologies for recovery, reuse, and recycle of both energy and water can be a sustainable and environment friendly approach to meet the world’s energy and water needs. High capital costs for renewable energy sources for small-scale applications suggest that a hybrid energy source comprising both grid-powered energy and renewable energy will reduce the desalination costs considering present economics of energy. 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
e4083bd6ef3949cfa2dcbca60010b1ef
Summarization Approach for Unstructured Customer Reviews in the field of E-commerce: A Comparative Analysis
[ { "docid": "613ddf5a74bdb225608dea785ba97154", "text": "We present a prototype system, code-named Pulse, for mining topics and sentiment orientation jointly from free text customer feedback. We describe the application of the prototype system to a database of car reviews. Pulse enables the exploration of large quantities of customer free text. The user can examine customer opinion “at a glance” or explore the data at a finer level of detail. We describe a simple but effective technique for clustering sentences, the application of a bootstrapping approach to sentiment classification, and a novel user-interface.", "title": "" } ]
[ { "docid": "7f8ca7d8d2978bfc08ab259fba60148e", "text": "Over the last few years, much online volunteered geographic information (VGI) has emerged and has been increasingly analyzed to understand places and cities, as well as human mobility and activity. However, there are concerns about the quality and usability of such VGI. In this study, we demonstrate a complete process that comprises the collection, unification, classification and validation of a type of VGI—online point-of-interest (POI) data—and develop methods to utilize such POI data to estimate disaggregated land use (i.e., employment size by category) at a very high spatial resolution (census block level) using part of the Boston metropolitan area as an example. With recent advances in activity-based land use, transportation, and environment (LUTE) models, such disaggregated land use data become important to allow LUTE models to analyze and simulate a person’s choices of work location and activity destinations and to understand policy impacts on future cities. These data can also be used as alternatives to explore economic activities at the local level, especially as government-published census-based disaggregated employment data have become less available in the recent decade. Our new approach provides opportunities for cities to estimate land use at high resolution with low cost by utilizing VGI while ensuring its quality with a certain accuracy threshold. The automatic classification of POI can also be utilized for other types of analyses on cities. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "32cf33cbd55f05661703d028f9ffe40f", "text": "Due to the ease with which digital information can be altered, many digital forensic techniques have recently been developed to authenticate multimedia content. One important digital forensic result is that adding or deleting frames from an MPEG video sequence introduces a temporally distributed fingerprint into the video can be used to identify frame deletion or addition. By contrast, very little research exists into anti-forensic operations designed to make digital forgeries undetectable by forensic techniques. In this paper, we propose an anti-forensic technique capable of removing the temporal fingerprint from MPEG videos that have undergone frame addition or deletion. We demonstrate that our proposed anti-forensic technique can effectively remove this fingerprint through a series of experiments.", "title": "" }, { "docid": "1beb1c36b24f186de59d6c8ef5348dcd", "text": "We present a new corpus, PersonaBank, consisting of 108 personal stories from weblogs that have been annotated with their STORY INTENTION GRAPHS, a deep representation of the fabula of a story. We describe the topics of the stories and the basis of the STORY INTENTION GRAPH representation, as well as the process of annotating the stories to produce the STORY INTENTION GRAPHs and the challenges of adapting the tool to this new personal narrative domain We also discuss how the corpus can be used in applications that retell the story using different styles of tellings, co-tellings, or as a content planner.", "title": "" }, { "docid": "035d8347dae1f328b6617e1f39ca9f75", "text": "We present an architecture which lets us train deep, directed generative models with many layers of latent variables. 
We include deterministic paths between all latent variables and the generated output, and provide a richer set of connections between computations for inference and generation, which enables more effective communication of information throughout the model during training. To improve performance on natural images, we incorporate a lightweight autoregressive model in the reconstruction distribution. These techniques permit end-to-end training of models with 10+ layers of latent variables. Experiments show that our approach achieves state-of-the-art performance on standard image modelling benchmarks, can expose latent class structure in the absence of label information, and can provide convincing imputations of occluded regions in natural images.", "title": "" }, { "docid": "7888fdb4698faca5c4b2dd8c79932df2", "text": "A quadruped robot “Baby Elephant” with parallel legs has been developed. It is about 1 m tall, 1.2 m long and 0.5m wide. It weighs about 130 kg. Driven by a new type of hydraulic actuation system, the Baby Elephant is designed to work as a mechanical carrier. It can carry a payload more than 50 kg. In this study, the structure of the legs is introduced first. Then the effect of the springs for increasing the loading capability is explained and discovered. The design problem of the spring parameters is also discussed. Finally, simulations and experiments are carried out to confirm the effect.", "title": "" }, { "docid": "732433b4cc1d9a3fcf10339e53eb3ab8", "text": "Humans and mammals possess their own feet. Using the mobility of their feet, they are able to walk in various environments such as plain land, desert, swamp, and so on. Previously developed biped robots and four-legged robots did not employ such adaptable foot. In this work, a biomimetic foot mechanism is investigated through analysis of the foot structure of the human-being. This foot mechanism consists of a toe, an ankle, a heel, and springs replacing the foot muscles and tendons. Using five toes and springs, this foot can adapt to various environments. A mathematical modeling for this foot mechanism was performed and its characteristics were observed through numerical simulation.", "title": "" }, { "docid": "582b19b8dfb01928d82cbccf7497186b", "text": "Test coverage is an important metric of software quality, since it indicates thoroughness of testing. In industry, test coverage is often measured as statement coverage. A fundamental problem of software testing is how to achieve higher statement coverage faster, and it is a difficult problem since it requires testers to cleverly find input data that can steer execution sooner toward sections of application code that contain more statements.\n We created a novel fully automatic approach for aChieving higher stAtement coveRage FASTer (CarFast), which we implemented and evaluated on twelve generated Java applications whose sizes range from 300 LOC to one million LOC. We compared CarFast with several popular test case generation techniques, including pure random, adaptive random, and Directed Automated Random Testing (DART). 
Our results indicate with strong statistical significance that when execution time is measured in terms of the number of runs of the application on different input test data, CarFast outperforms the evaluated competitive approaches on most subject applications.", "title": "" }, { "docid": "2382ab2b71be5dfbd1ba9fb4bf6536fc", "text": "A full-bridge converter which employs a coupled inductor to achieve zero-voltage switching of the primary switches in the entire line and load range is described. Because the coupled inductor does not appear as a series inductance in the load current path, it does not cause a loss of duty cycle or severe voltage ringing across the output rectifier. The operation and performance of the proposed converter is verified on a 670-W prototype.", "title": "" }, { "docid": "6a76f00b62951358a1449814556251b3", "text": "Neural networks are known to be vulnerable to adversarial examples, inputs that have been intentionally perturbed to remain visually similar to the source input, but cause a misclassification. Until now, black-box attacks against neural networks have relied on transferability of adversarial examples. White-box attacks are used to generate adversarial examples on a substitute model and then transferred to the black-box target model. In this paper, we introduce a direct attack against black-box neural networks, that uses another attacker neural network to learn to craft adversarial examples. We show that our attack is capable of crafting adversarial examples that are indistinguishable from the source input and are misclassified with overwhelming probability reducing accuracy of the black-box neural network from 99.4% to 0.77% on the MNIST dataset, and from 91.4% to 6.8% on the CIFAR-10 dataset. Our attack can adapt and reduce the effectiveness of proposed defenses against adversarial examples, requires very little training data, and produces adversarial examples that can transfer to different machine learning models such as Random Forest, SVM, and K-Nearest Neighbor. To demonstrate the practicality of our attack, we launch a live attack against a target black-box model hosted online by Amazon: the crafted adversarial examples reduce its accuracy from 91.8% to 61.3%. Additionally, we show attacks proposed in the literature have unique, identifiable distributions. We use this information to train a classifier that is robust against such attacks.", "title": "" }, { "docid": "f487aa05fe1d2b0bc93862e711fd2e92", "text": "This paper presents a novel deep learning architecture to classify structured objects in datasets with a large number of visually similar categories. Our model extends the CRF objective function to a nonlinear form, by factorizing the pairwise potential matrix, to learn neighboring-class embedding. The embedding and the classifier are jointly trained to optimize this highly nonlinear CRF objective function. The non-linear model is trained on object-level samples, which is much faster and more accurate than the standard sequence-level training of the linear model. This model overcomes the difficulties of existing CRF methods to learn the contextual relationships thoroughly when there is a large number of classes and the data is sparse. 
The performance of the proposed method is illustrated on a huge dataset that contains images of retail-store product displays, taken in varying settings and viewpoints, and shows significantly improved results compared to linear CRF modeling and sequence-level training.", "title": "" }, { "docid": "2134a8a054e995c6ef40da6d7fbf2010", "text": "Blood oxygen saturation is one of the key parameters for health monitoring of premature infants at the neonatal intensive care unit (NICU). In this paper, we propose and demonstrate a design of a wearable wireless blood saturation monitoring system. Reflectance pulse oxymeter based on Near Infrared Spectroscopy (NIRS) techniques are applied for enhancing the flexibility of measurements at different locations on the body of the neonates and the compatibility to be integrated into a non-invasive monitoring platform, such as a neonatal smart jacket. Prototypes with the reflectance sensors embedded in soft fabrics are built. The thickness of device is minimized to optimize comfort. To evaluate the performance of the prototype, experiments on the premature babies were carried out at NICU of Máxima Medical Centre (MMC) in Veldhoven, the Netherlands. The results show that the heart rate and SpO2 measured by the proposed design are corresponding to the readings of the standard monitor.", "title": "" }, { "docid": "43fc5fee6e45f32b449312b0f7fa3101", "text": "BACKGROUND\nMuch of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with diarrhea, acute respiratory infection, and malaria. With the increasing awareness that the aforementioned infectious diseases impose an enormous burden on developing countries, public health programs therein could benefit from parsimonious general-purpose forecasting methods to enhance infectious disease intervention. Unfortunately, these disease time-series often i) suffer from non-stationarity; ii) exhibit large inter-annual plus seasonal fluctuations; and, iii) require disease-specific tailoring of forecasting methods.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn this longitudinal retrospective (01/1996-06/2004) investigation, diarrhea, acute respiratory infection of the lower tract, and malaria consultation time-series are fitted with a general-purpose econometric method, namely the multiplicative Holt-Winters, to produce contemporaneous on-line forecasts for the district of Niono, Mali. This method accommodates seasonal, as well as inter-annual, fluctuations and produces reasonably accurate median 2- and 3-month horizon forecasts for these non-stationary time-series, i.e., 92% of the 24 time-series forecasts generated (2 forecast horizons, 3 diseases, and 4 age categories = 24 time-series forecasts) have mean absolute percentage errors circa 25%.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe multiplicative Holt-Winters forecasting method: i) performs well across diseases with dramatically distinct transmission modes and hence it is a strong general-purpose forecasting method candidate for non-stationary epidemiological time-series; ii) obliquely captures prior non-linear interactions between climate and the aforementioned disease dynamics thus, obviating the need for more complex disease-specific climate-based parametric forecasting methods in the district of Niono; furthermore, iii) readily decomposes time-series into seasonal components thereby potentially assisting with programming of public health interventions, as well as monitoring of disease dynamics modification. 
Therefore, these forecasts could improve infectious diseases management in the district of Niono, Mali, and elsewhere in the Sahel.", "title": "" }, { "docid": "cacef3b17bafadd25cf9a49e826ee066", "text": "Road accidents are frequent and many cause casualties. Fast handling can minimize the number of deaths from traffic accidents. In addition to victims of traffic accidents, there are also patients who need emergency handling of the disease he suffered. One of the first help that can be given to the victim or patient is to use an ambulance equipped with medical personnel and equipment needed. The availability of ambulance and accurate information about victims and road conditions can help the first aid process for victims or patients. Supportive treatment can be done to deal with patients by determining the best route (nearest and fastest) to the nearest hospital. The best route can be known by utilizing the collaboration between the Dijkstra algorithm and the Floyd-warshall algorithm. This application applies Dijkstra's algorithm to determine the fastest travel time to the nearest hospital. The Floyd-warshall algorithm is implemented to determine the closest distance to the hospital. Data on some nearby hospitals will be collected by the system using Dijkstra's algorithm and then the system will calculate the fastest distance based on the last traffic condition using the Floyd-warshall algorithm to determine the best route to the nearest hospital recommended by the system. This application is built with the aim of providing support for the first handling process to the victim or the emergency patient by giving the ambulance calling report and determining the best route to the nearest hospital.", "title": "" }, { "docid": "fb3018d852c2a7baf96fb4fb1233b5e5", "text": "The term twin spotting refers to phenotypes characterized by the spatial and temporal co-occurrence of two (or more) different nevi arranged in variable cutaneous patterns, and can be associated with extra-cutaneous anomalies. Several examples of twin spotting have been described in humans including nevus vascularis mixtus, cutis tricolor, lesions of overgrowth, and deficient growth in Proteus and Elattoproteus syndromes, epidermolytic hyperkeratosis of Brocq, and the so-called phacomatoses pigmentovascularis and pigmentokeratotica. We report on a 28-year-old man and a 15-year-old girl, who presented with a previously unrecognized association of paired cutaneous vascular nevi of the telangiectaticus and anemicus types (naevus vascularis mixtus) distributed in a mosaic pattern on the face (in both patients) and over the entire body (in the man) and a complex brain malformation (in both patients) consisting of cerebral hemiatrophy, hypoplasia of the cerebral vessels and homolateral hypertrophy of the skull and sinuses (known as Dyke-Davidoff-Masson malformation). Both patients had facial asymmetry and the young man had facial dysmorphism, seizures with EEG anomalies, hemiplegia, insulin-dependent diabetes mellitus (IDDM), autoimmune thyroiditis, a large hepatic cavernous vascular malformation, and left Legg-Calvé-Perthes disease (LCPD) [LCPD-like presentation]. 
Array-CGH analysis and mutation analysis of the RASA1 gene were normal in both patients.", "title": "" }, { "docid": "c1c3b9393dd375b241f69f3f3cbf5acd", "text": "The purpose of trust and reputation systems is to strengthen the quality of markets and communities by providing an incentive for good behaviour and quality services, and by sanctioning bad behaviour and low quality services. However, trust and reputation systems will only be able to produce this effect when they are sufficiently robust against strategic manipulation or direct attacks. Currently, robustness analysis of TRSs is mostly done through simple simulated scenarios implemented by the TRS designers themselves, and this can not be considered as reliable evidence for how these systems would perform in a realistic environment. In order to set robustness requirements it is important to know how important robustness really is in a particular community or market. This paper discusses research challenges for trust and reputation systems, and proposes a research agenda for developing sound and reliable robustness principles and mechanisms for trust and reputation systems.", "title": "" }, { "docid": "13dde006bafe07a259b15ffade01e972", "text": "Although studies on employee recovery accumulate at a stunning pace, the commonly used theory (Effort-Recovery model) that explains how recovery occurs has not been explicitly tested. We aimed to unravel the recovery process by examining whether off-job activities enhance next morning vigor to the extent that they enable employees to relax and detach from work. In addition, we investigated whether adequate recovery also helps employees to work with more enthusiasm and vigor on the next workday. On five consecutive days, a total of 74 employees (356 data points) reported the hours they spent on various off-job activities, their feelings of psychological detachment, and feelings of relaxation before going to sleep. Feelings of vigor were reported on the next morning, and day-levels of work engagement were reported after work. As predicted, leisure activities (social, low-effort, and physical activities) increased next morning vigor through enhanced psychological detachment and relaxation. High-duty off-job activities (work and household tasks) reduced vigor because these activities diminished psychological detachment and relaxation. Moreover, off-job activities significantly affected next day work engagement. Our results support the assumption that recovery occurs when employees engage in off-job activities that allow for relaxation and psychological detachment. The findings also underscore the significance of recovery after work: Adequate recovery not only enhances vigor in the morning, but also helps employees to stay engaged during the next workday.", "title": "" }, { "docid": "139ecd9ff223facaec69ad6532f650db", "text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. 
Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.", "title": "" }, { "docid": "a31652c0236fb5da569ffbf326eb29e5", "text": "Since 2012, citizens in Alaska, Colorado, Oregon, and Washington have voted to legalize the recreational use of marijuana by adults. Advocates of legalization have argued that prohibition wastes scarce law enforcement resources by selectively arresting minority users of a drug that has fewer adverse health effects than alcohol.1,2 It would be better, they argue, to legalize, regulate, and tax marijuana, like alcohol.3 Opponents of legalization argue that it will increase marijuana use among youth because it will make marijuana more available at a cheaper price and reduce the perceived risks of its use.4 Cerdá et al5 have assessed these concerns by examining the effects of marijuana legalization in Colorado and Washington on attitudes toward marijuana and reported marijuana use among young people. They used surveys from Monitoring the Future between 2010 and 2015 to examine changes in the perceived risks of occasional marijuana use and self-reported marijuana use in the last 30 days among students in eighth, 10th, and 12th grades in Colorado and Washington before and after legalization. They compared these changes with changes among students in states in the contiguous United States that had not legalized marijuana (excluding Oregon, which legalized in 2014). The perceived risks of using marijuana declined in all states, but there was a larger decline in perceived risks and a larger increase in marijuana use in the past 30 days among eighth and 10th graders from Washington than among students from other states. They did not find any such differences between students in Colorado and students in other US states that had not legalized, nor did they find any of these changes in 12th graders in Colorado or Washington. If the changes observed in Washington are attributable to legalization, why were there no changes found in Colorado? The authors suggest that this may have been because Colorado’s medical marijuana laws were much more liberal before legalization than those in Washington. After 2009, Colorado permitted medical marijuana to be supplied through for-profit dispensaries and allowed advertising of medical marijuana products. This hypothesisissupportedbyotherevidencethattheperceivedrisks of marijuana use decreased and marijuana use increased among young people in Colorado after these changes in 2009.6", "title": "" }, { "docid": "ed9d72566cdf3e353bf4b1e589bf85eb", "text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. 
It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.", "title": "" }, { "docid": "b7969a0c307b51dc563a165f267f1c8f", "text": "This study examined the overlap in teen dating violence and bullying perpetration and victimization, with regard to acts of physical violence, psychological abuse, and-for the first time ever-digitally perpetrated cyber abuse. A total of 5,647 youth (51% female, 74% White) from 10 schools participated in a cross-sectional anonymous survey. Results indicated substantial co-occurrence of all types of teen dating violence and bullying. Youth who perpetrated and/or experienced physical, psychological, and cyber bullying were likely to have also perpetrated/experienced physical and sexual dating violence, and psychological and cyber dating abuse.", "title": "" } ]
scidocsrr
455eea6cac37a5cd931c9cf662aeb4b9
A novel pressure sensing circuit for non-invasive RF/microwave blood glucose sensors
[ { "docid": "66dc20e12d8b6b99b67485203293ad07", "text": "A parametric model was developed to describe the variation of dielectric properties of tissues as a function of frequency. The experimental spectrum from 10 Hz to 100 GHz was modelled with four dispersion regions. The development of the model was based on recently acquired data, complemented by data surveyed from the literature. The purpose is to enable the prediction of dielectric data that are in line with those contained in the vast body of literature on the subject. The analysis was carried out on a Microsoft Excel spreadsheet. Parameters are given for 17 tissue types.", "title": "" } ]
[ { "docid": "f8209a4b6cb84b63b1f034ec274fe280", "text": "A major challenge in topic classification (TC) is the high dimensionality of the feature space. Therefore, feature extraction (FE) plays a vital role in topic classification in particular and text mining in general. FE based on cosine similarity score is commonly used to reduce the dimensionality of datasets with tens or hundreds of thousands of features, which can be impossible to process further. In this study, TF-IDF term weighting is used to extract features. Selecting relevant features and determining how to encode them for a learning machine method have a vast impact on the learning machine methods ability to extract a good model. Two different weighting methods (TF-IDF and TF-IDF Global) were used and tested on the Reuters-21578 text categorization test collection. The obtained results emerged a good candidate for enhancing the performance of English topics FE. Simulation results the Reuters-21578 text categorization show the superiority of the proposed algorithm.", "title": "" }, { "docid": "f69723ed73c7edd9856883bbb086ed0c", "text": "An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54% when the system is used for LPR in various complex conditions.", "title": "" }, { "docid": "9f18fbdbf3ae3f33702a60895cbcc22b", "text": "Existing studies indicate that there exists strong correlation between personality and personal preference, thus personality could potentially be used to build more personalized recommender system. Personality traits are mainly measured by psychological questionnaires, and it is hard to obtain personality traits of large amount of users in real-world scenes.In this paper, we propose a new approach to automatically identify personality traits with Social Media contents in Chinese language environments. 
Social Media content features were extracted from 1766 Sina micro blog users, and the predicting model is trained with machine learning algorithms.The experimental results demonstrate that users' personality traits could be predicted from Social Media contents with acceptable Pearson Correlation, which makes it possible to develop user profiles for recommender system. In future, user profiles with predicted personality traits would be used to enhance the performance of existing personalized recommendation systems.", "title": "" }, { "docid": "d1741f908ea854331c8c40f2d3334882", "text": "We train a generator by maximum likelihood and we also train the same generator architecture by Wasserstein GAN. We then compare the generated samples, exact log-probability densities and approximate Wasserstein distances. We show that an independent critic trained to approximate Wasserstein distance between the validation set and the generator distribution helps detect overfitting. Finally, we use ideas from the one-shot learning literature to develop a novel fast learning critic.", "title": "" }, { "docid": "1149ffb77bc5d32b07a5bad4e0fb0409", "text": "Real-world factoid or list questions often have a simple structure, yet are hard to match to facts in a given knowledge base due to high representational and linguistic variability. For example, to answer \"who is the ceo of apple\" on Freebase requires a match to an abstract \"leadership\" entity with three relations \"role\", \"organization\" and \"person\", and two other entities \"apple inc\" and \"managing director\". Recent years have seen a surge of research activity on learning-based solutions for this method. We further advance the state of the art by adopting learning-to-rank methodology and by fully addressing the inherent entity recognition problem, which was neglected in recent works.\n We evaluate our system, called Aqqu, on two standard benchmarks, Free917 and WebQuestions, improving the previous best result for each benchmark considerably. These two benchmarks exhibit quite different challenges, and many of the existing approaches were evaluated (and work well) only for one of them. We also consider efficiency aspects and take care that all questions can be answered interactively (that is, within a second). Materials for full reproducibility are available on our website: http://ad.informatik.uni-freiburg.de/publications.", "title": "" }, { "docid": "2af262d6dda0e4de4abbc593a828326a", "text": "We investigate strategies for selection of databases and instances for training cross-corpus emotion recognition systems, that is, systems that generalize across different labelling concepts, languages and interaction scenarios. We propose objective measures for prototypicality based on distances in a large space of brute-forced acoustic features and show their relation to the expected performance in cross-corpus testing. We perform extensive evaluation on eight commonly used corpora of emotional speech reaching from acted to fully natural emotion and limited phonetic content to conversational speech. 
In the result, selecting prototypical training instances by the proposed criterion can deliver a gain of up to 7.5 % unweighted accuracy in cross-corpus arousal recognition, and there is a correlation of .571 between the proposed prototypicality measure of databases and the expected unweighted accuracy in cross-corpus testing by Support Vector Machines.", "title": "" }, { "docid": "a380ee9ea523d1a3a09afcf2fb01a70d", "text": "Back-translation has become a commonly employed heuristic for semi-supervised neural machine translation. The technique is both straightforward to apply and has led to stateof-the-art results. In this work, we offer a principled interpretation of back-translation as approximate inference in a generative model of bitext and show how the standard implementation of back-translation corresponds to a single iteration of the wake-sleep algorithm in our proposed model. Moreover, this interpretation suggests a natural iterative generalization, which we demonstrate leads to further improvement of up to 1.6 BLEU.", "title": "" }, { "docid": "654e6d2e1d1160a6dd7180abcce0f8bd", "text": "E-government research has become a recognized research domain and many policies and strategies are formulated for e-government implementations. Most of these target the next few years and limited attention has been giving to the long term. The eGovRTD2020, a European Commission co-funded project, investigated the future research on e-government driven by changing circumstances and the evolution of technology. This project consists of an analysis of the state of play, a scenario-building, a gap analysis and a roadmapping activity. In this paper the roadmapping methodology fitting the unique characteristics of the e-government field is presented and the results are briefly discussed. The use of this methodology has resulted in the identification of a large number of e-government research themes. It was found that a roadmapping methodology should match the unique characteristics of e-government. The research shows the need of multidisciplinary research.", "title": "" }, { "docid": "1ea55074ab304cbf308968fc8611c0d6", "text": "•Movies alter societal thinking patterns in previously unexplored social phenomena, by exposing the individual to what is shown on screen as the “norm” •Typical studies focus on audio/video modalities to estimate differences along factors such as gender •Linguistic analysis provides complementary information to the audio/video based analytics •We examine differences across gender, race and age", "title": "" }, { "docid": "843e1f3bbdf76d0fcd90e4a7f906b921", "text": "This study aimed to elucidate which component of flaxseed, i.e. secoisolariciresinol diglucoside (SDG) lignan or flaxseed oil (FO), makes tamoxifen (TAM) more effective in reducing growth of established estrogen receptor positive breast tumors (MCF-7) at low circulating estrogen levels, and potential mechanisms of action. In a 2 x 2 factorial design, ovariectomized athymic mice with established tumors were treated for 8 wk with TAM together with basal diet (control), or basal diet supplemented with SDG (1 g/kg diet), FO (38.5 g/kg diet), or combined SDG and FO. SDG and FO were at levels in 10% flaxseed diet. 
Palpable tumors were monitored and after animal sacrifice, analyzed for cell proliferation, apoptosis, ER-mediated (ER-alpha, ER-beta, trefoil factor 1, cyclin D1, progesterone receptor, AIBI), growth factor-mediated (epidermal growth factor receptor, human epidermal growth factor receptor-2, insulin-like growth factor receptor-1, phosphorylated mitogen activated protein kinase, PAKT, BCL2) signaling pathways and angiogenesis (vascular endothelial growth factor). All treatments reduced the growth of TAM-treated tumors by reducing cell proliferation, expression of genes, and proteins involved in the ER- and growth factor-mediated signaling pathways with FO having the greatest effect in increasing apoptosis compared with TAM treatment alone. SDG and FO reduced the growth of TAM-treated tumors but FO was more effective. The mechanisms involve both the ER- and growth factor-signaling pathways.", "title": "" }, { "docid": "15f46090f74282257979c38c5f151469", "text": "Integrating data from multiple sources has been a longstanding challenge in the database community. Techniques such as privacy-preserving data mining promises privacy, but assume data has integration has been accomplished. Data integration methods are seriously hampered by inability to share the data to be integrated. This paper lays out a privacy framework for data integration. Challenges for data integration in the context of this framework are discussed, in the context of existing accomplishments in data integration. Many of these challenges are opportunities for the data mining community.", "title": "" }, { "docid": "932dc0c02047cd701e41530c42d830bc", "text": "The concept of \"extra-cortical organization of higher mental functions\" proposed by Lev Vygotsky and expanded by Alexander Luria extends cultural-historical psychology regarding the interplay of natural and cultural factors in the development of the human mind. Using the example of self-regulation, the authors explore the evolution of this idea from its origins to recent findings on the neuropsychological trajectories of the development of executive functions. Empirical data derived from the Tools of the Mind project are used to discuss the idea of using classroom intervention to study the development of self-regulation in early childhood.", "title": "" }, { "docid": "d93d8a7c61b4cbe21b551d08458844c5", "text": "Presently there is a rapid development of the internet and the telecommunication techniques. Importance of information security is increasing. Cryptography and steganography are the major areas which work on information hiding and security. In this paper a method is used to embed a color secret image inside a color cover image. A 2-3-3 LSB insertion method has been used for image steganography. The important quality of a steganographic system is to be less distortive while increasing the size of the secret image. Use of cryptography along with steganography increases the security. Arnold CATMAP encryption technique is used for encrypting the secret image. Color space plays an important role in increasing network bandwidth efficiency. YUV color space provides reduced bandwidth for chrominance components. This paper demonstrates that YUV color space can also be used for security purposes. Keywords— Watermarking, Haar Wavelet, DWT, PSNR", "title": "" }, { "docid": "428d522f59dbef1c52421abcaaa7a0c2", "text": "We devise new coding methods to minimize Phase Change Memory write energy. 
Our method minimizes the energy required for memory rewrites by utilizing the differences between PCM read, set, and reset energies. We develop an integer linear programming method and employ dynamic programming to produce codes for uniformly distributed data. We also introduce data-aware coding schemes to efficiently address the energy minimization problem for stochastic data. Our evaluations show that the proposed methods result in up to 32% and 44% reduction in memory energy consumption for uniform and stochastic data respectively.", "title": "" }, { "docid": "dcdaeb7c1da911d0b1a2932be92e0fb4", "text": "As computational agents are increasingly used beyond research labs, their success will depend on their ability to learn new skills and adapt to their dynamic, complex environments. If human users—without programming skills— can transfer their task knowledge to agents, learning can accelerate dramatically, reducing costly trials. The tamer framework guides the design of agents whose behavior can be shaped through signals of approval and disapproval, a natural form of human feedback. More recently, tamer+rl was introduced to enable human feedback to augment a traditional reinforcement learning (RL) agent that learns from a Markov decision process’s (MDP) reward signal. We address limitations of prior work on tamer and tamer+rl, contributing in two critical directions. First, the four successful techniques for combining human reward with RL from prior tamer+rl work are tested on a second task, and these techniques’ sensitivities to parameter changes are analyzed. Together, these examinations yield more general and prescriptive conclusions to guide others who wish to incorporate human knowledge into an RL algorithm. Second, tamer+rl has thus far been limited to a sequential setting, in which training occurs before learning from MDP reward. In this paper, we introduce a novel algorithm that shares the same spirit as tamer+rl but learns simultaneously from both reward sources, enabling the human feedback to come at any time during the reinforcement learning process. We call this algorithm simultaneous tamer+rl. To enable simultaneous learning, we introduce a new technique that appropriately determines the magnitude of the human model’s influence on the RL algorithm throughout time and state-action space.", "title": "" }, { "docid": "bf305e88c6f2878c424eca1223a02a8d", "text": "The first plausible scheme of fully homomorphic encryption (FHE), introduced by Gentry in 2009, was considered a major breakthrough in the field of information security. FHE allows the evaluation of arbitrary functions directly on encrypted data on untrusted servers. However, previous implementations of FHE on general-purpose processors had very long latency, which makes it impractical for cloud computing. The most computationally intensive components in the Gentry-Halevi FHE primitives are the large-number modular multiplications and additions. In this paper, we attempt to use customized circuits to speedup the large number multiplication. Strassen's algorithm is employed in the design of an efficient, high-speed large-number multiplier. In particular, we propose an architecture design of an 768K-bit multiplier. As a key compoment, an 64K-point finite-field fast Fourier transform (FFT) processor is designed and prototyped on the Stratix-V FPGA. 
At 100 MHz, the FPGA implementation is about twice as fast as the same FFT algorithm executed on the NVIDA C2050 GPU which has 448 cores running at 1.15 GHz but at much lower power consumption.", "title": "" }, { "docid": "428c480be4ae3d2043c9f5485087c4af", "text": "Current difference-expansion (DE) embedding techniques perform one layer embedding in a difference image. They do not turn to the next difference image for another layer embedding unless the current difference image has no expandable differences left. The obvious disadvantage of these techniques is that image quality may have been severely degraded even before the later layer embedding begins because the previous layer embedding has used up all expandable differences, including those with large magnitude. Based on integer Haar wavelet transform, we propose a new DE embedding algorithm, which utilizes the horizontal as well as vertical difference images for data hiding. We introduce a dynamical expandable difference search and selection mechanism. This mechanism gives even chances to small differences in two difference images and effectively avoids the situation that the largest differences in the first difference image are used up while there is almost no chance to embed in small differences of the second difference image. We also present an improved histogram-based difference selection and shifting scheme, which refines our algorithm and makes it resilient to different types of images. Compared with current algorithms, the proposed algorithm often has better embedding capacity versus image quality performance. The advantage of our algorithm is more obvious near the embedding rate of 0.5 bpp.", "title": "" }, { "docid": "ee23ef5c3f266008e0d5eeca3bbc6e97", "text": "We use variation at a set of eight human Y chromosome microsatellite loci to investigate the demographic history of the Y chromosome. Instead of assuming a population of constant size, as in most of the previous work on the Y chromosome, we consider a model which permits a period of recent population growth. We show that for most of the populations in our sample this model fits the data far better than a model with no growth. We estimate the demographic parameters of this model for each population and also the time to the most recent common ancestor. Since there is some uncertainty about the details of the microsatellite mutation process, we consider several plausible mutation schemes and estimate the variance in mutation size simultaneously with the demographic parameters of interest. Our finding of a recent common ancestor (probably in the last 120,000 years), coupled with a strong signal of demographic expansion in all populations, suggests either a recent human expansion from a small ancestral population, or natural selection acting on the Y chromosome.", "title": "" }, { "docid": "6414893702d8f332f5a7767fd3811395", "text": "Differential privacy has become the dominant standard in the research community for strong privacy protection. There has been a flood of research into query answering algorithms that meet this standard. Algorithms are becoming increasingly complex, and in particular, the performance of many emerging algorithms is data dependent, meaning the distribution of the noise added to query answers may change depending on the input data. Theoretical analysis typically only considers the worst case, making empirical study of average case performance increasingly important. 
In this paper we propose a set of evaluation principles which we argue are essential for sound evaluation. Based on these principles we propose DPBench, a novel evaluation framework for standardized evaluation of privacy algorithms. We then apply our benchmark to evaluate algorithms for answering 1- and 2-dimensional range queries. The result is a thorough empirical study of 15 published algorithms on a total of 27 datasets that offers new insights into algorithm behavior---in particular the influence of dataset scale and shape---and a more complete characterization of the state of the art. Our methodology is able to resolve inconsistencies in prior empirical studies and place algorithm performance in context through comparison to simple baselines. Finally, we pose open research questions which we hope will guide future algorithm design.", "title": "" } ]
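The last passage in the list above benchmarks differentially private algorithms for 1-D and 2-D range queries. The simplest member of that family, and a common baseline in such comparisons, is the Laplace mechanism applied to a histogram: perturb each bin count with Laplace(1/epsilon) noise, then answer a range query by summing the noisy bins it covers. The sketch below is a minimal illustration under that assumption; the function names and toy data are mine, and it is not one of the published algorithms the passage evaluates.

```python
import numpy as np

def laplace_histogram(data, bins, epsilon, rng=None):
    """Release a differentially private histogram via the Laplace mechanism.

    Each record contributes to exactly one bin, so the L1 sensitivity of the
    histogram is 1 and Laplace(1/epsilon) noise per bin gives epsilon-DP.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts, edges = np.histogram(data, bins=bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return noisy, edges

def range_query(noisy_counts, edges, lo, hi):
    """Answer a 1-D range query [lo, hi) by summing the noisy bins it covers."""
    mask = (edges[:-1] >= lo) & (edges[1:] <= hi)
    return noisy_counts[mask].sum()

rng = np.random.default_rng(0)
data = rng.normal(50, 15, size=10_000)          # toy dataset
noisy, edges = laplace_histogram(data, bins=64, epsilon=0.5, rng=rng)
print(range_query(noisy, edges, 30, 70))        # private answer (approximate)
print(((data >= 30) & (data < 70)).sum())       # true answer, for comparison
```

Because bin edges need not align with the query endpoints, the private answer here covers only fully contained bins; more careful handling of that and of the noise/scale trade-off is exactly the design space the benchmark explores.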
scidocsrr
c03596679e018c5f34254a773da1524f
Security and privacy for storage and computation in cloud computing
[ { "docid": "97fee760308f95398b6717a091a977d2", "text": "We introduce and formalize the notion of Verifiable Computation , which enables a computationally weak client to “outsource” the computation of a functio n F on various dynamically-chosen inputs x1, ...,xk to one or more workers. The workers return the result of the fu nction evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi . The primary constraint is that the verification of the proof should requi re substantially less computational effort than computingF(xi) from scratch. We present a protocol that allows the worker to return a compu tationally-sound, non-interactive proof that can be verified inO(m· poly(λ)) time, wherem is the bit-length of the output of F , andλ is a security parameter. The protocol requires a one-time pr e-processing stage by the client which takes O(|C| · poly(λ)) time, whereC is the smallest known Boolean circuit computing F . Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about t hexi or yi values.", "title": "" } ]
[ { "docid": "590a44ab149b88e536e67622515fdd08", "text": "Chitosan is considered to be one of the most promising and applicable materials in adsorption applications. The existence of amino and hydroxyl groups in its molecules contributes to many possible adsorption interactions between chitosan and pollutants (dyes, metals, ions, phenols, pharmaceuticals/drugs, pesticides, herbicides, etc.). These functional groups can help in establishing positions for modification. Based on the learning from previously published works in literature, researchers have achieved a modification of chitosan with a number of different functional groups. This work summarizes the published works of the last three years (2012-2014) regarding the modification reactions of chitosans (grafting, cross-linking, etc.) and their application to adsorption of different environmental pollutants (in liquid-phase).", "title": "" }, { "docid": "1fa2b4aa557c0efef7a53717dbe0c3fe", "text": "Many birds use grounded running (running without aerial phases) in a wide range of speeds. Contrary to walking and running, numerical investigations of this gait based on the BSLIP (bipedal spring loaded inverted pendulum) template are rare. To obtain template related parameters of quails (e.g. leg stiffness) we used x-ray cinematography combined with ground reaction force measurements of quail grounded running. Interestingly, with speed the quails did not adjust the swing leg's angle of attack with respect to the ground but adapted the angle between legs (which we termed aperture angle), and fixed it about 30ms before touchdown. In simulations with the BSLIP we compared this swing leg alignment policy with the fixed angle of attack with respect to the ground typically used in the literature. We found symmetric periodic grounded running in a simply connected subset comprising one third of the investigated parameter space. The fixed aperture angle strategy revealed improved local stability and surprising tolerance with respect to large perturbations. Starting with the periodic solutions, after step-down step-up or step-up step-down perturbations of 10% leg rest length, in the vast majority of cases the bipedal SLIP could accomplish at least 50 steps to fall. The fixed angle of attack strategy was not feasible. We propose that, in small animals in particular, grounded running may be a common gait that allows highly compliant systems to exploit energy storage without the necessity of quick changes in the locomotor program when facing perturbations.", "title": "" }, { "docid": "5fabe23b0eccc0c8cf752db44e2f7085", "text": "This article presents new evidence from English that the theory of grammar makes a distinction between the contrastive focus and discourse-new status of constituents. The evidence comes from a phonetic investigation which compares the prosody of all-new sentences with the prosody of sentences combining contrastive focus and discourse-new constituents. We have found that while the sentences of these different types in our experimental materials are not distinguished in their patterns of distribution of pitch accents and phonological phrase organization, they do differ in patterns of phonetic prominence—duration, pitch and intensity, which vary according to the composition of the sentence in terms of contrastive and/or new constituents. The central new finding is that contrastive focus constituents are more phonetically prominent than discourse new constituents that are contained within the same sentence. 
These distinctions in phonetic prominence are plausibly the consequence of distinctions in the phonological representation of phrasal prosodic prominence (stress) for contrastive focus and discourse-new constituents in English.", "title": "" }, { "docid": "9b10757ca3ca84784033c20f064078b7", "text": "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaSbased applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system.", "title": "" }, { "docid": "029cca0b7e62f9b52e3d35422c11cea4", "text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.", "title": "" }, { "docid": "a56a95db6d9d0f0ccf26192b7e2322ff", "text": "CRISPR-Cas9 is a versatile genome editing technology for studying the functions of genetic elements. To broadly enable the application of Cas9 in vivo, we established a Cre-dependent Cas9 knockin mouse. We demonstrated in vivo as well as ex vivo genome editing using adeno-associated virus (AAV)-, lentivirus-, or particle-mediated delivery of guide RNA in neurons, immune cells, and endothelial cells. Using these mice, we simultaneously modeled the dynamics of KRAS, p53, and LKB1, the top three significantly mutated genes in lung adenocarcinoma. Delivery of a single AAV vector in the lung generated loss-of-function mutations in p53 and Lkb1, as well as homology-directed repair-mediated Kras(G12D) mutations, leading to macroscopic tumors of adenocarcinoma pathology. Together, these results suggest that Cas9 mice empower a wide range of biological and disease modeling applications.", "title": "" }, { "docid": "e27da58188be54b71187d3489fa6b4e7", "text": "In a prospective-longitudinal study of a representative birth cohort, we tested why stressful experiences lead to depression in some people but not in others. A functional polymorphism in the promoter region of the serotonin transporter (5-HT T) gene was found to moderate the influence of stressful life events on depression. 
Individuals with one or two copies of the short allele of the 5-HT T promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than individuals homozygous for the long allele. This epidemiological study thus provides evidence of a gene-by-environment interaction, in which an individual's response to environmental insults is moderated by his or her genetic makeup.", "title": "" }, { "docid": "2650ec74eb9b8c368f213212218989ea", "text": "Illumina-based next generation sequencing (NGS) has accelerated biomedical discovery through its ability to generate thousands of gigabases of sequencing output per run at a fraction of the time and cost of conventional technologies. The process typically involves four basic steps: library preparation, cluster generation, sequencing, and data analysis. In 2015, a new chemistry of cluster generation was introduced in the newer Illumina machines (HiSeq 3000/4000/X Ten) called exclusion amplification (ExAmp), which was a fundamental shift from the earlier method of random cluster generation by bridge amplification on a non-patterned flow cell. The ExAmp peer-reviewed) is the author/funder. All rights reserved. No reuse allowed without permission. The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/125724 doi: bioRxiv preprint first posted online Apr. 9, 2017;", "title": "" }, { "docid": "a8de67cc99337dd8cdb92e1d6859f211", "text": "We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP). We enrich Saul with components that are necessary for a broad range of learning based Natural Language Processing tasks at various levels of granularity. We illustrate these advances using three different, well-known NLP problems, and show how these generic learning and inference modules can directly exploit Saul’s graph-based data representation. These properties allow the programmer to easily switch between different model formulations and configurations, and consider various kinds of dependencies and correlations among variables of interest with minimal programming effort. We argue that Saul provides an extremely useful paradigm both for the design of advanced NLP systems and for supporting advanced research in NLP.", "title": "" }, { "docid": "6ef52ad99498d944e9479252d22be9c8", "text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.", "title": "" }, { "docid": "42fcc24e20ad15de00eb1f93add8b827", "text": "Although scientometrics is seeing increasing use in Information Systems (IS) research, in particular for evaluating research efforts and measuring scholarly influence; historically, scientometric IS studies are focused primarily on ranking authors, journals, or institutions. 
Notwithstanding the usefulness of ranking studies for evaluating the productivity of the IS field’s formal communication channels and its scholars, the IS field has yet to exploit the full potential that scientometrics offers, especially towards its progress as a discipline. This study makes a contribution by raising the discourse surrounding the value of scientometric research in IS, and proposes a framework that uncovers the multi-dimensional bases for citation behaviour and its epistemological implications on the creation, transfer, and growth of IS knowledge. Having identified 112 empirical research evaluation studies in IS, we select 44 substantive scientometric IS studies for in-depth content analysis. The findings from this review allow us to map an engaging future in scientometric research, especially towards enhancing the IS field’s conceptual and theoretical development. Journal of Information Technology advance online publication, 12 January 2016; doi:10.1057/jit.2015.29", "title": "" }, { "docid": "2967df08ad0b9987ce2d6cb6006d3e69", "text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.", "title": "" }, { "docid": "ad24de7b81fec45126c756b41e39822b", "text": "University teachers provided first year Arts students with hundreds of cinematic images online to analyse as a key part of their predominantly face-to-face undergraduate course. This qualitative study investigates the extent to which the groups engaged in learning involving their analysis of the images and how this was related to their perception of the ICT-mediated environment. Interviews and questionnaires completed by students revealed that the extent of engaged learning was related to the quality of the approach to groupwork reported by the students, the quality of their approach to the analysis of the images and their perceptions of key aspects of the online environment which provided the images. The findings have implications for the design and approach to teaching best suited for students involved in groupwork and the use of ICT resources provided to promote engaged experiences of learning.", "title": "" }, { "docid": "265a709088f671ba484ffba937ae2977", "text": "We test a number of the leading computational color constancy algorithms using a comprehensive set of images. These were of 33 different scenes under 11 different sources representative of common illumination conditions. The algorithms studied include two gray world methods, a version of the Retinex method, several variants of Forsyth's gamut-mapping method, Cardei et al.'s neural net method, and Finlayson et al.'s Color by Correlation method. We discuss a number of issues in applying color constancy ideas to image data, and study in depth the effect of different preprocessing strategies. We compare the performance of the algorithms on image data with their performance on synthesized data. 
All data used for this study are available online at http://www.cs.sfu.ca/(tilde)color/data, and implementations for most of the algorithms are also available (http://www.cs.sfu.ca/(tilde)color/code). Experiments with synthesized data (part one of this paper) suggested that the methods which emphasize the use of the input data statistics, specifically color by correlation and the neural net algorithm, are potentially the most effective at estimating the chromaticity of the scene illuminant. Unfortunately, we were unable to realize comparable performance on real images. Here exploiting pixel intensity proved to be more beneficial than exploiting the details of image chromaticity statistics, and the three-dimensional (3-D) gamut-mapping algorithms gave the best performance.", "title": "" }, { "docid": "b44df1268804e966734ea404b8c29360", "text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.", "title": "" }, { "docid": "37b8114afeba61ac1e381405f2503ced", "text": "Measurements of the phases of free jet waves relative to an acoustic excitation, and of the pattern and time phase of the sound pressure produced by the same jet impinging on an edge, provide a consistent model for Stage I frequencies of edge tones and of an organ pipe with identical geometry. Both systems are explained entirely in terms of volume displacement of air by the jet. During edge-tone oscillation, 180 ø of phase delay occur on the jet. Peak positive acoustic pressure on a given side of the edge occurs at the instant the jet profile crosses the edge and starts into that side. For the pipe, additional phase shifts occur that depend on the driving points for the jet current, the Q of the pipe, and the frequency of oscillation. Introduction of this additional phase shift yields an accurate prediction of the frequencies of a blown pipe and the blowing pressure at which mode jumps will occur.", "title": "" }, { "docid": "d15ce9f62f88a07db6fa427fae61f26c", "text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. 
On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.", "title": "" }, { "docid": "38e2848daec38de283341bd3055915c9", "text": "IoT devices are becoming increasingly intelligent and context-aware. Sound is an attractive sensory modality because it is information-rich but not as computationally demanding as alternatives such as vision. New applications of ultra-low power (ULP), ‘always-on’ intelligent acoustic sensing includes agricultural monitoring to detect pests or precipitation, infrastructure health tracking to recognize acoustic symptoms, and security/safety monitoring to identify dangerous conditions. A major impediment for the adoption of always-on, context-aware sensing is power consumption, particularly for ultra-small IoT devices requiring long-term operation without battery replacement. To sustain operation with a 1mm2 solar cell in ambient light (100lux) or achieve a lifetime of 10 years using a button cell battery (2mAh), <20nW power consumption must be achieved, which is more than 2 orders of magnitude lower than current state-of-the-art acoustic sensing systems [1,2]. More broadly a previous ULP signal acquisition IC [3] consumes just 3nW while 64nW ECG monitoring system [4] includes back-end classification, however there are no sub-20nW complete sensing systems with both analog frontend and digital backend.", "title": "" }, { "docid": "b1b6e670f21479956d2bbe281c6ff556", "text": "Near real-time data from the MODIS satellite sensor was used to detect and trace a harmful algal bloom (HAB), or red tide, in SW Florida coastal waters from October to December 2004. MODIS fluorescence line height (FLH in W m 2 Am 1 sr ) data showed the highest correlation with near-concurrent in situ chlorophyll-a concentration (Chl in mg m ). For Chl ranging between 0.4 to 4 mg m 3 the ratio between MODIS FLH and in situ Chl is about 0.1 W m 2 Am 1 sr 1 per mg m 3 chlorophyll (Chl=1.255 (FLH 10), r =0.92, n =77). In contrast, the band-ratio chlorophyll product of either MODIS or SeaWiFS in this complex coastal environment provided false information. Errors in the satellite Chl data can be both negative and positive (3–15 times higher than in situ Chl) and these data are often inconsistent either spatially or temporally, due to interferences of other water constituents. The red tide that formed from November to December 2004 off SW Florida was revealed by MODIS FLH imagery, and was confirmed by field sampling to contain medium (10 to 10 cells L ) to high (>10 cells L ) concentrations of the toxic dinoflagellate Karenia brevis. The FLH imagery also showed that the bloom started in midOctober south of Charlotte Harbor, and that it developed and moved to the south and southwest in the subsequent weeks. Despite some artifacts in the data and uncertainty caused by factors such as unknown fluorescence efficiency, our results show that the MODIS FLH data provide an unprecedented tool for research and managers to study and monitor algal blooms in coastal environments. D 2005 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
980d995d2d83ade8ec86128b375ba104
Example-Based facial animation for blend shape interpolation
[ { "docid": "88615ac1788bba148f547ca52bffc473", "text": "This paper describes a probabilistic framework for faithful reproduction of dynamic facial expressions on a synthetic face model with MPEG-4 facial animation parameters (FAPs) while achieving very low bitrate in data transmission. The framework consists of a coupled Bayesian network (BN) to unify the facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the FAPs and facial action coding system (FACS) into a dynamic Bayesian network (DBN) to account for uncertainties in FAP extraction and to model the dynamic evolution of facial expressions. At the synthesizer, a static BN reconstructs the FAPs and their intensity. The two BNs are connected statically through a data stream link. Using the coupled BN to analyze and synthesize the dynamic facial expressions is the major novelty of this work. The novelty brings about several benefits. First, very low bitrate (9 bytes per frame) in data transmission can be achieved. Second, a facial expression is inferred through both spatial and temporal inference so that the perceptual quality of animation is less affected by the misdetected FAPs. Third, more realistic looking facial expressions can be reproduced by modelling the dynamics of human expressions.", "title": "" } ]
[ { "docid": "a03059021fff5913f0a43b7f2db7653a", "text": "Wheeled Mobile Robots (WMRs) are the most widely used class of mobile robots. This is due to their fast maneuvering, simple controllers and energy saving characteristics. A dynamics-based sliding mode controller for WMR trajectorytracking is proposed. Robustness to external disturbances and parameter uncertainties is achieved. Closed loop real-time results show good performances in trajectory tracking even if for high upper bound of uncertainties.", "title": "" }, { "docid": "1450854a32ea6c18f4cc817f686aaf15", "text": "This article reports on the development of two measures relating to historical trauma among American Indian people: The Historical Loss Scale and The Historical Loss Associated Symptoms Scale. Measurement characteristics including frequencies, internal reliability, and confirmatory factor analyses were calculated based on 143 American Indian adult parents of children aged 10 through 12 years who are part of an ongoing longitudinal study of American Indian families in the upper Midwest. Results indicate both scales have high internal reliability. Frequencies indicate that the current generation of American Indian adults have frequent thoughts pertaining to historical losses and that they associate these losses with negative feelings. Two factors of the Historical Loss Associated Symptoms Scale indicate one anxiety/depression component and one anger/avoidance component. The results are discussed in terms of future research and theory pertaining to historical trauma among American Indian people.", "title": "" }, { "docid": "bb999acceac5f0bc1f21879529746546", "text": "How do real graphs evolve over time? What are normal growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time.\n Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n)).\n Existing graph generation models do not exhibit these types of behavior even at a qualitative level. We provide a new graph generator, based on a forest fire spreading process that has a simple, intuitive justification, requires very few parameters (like the flammability of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study.\n We also notice that the forest fire model exhibits a sharp transition between sparse graphs and graphs that are densifying. Graphs with decreasing distance between the nodes are generated around this transition point.\n Last, we analyze the connection between the temporal evolution of the degree distribution and densification of a graph. We find that the two are fundamentally related. 
We also observe that real networks exhibit this type of relation between densification and the degree distribution.", "title": "" }, { "docid": "62e7974231c091845f908a50f5365d7f", "text": "Sequentiality of access is an inherent characteristic of many database systems. We use this observation to develop an algorithm which selectively prefetches data blocks ahead of the point of reference. The number of blocks prefetched is chosen by using the empirical run length distribution and conditioning on the observed number of sequential block references immediately preceding reference to the current block. The optimal number of blocks to prefetch is estimated as a function of a number of “costs,” including the cost of accessing a block not resident in the buffer (a miss), the cost of fetching additional data blocks at fault times, and the cost of fetching blocks that are never referenced. We estimate this latter cost, described as memory pollution, in two ways. We consider the treatment (in the replacement algorithm) of prefetched blocks, whether they are treated as referenced or not, and find that it makes very little difference. Trace data taken from an operational IMS database system is analyzed and the results are presented. We show how to determine optimal block sizes. We find that anticipatory fetching of data can lead to significant improvements in system operation.", "title": "" }, { "docid": "f1582ae3d1ce78c1ad84ab5e552e29bd", "text": "The emergence of sensory-guided behavior depends on sensorimotor coupling during development. How sensorimotor experience shapes neural processing is unclear. Here, we show that the coupling between motor output and visual feedback is necessary for the functional development of visual processing in layer 2/3 (L2/3) of primary visual cortex (V1) of the mouse. Using a virtual reality system, we reared mice in conditions of normal or random visuomotor coupling. We recorded the activity of identified excitatory and inhibitory L2/3 neurons in response to transient visuomotor mismatches in both groups of mice. Mismatch responses in excitatory neurons were strongly experience dependent and driven by a transient release from inhibition mediated by somatostatin-positive interneurons. These data are consistent with a model in which L2/3 of V1 computes a difference between an inhibitory visual input and an excitatory locomotion-related input, where the balance between these two inputs is finely tuned by visuomotor experience.", "title": "" }, { "docid": "1afe9ff72d69e09c24a11187ea7dca2d", "text": "In the Intelligent Robotics Laboratory (IRL) at Vanderbilt University we seek to develop service robots with a high level of social intelligence and interactivity. In order to achieve this goal, we have identified two main issues for research. The first issue is how to achieve a high level of interaction between the human and the robot. This has lead to the formulation of our philosophy of Human Directed Local Autonomy (HuDL), a guiding principle for research, design, and implementation of service robots. The motivation for integrating humans into a service robot system is to take advantage of human intelligence and skill. Human intelligence can be used to interpret robot sensor data, eliminating computationally expensive and possibly error-prone automated analyses. Human skill is a valuable resource for trajectory and path planning as well as for simplifying the search process. In this paper we present our plans for integrating humans into a service robot system. 
We present our paradigm for human/robot interaction, HuDL. The second issue is the general problem of system integration, with a specific focus on integrating humans into the service robotic system. This work has lead to the development of the Intelligent Machine Architecture (IMA), a novel software architecture that has been specifically designed to simplify the integration of the many diverse algorithms, sensors, and actuators necessary for socially intelligent service robots. Our testbed system is described, and some example applications of HuDL for aids to the physically disabled are given. An evaluation of the effectiveness of the IMA is also presented.", "title": "" }, { "docid": "9a7016a02eda7fcae628197b0625832b", "text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using CMOS-compatible process flow. Following our recently reported n-TFET , a low-temperature dopant segregation technique was employed on the source side to achieve steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of >; 105. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.", "title": "" }, { "docid": "e59b4429d7304f3b3dc69c5c67f8fbf7", "text": "Visualization and visual analysis play important roles in exploring, analyzing, and presenting scientific data. In many disciplines, data and model scenarios are becoming multifaceted: data are often spatiotemporal and multivariate; they stem from different data sources (multimodal data), from multiple simulation runs (multirun/ensemble data), or from multiphysics simulations of interacting phenomena (multimodel data resulting from coupled simulation models). Also, data can be of different dimensionality or structured on various types of grids that need to be related or fused in the visualization. This heterogeneity of data characteristics presents new opportunities as well as technical challenges for visualization research. Visualization and interaction techniques are thus often combined with computational analysis. In this survey, we study existing methods for visualization and interactive visual analysis of multifaceted scientific data. Based on a thorough literature review, a categorization of approaches is proposed. We cover a wide range of fields and discuss to which degree the different challenges are matched with existing solutions for visualization and visual analysis. This leads to conclusions with respect to promising research directions, for instance, to pursue new solutions for multirun and multimodel data as well as techniques that support a multitude of facets.", "title": "" }, { "docid": "274a9094764edd249f1682fbca93a866", "text": "Visual saliency detection is a challenging problem in computer vision, but one of great importance and numerous applications. In this paper, we propose a novel model for bottom-up saliency within the Bayesian framework by exploiting low and mid level cues. In contrast to most existing methods that operate directly on low level cues, we propose an algorithm in which a coarse saliency region is first obtained via a convex hull of interest points. We also analyze the saliency information with mid level visual cues via superpixels. 
We present a Laplacian sparse subspace clustering method to group superpixels with local features, and analyze the results with respect to the coarse saliency region to compute the prior saliency map. We use the low level visual cues based on the convex hull to compute the observation likelihood, thereby facilitating inference of Bayesian saliency at each pixel. Extensive experiments on a large data set show that our Bayesian saliency model performs favorably against the state-of-the-art algorithms.", "title": "" }, { "docid": "22650cb6c1470a076fc1dda7779606ec", "text": "This paper addresses the problem of handling spatial misalignments due to camera-view changes or human-pose variations in person re-identification. We first introduce a boosting-based approach to learn a correspondence structure which indicates the patch-wise matching probabilities between images from a target camera pair. The learned correspondence structure can not only capture the spatial correspondence pattern between cameras but also handle the viewpoint or human-pose variation in individual images. We further introduce a global-based matching process. It integrates a global matching constraint over the learned correspondence structure to exclude cross-view misalignments during the image patch matching process, hence achieving a more reliable matching score between images. Experimental results on various datasets demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "0b08e657d012d26310c88e2129c17396", "text": "In order to accurately determine the growth of greenhouse crops, the system based on AVR Single Chip microcontroller and wireless sensor networks is developed, it transfers data through the wireless transceiver devices without setting up electric wiring, the system structure is simple. The monitoring and management center can control the temperature and humidity of the greenhouse, measure the carbon dioxide content, and collect the information about intensity of illumination, and so on. In addition, the system adopts multilevel energy memory. It combines energy management with energy transfer, which makes the energy collected by solar energy batteries be used reasonably. Therefore, the self-managing energy supply system is established. The system has advantages of low power consumption, low cost, good robustness, extended flexible. An effective tool is provided for monitoring and analysis decision-making of the greenhouse environment.", "title": "" }, { "docid": "82d3217331a70ead8ec3064b663de451", "text": "The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer’s output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. 
This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.", "title": "" }, { "docid": "074011796235a8ab0470ba0fe967918f", "text": "We present a novel approach to weakly supervised semantic class learning from the web, using a single powerful hyponym pattern combined with graph structures, which capture two properties associated with pattern-based extractions:popularity and productivity. Intuitively, a candidate ispopular if it was discovered many times by other instances in the hyponym pattern. A candidate is productive if it frequently leads to the discovery of other instances. Together, these two measures capture not only frequency of occurrence, but also cross-checking that the candidate occurs both near the class name and near other class members. We developed two algorithms that begin with just a class name and one seed instance and then automatically generate a ranked list of new class instances. We conducted experiments on four semantic classes and consistently achieved high accuracies.", "title": "" }, { "docid": "fee64e0be9a5db75c3f259aae01b6a12", "text": "A simple method, based on elementary fourth-order cumulants, is proposed for the classification of digital modulation schemes. These statistics are natural in this setting as they characterize the shape of the distribution of the noisy baseband I and Q samples. It is shown that cumulant-based classification is particularly effective when used in a hierarchical scheme, enabling separation into subclasses at low signal-to-noise ratio with small sample size. Thus, the method can be used as a preliminary classifier if desired. Computational complexity is order N , whereN is the number of complex baseband data samples. This method is robust in the presence of carrier phase and frequency offsets and can be implemented recursively. Theoretical arguments are verified via extensive simulations and comparisons with existing approaches.", "title": "" }, { "docid": "70e3a918cb152278360c2c54a8934b2c", "text": "In translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a cross-sentence context-aware approach and investigate the influence of historical contextual information on the performance of neural machine translation (NMT). First, this history is summarized in a hierarchical way. We then integrate the historical representation into NMT in two strategies: 1) a warm-start of encoder and decoder states, and 2) an auxiliary context source for updating decoder states. Experimental results on a large Chinese-English translation task show that our approach significantly improves upon a strong attention-based NMT system by up to +2.1 BLEU points.", "title": "" }, { "docid": "feb184ada1d0deb3c1798beb3da8ff53", "text": "Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. 
Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D scene flow from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.", "title": "" }, { "docid": "a97f7ed65c4ba37bbda5e0af9abec425", "text": "Two novel ternary CNTFET-based SRAM cells are proposed in this paper. The first proposed CNTFET SRAM uses additional CNTFETs to sink the bit lines to ground; its operation is nearly independent of the ternary values. The second cell utilizes the traditional voltage controller (or supply) of a binary SRAM in a ternary SRAM; it consists of adding two CNTFETs to the first proposed cell. CNTFET features (such as sizing and density) and performance metrics (such as SNM and PDP) and write/read times are considered and assessed in detail. The impact of different features (such as chirality and CNT density) is also analyzed with respect to the operations of the memory cells. The effects of different process variations (such as lithography and density/number of CNTs) are extensively evaluated with respect to performance metrics. In nearly all cases, the proposed cells outperform existing CNTFET-based cells by showing a small standard deviation in the simulated memory circuits. © 2016 Published by Elsevier B.V.", "title": ""}, { "docid": "e9aac361f8ca1bb8f10409859aef718d", "text": "MapReduce has become an important distributed processing model for large-scale data-intensive applications like data mining and web indexing. Hadoop, an open-source implementation of MapReduce, is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature. Data locality has not been taken into account for launching speculative map tasks, because it is assumed that most maps are data-local. Unfortunately, both the homogeneity and data locality assumptions are not satisfied in virtualized data centers. We show that ignoring the data-locality issue in heterogeneous environments can noticeably reduce the MapReduce performance. In this paper, we address the problem of how to place data across nodes in a way that each node has a balanced data processing load. Given a data-intensive application running on a Hadoop MapReduce cluster, our data placement scheme adaptively balances the amount of data stored in each node to achieve improved data-processing performance. 

Experimental results on two real data-intensive applications show that our data placement strategy can always improve the MapReduce performance by rebalancing data across nodes before performing a data-intensive application in a heterogeneous Hadoop cluster.", "title": "" }, { "docid": "f9ee024cb18a0bd8cee77ceee0fdf5cb", "text": "Software development is often plagued with unanticipated problems which cause projects to miss deadlines, exceed budgets, or deliver less than satisfactory products. While these problems cannot be eliminated totally, some of them can be controlled better by taking appropriate preventive action. Risk management is an area of project management that deals with these threats before they occur. Organizations may be able to avoid a large number of problems if they use systematic risk management procedures and techniques early in projects.", "title": "" }, { "docid": "d9c44d16ac67dbde54cd0ac5dbb7b5bb", "text": "The chord progression is a fundamental building block in music which sketches the overall mood of a song. Many composers compose music by first deciding chord progressions as a structure and then adding melody and details. Despite its importance, it is rarely used as an emotional feature in music emotion recognition. Few previous works considered chords or intervals as features but the progression or transition of chords were ignored. In this work, we explore the effect of chord progressions in music emotion recognition. We collected music database and extracted features to form an emotion recognition model. The chord progression is then detected from each song, and its effectiveness is showed using cross-validation. The results show that chord progressions have influence in music emotion, especially valence.", "title": "" } ]
scidocsrr
3c22011f6235a2058f2667d3a8b6ad63
Scalability Problems of Simple Genetic Algorithms
[ { "docid": "20e6ffb912ee0291d53e7a2750d1b426", "text": "This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or «steady state\") selection are compared on the basis of solutions to deterministic difference or differential equations, which are verified through computer simulations. The analysis provides convenient approximate or exact solutions as well as useful convergence time and growth ratio estimates. The paper recommends practical application of the analyses and suggests a number of paths for more detailed analytical investigation of selection techniques.", "title": "" } ]
[ { "docid": "fc0e18090c2f568d88f7f800ab8b876f", "text": "In this paper, a novel approach to UAV’s automatic landing on the ship’s deck is proposed. We present the design of the cooperative object, and then begin our basic research on UAV autonomous landing on a ship by using computer vision and affine moment invariants. We analyze the infrared radiation images in our experiments by extracting the target from the background and then recognizing it. Also, we calculate the angle of yaw. We study the basic research concerning automatic UAV navigation and landing on the deck. Based on our experiments, the average recognition time is 17.2 ms which is obtained through the use of affine moment invariants. This type of speed is expected to improve the reliability and real-time performance of autonomous UAV landing. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fc911b7fa4b7ad770a61c2e4d719a18d", "text": "This paper presents a web-based expert system for wheat crop in Pakistan. Wheat is one of the major grain crops in Pakistan. It is cultivated in vast areas of Punjab followed by Sindh and ranked first as a cereal crop in the country. Our rule-based expert system covers two main classes of problems namely diseases and pests, normally encountered in wheat crop. The expert system is intended to help the farmers, researchers and students and provides an efficient and goal-oriented approach for solving common problems of wheat. The system gives results that are correct and consistent.", "title": "" }, { "docid": "ecc13932233e3ffc4c68e90d8eaf9aec", "text": "An inviscid actuator disk model is embedded in a three-dimensional low-order panel method code for inviscid incompressible flow in order to study the propeller effects on an arbitrary body. The actuator disk model predicts the time–averaged induced velocities in the slipstream of a propeller with an arbitrary radial distribution of load. The model is constructed by superposition of four vorticity distributions, by neglecting the radial contraction of the vortex tube and assuming a fixed wake for the propeller. Experimental data, available from the licterature, have been used to validate the actuator disk model embedded in the panel method code.", "title": "" }, { "docid": "57a23f68303a3694e4e6ba66e36f7015", "text": "OBJECTIVE\nTwo studies using cross-sectional designs explored four possible mechanisms by which loneliness may have deleterious effects on health: health behaviors, cardiovascular activation, cortisol levels, and sleep.\n\n\nMETHODS\nIn Study 1, we assessed autonomic activity, salivary cortisol levels, sleep quality, and health behaviors in 89 undergraduate students selected based on pretests to be among the top or bottom quintile in feelings of loneliness. In Study 2, we assessed blood pressure, heart rate, salivary cortisol levels, sleep quality, and health behaviors in 25 older adults whose loneliness was assessed at the time of testing at their residence.\n\n\nRESULTS\nTotal peripheral resistance was higher in lonely than nonlonely participants, whereas cardiac contractility, heart rate, and cardiac output were higher in nonlonely than lonely participants. Lonely individuals also reported poorer sleep than nonlonely individuals. Study 2 indicated greater age-related increases in blood pressure and poorer sleep quality in lonely than nonlonely older adults. 
Mean salivary cortisol levels and health behaviors did not differ between groups in either study.\n\n\nCONCLUSIONS\nResults point to two potentially orthogonal predisease mechanisms that warrant special attention: cardiovascular activation and sleep dysfunction. Health behavior and cortisol regulation, however, may require more sensitive measures and large sample sizes to discern their roles in loneliness and health.", "title": "" }, { "docid": "7297a6317a3fc515d2d46943a2792c69", "text": "The present work elaborates the process design methodology for the evaluation of the distillation systems based on the economic, exergetic and environmental point of view, the greenhouse gas (GHG) emissions. The methodology proposes the Heat Integrated Pressure Swing Distillation Sequence (HiPSDS) is economic and reduces the GHG emissions than the conventional Extractive Distillation Sequence (EDS) and the Pressure Swing Distillation Sequence (PSDS) for the case study of isobutyl alcohol and isobutyl acetate with the solvents for EDS and with low pressure variations for PSDS and HiPSDS. The study demonstrates that the exergy analysis can predict the results of the economic and environmental evaluation associated with the process design.", "title": "" }, { "docid": "0e4722012aeed8dc356aa8c49da8c74f", "text": "The Android software stack for mobile devices defines and enforces its own security model for apps through its application-layer permissions model. However, at its foundation, Android relies upon the Linux kernel to protect the system from malicious or flawed apps and to isolate apps from one another. At present, Android leverages Linux discretionary access control (DAC) to enforce these guarantees, despite the known shortcomings of DAC. In this paper, we motivate and describe our work to bring flexible mandatory access control (MAC) to Android by enabling the effective use of Security Enhanced Linux (SELinux) for kernel-level MAC and by developing a set of middleware MAC extensions to the Android permissions model. We then demonstrate the benefits of our security enhancements for Android through a detailed analysis of how they mitigate a number of previously published exploits and vulnerabilities for Android. Finally, we evaluate the overheads imposed by our security enhancements.", "title": "" }, { "docid": "f68f82e0d7f165557433580ad1e3e066", "text": "Four experiments demonstrate effects of prosodic structure on speech production latencies. Experiments 1 to 3 exploit a modified version of the Sternberg et al. (1978, 1980) prepared speech production paradigm to look for evidence of the generation of prosodic structure during the final stages of sentence production. Experiment 1 provides evidence that prepared sentence production latency is a function of the number of phonological words that a sentence comprises when syntactic structure, number of lexical items, and number of syllables are held constant. Experiment 2 demonstrated that production latencies in Experiment 1 were indeed determined by prosodic structure rather than the number of content words that a sentence comprised. The phonological word effect was replicated in Experiment 3 using utterances with a different intonation pattern and phrasal structure. Finally, in Experiment 4, an on-line version of the sentence production task provides evidence for the phonological word as the preferred unit of articulation during the on-line production of continuous speech. 
Our findings are consistent with the hypothesis that the phonological word is a unit of processing during the phonological encoding of connected speech. © 1997 Academic Press", "title": "" }, { "docid": "15ccdecd20bbd9c4b93c57717cbfb787", "text": "As a crucial challenge for video understanding, exploiting the spatial-temporal structure of video has attracted much attention recently, especially on video captioning. Inspired by the insight that people always focus on certain interested regions of video content, we propose a novel approach which will automatically focus on regions-of-interest and catch their temporal structures. In our approach, we utilize a specific attention model to adaptively select regions-of-interest for each video frame. Then a Dual Memory Recurrent Model (DMRM) is introduced to incorporate temporal structure of global features and regions-of-interest features in parallel, which will obtain rough understanding of video content and particular information of regions-of-interest. Since the attention model could not always catch the right interests, we additionally adopt semantic supervision to attend to interested regions more correctly. We evaluate our method for video captioning on two public benchmarks: the Microsoft Video Description Corpus (MSVD) and the Montreal Video Annotation Dataset (M-VAD). The experiments demonstrate that catching temporal regions-of-interest information really enhances the representation of input videos and our approach obtains the state-of-the-art results on popular evaluation metrics like BLEU-4, CIDEr, and METEOR.", "title": "" }, { "docid": "863202feb1410b177c6bb10ccc1fa43d", "text": "Multimedia retrieval plays an indispensable role in big data utilization. Past efforts mainly focused on single-media retrieval. However, the requirements of users are highly flexible, such as retrieving the relevant audio clips with one query of image. So challenges stemming from the “media gap,” which means that representations of different media types are inconsistent, have attracted increasing attention. Cross-media retrieval is designed for the scenarios where the queries and retrieval results are of different media types. As a relatively new research topic, its concepts, methodologies, and benchmarks are still not clear in the literature. To address these issues, we review more than 100 references, give an overview including the concepts, methodologies, major challenges, and open issues, as well as build up the benchmarks, including data sets and experimental results. Researchers can directly adopt the benchmarks to promptly evaluate their proposed methods. This will help them to focus on algorithm design, rather than the time-consuming compared methods and results. It is noted that we have constructed a new data set XMedia, which is the first publicly available data set with up to five media types (text, image, video, audio, and 3-D model). We believe this overview will attract more researchers to focus on cross-media retrieval and be helpful to them.", "title": "" }, { "docid": "35c299197861d0a57763bbc392e90bb2", "text": "Imperfect-information games, where players have private information, pose a unique challenge in artificial intelligence. In recent years, Heads-Up No-Limit Texas Hold’em poker, a popular version of poker, has emerged as the primary benchmark for evaluating game-solving algorithms for imperfect-information games. 

We demonstrate a winning agent from the 2016 Annual Computer Poker Competition, Baby Tartanian8.", "title": "" }, { "docid": "c83ee7ea10dbfba3b2388cc0d929e685", "text": "Clinical laboratory measurements are vital to the medical decision-making process, and specifically, measurement of rheumatoid factor antibodies is part of the disease criteria for various autoimmune conditions. Uncertainty estimates describe the quality of the measurement process, and uncertainty in calibration of the instrument used in the measurement can be an important contributor to the net measurement uncertainty. In this paper, we develop a physics-based mathematical model of the rheumatoid factor measurement process, or assay, and then use the Monte Carlo method to investigate the effect of uncertainty in the calibration process on the correlation structure of the parameters of the calibration function. We demonstrate numerically that a change in uncertainty of the calibration process can be quantified by one of two metrics: (1) the 1-norm condition number of the correlation matrix, or (2) the sum of the absolute values of the correlation coefficients between the parameters of the calibration function.", "title": "" }, { "docid": "189d370fc5c12157b1fffa6196195798", "text": "In this report a number of algorithms for optimal control of a double inverted pendulum on a cart (DIPC) are investigated and compared. Modeling is based on Euler-Lagrange equations derived by specifying a Lagrangian, difference between kinetic and potential energy of the DIPC system. This results in a system of nonlinear differential equations consisting of three 2-nd order equations. This system of equations is then transformed into a usual form of six 1-st order ordinary differential equations (ODE) for control design purposes. Control of a DIPC poses a certain challenge, since unlike a robot, the system is underactuated: one controlling force per three degrees of freedom (DOF). In this report, problem of optimal control minimizing a quadratic cost functional is addressed. Several approaches are tested: linear quadratic regulator (LQR), state-dependent Riccati equation (SDRE), optimal neural network (NN) control, and combinations of the NN with the LQR and the SDRE. Simulations reveal superior performance of the SDRE over the LQR and improvements provided by the NN, which compensates for model inadequacies in the LQR. Limited capabilities of the NN to approximate functions over the wide range of arguments prevent it from significantly improving the SDRE performance, providing only marginal benefits at larger pendulum deflections.", "title": "" }, { "docid": "dda05235fd5bcbb31a01ad52d2677963", "text": "Collaborative filtering based recommender system is prone to shilling attacks because of its open nature. Shillers inject pseudonomous profiles in the system’s database with the intent of manipulating the recommendations to their benefits. Prior study has shown that the system’s behavior can be easily influenced by even a less number of shilling profiles. In this paper, we simulated various attack models on Movie-Lens 1 dataset and used machine learning techniques to detect the attacks. We compared five classification algorithms and proposed a new model by integrating two models with high performances. In our experiments, we investigated and proved that the combination of random forest and adaptive boosting algorithm is more accurate than simple random forest model. 
Keywords— Collaborative filtering, recommender system, shilling attack, prediction shift, precision, recall, fmeasure, classification", "title": "" }, { "docid": "59d21d59428ba708111e148236589f92", "text": "Distributed storage systems are increasingly transitioning to the use of erasure codes since they offer higher reliability at significantly lower storage costs than data replication. However, these codes tradeoff recovery performance as they require multiple disk reads and network transfers for reconstructing an unavailable data block. As a result, most existing systems use an erasure code either optimized for storage overhead or recovery performance. In this paper, we present HACFS, a new erasure-coded storage system that instead uses two different erasure codes and dynamically adapts with workload changes. It uses a fast code to optimize for recovery performance and a compact code to reduce the storage overhead. A novel conversion mechanism is used to efficiently upcode and downcode data blocks between fast and compact codes. We show that HACFS design techniques are generic and successfully apply it to two different code families: Product and LRC codes. We have implemented HACFS as an extension to the Hadoop Distributed File System (HDFS) and experimentally evaluate it with five different workloads from production clusters. The HACFS system always maintains a low storage overhead and significantly improves the recovery performance as compared to three popular singlecode storage systems. It reduces the degraded read latency by up to 46%, and the reconstruction time and disk/network traffic by up to 45%.", "title": "" }, { "docid": "cd87f849e0f68d8081645ffb942984d6", "text": "Recent years have witnessed extensive studies on distance metric learning (DML) for improving similarity search in multimedia information retrieval tasks. Despite their successes, most existing DML methods suffer from two critical limitations: (i) they typically attempt to learn a linear distance function on the input feature space, in which the assumption of linearity limits their capacity of measuring the similarity on complex patterns in real-world applications; (ii) they are often designed for learning distance metrics on uni-modal data, which may not effectively handle the similarity measures for multimedia objects with multimodal representations. To address these limitations, in this paper, we propose a novel framework of online multimodal deep similarity learning (OMDSL), which aims to optimally integrate multiple deep neural networks pretrained with stacked denoising autoencoder. In particular, the proposed framework explores a unified two-stage online learning scheme that consists of (i) learning a flexible nonlinear transformation function for each individual modality, and (ii) learning to find the optimal combination of multiple diverse modalities simultaneously in a coherent process. We conduct an extensive set of experiments to evaluate the performance of the proposed algorithms for multimodal image retrieval tasks, in which the encouraging results validate the effectiveness of the proposed technique.", "title": "" }, { "docid": "ffe6edef11daef1db0c4aac77bed7a23", "text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. 
This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.", "title": "" }, { "docid": "d3e8dce306eb20a31ac6b686364d0415", "text": "Lung diseases are the deadliest disease in the world. The computer aided detection system in lung diseases needed accurate lung segmentation to preplan the pulmonary treatment and surgeries. The researchers undergone the lung segmentation need a deep study and understanding of the traditional and recent papers developed in the lung segmentation field so that they can continue their research journey in an efficient way with successful outcomes. The need of reviewing the research papers is now a most wanted one for researches so this paper makes a survey on recent trends of pulmonary lung segmentation. Seven recent papers are carried out to analyze the performance characterization of themselves. The working methods, purpose for development, name of algorithm and drawbacks of the method are taken into consideration for the survey work. The tables and charts are drawn based on the reviewed papers. The study of lung segmentation research is more helpful to new and fresh researchers who are committed their research in lung segmentation.", "title": "" }, { "docid": "fce1c7d2cfdd3d4149d5d11c8081ead3", "text": "Very High Resolution (VHR) satellite images offer a great potential for the extraction of landuse and land-cover related information for urban areas. The available techniques are diverse and need to be further examined before operational use is possible. In this paper we applied two pixel-by-pixel classification techniques and the object-oriented image analysis approach (eCognition) for a land-cover classification of a Quickbird image of a study area in the northern part of the city of Ghent (Belgium). Only small differences in overall Kappa were noted between the best results of the pixel-based approach (neural network classification with Haralick texture measures) and the object-oriented classification (eCognition). A rule-based procedure using ancillary information on elevation derived from a digital surface model was applied on the pixel-based land-cover classification in order to obtain information on the spatial distribution of buildings and artificial surfaces.", "title": "" }, { "docid": "1bdf73110d3fdbe2cfbbd99f8388d170", "text": "ACKNOWLEDGEMENT First of all I would like to thank my ALLAH Almighty Who gave me the courage, health, and energy to accomplish my thesis in due time and without Whose help this study which required untiring efforts would have not been possible to complete within the time limits. key elements required from the supervisor(s) to write and complete a thesis of a good standard and a quality within deadlines. 
It is a matter of utmost pleasure for me to extend my gratitude and give due credit to my supervisor Yinghong Chen whose support has always been there in need of time and who provided me with all these key elements to complete my dissertation within the time frame. Acknowledgement would be incomplete without extending my gratitude to one of my friends in Pakistan Mr. mammoth help in data collection made this study possible. Moreover, he has been supporting me enthusiastically throughout my work to make my thesis ready in due time. My thanks is also due to my examiner Max Zamanian whose valuable comments and suggestions made colossal contribution in improving my dissertation. Last but not least, I extend my thanks to my entire family for moral support and prays for my health and successful completion of my dissertation within time limits. ABSTRACT Islamic banking and finance in Pakistan started in 1977-78 with the elimination of interest in compliance with the Principles of Islamic Shari'ah in Islamic banking practices. Since then, amendments in financial system to allow the issuance of new interest-free instrument of corporate financing, promulgation of ordinance to permit the establishment of Mudaraba companies and floatation of Mudaraba Certificates, constitution of Commission for Transformation of Financial System (CTFS), and the establishments of Islamic Banking Department by the State Bank of Pakistan are some of the key steps taken place by the governments. The aim of this study is to examine and to evaluate the performance of the first Islamic bank in Pakistan, i.e. Meezan Bank Limited (MBL) in comparison with that of a group of 5 Pakistani conventional banks. The study evaluates performance of the Islamic bank (MBL) in profitability, liquidity, risk, and efficiency for the period of 2003-2007. Asset Utilization (AU), and Income to Expense ratio (IER) are used to assess banking performances. T-test and F-test are used in determining the significance of the differential performance of the two groups of banks. The study found that MBL …", "title": "" } ]
scidocsrr
fb83c7d0fc6bbbae24139beb85db814f
Semi-Supervised Multinomial Naive Bayes for Text Classification by Leveraging Word-Level Statistical Constraint
[ { "docid": "3ac2f2916614a4e8f6afa1c31d9f704d", "text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.", "title": "" }, { "docid": "7f74c519207e469c39f81d52f39438a0", "text": "Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains.", "title": "" }, { "docid": "41b8fb6fd9237c584ce0211f94a828be", "text": "Over the last few years, two of the main research directions in machine learning of natural language processing have been the study of semi-supervised learning algorithms as a way to train classifiers when the labeled data is scarce, and the study of ways to exploit knowledge and global information in structured learning tasks. In this paper, we suggest a method for incorporating domain knowledge in semi-supervised learning algorithms. Our novel framework unifies and can exploit several kinds of task specific constraints. The experimental results presented in the information extraction domain demonstrate that applying constraints helps the model to generate better feedback during learning, and hence the framework allows for high performance learning with significantly less training data than was possible before on these tasks.", "title": "" } ]
[ { "docid": "a9cb3364f0bb9727ceb8f6a5e55a2244", "text": "In diesem Paper wird eine Arbeit dargestellt, in der eine empirische Studie vorgenommen wurde, um Anhaltspunkte für die Kompetenzentwicklung in Computerspielen am Beispiel von WoW (World of Warcraft) im Vergleich zum beruflichen Kontext zu zeigen. Neben der tatsächlichen Übertragbarkeit einiger Kompetenzbereiche zwischen der virtuellen und der realen Welt (z.B. Methodenkompetenz und Sozialkompetenz) zeigte die Studie auch, dass sehr genau das Feld der zu untersuchenden Aspekte in einem MMORPG der Größe von WoW abgesteckt werden muss, um relevante Aussagen zu erhalten.", "title": "" }, { "docid": "435618f85e2ca71ac23b68f09413ad1e", "text": "> Context • The enactive paradigm in the cognitive sciences is establishing itself as a strong and comprehensive alternative to the computationalist mainstream. However, its own particular historical roots have so far been largely ignored in the historical analyses of the cognitive sciences. > Problem • In order to properly assess the enactive paradigm’s theoretical foundations in terms of their validity, novelty and potential future directions of development, it is essential for us to know more about the history of ideas that has led to the current state of affairs. > Method • The meaning of the disappearance of the field of cybernetics and the rise of second-order cybernetics is analyzed by taking a closer look at the work of representative figures for each of the phases – Rosenblueth, Wiener and Bigelow for the early wave of cybernetics, Ashby for its culmination, and von Foerster for the development of the second-order approach. > Results • It is argued that the disintegration of cybernetics eventually resulted in two distinct scientific traditions, one going from symbolic AI to modern cognitive science on the one hand, and the other leading from second-order cybernetics to the current enactive paradigm. > Implications • We can now understand that the extent to which the cognitive sciences have neglected their cybernetic parent is precisely the extent to which cybernetics had already carried the tendencies that would later find fuller expression in second-order cybernetics. >", "title": "" }, { "docid": "30ba59e335d9b448b29d2528b5e08a5c", "text": "Classification of alcoholic electroencephalogram (EEG) signals is a challenging job in biomedical research for diagnosis and treatment of brain diseases of alcoholic people. The aim of this study was to introduce a robust method that can automatically identify alcoholic EEG signals based on time–frequency (T–F) image information as they convey key characteristics of EEG signals. In this paper, we propose a new hybrid method to classify automatically the alcoholic and control EEG signals. The proposed scheme is based on time–frequency images, texture image feature extraction and nonnegative least squares classifier (NNLS). In T–F analysis, the spectrogram of the short-time Fourier transform is considered. The obtained T–F images are then converted into 8-bit grayscale images. Co-occurrence of the histograms of oriented gradients (CoHOG) and Eig(Hess)-CoHOG features are extracted from T–F images. Finally, obtained features are fed into NNLS classifier as input for classify alcoholic and control EEG signals. To verify the effectiveness of the proposed approach, we replace the NNLS classifier by artificial neural networks, k-nearest neighbor, linear discriminant analysis and support vector machine classifier separately, with the same features. 
Experimental outcomes along with comparative evaluations with the state-of-the-art algorithms manifest that the proposed method outperforms competing algorithms. The experimental outcomes are promising, and it can be anticipated that upon its implementation in clinical practice, the proposed scheme will alleviate the onus of the physicians and expedite neurological diseases diagnosis and research.", "title": "" }, { "docid": "8583702b48549c5bbf1553fa0e39a882", "text": "A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embeddings vector, scores their relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.", "title": "" }, { "docid": "2b3851ac0d4202a90896d160523bedc3", "text": "Crying is a communication method used by infants given the limitations of language. Parents or nannies who have never had the experience to take care of the baby will experience anxiety when the infant is crying. Therefore, we need a way to understand about infant's cry and apply the formula. This research develops a system to classify the infant's cry sound using MACF (Mel-Frequency Cepstrum Coefficients) feature extraction and BNN (Backpropagation Neural Network) based on voice type. It is classified into 3 classes: hungry, discomfort, and tired. A voice input must be ascertained as infant's cry sound which using 3 features extraction (pitch with 2 approaches: Modified Autocorrelation Function and Cepstrum Pitch Determination, Energy, and Harmonic Ratio). The features coefficients of MFCC are furthermore classified by Backpropagation Neural Network. The experiment shows that the system can classify the infant's cry sound quite well, with 30 coefficients and 10 neurons in the hidden layer.", "title": "" }, { "docid": "8f9af064f348204a71f0e542b2b98e7b", "text": "It is often useful to classify email according to the intent of the sender (e.g., \"propose a meeting\", \"deliver information\"). We present experimental results in learning to classify email in this fashion, where each class corresponds to a verbnoun pair taken from a predefined ontology describing typical “email speech acts”. We demonstrate that, although this categorization problem is quite different from “topical” text classification, certain categories of messages can nonetheless be detected with high precision (above 80%) and reasonable recall (above 50%) using existing text-classification learning methods. 
This result suggests that useful task-tracking tools could be constructed based on automatic classification into this taxonomy.", "title": "" }, { "docid": "a10aa780d9f1a65461ad0874173d8f56", "text": "OS fingerprinting tries to identify the type and version of a system based on gathered information of a target host. It is an essential step for many subsequent penetration attempts and attacks. Traditional OS fingerprinting depends on banner grabbing schemes or network traffic analysis results to identify the system. These interactive procedures can be detected by intrusion detection systems (IDS) or fooled by fake network packets. In this paper, we propose a new OS fingerprinting mechanism in virtual machine hypervisors that adopt the memory de-duplication technique. Specifically, when multiple memory pages with the same contents occupy only one physical page, their reading and writing access delay will demonstrate some special properties. We use the accumulated access delay to the memory pages that are unique to some specific OS images to derive out whether or not our VM instance and the target VM are using the same OS. The experiment results on VMware ESXi hypervisor with both Windows and Ubuntu Linux OS images show the practicability of the attack. We also discuss the mechanisms to defend against such attacks by the hypervisors and VMs.", "title": "" }, { "docid": "b379ab0167138bc46697aa392c0df177", "text": "Real-time load composition knowledge will dramatically benefit demand-side management (DSM). Previous works disaggregate the load via either intrusive or nonintrusive load monitoring. However, due to the difficulty in accessing all houses via smart meters at all times and the unavailability of frequently measured high-resolution load signatures at bulk supply points, neither is suitable for frequent or widespread application. This paper employs the artificial intelligence (AI) tool to develop a load disaggregation approach for bulk supply points based on the substation rms measurement without relying on smart meter data, customer surveys, or high-resolution load signatures. Monte Carlo simulation is used to generate the training and validation data. Load compositions obtained by the AI tool are compared with the validation data and used for load characteristics estimation and validation. Probabilistic distributions and confidence levels of different confidence intervals for errors of load compositions and load characteristics are also derived.", "title": "" }, { "docid": "6b718717d5ecef343a8f8033803a55e6", "text": "BACKGROUND\nMedication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data.\n\n\nOBJECTIVE\nTo unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. 
This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations.\n\n\nMETHODS\nWe have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types.\n\n\nRESULTS\nOur results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%.\n\n\nCONCLUSIONS\nIt shows that classical learning models (SVM) remains advantageous over deep learning models (RNN variants) for clinical relation identification, especially for long-distance intersentential relations. However, RNNs demonstrate a great potential of significant improvement if more training data become available. Our work is an important step toward mining EHRs to improve the efficacy of drug safety surveillance. Most importantly, the annotated data used in this study will be made publicly available, which will further promote drug safety research in the community.", "title": "" }, { "docid": "16afaad8bfdc64f9d97e9829f2029bc6", "text": "The combination of limited individual information and costly information acquisition in markets for experience goods leads us to believe that significant peer effects drive demand in these markets. In this paper we model the effects of peers on the demand patterns of products in the market experience goods microfunding. By analyzing data from an online crowdfunding platform from 2006 to 2010 we are able to ascertain that peer effects, and not network externalities, influence consumption.", "title": "" }, { "docid": "af05ec4998302687aae09cc1d5ad4ccd", "text": "The development of wireless portable electronics is moving towards smaller and lighter devices. Although low noise amplifier (LNA) performance is extremely good nowadays, the design engineer still has to make some complex system trades. Many LNA are large, heavy and consume a lot of power. The design of an LNA in radio frequency (RF) circuits requires the trade-off of many important characteristics, such as gain, noise figure (NF), stability, power consumption and complexity. This situation forces designers to make choices in the design of RF circuits. The designed simulation process is done using the Advance Design System (ADS), while FR4 strip board is used for fabrication purposes. 
A single-stage LNA has been successfully designed with 7.78 dB forward gain and 1.53 dB noise figure; it is stable along the UNII frequency band.", "title": "" }, { "docid": "f5293e05169ee48d69f317d80066b88f", "text": "In this work, we propose a direct least-squares solution to the perspective-n-point (PnP) pose estimation problem of a partially uncalibrated camera, whose intrinsic parameters except the focal length are known. The basic idea is to construct a proper objective function with respect to the target variables and extract all its stationary points so as to find the global minimum. The advantages of our proposed solution over existing ones are that (i) the objective function is directly built upon the imaging equation, such that all the 3D-to-2D correspondences contribute equally to the minimized error, and that (ii) the proposed solution is noniterative, in the sense that the stationary points are retrieved by means of eigenvalue factorization and the common iterative refinement step is not needed. In addition, the proposed solution has O(n) complexity, and can be used to handle both planar and nonplanar 3D points. Experimental results show that the proposed solution is much more accurate than the existing state-of-the-art solutions, and is even comparable to the maximum likelihood estimation by minimizing the reprojection error.", "title": "" }, { "docid": "105f34c3fa2d4edbe83d184b7cf039aa", "text": "Software development methodologies are constantly evolving due to changing technologies and new demands from users. Today's dynamic business environment has given rise to emergent organizations that continuously adapt their structures, strategies, and policies to suit the new environment [12]. Such organizations need information systems that constantly evolve to meet their changing requirements---but the traditional, plan-driven software development methodologies lack the flexibility to dynamically adjust the development process.", "title": "" }, { "docid": "8f3c275ac076489747ad329edf1d8757", "text": "Wilt is an important disease of banana causing significant reduction in yield. In the present study, the pathogenic fungus was isolated from the pseudostem of infected banana plants. The in vitro efficacy of different plant extracts, viz., Azadirachta indica, Artemisia annua, Eucalyptus globulus, and Ocimum sanctum, was tested to manage Panama wilt of banana. Different concentrations (5, 10, 15 and 20%) of plant extracts were used in the study. All the plant extracts showed significant reduction in the growth of the pathogen. Among the different extracts, 20% Azadirachta indica was found most effective, followed by Eucalyptus globulus, Artemisia annua and Ocimum sanctum.", "title": "" }, { "docid": "34b3c5ee3ea466c23f5c7662f5ce5b33", "text": "Abstract - The concept of a super value node is developed to extend the theory of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessary to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by representing value function separability in the structure of the graph of the influence diagram, formulation is simplified and operations on the model can take advantage of the separability. From the decision analysis perspective, this allows simple exploitation of separability in the value function of a decision problem, which can significantly reduce memory and computation requirements. Importantly, 

this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunity for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They also allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.", "title": "" }, { "docid": "074567500751d814eef4ba979dc3cc8d", "text": "Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner’s predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms’ merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems,", "title": "" }, { "docid": "e22f9516948725be20d8e331d5bafa56", "text": "Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in automatic surveillance of electrical infrastructure. For an automatic vision-based power line inspection system, detecting power lines from a cluttered background is one of the most important and challenging tasks. In this paper, a novel method is proposed, specifically for power line detection from aerial images. A pulse coupled neural filter is developed to remove background noise and generate an edge map prior to the Hough transform being employed to detect straight lines. An improved Hough transform is used by performing knowledge-based line clustering in Hough space to refine the detection results. The experiment on real image data captured from a UAV platform demonstrates that the proposed approach is effective for automatic power line detection.", "title": "" }, { "docid": "70a0e3a8683f72cf65679740681e0d20", "text": "mathematical methods in artificial intelligence. Book lovers, when you need a new book to read, find the book here. Never worry not to find what you need. Is the mathematical methods in artificial intelligence your needed book now? That's true; you are really a good reader. This is a perfect book that comes from great author to share with you. The book offers the best experience and lesson to take, not only take, but also learn.", "title": "" }, { "docid": "dd4a95a6ffdb1a1c5c242b7a5d969d29", "text": "A microstrip antenna with frequency agility and polarization diversity is presented. Commercially available packaged RF microelectromechanical (MEMS) single-pole double-throw (SPDT) devices are used with a novel feed network to provide four states of polarization control: linear-vertical, linear-horizontal, left-hand circular and right-handed circular. 

Also, hyper-abrupt silicon junction tuning diodes are used to tune the antenna center frequency from 0.9-1.5 GHz. The microstrip antenna is 1 in x 1 in, and is fabricated on a 4 in x 4 in commercial-grade dielectric laminate. To the authors' knowledge, this is the first demonstration of an antenna element with four polarization states across a tunable bandwidth of 1.4:1.", "title": "" } ]
scidocsrr
ec8951758ac906219458a6f05a076222
Generation of THz wave with orbital angular momentum by graphene patch reflectarray
[ { "docid": "2943c046bae638a287ddaf72129bee0e", "text": "The use of graphene for fixed-beam reflectarray antennas at Terahertz (THz) is proposed. Graphene's unique electronic band structure leads to a complex surface conductivity at THz frequencies, which allows the propagation of very slow plasmonic modes. This leads to a drastic reduction of the electrical size of the array unit cell and thereby good array performance. The proposed reflectarray has been designed at 1.3 THz and comprises more than 25000 elements of size about λ0/16. The array reflective unit cell is analyzed using a full vectorial approach, taking into account the variation of the angle of incidence and assuming local periodicity. Good performance is obtained in terms of bandwidth, cross-polar, and grating lobes suppression, proving the feasibility of graphene-based reflectarrays and other similar spatially fed structures at Terahertz frequencies. This result is also a first important step toward reconfigurable THz reflectarrays using graphene electric field effect.", "title": "" } ]
[ { "docid": "b9538c45fc55caff8b423f6ecc1fe416", "text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.", "title": "" }, { "docid": "0bb270bfff12141bdc6daeb7415befd0", "text": "Community analysis algorithm proposed by Clauset, Newman, and Moore (CNM algorithm) finds community structure in social networks. Unfortunately, CNM algorithm does not scale well and its use is practically limited to networks whose sizes are up to 500,000 nodes. We show that this inefficiency is caused from merging communities in unbalanced manner and that a simple heuristics that attempts to merge community structures in a balanced manner can dramatically improve community structure analysis. The proposed techniques are tested using data sets obtained from existing social networking service that hosts 5.5 million users. We have tested three three variations of the heuristics. The fastest method processes a SNS friendship network with 1 million users in 5 minutes (70 times faster than CNM) and another friendship network with 4 million users in 35 minutes, respectively. Another one processes a network with 500,000 nodes in 50 minutes (7 times faster than CNM), finds community structures that has improved modularity, and scales to a network with 5.5 million.", "title": "" }, { "docid": "09c808f014ff9b93795a5e040b2ad7de", "text": "The Internet of Things (IoT) concept proposes that everyday objects are globally accessible from the Internet and integrate into new services having a remarkable impact on our society. 
Opposite to the Internet world, things usually belong to resource-challenged environments where energy, data throughput, and computing resources are scarce. Building upon existing standards in the field such as IEEE1451 and ZigBee and rooted in context semantics, this paper proposes CTP (Communication Things Protocol) as a protocol specification to allow interoperability among things with different communication standards as well as simplicity and functionality to build IoT systems. Also, this paper proposes the use of the IoT gateway as a fundamental component in IoT architectures to provide seamless connectivity and interoperability among things and connect two different worlds to build the IoT: the Things world and the Internet world. Both CTP and the IoT gateway constitute a middleware content-centric architecture presented as the mechanism to achieve a balance between the intrinsic limitations of things in the physical world and what is required from them in the virtual world. Said middleware content-centric architecture is implemented within the frame of two European projects targeting smart environments and proving said CTP's objectives in real scenarios.", "title": "" }, { "docid": "26cd0260e2a460ac5aa96466ff92f748", "text": "Deep Convolutional Neural Networks (CNNs) have demonstrated excellent performance in image classification, but still show room for improvement in object-detection tasks with many categories, in particular for cluttered scenes and occlusion. Modern detection algorithms like Regions with CNNs (Girshick et al., 2014) rely on Selective Search (Uijlings et al., 2013) to propose regions which with high probability represent objects, where in turn CNNs are deployed for classification. Selective Search represents a family of sophisticated algorithms that are engineered with multiple segmentation, appearance and saliency cues, typically coming with a significant runtime overhead. Furthermore, (Hosang et al., 2014) have shown that most methods suffer from low reproducibility due to unstable superpixels, even for slight image perturbations. Although CNNs are subsequently used for classification in top-performing object-detection pipelines, current proposal methods are agnostic to how these models parse objects and their rich learned representations. As a result they may propose regions which may not resemble high-level objects or totally miss some of them. To overcome these drawbacks we propose a boosting approach which directly takes advantage of hierarchical CNN features for detecting regions of interest fast. We demonstrate its performance on ImageNet 2013 detection benchmark and compare it with state-of-the-art methods. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.", "title": "" }, { "docid": "a3735cc40727de4016ee29f6a29d578f", "text": "By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performances than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. 

In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example to show how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models with an equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias, and in its feasibility for use on mobile devices.", "title": "" }, { "docid": "014759efa636aec38aa35287b61e44a4", "text": "Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode collapsing problem, the stop node of training should be determined when SO-GAAL is able to provide sufficient information. But without any prior information, it is extremely difficult for SO-GAAL. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or high irrelevant variable ratio. The experiment codes are available at: https://github.com/leibinghe/GAAL-based-outlier-detection", "title": "" }, { "docid": "9b1a7f811d396e634e9cc5e34a18404e", "text": "We introduce a novel colorization framework for old black-and-white cartoons which were originally produced by a cel or paper based technology. In this case the dynamic part of the scene is represented by a set of outlined homogeneous regions that superimpose static background. To reduce a large amount of manual intervention we combine unsupervised image segmentation, background reconstruction and structural prediction. Our system in addition allows the user to specify the brightness of applied colors, unlike most previous approaches which operate only with hue and saturation. 

We also present simple but effective color modulation, composition and dust spot removal techniques able to produce color images in broadcast quality without additional user intervention.", "title": "" }, { "docid": "164a1246119f8e7c230864ac5300da60", "text": "Department of Computer Engineering, Smt. Kashibai Navale College of Engineering, Pune. ABSTRACT - In today's world, social networking platforms such as Instagram, Facebook, Google+, etc., have created a boon in our humanitarian society [1]. Along with these social networking platforms there comes a great responsibility of handling user privacy as well as user data. In most of these websites, data is stored on a centralized system called the server [1]. The whole system crashes down if the server goes down. One of the solutions for this problem is to use a decentralized system. Decentralized applications work on Blockchain. A Blockchain is a group of blocks connected sequentially to each other. Blockchains are designed so that transactions remain immutable, i.e., unchanged, and hence provide security. The data can be distributed and no one can tamper with that data. This paper presents a decentralized social media photo sharing web application which is based on blockchain technology, where the user would be able to view, like, comment on, and share photos shared by different users.", "title": "" }, { "docid": "129a42c825850acd12b2f90a0c65f4ea", "text": "Vertical fractures in teeth can present difficulties in diagnosis. There are, however, many specific clinical and radiographical signs which, when present, can alert clinicians to the existence of a fracture. In this review, the diagnosis of vertical root fractures is discussed in detail, and examples are presented of clinical and radiographic signs associated with these fractured teeth. Treatment alternatives are discussed for both posterior and anterior teeth.", "title": "" }, { "docid": "6557347e1c0ebf014842c9ae2c77dbed", "text": "ABSTRACT - Steganography is derived from the Greek word steganos which literally means “Covered” and graphy means “Writing”, i.e. covered writing. Steganography refers to the science of “invisible” communication. For hiding secret information in various file formats, there exists a large variety of steganographic techniques; some are more complex than others and all of them have respective strong and weak points. The Least Significant Bit (LSB) embedding technique suggests that data can be hidden in the least significant bits of the cover image and the human eye would be unable to notice the hidden image in the cover file. This technique can be used for hiding images in 24-Bit, 8-Bit, Gray scale format. This paper explains the LSB Embedding technique and presents the evaluation for various file formats.", "title": "" }, { "docid": "da27ccc6467cd913a7a5124c5e08c6f4", "text": "The aggressive optimization of heavily used kernels is an important problem in high-performance computing. However, both general purpose compilers and highly specialized tools such as superoptimizers often do not have sufficient static knowledge of restrictions on program inputs that could be exploited to produce the very best code. 

For many applications, the best possible code is conditionally correct: the optimized kernel is equal to the code that it replaces only under certain preconditions on the kernel's inputs. The main technical challenge in producing conditionally correct optimizations is in obtaining non-trivial and useful conditions and proving conditional equivalence formally in the presence of loops. We combine abstract interpretation, decision procedures, and testing to yield a verification strategy that can address both of these problems. This approach yields a superoptimizer for x86 that in our experiments produces binaries that are often multiple times faster than those produced by production compilers.", "title": "" }, { "docid": "b266a1490455f8a1708471bf7069f7e9", "text": "Stevia rebaudiana, a perennial herb from the Asteraceae family, is known to the scientific world for its sweetness and steviol glycosides (SGs). SGs are the secondary metabolites responsible for the sweetness of Stevia. They are synthesized by SG biosynthesis pathway operating in the leaves. Most of the genes encoding the enzymes of this pathway have been cloned and characterized from Stevia. Out of various SGs, stevioside and rebaudioside A are the major metabolites. SGs including stevioside have also been synthesized by enzymes and microbial agents. These are non-mutagenic, non-toxic, antimicrobial, and do not show any remarkable side-effects upon consumption. Stevioside has many medical applications and its role against diabetes is most important. SGs have made Stevia an important part of the medicinal world as well as the food and beverage industry. This article presents an overview on Stevia and the importance of SGs.", "title": "" }, { "docid": "862641bf4c8efa627cd38a1fd5b561dc", "text": "WeChat is the largest acquaintance social networking platform in China, which has about 938 million monthly active user accounts. WeChat Moments, known as Friends Circle, serves social networking functions in which users can view information shared by friends. This paper addresses the problem of analyzing the patterns of cascading behavior in WeChat Moments. We obtain 229021 information cascades from WeChat Moments, in which more than 5 million users are involved during 45 days. We analyze these cascades from four aspects to understand the patterns of cascading behavior in WeChat Moments, including the patterns of diffusion structure, temporal dynamic, spatial dynamic and user behavior. In addition, the correlations between these patterns are examined. Our findings contribute to promoting products, predicting and even regulating public opinion.", "title": "" }, { "docid": "95be4f5132cde3c637c5ee217b5c8405", "text": "In recent years, information communication and computation technologies are deeply converging, and various wireless access technologies have been successful in deployment. It can be predicted that the upcoming fifthgeneration mobile communication technology (5G) can no longer be defined by a single business model or a typical technical characteristic. 5G is a multi-service and multitechnology integrated network, meeting the future needs of a wide range of big data and the rapid development of numerous businesses, and enhancing the user experience by providing smart and customized services. 
In this paper, we propose a cloud-based wireless network architecture with four components, i.e., mobile cloud, cloud-based radio access network (Cloud RAN), reconfigurable network and big data centre, which is capable of providing a virtualized, reconfigurable, smart wireless network.", "title": "" }, { "docid": "ba57149e82718bad622df36852906531", "text": "The classical psychedelic drugs, including psilocybin, lysergic acid diethylamide and mescaline, were used extensively in psychiatry before they were placed in Schedule I of the UN Convention on Drugs in 1967. Experimentation and clinical trials undertaken prior to legal sanction suggest that they are not helpful for those with established psychotic disorders and should be avoided in those liable to develop them. However, those with so-called 'psychoneurotic' disorders sometimes benefited considerably from their tendency to 'loosen' otherwise fixed, maladaptive patterns of cognition and behaviour, particularly when given in a supportive, therapeutic setting. Pre-prohibition studies in this area were sub-optimal, although a recent systematic review in unipolar mood disorder and a meta-analysis in alcoholism have both suggested efficacy. The incidence of serious adverse events appears to be low. Since 2006, there have been several pilot trials and randomised controlled trials using psychedelics (mostly psilocybin) in various non-psychotic psychiatric disorders. These have provided encouraging results that provide initial evidence of safety and efficacy, however the regulatory and legal hurdles to licensing psychedelics as medicines are formidable. This paper summarises clinical trials using psychedelics pre and post prohibition, discusses the methodological challenges of performing good quality trials in this area and considers a strategic approach to the legal and regulatory barriers to licensing psychedelics as a treatment in mainstream psychiatry. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.", "title": "" }, { "docid": "2665314258f4b7f59a55702166f59fcc", "text": "In this paper, a wireless power transfer system with magnetically coupled resonators is studied. The idea to use metamaterials to enhance the coupling coefficient and the transfer efficiency is proposed and analyzed. With numerical calculations of a system with and without metamaterials, we show that the transfer efficiency can be improved with metamaterials.", "title": "" }, { "docid": "b387d7b1f17cdbca1260ef25fe4448bc", "text": "This paper derives the transfer function from error voltage to duty cycle, which captures the quasi-digital behavior of the closed-current loop for pulsewidth modulated (PWM) dc-dc converters operating in continuous-conduction mode (CCM) using peak current-mode (PCM) control, the current-loop gain, the transfer function from control voltage to duty cycle (closed-current loop transfer function), and presents experimental verification. The sample-and-hold effect, or quasi-digital (discrete) behavior in the current loop with constant-frequency PCM in PWM dc-dc converters is described in a manner consistent with the physical behavior of the circuit. Using control theory, a transfer function from the error voltage to the duty cycle that captures the quasi-digital behavior is derived. 
This transfer function has a pole that can be in either the left-half plane or right-half plane, and captures the sample-and-hold effect accurately, enabling the characterization of the current-loop gain and closed-current loop for PWM dc-dc converters with PCM. The theoretical and experimental response results were in excellent agreement, confirming the validity of the transfer functions derived. The closed-current loop characterization can be used for the design of a controller for the outer voltage loop.", "title": "" }, { "docid": "9bcf45278e391a6ab9a0b33e93d82ea9", "text": "Non-orthogonal multiple access (NOMA) is a potential enabler for the development of 5G and beyond wireless networks. By allowing multiple users to share the same time and frequency, NOMA can scale up the number of served users, increase spectral efficiency, and improve user-fairness compared to existing orthogonal multiple access (OMA) techniques. While single-cell NOMA has drawn significant attention recently, much less attention has been given to multi-cell NOMA. This article discusses the opportunities and challenges of NOMA in a multi-cell environment. As the density of base stations and devices increases, inter-cell interference becomes a major obstacle in multi-cell networks. As such, identifying techniques that combine interference management approaches with NOMA is of great significance. After discussing the theory behind NOMA, this article provides an overview of the current literature and discusses key implementation and research challenges, with an emphasis on multi-cell NOMA.", "title": "" }, { "docid": "38f30f6070b7ca3abca54d50cba88c31", "text": "Dengue virus produces a mild acute febrile illness, dengue fever (DF) and a severe illness, dengue hemorrhagic fever (DHF). The characteristic feature of DHF is increased capillary permeability leading to extensive plasma leakage in serous cavities resulting in shock. The pathogenesis of DHF is not fully understood. This paper presents a cascade of cytokines, that in our view, may lead to DHF. The main feature is the early generation of a unique cytokine, human cytotoxic factor (hCF) that initiates a series of events leading to a shift from Th1-type response in mild illness to a Th2-type response resulting in severe DHF. The shift from Th1 to Th2 is regulated by the relative levels of interferon-gamma and interleukin (IL)-10 and between IL-12 and transforming growth factor-beta, which showed an inverse relationship in patients with DF.", "title": "" }, { "docid": "d9605c1cde4c40d69c2faaea15eb466c", "text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.", "title": "" } ]
scidocsrr
c2a9299362a4c90a08009361b2feec36
Sparse coding based visual tracking: Review and experimental comparison
[ { "docid": "7fdb4e14a038b11bb0e92917d1e7ce70", "text": "Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of coding residual. Such a sparse coding model actually assumes that the coding residual follows Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely the robust sparse coding (RSC), by modeling the sparse coding as a sparsity-constrained robust regression problem. The RSC seeks for the MLE (maximum likelihood estimation) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, lighting and expression changes, etc.", "title": "" }, { "docid": "483ab105bfe99c867690891f61bb0336", "text": "In this paper we propose a robust object tracking algorithm using a collaborative model. As the main challenge for object tracking is to account for drastic appearance change, we propose a robust appearance model that exploits both holistic templates and local representations. We develop a sparsity-based discriminative classifier (SD-C) and a sparsity-based generative model (SGM). In the S-DC module, we introduce an effective method to compute the confidence value that assigns more weights to the foreground than the background. In the SGM module, we propose a novel histogram-based method that takes the spatial information of each patch into consideration with an occlusion handing scheme. Furthermore, the update scheme considers both the latest observations and the original template, thereby enabling the tracker to deal with appearance change effectively and alleviate the drift problem. Numerous experiments on various challenging videos demonstrate that the proposed tracker performs favorably against several state-of-the-art algorithms.", "title": "" }, { "docid": "42bad17aa74d4dc972b48f054656de48", "text": "We present a method for learning image representations using a two-layer sparse coding scheme at the pixel level. The first layer encodes local patches of an image. After pooling within local regions, the first layer codes are then passed to the second layer, which jointly encodes signals from the region. Unlike traditional sparse coding methods that encode local patches independently, this approach accounts for high-order dependency among patterns in a local image neighborhood. We develop algorithms for data encoding and codebook learning, and show in experiments that the method leads to more invariant and discriminative image representations. The algorithm gives excellent results for hand-written digit recognition on MNIST and object recognition on the Caltech101 benchmark. This marks the first time that such accuracies have been achieved using automatically learned features from the pixel level, rather than using hand-designed descriptors.", "title": "" } ]
[ { "docid": "000bdac12cd4254500e22b92b1906174", "text": "In this paper we address the topic of generating automatically accurate, meaning preserving and syntactically correct paraphrases of natural language sentences. The design of methods and tools for paraphrasing natural language text is a core task of natural language processing and is quite useful in many applications and procedures. We present a methodology and a tool developed that performs deep analysis of natural language sentences and generate paraphrases of them. The tool performs deep analysis of the natural language sentence and utilizes sets of paraphrasing techniques that can be used to transform structural parts of the dependency tree of a sentence to an equivalent form and also change sentence words with their synonyms and antonyms. In the evaluation study the performance of the method is examined and the accuracy of the techniques is assessed in terms of syntactic correctness and meaning preserving. The results collected are very promising and show the method to be accurate and able to generate quality paraphrases.", "title": "" }, { "docid": "10994a99bb4da87a34d835720d005668", "text": "Wireless sensor networks (WSNs), consisting of a large number of nodes to detect ambient environment, are widely deployed in a predefined area to provide more sophisticated sensing, communication, and processing capabilities, especially concerning the maintenance when hundreds or thousands of nodes are required to be deployed over wide areas at the same time. Radio frequency identification (RFID) technology, by reading the low-cost passive tags installed on objects or people, has been widely adopted in the tracing and tracking industry and can support an accurate positioning within a limited distance. Joint utilization of WSN and RFID technologies is attracting increasing attention within the Internet of Things (IoT) community, due to the potential of providing pervasive context-aware applications with advantages from both fields. WSN-RFID convergence is considered especially promising in context-aware systems with indoor positioning capabilities, where data from deployed WSN and RFID systems can be opportunistically exploited to refine and enhance the collected data with position information. In this papera, we design and evaluate a hybrid system which combines WSN and RFID technologies to provide an indoor positioning service with the capability of feeding position information into a general-purpose IoT environment. Performance of the proposed system is evaluated by means of simulations and a small-scale experimental set-up. The performed analysis demonstrates that the joint use of heterogeneous technologies can increase the robustness and the accuracy of the indoor positioning systems.", "title": "" }, { "docid": "99b5e24ed06352ab52d31165682248db", "text": "In recent years, the study of radiation pattern reconfigurable antennas has made great progress. Radiation pattern reconfigurable antennas have more advantages and better prospects compared with conventional antennas. They can be used to avoid noisy environments, maneuver away from electronic jamming, improve system gain and security, save energy by directing signals only towards the intended direction, and increase the number of subscribers by having a broad pattern in the wireless communication system. The latest researches of the radiation pattern reconfigurable antennas are analyzed and summarized in this paper to present the characteristics and classification. 
The trend of radiation pattern reconfigurable antennas' development is given at the end of the paper.", "title": "" }, { "docid": "12b9f018386e374acf8132979e00d831", "text": "With the progress of science and technology, wheeled robots have been widely used in various fields to help people complete tasks. However, when performing a task, the robot and the object could be damaged if the robot collides with the object due to negligence of the operator. Therefore, the safety issue is the most important consideration. This paper presents a multifunction system, including an Arduino controller, rotary ultrasonic obstacle avoidance and a WIFI control system. Red warning LEDs and buzzers on the fuselage will alert operators in time of danger and feed warning information back to the phone. The system is proved to be feasible by experiment.", "title": "" }, { "docid": "ec2872e944aefd3fb667ff2072e38c95", "text": "Searching for relevant documents from a vast amount of scientific data is a challenging problem that requires a close interaction between the user, the search interface and the search engine. This extended abstract summarizes recent research on Intent Radar, an interactive search user interface that allows the user to directly interact with her estimated search intent, and in this way direct the search in an intuitive way without the need to type specific queries. In user experiments, Intent Radar improves task performance and quality of retrieved information without compromising the task execution time. This workshop paper presents to the ICML Crowdsourcing and Human Computing workshop audience our work recently published in CIKM 2013 and IUI 2013 as well as a short discussion of ongoing work on the topic. ICML ’14 Workshop: Crowdsourcing and Human Computing, Beijing, China, 2014. Copyright 2014 by the author(s).", "title": "" }, { "docid": "cf38afa95362a4a86d88787fbf3d91ef", "text": "Pneumatic muscles with similar characteristics to biological muscles have been widely used in robots, and thus are promising drivers for frog-inspired robots. However, the application and nonlinearity of the pneumatic system limit the advance. On the basis of the swimming mechanism of the frog, a frog-inspired robot based on pneumatic muscles is developed. To enable the robot to perform tasks independently, a pneumatic system with internal chambers, a micro air pump, and valves is implemented. The micro pump is used to maintain the pressure difference between the source and exhaust chambers. The pneumatic muscles are controlled by high-speed switch valves, which can reduce the robot's cost, volume, and mass. A dynamic model of the pneumatic system is established for the simulation to estimate the system, including the chamber, muscle, and pneumatic circuit models. The robot design is verified by the robot swimming experiments and the dynamic model is verified through the experiments and simulations of the pneumatic system. The simulation results are compared to analyze the functions of the source pressure, internal volume of the muscle, and circuit flow rate, which is proven to be the main factor that limits the response of the muscle pressure. 

The proposed research demonstrates the application of pneumatic muscles in the frog-inspired robot and provides a pneumatic model for studying the muscle controller.", "title": "" }, { "docid": "e2d63fece5536aa4668cd5027a2f42b9", "text": "To ensure integrity, trust, immutability and authenticity of software and information (cyber data, user data and attack event data) in a collaborative environment, research is needed for cross-domain data communication, global software collaboration, sharing, access auditing and accountability. Blockchain technology can significantly automate the software export auditing and tracking processes. It allows tracking and controlling what data or software components are shared between entities across multiple security domains. Our blockchain-based solution relies on role-based and attribute-based access control and prevents unauthorized data accesses. It guarantees integrity of provenance data on who updated what software module and when. Furthermore, our solution detects data leakages, made behind the scenes by authorized blockchain network participants, to unauthorized entities. Our approach is used for data forensics/provenance, when the identity of those entities who have accessed/updated/transferred the sensitive cyber data or sensitive software is determined. All the transactions in the global collaborative software development environment are recorded in the blockchain public ledger and can be verified any time in the future. Transactions cannot be repudiated by invokers. We also propose a modified transaction validation procedure to improve performance and to protect permissioned IBM Hyperledger-based blockchains from DoS attacks caused by bursts of invalid transactions.", "title": "" }, { "docid": "effd296da8b20f02658ddb2eb6210fc1", "text": "Multimegawatt wind-turbine systems, often organized in a wind park, are the backbone of the power generation based on renewable-energy systems. This paper reviews the most-adopted wind-turbine systems, the adopted generators, the topologies of the converters, the generator control and grid connection issues, as well as their arrangement in wind parks.", "title": "" }, { "docid": "adc2a84e58fb00ccf85828b2d6e7c7cd", "text": "Noise is an unavoidable part of signal processing that we encounter every day. The study of reducing noise arises from the need to achieve stronger signal-to-noise ratios. Noise is any unwanted disturbance that hampers the desired response while keeping the source sound. The different sources may include speech, music played through a device such as a mobile phone, iPod or computer, or no sound at all. Active noise cancellation involves creating a supplementary signal that destructively interferes with the output ambient noise. The cancellation of noise can be efficiently accomplished by using adaptive algorithms. An adaptive filter is one that self-adjusts the coefficients of its transfer function according to an algorithm driven by an error signal. The adaptive filter uses feedback in the form of an error signal to define its transfer function to match changing parameters. The adaptive filtering techniques can be used for a wide range of applications, including echo cancellation, adaptive channel equalization, adaptive line enhancer, and adaptive beam forming. In the last few years, a lot of algorithms have been developed for eradicating the distortion from the signals. 

This paper presents an analysis of two algorithms, namely Least Mean Square (LMS) and Normalized Least Mean Square (NLMS), and gives a comparative study on various governing factors such as stability, computational complexity, filter order, robustness and rate of convergence. It further presents the effect of the error with alteration in the amplitude of the noise signal while keeping the reference signal and desired signal fixed. The algorithms are developed in MATLAB. Keywords: Anti-noise, Adaptive filter, LMS, NLMS, Rate of convergence, Noise cancellation, Filter", "title": "" }, { "docid": "242c5d237b2bca8b6008e4c9a2196322", "text": "In recent years, a growing number of occupational therapists have integrated video game technologies, such as the Nintendo Wii, into rehabilitation programs. 'Wiihabilitation', or the use of the Wii in rehabilitation, has been successful in increasing patients' motivation and encouraging full body movement. The non-rehabilitative focus of Wii applications, however, presents a number of problems: games are too difficult for patients, they mainly target upper-body gross motor functions, and they lack support for task customization, grading, and quantitative measurements. To overcome these problems, we have designed a low-cost, virtual-reality based system. Our system, Virtual Wiihab, records performance and behavioral measurements, allows for activity customization, and uses auditory, visual, and haptic elements to provide extrinsic feedback and motivation to patients.", "title": "" }, { "docid": "2e9d6ad38bd51fbd7af165e4b9262244", "text": "BACKGROUND\nThe assessment of blood lipids is very frequent in clinical research as it is assumed to reflect the lipid composition of peripheral tissues. Even though well accepted, such relationships have never been clearly established. This is particularly true in ophthalmology where the use of blood lipids has become very common following recent data linking lipid intake to ocular health and disease. In the present study, we wanted to determine in humans whether a lipidomic approach based on red blood cells could reveal associations between circulating and tissue lipid profiles. To check if the analytical sensitivity may be of importance in such analyses, we have used a double approach for lipidomics.\n\n\nMETHODOLOGY AND PRINCIPAL FINDINGS\nRed blood cells, retinas and optic nerves were collected from 9 human donors. The lipidomic analyses on tissues consisted of gas chromatography and liquid chromatography coupled to an electrospray ionization source-mass spectrometer (LC-ESI-MS). Gas chromatography did not reveal any relevant association between circulating and ocular fatty acids except for arachidonic acid whose circulating amounts were positively associated with its levels in the retina and in the optic nerve. In contrast, several significant associations emerged from LC-ESI-MS analyses. Particularly, lipid entities in red blood cells were positively or negatively associated with representative pools of retinal docosahexaenoic acid (DHA), retinal very-long chain polyunsaturated fatty acids (VLC-PUFA) or optic nerve plasmalogens.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nLC-ESI-MS is more appropriate than gas chromatography for lipidomics on red blood cells, and further extrapolation to ocular lipids. The several individual lipid species we have identified are good candidates to represent circulating biomarkers of ocular lipids. 

However, further investigation is needed before considering them as indexes of disease risk and before using them in clinical studies on optic nerve neuropathies or retinal diseases displaying photoreceptors degeneration.", "title": "" }, { "docid": "eed8ebf50451614b14dc9e23c603b4bc", "text": "Digital halftoning remains an active area of research with a plethora of new and enhanced methods. While several fine overviews exist, this purpose of this paper is to review retrospectively the basic classes of techniques. Halftoning algorithms are presented by the nature of the appearance of resulting patterns, including white noise, recursive tessellation, the classical screen, and blue noise. The metric of radially averaged power spectra is reviewed, and special attention is paid to frequency domain characteristics. The paper concludes with a look at the components that comprise a complete image rendering system. In particular when the number of output levels is not restricted to be a power of 2. A very efficient means of multilevel dithering is presented based on scaling order-dither arrays. The case of real-time video rendering is considered where the YUV-to-RGB conversion is incorporated in the dithering system. Example illustrations are included for each of the techniques described.", "title": "" }, { "docid": "c17650cb6a46ba3ef192345c99e7e6b6", "text": "Traffic dynamics are often modeled by complex dynamical systems for which classical analysis tools can struggle to provide tractable policies used by transportation agencies and planners. In light of the introduction of automated vehicles into transportation systems, there is a new need for understanding the impacts of automation on transportation networks. The present article formulates and approaches the mixed-autonomy traffic control problem (where both automated and human-driven vehicles are present) using the powerful framework of deep reinforcement learning (RL). The resulting policies and emergent behaviors in mixed-autonomy traffic settings provide insight for the potential for automation of traffic through mixed fleets of automated and manned vehicles. Modelfree learning methods are shown to naturally select policies and behaviors previously designed by model-driven approaches, such as stabilization and platooning, known to improve ring road efficiency and to even exceed a theoretical velocity limit. Remarkably, RL succeeds at maximizing velocity by effectively leveraging the structure of the human driving behavior to form an efficient vehicle spacing for an intersection network. We describe our results in the context of existing control theoretic results for stability analysis and mixed-autonomy analysis. This article additionally introduces state equivalence classes to improve the sample complexity for the learning methods.", "title": "" }, { "docid": "790de0f792c81b9e26676f800e766759", "text": "The ubiquity of online fashion shopping demands effective recommendation services for customers. In this paper, we study two types of fashion recommendation: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images/text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fashion. 
More specifically, we consider a fashion outfit to be a sequence (usually from top to bottom and then accessories) and each item in the outfit as a time step. Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. Further, we learn a visual-semantic space by regressing image features to their semantic representations aiming to inject attribute and category information as a regularization for training the LSTM. The trained network can not only perform the aforementioned recommendations effectively but also predict the compatibility of a given outfit. We conduct extensive experiments on our newly collected Polyvore dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.", "title": "" }, { "docid": "b1da294b1d8f270cb2bfe0074231209e", "text": "The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dictos. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them.", "title": "" }, { "docid": "a804d188b4fd2b89efaf072d96ef1023", "text": "Current state-of-the-art sports statistics compare players and teams to league average performance. For example, metrics such as “Wins-above-Replacement” (WAR) in baseball [1], “Expected Point Value” (EPV) in basketball [2] and “Expected Goal Value” (EGV) in soccer [3] and hockey [4] are now commonplace in performance analysis. 
Such measures allow us to answer the question “how does this player or team compare to the league average?” Even “personalized metrics” which can answer how a “player’s or team’s current performance compares to its expected performance” have been used to better analyze and improve prediction of future outcomes [5].", "title": "" }, { "docid": "9b1cf7cb855ba95693b90efacc34ac6d", "text": "Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.", "title": "" }, { "docid": "85566c0da230598e4e3ec3d5428fdac3", "text": "Babesiosis is a tick-borne disease of cattle caused by the protozoan parasites. The causative agents of Babesiosis are specific for particular species of animals. In cattle: B. bovis and B. bigemina are the common species involved in babesiosis. Rhipicephalus (Boophilus) spp., the principal vectors of B. bovis and B. bigemina, are widespread in tropical and subtropical countries. Babesia multiplies in erythrocytes by asynchronous binary fission, resulting in considerable pleomorphism. Babesia produces acute disease by two principle mechanism; hemolysis and circulatory disturbance. Affected animals suffered from marked rise in body temperature, loss of appetite, cessation of rumination, labored breathing, emaciation, progressive hemolytic anemia, various degrees of jaundice (Icterus). Lesions include an enlarged soft and pulpy spleen, a swollen liver, a gall bladder distended with thick granular bile, congested dark-coloured kidneys and generalized anemia and jaundice. The disease can be diagnosis by identification of the agent by using direct microscopic examination, nucleic acid-based diagnostic assays, in vitro culture and animal inoculation as well as serological tests like indirect fluorescent antibody, complement fixation and Enzyme-linked immunosorbent assays tests. Babesiosis occurs throughout the world. However, the distribution of the causative protozoa is governed by the geographical and seasonal distribution of the insect vectors. Recently Babesia becomes the most widespread parasite due to exposure of 400 million cattle to infection through the world, with consequent heavy economic losses such as mortality, reduction in meat and milk yield and indirectly through control measures of ticks. Different researches conducted in Ethiopia reveal the prevalence of the disease in different parts of the country. The most commonly used compounds for the treatment of babesiosis are diminazene diaceturate, imidocarb, and amicarbalide. 
Active prevention and control of Babesiosis is achieved by three main methods: immunization, chemoprophylaxis and vector control.", "title": "" }, { "docid": "950759f015897a7e3e4948f736788c76", "text": "The characterization of complex air traffic situations is an important issue in air traffic management (ATM). Within the current ground-based ATM system, complexity metrics have been introduced with the goal of evaluating the difficulty experienced by air traffic controllers in guaranteeing the appropriate aircraft separation in a sector. The rapid increase in air travel demand calls for new generation ATM systems that can safely and efficiently handle higher levels of traffic. To this purpose, part of the responsibility for separation maintenance will be delegated to the aircraft, and trajectory management functions will be further automated and distributed. The evolution toward an autonomous aircraft framework envisages new tasks where assessing complexity may be valuable and requires a whole new perspective in the definition of suitable complexity metrics. This paper presents a critical analysis of the existing approaches for modeling and predicting air traffic complexity, examining their portability to autonomous ATM systems. Possible applications and related requirements will be discussed.", "title": "" }, { "docid": "d4896aa12be18aea9a6639422ee12d92", "text": "Recently, tag recommendation (TR) has become a very hot research topic in data mining and related areas. However, neither co-occurrence based methods which only use the item-tag matrix nor content based methods which only use the item content information can achieve satisfactory performance in real TR applications. Hence, how to effectively combine the item-tag matrix, item content information, and other auxiliary information into the same recommendation framework is the key challenge for TR. In this paper, we first adapt the collaborative topic regression (CTR) model, which has been successfully applied for article recommendation, to combine both item-tag matrix and item content information for TR. Furthermore, by extending CTR we propose a novel hierarchical Bayesian model, called CTR with social regularization (CTR-SR), to seamlessly integrate the item-tag matrix, item content information, and social networks between items into the same principled model. Experiments on real data demonstrate the effectiveness of our proposed models.", "title": "" } ]
scidocsrr
898db191ed140cce001a89574c1ce0f2
A Case Study for Grain Quality Assurance Tracking based on a Blockchain Business Network
[ { "docid": "ce9487df62f75872d7111a26972feca7", "text": "In this chapter we provide an overview of the concept of blockchain technology and its potential to disrupt the world of banking through facilitating global money remittance, smart contracts, automated banking ledgers and digital assets. In this regard, we first provide a brief overview of the core aspects of this technology, as well as the second-generation contract-based developments. From there we discuss key issues that must be considered in developing such ledger based technologies in a banking context.", "title": "" }, { "docid": "930b48ac25cb646322406c98bf0ae383", "text": "The core technology of Bitcoin, the blockchain, has recently emerged as a disruptive innovation with a wide range of applications, potentially able to redesign our interactions in business, politics and society at large. Although scholarly interest in this subject is growing, a comprehensive analysis of blockchain applications from a political perspective is severely lacking to date. This paper aims to fill this gap and it discusses the key points of blockchain-based decentralized governance, which challenges to varying degrees the traditional mechanisms of State authority, citizenship and democracy. In particular, the paper verifies to which extent blockchain and decentralization platforms can be considered as hyper-political tools, capable to manage social interactions on large scale and dismiss traditional central authorities. The analysis highlights risks related to a dominant position of private powers in distributed ecosystems, which may lead to a general disempowerment of citizens and to the emergence of a stateless global society. While technological utopians urge the demise of any centralized institution, this paper advocates the role of the State as a necessary central point of coordination in society, showing that decentralization through algorithm-based consensus is an organizational theory, not a stand-alone political theory.", "title": "" } ]
[ { "docid": "c81967de1aee76b9937cbdcba3e07996", "text": "The combination of strength (ST) and plyometric training (PT) has been shown to be effective for improving sport-specific performance. However, there is no consensus about the most effective way to combine these methods in the same training session to produce greater improvements in neuromuscular performance of soccer players. Thus, the purpose of this study was to compare the effects of different combinations of ST and PT sequences on strength, jump, speed, and agility capacities of elite young soccer players. Twenty-seven soccer players (age: 18.9 ± 0.6 years) participated in an 8-week resistance training program and were divided into 3 groups: complex training (CP) (ST before PT), traditional training (TD) (PT before ST), and contrast training (CT) (ST and PT performed alternately, set by set). The experimental design took place during the competitive period of the season. The ST composed of half-squat exercises performed at 60-80% of 1 repetition maximum (1RM); the PT composed of drop jump exercises executed in a range from 30 to 45 cm. After the experimental period, the maximum dynamic strength (half-squat 1RM) and vertical jump ability (countermovement jump height) increased similarly and significantly in the CP, TD, and CT (48.6, 46.3, and 53% and 13, 14.2, and 14.7%, respectively). Importantly, whereas the TD group presented a significant decrease in sprinting speed in 10 (7%) and 20 m (6%), the other groups did not show this response. Furthermore, no significant alterations were observed in agility performance in any experimental group. In conclusion, in young soccer players, different combinations and sequences of ST and PT sets result in similar performance improvements in muscle strength and jump ability. However, it is suggested that the use of the CP and CT methods is more indicated to maintain/maximize the sprint performance of these athletes.", "title": "" }, { "docid": "fab72d1223fa94e918952b8715e90d30", "text": "A novel wideband crossed dipole loaded with four parasitic elements is investigated in this letter. The printed crossed dipole is incorporated with a pair of vacant quarter rings to feed the antenna. The antenna is backed by a metallic plate to provide an unidirectional radiation pattern with a wide axial-ratio (AR) bandwidth. To verify the proposed design, a prototype is fabricated and measured. The final design with an overall size of $0.46\\ \\lambda_{0}\\times 0.46\\ \\lambda_{0}\\times 0.23\\ \\lambda_{0} (> \\lambda_{0}$ is the free-space wavelength of circularly polarized center frequency) yields a 10-dB impedance bandwidth of approximately 62.7% and a 3-dB AR bandwidth of approximately 47.2%. In addition, the proposed antenna has a stable broadside gain of 7.9 ± 0.5 dBi within passband.", "title": "" }, { "docid": "558b2036fb15953743f8477fd5e4a138", "text": "According to recent estimates, about 90% of consumer received emails are machine-generated. Such messages include shopping receipts, promotional campaigns, newsletters, booking confirmations, etc. Most such messages are created by populating a fixed template with a small amount of personalized information, such as name, salutation, reservation numbers, dates, etc. Web mail providers (Gmail, Hotmail, Yahoo) are leveraging the structured nature of such emails to extract salient information and use it to improve the user experience: e.g. by automatically entering reservation data into a user calendar, or by sending alerts about upcoming shipments. 
To facilitate these extraction tasks it is helpful to classify templates according to their category, e.g. restaurant reservations or bill reminders, since each category triggers a particular user experience. Recent research has focused on discovering the causal thread of templates, e.g. inferring that a shopping order is usually followed by a shipping confirmation, an airline booking is followed by a confirmation and then by a “ready to check in” message, etc. Gamzu et al. took this idea one step further by implementing a method to predict the template category of future emails for a given user based on previously received templates. The motivation is that predicting future emails has a wide range of potential applications, including better user experiences (e.g. warning users of items ordered but not shipped), targeted advertising (e.g. users that recently made a flight reservation may be interested in hotel reservations), and spam classification (a message that is part of a legitimate causal thread is unlikely to be spam). The gist of the Gamzu et al. approach is modeling the problem as a Markov chain, where the nodes are templates or temporal events (e.g. the first day of the month). This paper expands on their work by investigating the use of neural networks for predicting the category of emails that will arrive during a fixed-sized time window in the future. We consider two types of neural networks: multilayer perceptrons (MLP), a type of feedforward neural network; and long short-term memory (LSTM), a type of recurrent neural network. For each type of neural network, we explore the effects The work was completed at Google Research. c ©2017 International World Wide Web Conference Committee (IW3C2), published under Creative Commons CC-BY-NC-ND 2.0 License. WWW 2017 Companion,, April 3–7, 2017, Perth, Austraila. ACM 978-1-4503-4914-7/17/04. http://dx.doi.org/10.1145/3041021.3055166 of varying their configuration (e.g. number of layers or number of neurons) and hyper-parameters (e.g. drop-out ratio). We find that the prediction accuracy of neural networks vastly outperforms the Markov chain approach, and that LSTMs perform slightly better than MLPs. We offer some qualitative interpretation of our findings and identify some promising future directions.", "title": "" }, { "docid": "a45b4d0237fdcfedf973ec639b1a1a36", "text": "We investigated the brain systems engaged during propositional speech (PrSp) and two forms of non- propositional speech (NPrSp): counting and reciting overlearned nursery rhymes. Bilateral cerebral and cerebellar regions were involved in the motor act of articulation, irrespective of the type of speech. Three additional, left-lateralized regions, adjacent to the Sylvian sulcus, were activated in common: the most posterior part of the supratemporal plane, the lateral part of the pars opercularis in the posterior inferior frontal gyrus and the anterior insula. Therefore, both NPrSp and PrSp were dependent on the same discrete subregions of the anatomically ill-defined areas of Wernicke and Broca. PrSp was also dependent on a predominantly left-lateralized neural system distributed between multi-modal and amodal regions in posterior inferior parietal, anterolateral and medial temporal and medial prefrontal cortex. 
The lateral prefrontal and paracingulate cortical activity observed in previous studies of cued word retrieval was not seen with either NPrSp or PrSp, demonstrating that normal brain- language representations cannot be inferred from explicit metalinguistic tasks. The evidence from this study indicates that normal communicative speech is dependent on a number of left hemisphere regions remote from the classic language areas of Wernicke and Broca. Destruction or disconnection of discrete left extrasylvian and perisylvian cortical regions, rather than the total extent of damage to perisylvian cortex, will account for the qualitative and quantitative differences in the impaired speech production observed in aphasic stroke patients.", "title": "" }, { "docid": "50ea6bc9342f9fd1bdf5d46d80dcc775", "text": "Title of Document: BRIDGING THE ATTACHMENT TRANSMISSION GAP WITH MATERNAL MIND-MINDEDNESS AND INFANT TEMPERAMENT Laura Jernigan Sherman, Master of Science, 2009 Directed By: Professor Jude Cassidy, Psychology The goal of this study was to test (a) whether maternal mind-mindedness (MM) mediates the link between maternal attachment (from the Adult Attachment Interview) and infant attachment (in the Strange Situation), and (b) whether infant temperament moderates this model of attachment transmission. Eighty-four racially diverse, economically stressed mothers and their infants were assessed three times: newborn, 5, and 12 months. Despite robust meta-analytic findings supporting attachment concordance for mothers and infants in community samples, this sample was characterized by low attachment concordance. Maternal attachment was unrelated to maternal MM; and, maternal MM was related to infant attachment differences for ambivalent infants only. Infant irritability did not moderate the model. Possible reasons for the discordant attachment patterns and the remaining findings are discussed in relation to theory and previous research. BRIDGING THE ATTACHMENT TRANSMISSION GAP WITH MATERNAL MIND-MINDEDNESS AND INFANT TEMPERAMENT", "title": "" }, { "docid": "5e7d5a86a007efd5d31e386c862fef5c", "text": "This systematic review examined the published scientific research on the psychosocial impact of cleft lip and palate (CLP) among children and adults. The primary objective of the review was to determine whether having CLP places an individual at greater risk of psychosocial problems. Studies that examined the psychosocial functioning of children and adults with repaired non-syndromal CLP were suitable for inclusion. The following sources were searched: Medline (January 1966-December 2003), CINAHL (January 1982-December 2003), Web of Science (January 1981-December 2003), PsycINFO (January 1887-December 2003), the reference section of relevant articles, and hand searches of relevant journals. There were 652 abstracts initially identified through database and other searches. On closer examination of these, only 117 appeared to meet the inclusion criteria. The full text of these papers was examined, with only 64 articles finally identified as suitable for inclusion in the review. Thirty of the 64 studies included a control group. The studies were longitudinal, cross-sectional, or retrospective in nature.Overall, the majority of children and adults with CLP do not appear to experience major psychosocial problems, although some specific problems may arise. For example, difficulties have been reported in relation to behavioural problems, satisfaction with facial appearance, depression, and anxiety. 
A few differences between cleft types have been found in relation to self-concept, satisfaction with facial appearance, depression, attachment, learning problems, and interpersonal relationships. With a few exceptions, the age of the individual with CLP does not appear to influence the occurrence or severity of psychosocial problems. However, the studies lack the uniformity and consistency required to adequately summarize the psychosocial problems resulting from CLP.", "title": "" }, { "docid": "33296736553ceaab2e113b62c05a803c", "text": "In cases of child abuse, usually, the parents are initial suspects. A common explanation of the parents is that the injuries were caused by a sibling. Child-on-child violence is reported to be very rare in children less than 5 years of age, and thorough investigation by the police, child protective services, and medicolegal examinations are needed to proof or disproof the parents' statement. We report two cases of physical abuse of infants by small children.", "title": "" }, { "docid": "9ad040dc3a1bcd498436772768903525", "text": "Memory B and plasma cells (PCs) are generated in the germinal center (GC). Because follicular helper T cells (TFH cells) have high expression of the immunoinhibitory receptor PD-1, we investigated the role of PD-1 signaling in the humoral response. We found that the PD-1 ligands PD-L1 and PD-L2 were upregulated on GC B cells. Mice deficient in PD-L2 (Pdcd1lg2−/−), PD-L1 and PD-L2 (Cd274−/−Pdcd1lg2−/−) or PD-1 (Pdcd1−/−) had fewer long-lived PCs. The mechanism involved more GC cell death and less TFH cell cytokine production in the absence of PD-1; the effect was selective, as remaining PCs had greater affinity for antigen. PD-1 expression on T cells and PD-L2 expression on B cells controlled TFH cell and PC numbers. Thus, PD-1 regulates selection and survival in the GC, affecting the quantity and quality of long-lived PCs.", "title": "" }, { "docid": "93278184377465ec1b870cd54dc49a93", "text": "We advocate the usage of 3D Zernike invariants as descriptors for 3D shape retrieval. The basis polynomials of this representation facilitate computation of invariants under rotation, translation and scaling. Some theoretical results have already been summarized in the past from the aspect of pattern recognition and shape analysis. We provide practical analysis of these invariants along with algorithms and computational details. Furthermore, we give a detailed discussion on influence of the algorithm parameters like the conversion into a volumetric function, number of utilized coefficients, etc. As is revealed by our study, the 3D Zernike descriptors are natural extensions of recently introduced spherical harmonics based descriptors. We conduct a comparison of 3D Zernike descriptors against these regarding computational aspects and shape retrieval performance using several quality measures and based on experiments on the Princeton Shape Benchmark.", "title": "" }, { "docid": "0a9a94bd83dfbbba2815f8575f1cb8a3", "text": "To create with an autonomous mobile robot a 3D volumetric map of a scene it is necessary to gage several 3D scans and to merge them into one consistent 3D model. This paper provides a new solution to the simultaneous localization and mapping (SLAM) problem with six degrees of freedom. Robot motion on natural surfaces has to cope with yaw, pitch and roll angles, turning pose estimation into a problem in six mathematical dimensions. 
A fast variant of the Iterative Closest Points algorithm registers the 3D scans in a common coordinate system and relocalizes the robot. Finally, consistent 3D maps are generated using a global relaxation. The algorithms have been tested with 3D scans taken in the Mathies mine, Pittsburgh, PA. Abandoned mines pose significant problems to society, yet a large fraction of them lack accurate 3D maps.", "title": "" }, { "docid": "522e384f4533ca656210561be9afbdab", "text": "Every software program that interacts with a user requires a user interface. Model-View-Controller (MVC) is a common design pattern to integrate a user interface with the application domain logic. MVC separates the representation of the application domain (Model) from the display of the application's state (View) and user interaction control (Controller). However, studying the literature reveals that a variety of other related patterns exists, which we denote with Model-View- (MV) design patterns. This paper discusses existing MV patterns classified in three main families: Model-View-Controller (MVC), Model-View-View Model (MVVM), and Model-View-Presenter (MVP). We take a practitioners' point of view and emphasize the essentials of each family as well as the differences. The study shows that the selection of patterns should take into account the use cases and quality requirements at hand, and chosen technology. We illustrate the selection of a pattern with an example of our practice. The study results aim to bring more clarity in the variety of MV design patterns and help practitioners to make better grounded decisions when selecting patterns.", "title": "" }, { "docid": "d509f695435ba51813164ee98512bf06", "text": "In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising of a specification, an implementation and an application layer. It provides a representational framework for the description of mining structured data, and in addition provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following the practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend. OntoDM-core is available at http://www.ontodm.com .", "title": "" }, { "docid": "58d8e3bd39fa470d1dfa321aeba53106", "text": "There are over 1.2 million Australians registered as having vision impairment. In most cases, vision impairment severely affects the mobility and orientation of the person, resulting in loss of independence and feelings of isolation. GPS technology and its applications have now become omnipresent and are used daily to improve and facilitate the lives of many. Although a number of products specifically designed for the Blind and Vision Impaired (BVI) and relying on GPS technology have been launched, this domain is still a niche and ongoing R&D is needed to bring all the benefits of GPS in terms of information and mobility to the BVI. 
The limitations of GPS indoors and in urban canyons have led to the development of new systems and signals that bridge the gap and provide positioning in those environments. Although still in their infancy, there is no doubt indoor positioning technologies will one day become as pervasive as GPS. It is therefore important to design those technologies with the BVI in mind, to make them accessible from scratch. This paper will present an indoor positioning system that has been designed in that way, examining the requirements of the BVI in terms of accuracy, reliability and interface design. The system runs locally on a mid-range smartphone and relies at its core on a Kalman filter that fuses the information of all the sensors available on the phone (Wi-Fi chipset, accelerometers and magnetic field sensor). Each part of the system is tested separately as well as the final solution quality.", "title": "" }, { "docid": "f68e447acd30cab6c2c68affb8c58d0c", "text": "This paper presents a Doppler radar sensor system with camera-aided random body movement cancellation (RBMC) techniques for noncontact vital sign detection. The camera measures the subject's random body motion that is provided for the radar system to perform RBMC and extract the uniform vital sign signals of respiration and heartbeat. Three RBMC strategies are proposed: 1) phase compensation at radar RF front-end, 2) phase compensation for baseband complex signals, and 3) movement cancellation for demodulated signals. Both theoretical analysis and radar simulation have been carried out to validate the proposed RBMC techniques. An experiment was carried out to measure a subject person who was breathing normally but randomly moving his body back and forth. The experimental result reveals that the proposed radar system is effective for RBMC.", "title": "" }, { "docid": "330de15c472bd403f2572f3bdcce2d52", "text": "Programmers repeatedly reuse code snippets. Retyping boilerplate code, and rediscovering how to correctly sequence API calls, programmers waste time. In this paper, we develop techniques that automatically synthesize code snippets upon a programmer’s request. Our approach is based on discovering snippets located in repositories; we mine repositories offline and suggest discovered snippets to programmers. Upon request, our synthesis procedure uses programmer’s current code to find the best fitting snippets, which are then presented to the programmer. The programmer can then either learn the proper API usage or integrate the synthesized snippets directly into her code. We call this approach interactive code snippet synthesis through repository mining. We show that this approach reduces the time spent developing code for 32% in our experiments.", "title": "" }, { "docid": "066d3a381ffdb2492230bee14be56710", "text": "The third generation partnership project released its first 5G security specifications in March 2018. This paper reviews the proposed security architecture and its main requirements and procedures and evaluates them in the context of known and new protocol exploits. Although security has been improved from previous generations, our analysis identifies potentially unrealistic 5G system assumptions and protocol edge cases that can render 5G communication systems vulnerable to adversarial attacks. For example, null encryption and null authentication are still supported and can be used in valid system configurations. 
With no clear proposal to tackle pre-authentication message-based exploits, mobile devices continue to implicitly trust any serving network, which may or may not enforce a number of optional security features, or which may not be legitimate. Moreover, several critical security and key management functions are considered beyond the scope of the specifications. The comparison with known 4G long-term evolution protocol exploits reveals that the 5G security specifications, as of Release 15, Version 1.0.0, do not fully address the user privacy and network availability challenges.", "title": "" }, { "docid": "a80a539bf4e233e9dbde52426bf890d3", "text": "Innovative technology approaches have been increasingly investigated for the last two decades aiming at human-being long-term monitoring. However, current solutions suffer from critical limitations. In this paper, a complete system for contactless health-monitoring in home environment is presented. For the first time, radar, wireless communications, and data processing techniques are combined, enabling contactless fall detection and tagless localization. Practical limitations are considered and properly dealt with. Experimental tests, conducted with human volunteers in a realistic room setting, demonstrate an adequate detection of the target's absolute distance and a success rate of 94.3% in distinguishing fall events from normal movements. The volunteers were free to move about the whole room with no constraints in their movements.", "title": "" }, { "docid": "90bf404069bd3dfff1e6b108dafffe4c", "text": "To illustrate the differing thoughts and emotions involved in guiding habitual and nonhabitual behavior, 2 diary studies were conducted in which participants provided hourly reports of their ongoing experiences. When participants were engaged in habitual behavior, defined as behavior that had been performed almost daily in stable contexts, they were likely to think about issues unrelated to their behavior, presumably because they did not have to consciously guide their actions. When engaged in nonhabitual behavior, or actions performed less often or in shifting contexts, participants' thoughts tended to correspond to their behavior, suggesting that thought was necessary to guide action. Furthermore, the self-regulatory benefits of habits were apparent in the lesser feelings of stress associated with habitual than nonhabitual behavior.", "title": "" }, { "docid": "8410b8b76ab690ed4389efae15608d13", "text": "The most natural way to speed-up the training of large networks is to use dataparallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one need to increase the batch size to make full use of the computational power of each GPU. However, keeping the accuracy of network with increase of batch size is not trivial. Currently, the state-of-the art method is to increase Learning Rate (LR) proportional to the batch size, and use special learning rate with \"warm-up\" policy to overcome initial optimization difficulty. By controlling the LR during the training process, one can efficiently use largebatch in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we can not scale the learning rate to a large value. 
To enable large-batch training to general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). LARS LR uses different LRs for different layers based on the norm of the weights (||w||) and the norm of the gradients (||∇w||). By using LARS algoirithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. Large batch can make full use of the system’s computational power. For example, batch-4096 can achieve 3× speedup over batch-512 for ImageNet training by AlexNet model on a DGX-1 station (8 P100 GPUs).", "title": "" }, { "docid": "a5ff7c80c36f354889e3f48e94052195", "text": "A meta-analysis examined emotion recognition within and across cultures. Emotions were universally recognized at better-than-chance levels. Accuracy was higher when emotions were both expressed and recognized by members of the same national, ethnic, or regional group, suggesting an in-group advantage. This advantage was smaller for cultural groups with greater exposure to one another, measured in terms of living in the same nation, physical proximity, and telephone communication. Majority group members were poorer at judging minority group members than the reverse. Cross-cultural accuracy was lower in studies that used a balanced research design, and higher in studies that used imitation rather than posed or spontaneous emotional expressions. Attributes of study design appeared not to moderate the size of the in-group advantage.", "title": "" } ]
scidocsrr
46046b1d727162cdcd8c16be79005ed7
Leimu : Gloveless Music Interaction Using a Wrist Mounted Leap Motion
[ { "docid": "619e3893a731ffd0ed78c9dd386a1dff", "text": "The introduction of new gesture interfaces has been expanding the possibilities of creating new Digital Musical Instruments (DMIs). Leap Motion Controller was recently launched promising fine-grained hand sensor capabilities. This paper proposes a preliminary study and evaluation of this new sensor for building new DMIs. Here, we list a series of gestures, recognized by the device, which could be theoretically used for playing a large number of musical instruments. Then, we present an analysis of precision and latency of these gestures as well as a first case study integrating Leap Motion with a virtual music keyboard.", "title": "" } ]
[ { "docid": "23e5520226bc76f67d0a1e9ef98a4bb2", "text": "This report analyzes the modelling of default intensities and probabilities in single-firm reduced-form models, and reviews the three main approaches to incorporating default dependencies within the framework of reduced models. The first approach, the conditionally independent defaults (CID), introduces credit risk dependence between firms through the dependence of the firms’ intensity processes on a common set of state variables. Contagion models extend the CID approach to account for the empirical observation of default clustering. There exist periods in which the firms’ credit risk is increased and in which the majority of the defaults take place. Finally, default dependencies can also be accounted for using copula functions. The copula approach takes as given the marginal default probabilities of the different firms and plugs them into a copula function, which provides the model with the default dependence structure. After a description of copulas, we present two different approaches of using copula functions in intensity models, and discuss the issues of the choice and calibration of the copula function. ∗This report is a revised version of the Master’s Thesis presented in partial fulfillment of the 2002-2003 MSc in Financial Mathematics at King’s College London. I thank my supervisor Lane P. Hughston and everyone at the Financial Mathematics Group at King’s College, particularly Giulia Iori and Mihail Zervos. Financial support by Banco de España is gratefully acknowledged. Any errors are the exclusive responsibility of the author. CEMFI, Casado del Alisal 5, 28014 Madrid, Spain. Email: elizalde@cemfi.es.", "title": "" }, { "docid": "32287cfcf9978e04bea4ab5f01a6f5da", "text": "OBJECTIVE\nThe purpose of this study was to examine the relationship of performance on the Developmental Test of Visual-Motor Integration (VMI; Beery, 1997) to handwriting legibility in children attending kindergarten. The relationship of using lined versus unlined paper on letter legibility, based on a modified version of the Scale of Children's Readiness in PrinTing (Modified SCRIPT; Weil & Cunningham Amundson, 1994) was also investigated.\n\n\nMETHOD\nFifty-four typically developing kindergarten students were administered the VMI; 30 students completed the Modified SCRIPT with unlined paper, 24 students completed the Modified SCRIPT with lined paper. Students were assessed in the first quarter of the kindergarten school year and scores were analyzed using correlational and nonparametric statistical measures.\n\n\nRESULTS\nStrong positive relationships were found between VMI assessment scores and student's ability to legibly copy letterforms. Students who could copy the first nine forms on the VMI performed significantly better than students who could not correctly copy the first nine VMI forms on both versions of the Modified SCRIPT.\n\n\nCONCLUSION\nVisual-motor integration skills were shown to be related to the ability to copy letters legibly. These findings support the research of Weil and Cunningham Amundson. Findings from this study also support the conclusion that there is no significant difference in letter writing legibility between students who use paper with or without lines.", "title": "" }, { "docid": "fbf57d773bcdd8096e77246b3f785a96", "text": "The explosion of online content has made the management of such content non-trivial. Web-related tasks such as web page categorization, news filtering, query categorization, tag recommendation, etc. 
often involve the construction of multi-label categorization systems on a large scale. Existing multi-label classification methods either do not scale or have unsatisfactory performance. In this work, we propose MetaLabeler to automatically determine the relevant set of labels for each instance without intensive human involvement or expensive cross-validation. Extensive experiments conducted on benchmark data show that the MetaLabeler tends to outperform existing methods. Moreover, MetaLabeler scales to millions of multi-labeled instances and can be deployed easily. This enables us to apply the MetaLabeler to a large scale query categorization problem in Yahoo!, yielding a significant improvement in performance.", "title": "" }, { "docid": "e2880e705775f865486ad6f60dfbebb4", "text": "The relationship between persistent pain and self-directed, non-reactive awareness of present-moment experience (i.e., mindfulness) was explored in one of the dominant psychological theories of chronic pain - the fear-avoidance model[53]. A heterogeneous sample of 104 chronic pain outpatients at a multidisciplinary pain clinic in Australia completed psychometrically sound self-report measures of major variables in this model: Pain intensity, negative affect, pain catastrophizing, pain-related fear, pain hypervigilance, and functional disability. Two measures of mindfulness were also used, the Mindful Attention Awareness Scale [4] and the Five-Factor Mindfulness Questionnaire [1]. Results showed that mindfulness significantly negatively predicts each of these variables, accounting for 17-41% of their variance. Hierarchical multiple regression analysis showed that mindfulness uniquely predicts pain catastrophizing when other variables are controlled, and moderates the relationship between pain intensity and pain catastrophizing. This is the first clear evidence substantiating the strong link between mindfulness and pain catastrophizing, and suggests mindfulness might be added to the fear-avoidance model. Implications for the clinical use of mindfulness in screening and intervention are discussed.", "title": "" }, { "docid": "f5d8c506c9f25bff429cea1ed4c84089", "text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person»s lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.", "title": "" }, { "docid": "144c11393bef345c67595661b5b20772", "text": "BACKGROUND\nAppropriate placement of the bispectral index (BIS)-vista montage for frontal approach neurosurgical procedures is a neuromonitoring challenge. The standard bifrontal application interferes with the operative field; yet to date, no other placements have demonstrated good agreement. The purpose of our study was to compare the standard BIS montage with an alternate BIS montage across the nasal dorsum for neuromonitoring.\n\n\nMATERIALS AND METHODS\nThe authors performed a prospective study, enrolling patients and performing neuromonitoring using both the standard and the alternative montage on each patient. Data from the 2 placements were compared and analyzed using a Bland-Altman analysis, a Scatter plot analysis, and a matched-pair analysis.\n\n\nRESULTS\nOverall, 2567 minutes of data from each montage was collected on 28 subjects. 
Comparing the overall difference in score, the alternate BIS montage score was, on average, 2.0 (6.2) greater than the standard BIS montage score (P<0.0001). The Bland-Altman analysis revealed a difference in score of -2.0 (95% confidence interval, -14.1, 10.1), with 108/2567 (4.2%) of the values lying outside of the limit of agreement. The scatter plot analysis overall produced a trend line with the equation y=0.94x+0.82, with an R coefficient of 0.82.\n\n\nCONCLUSIONS\nWe determined that the nasal montage produces values that have slightly more variability compared with that ideally desired, but the variability is not clinically significant. In cases where the standard BIS-vista montage would interfere with the operative field, an alternative positioning of the BIS montage across the nasal bridge and under the eye can be used.", "title": "" }, { "docid": "980dc3d4b01caac3bf56df039d5ca513", "text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named \"few-example object detection\". The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.", "title": "" }, { "docid": "5faef1f7afae4ccb3a701a11f60ac80b", "text": "State of the art deep learning models have made steady progress in the fields of computer vision and natural language processing, at the expense of growing model sizes and computational complexity. Deploying these models on low power and mobile devices poses a challenge due to their limited compute capabilities and strict energy budgets. One solution that has generated significant research interest is deploying highly quantized models that operate on low precision inputs and weights less than eight bits, trading off accuracy for performance. These models have a significantly reduced memory footprint (up to 32x reduction) and can replace multiply-accumulates with bitwise operations during compute intensive convolution and fully connected layers. Most deep learning frameworks rely on highly engineered linear algebra libraries such as ATLAS or Intel’s MKL to implement efficient deep learning operators. To date, none of the popular deep learning directly support low precision operators, partly due to a lack of optimized low precision libraries. In this paper we introduce a work flow to quickly generate high performance low precision deep learning operators for arbitrary precision that target multiple CPU architectures and include optimizations such as memory tiling and vectorization. 
We present an extensive case study on low power ARM Cortex-A53 CPU, and show how we can generate 1-bit, 2-bit convolutions with speedups up to 16x over an optimized 16-bit integer baseline and 2.3x better than handwritten implementations.", "title": "" }, { "docid": "754dc26aa595c2c759a34540af369eac", "text": "In recent years, the increasing popularity of outsourcing data to third-party cloud servers sparked a major concern towards data breaches. A standard measure to thwart this problem and to ensure data confidentiality is data encryption. Nevertheless, organizations that use traditional encryption techniques face the challenge of how to enable untrusted cloud servers perform search operations while the actually outsourced data remains confidential. Searchable encryption is a powerful tool that attempts to solve the challenge of querying data outsourced at untrusted servers while preserving data confidentiality. Whereas the literature mainly considers searching over an unstructured collection of files, this paper explores methods to execute SQL queries over encrypted databases. We provide a complete framework that supports private search queries over encrypted SQL databases, in particular for PostgreSQL and MySQL databases. We extend the solution for searchable encryption designed by Curtmola et al., to the case of SQL databases. We also provide features for evaluating range and boolean queries. We finally propose a framework for implementing our construction, validating its", "title": "" }, { "docid": "c57a8e7e15d6b216e451c77fafce271a", "text": "We study rank aggregation algorithms that take as input the opinions of players over their peers, represented as rankings, and output a social ordering of the players (which reflects, e.g., relative contribution to a project or fit for a job). To prevent strategic behavior, these algorithms must be impartial, i.e., players should not be able to influence their own position in the output ranking. We design several randomized algorithms that are impartial and closely emulate given (nonimpartial) rank aggregation rules in a rigorous sense. Experimental results further support the efficacy and practicability of our algorithms.", "title": "" }, { "docid": "26cd7a502fcbf2455b58365299dc8432", "text": "Derivative traders are usually required to scan through hundreds, even thousands of possible trades on a daily-basis; a concrete case is the so-called Mid-Curve Calendar Spread (MCCS). The actual procedure in place is full of pitfalls and a more systematic approach where more information at hand is crossed and aggregated to find good trading picks can be highly useful and undoubtedly increase the trader’s productivity. Therefore, in this work we propose an MCCS Recommendation System based on a stacking approach through Neural Networks. In order to suggest that such approach is methodologically and computationally feasible, we used a list of 15 different types of US Dollar MCCSs regarding expiration, forward and swap tenure. For each MCCS, we used 10 years of historical data ranging weekly from Sep/06 to Sep/16. Then, we started the modelling stage by: (i) fitting the base learners using as the input sensitivity metrics linked with the MCCS at time t, and its subsequent annualized returns as the output; (ii) feeding the prediction from each base model to a particular stacker; and (iii) making predictions and comparing different modelling methodologies by a set of performance metrics and benchmarks. 
After establishing a backtesting engine and setting performance metrics, our results suggest that our proposed Neural Network stacker compared favourably to other combination procedures.", "title": "" }, { "docid": "354500ae7e1ad1c6fd09438b26e70cb0", "text": "Dietary exposures can have consequences for health years or decades later and this raises questions about the mechanisms through which such exposures are 'remembered' and how they result in altered disease risk. There is growing evidence that epigenetic mechanisms may mediate the effects of nutrition and may be causal for the development of common complex (or chronic) diseases. Epigenetics encompasses changes to marks on the genome (and associated cellular machinery) that are copied from one cell generation to the next, which may alter gene expression, but which do not involve changes in the primary DNA sequence. These include three distinct, but closely inter-acting, mechanisms including DNA methylation, histone modifications and non-coding microRNAs (miRNA) which, together, are responsible for regulating gene expression not only during cellular differentiation in embryonic and foetal development but also throughout the life-course. This review summarizes the growing evidence that numerous dietary factors, including micronutrients and non-nutrient dietary components such as genistein and polyphenols, can modify epigenetic marks. In some cases, for example, effects of altered dietary supply of methyl donors on DNA methylation, there are plausible explanations for the observed epigenetic changes, but to a large extent, the mechanisms responsible for diet-epigenome-health relationships remain to be discovered. In addition, relatively little is known about which epigenomic marks are most labile in response to dietary exposures. Given the plasticity of epigenetic marks and their responsiveness to dietary factors, there is potential for the development of epigenetic marks as biomarkers of health for use in intervention studies.", "title": "" }, { "docid": "cac3d6893f1d311e0014b1afa22d903b", "text": "Canny algorithm can be used in extracting the object’s contour clearly by setting the appropriate parameters. The Otsu algorithm can calculate the high threshold value which is significant to the Canny algorithm, and then this threshold value can be used in the Canny algorithm to detect the object’s edge. From the exprimental result, the Otsu algorithm can be applied in choosing the threshold value which can be used in Canny algorithm, and this method improves the effect of extracting the edge of the Canny algorithm, and achieves the expect result finally.", "title": "" }, { "docid": "6c149f1f6e9dc859bf823679df175afb", "text": "Neurofeedback is attracting renewed interest as a method to self-regulate one's own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. 
Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.", "title": "" }, { "docid": "9c16bf2fb7ceba2bf872ca3d1475c6d9", "text": "Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated to video-level representations by computing statistics on these features. Typically zero-th (max) or the first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling, that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than their first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes which, when combined with hand-crafted features (as is standard practice), achieve state-of-the-art accuracy.", "title": "" }, { "docid": "0cbc2eb794f44b178a54d97aeff69c19", "text": "Automatic identification of predatory conversations in chat logs helps law enforcement agencies act proactively through early detection of predatory acts in cyberspace. In this paper, we describe the novel application of a deep learning method to the automatic identification of predatory chat conversations in large volumes of chat logs. We present a classifier based on a Convolutional Neural Network (CNN) to address this problem domain. The proposed CNN architecture outperforms other classification techniques that are common in this domain, including Support Vector Machine (SVM) and regular Neural Network (NN), in terms of classification performance, which is measured by F1-score. In addition, our experiments show that using existing pre-trained word vectors is not suitable for this specific domain. Furthermore, since the learning algorithm runs in a massively parallel environment (i.e., general-purpose GPU), the approach can benefit from a large number of computation units (neurons) compared to when a CPU is used. To the best of our knowledge, this is the first time that CNNs are adapted and applied to this application domain.", "title": "" }, { "docid": "0f66b62ddfd89237bb62fb6b60a7551a", "text": "BACKGROUND\nClinicians' expanding use of cosmetic restorative procedures has generated greater interest in the determination of esthetic guidelines and standards. The overall esthetic impact of a smile can be divided into four specific areas: gingival esthetics, facial esthetics, microesthetics and macroesthetics.
In this article, the authors focus on the principles of macroesthetics, which represent the relationships and ratios relating multiple teeth to each other, to soft tissue and to facial characteristics.\n\n\nCASE DESCRIPTION\nThe authors categorize macroesthetic criteria based on two reference points: the facial midline and the amount and position of tooth reveal. The facial midline is a critical reference position for determining multiple design criteria. The amount and position of tooth reveal in various views and lip configurations also provide valuable guidelines in determining esthetic tooth positions and relationships.\n\n\nCLINICAL IMPLICATIONS\nEsthetics is an inherently subjective discipline. By understanding and applying simple esthetic rules, tools and strategies, dentists have a basis for evaluating natural dentitions and the results of cosmetic restorative procedures. Macroesthetic components of teeth and their relationship to each other can be influenced to produce more natural and esthetically pleasing restorative care.", "title": "" }, { "docid": "624d645054e730855eed9001e4c4bbc4", "text": "In this paper, we argue that some tasks (e.g., meeting support) require more flexible hypermedia systems and we describe a prototype hypermedia system, DOLPHIN, that implements more flexibility. As part of the argument, we present a theoretical design space for information structuring systems and locate existing hypertext systems within it. The dimensions of the space highlight a system's internal representation of structure and the user's actions in creating structure. Second, we describe an empirically derived range of activities connected to conducting group meetings, including the pre- and post-preparation phases, and argue that hypertext systems need to be more flexible in order to support this range of activities. Finally, we describe a hypermedia prototype, DOLPHIN, which implements this kind of flexible support for meetings. DOLPHIN supports different degrees of formality (e.g., handwriting and sketches as well as typed nodes and links are supported), coexistence of different structures (e.g., handwriting and nodes can exist on the same page) and mutual transformations between them (e.g., handwriting can be turned into nodes and vice versa).", "title": "" }, { "docid": "745562de56499ff0030f35afa8d84b7f", "text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and n-gram anomaly detectors will be compared, and this paper will also outline plans for taking this work further by integrating the output from several anomaly-detecting techniques using Bayesian networks. Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.", "title": "" }, { "docid": "c24156b6c9b8f5c04fe40e1c6814d115", "text": "This paper presents a compact SIW (substrate integrated waveguide) 3×3 Butler matrix (BM) for 5G mobile applications. The detailed design procedures and parameter determinations of each component involved are provided. To validate the 3×3 BM, a slot array is designed. The cascading simulations and prototype measurements are also carried out. The overall performance and dimensions show that it can be used for 5G mobile devices.
The measured S-parameters agree well with the simulated ones. The measured gains are in the range of 8.1 dBi ∼ 11.1 dBi, 7.1 dBi ∼ 9.8 dBi and 8.9 dBi ∼ 11 dBi for port 1∼3 excitations.", "title": "" } ]
scidocsrr
9b166795a9d3d46a6a821ac097c0aa64
Evaluating opportunistic networks in disaster scenarios
[ { "docid": "413d407b4e2727d18419c9537f2e556f", "text": "This paper describes the design of an automated triage and emergency management information system. The prototype system is capable of monitoring and assessing physiological parameters of individuals, transmitting pertinent medical data to and from multiple echelons of medical service, and providing filtered data for command and control applications. The system employs wireless networking, portable computing devices, and reliable messaging technology as a framework for information analysis, information movement, and decision support capabilities. The embedded medical model and physiological status assessment are based on input from humans and a pulse oximetry device. The physiological status determination methodology follows NATO defined guidelines for remote triage and is implemented using an approach based on fuzzy logic. The approach described can be used in both military and civilian", "title": "" } ]
[ { "docid": "451650e545bf94d5198b8cc491e36aa1", "text": "The proliferation of web services within the last two years enables organizations to assimilate software and services from different companies and locations into an integrated service capable of streamlining important processes. Widespread adoption of web services has not yet occurred across all industries. To better understand the key determinants of web services adoption at the firm level, a conceptual model of factors impacting web services adoption was developed. The conceptual model was grounded in the technology-organization-environment (TOE) framework (Tomatzky and Fleischer, 1990) to support the formulation of eleven propositions that may affect adoption and continued utilization of web services. Specifically, factors for each of the contexts within the TOE framework were formulated and supported including: (1) technological factors (security concerns; reliability; deployability); (2) organizational factors (firm size; firm scope; technological knowledge; perceived benefits); and, (3) environmental factors (competitive pressure; regulatory influence; dependent partner readiness; trust in the web service provider). A summary of the relationships between the key constructs in the model and recommendations for future research are provided.", "title": "" }, { "docid": "e7a6082f1b6c441ebdde238cc8eb21c2", "text": "We present the forensic analysis of the artifacts generated on Android smartphones by ChatSecure, a secure Instant Messaging application that provides strong encryption for transmitted and locally-stored data to ensure the privacy of its users. We show that ChatSecure stores local copies of both exchanged messages and files into two distinct, AES-256 encrypted databases, and we devise a technique able to decrypt them when the secret passphrase, chosen by the user as the initial step of the encryption process, is known. Furthermore, we show how this passphrase can be identified and extracted from the volatile memory of the device, where it persists for the entire execution of ChatSecure after having been entered by the user, thus allowing one Please, cite as: Cosimo Anglano, Massimo Canonico, Marco Guazzone, “Forensic Analysis of the ChatSecure Instant Messaging Application on Android Smartphones,” Digital Investigation, Volume 19, December 2016, Pages 44–59, DOI: 10.1016/j.diin.2016.10.001 Link to publisher: http://dx.doi.org/10.1016/j.diin.2016.10.001 ∗Corresponding author. Address: viale T. Michel 11, 15121 Alessandria (Italy). Phone: +39 0131 360188. Email addresses: cosimo.anglano@uniupo.it (Cosimo Anglano), massimo.canonico@uniupo.it (Massimo Canonico), marco.guazzone@uniupo.it (Marco Guazzone) Preprint submitted to Digital Investigation October 24, 2016 to carry out decryption even if the passphrase is not revealed by the user. Finally, we discuss how to analyze and correlate the data stored in the databases used by ChatSecure to identify the IM accounts used by the user and his/her buddies to communicate, as well as to reconstruct the chronology and contents of the messages and files that have been exchanged among them. 
For our study we devise and use an experimental methodology, based on the use of emulated devices, that provides a very high degree of reproducibility of the results, and we validate the results it yields against those obtained from real smartphones.", "title": "" }, { "docid": "9c1283e21e1a55c8ae0c8a183b61b6c6", "text": "Nowadays, image steganography has a major role in confidential medical image communication. When a medical image is transmitted through an insecure public network, there is a chance that the image will be tampered with. Therefore, it is crucial to check the integrity of medical images to prevent any unauthorized modification. To check the integrity, we calculate a cryptographic hash of the ROI (Region Of Interest) using the SHA algorithm. The hash value (H1) will be embedded in the RONI using the discrete wavelet transform. By comparing the hash value at the receiver side, we can check the integrity of the medical image. If any tampering occurs, the hash values do not match. This paper proposes a new method to improve the security. The modified medical image is embedded in an ordinary-looking image by a spatial reversible steganography method. This helps to conceal the existence of secret medical data and ensures that eavesdroppers will not suspect that a medical image is hidden in that image. This combined approach gives enhanced security.", "title": "" }, { "docid": "9f3803ae394163e32fe81784b671de92", "text": "A smart community is a distributed system consisting of a set of smart homes which utilize the smart home scheduling techniques to enable customers to automatically schedule their energy loads targeting various purposes such as electricity bill reduction. Smart home scheduling is usually implemented in a decentralized fashion inside a smart community, where customers compete for the community level renewable energy due to its relatively low price. Typically there exists an aggregator as a community wide electricity policy maker aiming to minimize the total electricity bill among all customers. This paper develops a new renewable energy aware pricing scheme to achieve this target. We establish the proof that under certain assumptions the optimal solution of decentralized smart home scheduling is equivalent to that of the centralized technique, reaching the theoretical lower bound of the community wide total electricity bill. In addition, an advanced cross entropy optimization technique is proposed to compute the pricing scheme of renewable energy, which is then integrated in smart home scheduling. The simulation results demonstrate that our pricing scheme facilitates the reduction of both the community wide electricity bill and individual electricity bills compared to the uniform pricing. In particular, the community wide electricity bill can be reduced to only 0.06 percent above the theoretical lower bound.", "title": "" }, { "docid": "7866c0cdaa038f08112e629580c445cb", "text": "Cumulative exposure to repetitive and forceful activities may lead to musculoskeletal injuries which not only reduce workers' efficiency and productivity, but also affect their quality of life. Thus, widely accessible techniques for reliable detection of unsafe muscle force exertion levels for human activity are necessary for workers' well-being. However, measurement of force exertion levels is challenging and the existing techniques pose a great challenge as they are either intrusive, interfere with the human-machine interface, and/or subjective in nature, thus are not scalable for all workers.
In this work, we use face videos and the photoplethysmography (PPG) signals to classify force exertion levels of 0%, 50%, and 100% (representing rest, moderate effort, and high effort), thus providing a non-intrusive and scalable approach. Efficient feature extraction approaches have been investigated, including the standard deviation of the movement of different facial landmarks and the distances between peaks and troughs in the PPG signals. We note that the PPG signals can be obtained from the face videos, thus giving an efficient classification algorithm for the force exertion levels using face videos. Based on the data collected from 20 subjects, features extracted from the face videos give 90% accuracy in classification between the 100% level and the combination of the 0% and 50% levels. Further combining the PPG signals provides 81.7% accuracy. The approach is also shown to be robust, correctly identifying the force level when the person is talking, even though such datasets are not included in the training.", "title": "" }, { "docid": "2c15bef67e6bdbfaf66e1164f8dddf52", "text": "Social behavior is ordinarily treated as being under conscious (if not always thoughtful) control. However, considerable evidence now supports the view that social behavior often operates in an implicit or unconscious fashion. The identifying feature of implicit cognition is that past experience influences judgment in a fashion not introspectively known by the actor. The present conclusion--that attitudes, self-esteem, and stereotypes have important implicit modes of operation--extends both the construct validity and predictive usefulness of these major theoretical constructs of social psychology. Methodologically, this review calls for increased use of indirect measures--which are imperative in studies of implicit cognition. The theorized ordinariness of implicit stereotyping is consistent with recent findings of discrimination by people who explicitly disavow prejudice. The finding that implicit cognitive effects are often reduced by focusing judges' attention on their judgment task provides a basis for evaluating applications (such as affirmative action) aimed at reducing such unintended discrimination.", "title": "" }, { "docid": "7c17cb4da60caf8806027273c4c10708", "text": "Recently, the IEEE 802.11ax Task Group has adopted OFDMA as a new technique for enabling multi-user transmission. It has also been decided that the scheduling duration should be the same for all the users in multi-user OFDMA so that the transmissions of the users end at the same time. In order to realize that condition, the users with insufficient data should transmit null data (i.e. padding) to fill the duration. While this scheme offers strong features such as resilience to Overlapping Basic Service Set (OBSS) interference and ease of synchronization, it also poses major side issues of degraded throughput performance and waste of devices' energy. In this work, for OFDMA-based 802.11 WLANs we first propose a practical algorithm in which the scheduling duration is fixed and does not change from time to time. In the second algorithm the scheduling duration is dynamically determined in a resource allocation framework by taking into account the padding overhead, airtime fairness and energy consumption of the users. We analytically investigate our resource allocation problems through Lyapunov optimization techniques and show that our algorithms are arbitrarily close to the optimal performance at the price of reduced convergence rate.
We also calculate the overhead of our algorithms in a realistic setup and propose solutions for the implementation issues.", "title": "" }, { "docid": "521699fc8fc841e8ac21be51370b439f", "text": "Scene understanding is an essential technique in semantic segmentation. Although there exist several datasets that can be used for semantic segmentation, they are mainly focused on semantic image segmentation with large deep neural networks. Therefore, these networks are not useful for real-time applications, especially in autonomous driving systems. In order to solve this problem, we make two contributions to the semantic segmentation task. The first contribution is that we introduce a semantic video dataset, the Highway Driving dataset, which is a densely annotated benchmark for a semantic video segmentation task. The Highway Driving dataset consists of 20 video sequences having a 30Hz frame rate, and every frame is densely annotated. Secondly, we propose a baseline algorithm that utilizes a temporal correlation. Together with our attempt to analyze the temporal correlation, we expect the Highway Driving dataset to encourage research on semantic video segmentation.", "title": "" }, { "docid": "920c977ce3ed5f310c97b6fcd0f5bef4", "text": "In this paper, different automatic registration schemes based on different optimization techniques in conjunction with different similarity measures are compared in terms of accuracy and efficiency. Results from every optimization procedure are quantitatively evaluated with respect to the manual registration, which is the standard registration method used in clinical practice. The comparison has shown that automatic registration schemes based on CD constitute an accurate and reliable method that can be used in clinical ophthalmology, as a satisfactory alternative to the manual method. Key-Words: multimodal image registration, optimization algorithms, similarity metrics, retinal images", "title": "" }, { "docid": "1c36c9a0fbef4380e87e6a30bc4f9eac", "text": "We address relation classification in the context of slot filling, the task of finding and evaluating fillers like “Steve Jobs” for the slot X in “X founded Apple”. We propose a convolutional neural network which splits the input sentence into three parts according to the relation arguments and compare it to state-of-the-art and traditional approaches of relation classification. Finally, we combine different methods and show that the combination is better than individual approaches. We also analyze the effect of genre differences on performance.", "title": "" }, { "docid": "0fed6d4a16e8071a6b39db70350b711a", "text": "Cloud manufacturing: a new manufacturing paradigm. Lin Zhang, Yongliang Luo, Fei Tao, Bo Hu Li, Lei Ren, Xuesong Zhang, Hua Guo, Ying Cheng, Anrui Hu & Yongkui Liu. School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, P.R. China; Engineering Research Center of Complex Product Advanced Manufacturing Systems, Ministry of Education, Beihang University, Beijing, 100191, P.R. China; Beijing Simulation Center, Beijing, 100854, P.R. China. Published online: 21 May 2012.", "title": "" }, { "docid": "d9ad51299d4afb8075bd911b6655cf16", "text": "To assess whether the passive leg raising test can help in predicting fluid responsiveness. Nonsystematic review of the literature.
Passive leg raising has been used as an endogenous fluid challenge and tested for predicting the hemodynamic response to fluid in patients with acute circulatory failure. This is now easy to perform at the bedside using methods that allow a real-time measurement of systolic blood flow. A passive leg raising induced increase in descending aortic blood flow of at least 10% or in echocardiographic subaortic flow of at least 12% has been shown to predict fluid responsiveness. Importantly, this prediction remains very valuable in patients with cardiac arrhythmias or spontaneous breathing activity. Passive leg raising allows reliable prediction of fluid responsiveness even in patients with spontaneous breathing activity or arrhythmias. This test may come to be used increasingly at the bedside since it is easy to perform and effective, provided that its effects are assessed by a real-time measurement of cardiac output.", "title": "" }, { "docid": "d80a58ef393c1f311a829190d7981853", "text": "With the increasing numbers of Cloud Service Providers and the migration of the Grids to the Cloud paradigm, it is necessary to be able to leverage these new resources. Moreover, a large class of High Performance Computing (hpc) applications can run on these resources with no (or only minor) modifications. But using these resources comes with the cost of being able to interact with these new resource providers. In this paper we introduce the design of an hpc middleware that is able to use resources coming from an environment composed of multiple Clouds as well as classical hpc resources. Using the Diet middleware, we are able to deploy a large-scale, distributed hpc platform that spans across a large pool of resources aggregated from different providers. Furthermore, we hide from the end users the difficulty and complexity of selecting and using these new resources even when new Cloud Service Providers are added to the pool. Finally, we validate the architecture concept through the ramses cosmological simulation. Thus we give a comparison of two well-known Cloud Computing software stacks: OpenStack and OpenNebula. Key-words: Cloud, IaaS, OpenNebula, Multi-Clouds, DIET, OpenStack, RAMSES, cosmology
De plus, nous cachons l’utilisateur final la difficult et la complexit de slectionner et d’utiliser ces nouvelles ressources quand un nouveau fournisseur de service Cloud est ajout dans l’ensemble. Finalement, nous validons notre concept d’architecture via une application de simulation cosmologique ramses. Et nous fournissons une comparaison entre 2 intergiciels de Cloud: OpenStack et OpenNebula. Mots-clés : Cloud, IaaS, OpenNebula, Multi-Clouds, DIET, OpenStack, RAMSES, cosmologie Comparaison de performance entre OpenStack et OpenNebula et les architectures multi-Cloud: Application la cosmologie.3", "title": "" }, { "docid": "c3f3ed8a363d8dcf9ac1efebfa116665", "text": "We report a new phenomenon associated with language comprehension: the action-sentence compatibility effect (ACE). Participants judged whether sentences were sensible by making a response that required moving toward or away from their bodies. When a sentence implied action in one direction (e.g., \"Close the drawer\" implies action away from the body), the participants had difficulty making a sensibility judgment requiring a response in the opposite direction. The ACE was demonstrated for three sentences types: imperative sentences, sentences describing the transfer of concrete objects, and sentences describing the transfer of abstract entities, such as \"Liz told you the story.\" These dataare inconsistent with theories of language comprehension in which meaning is represented as a set of relations among nodes. Instead, the data support an embodied theory of meaning that relates the meaning of sentences to human action.", "title": "" }, { "docid": "f78ba23b912c6875587c9f00d45676b4", "text": "OBJECTIVES\nThe aim of this study was to assess the impact of the advanced technology of the new ExAblate 2100 system (Insightec Ltd, Haifa, Israel) for magnetic resonance imaging (MRI)-guided focused ultrasound surgery on treatment outcomes in patients with symptomatic uterine fibroids, as measured by the nonperfused volume ratio.\n\n\nMATERIALS AND METHODS\nThis is a retrospective analysis of 115 women (mean age, 42 years; range, 27-54 years) with symptomatic fibroids who consecutively underwent MRI-guided focused ultrasound treatment in a single center with the new generation ExAblate 2100 system from November 2010 to June 2011. Mean ± SD total volume and number of treated fibroids (per patient) were 89 ± 94 cm and 2.2 ± 1.7, respectively. Patient baseline characteristics were analyzed regarding their impact on the resulting nonperfused volume ratio.\n\n\nRESULTS\nMagnetic resonance imaging-guided focused ultrasound treatment was technically successful in 115 of 123 patients (93.5%). In 8 patients, treatment was not possible because of bowel loops in the beam pathway that could not be mitigated (n = 6), patient movement (n = 1), and system malfunction (n = 1). Mean nonperfused volume ratio was 88% ± 15% (range, 38%-100%). Mean applied energy level was 5400 ± 1200 J, and mean number of sonications was 74 ± 27. No major complications occurred. Two cases of first-degree skin burn resolved within 1 week after the intervention. 
Of the baseline characteristics analyzed, only the planned treatment volume had a statistically significant impact on nonperfused volume ratio.\n\n\nCONCLUSIONS\nWith technological advancement, the outcome of MRI-guided focused ultrasound treatment in terms of the nonperfused volume ratio can be enhanced with a high safety profile, markedly exceeding results reported in previous clinical trials.", "title": "" }, { "docid": "fa826e5846cdee91192beecd1a52bb3a", "text": "ABSTRACT Recommender systems use people's opinions about items in an information domain to help people choose other items. These systems have succeeded in domains as diverse as movies, news articles, Web pages, and wines. The psychological literature on conformity suggests that in the course of helping people make choices, these systems probably affect users' opinions of the items. If opinions are influenced by recommendations, they might be less valuable for making recommendations for other users. Further, manipulators who seek to make the system generate artificially high or low recommendations might benefit if their efforts influence users to change the opinions they contribute to the recommender. We study two aspects of recommender system interfaces that may affect users' opinions: the rating scale and the display of predictions at the time users rate items. We find that users rate fairly consistently across rating scales. Users can be manipulated, though, tending to rate toward the prediction the system shows, whether the prediction is accurate or not. However, users can detect systems that manipulate predictions. We discuss how designers of recommender systems might react to these findings.", "title": "" }, { "docid": "08d6f4265c96d0d63e0b54555fd3f403", "text": "The control accuracies of the injection time and fuel quantity are the most important factors affecting the fuel economy and emissions of an engine equipped with a high-pressure common-rail fuel injection system. This paper presents an intelligent dual-voltage driving control that can simultaneously reduce the response time of the injector and improve the accuracy of the injected fuel quantity. The designed method employs a novel boost dc–dc circuit and does not require an additional independent dc–dc module that includes an inductor, a switch MOSFET and a controller. During the time interval between two continuous injections, the low-side MOSFET serves as a dc–dc switch, and the software in the controller controls the dual-voltage dc–dc, which uses the injector as a charging inductor of the dc–dc module. The operation principle of this intelligent driving circuit is described. Experimental results show that this method can not only significantly decrease the response time of the injector but also improve the stability of the fuel injection process.", "title": "" }, { "docid": "38037437ce3e86cda024f81cbd81cd6f", "text": "BACKGROUND\nIt is widely known that more boys are born during and immediately after wars, but there has not been any ultimate (evolutionary) explanation for this 'returning soldier effect'.
Here, I suggest that the higher sex ratios during and immediately after wars might be a byproduct of the fact that taller soldiers are more likely to survive battle and that taller parents are more likely to have sons.\n\n\nMETHODS\nI analyze a large sample of British Army service records during World War I.\n\n\nRESULTS\nSurviving soldiers were on average more than one inch (3.33 cm) taller than fallen soldiers.\n\n\nCONCLUSIONS\nConservative estimates suggest that the one-inch height advantage alone is more than twice as sufficient to account for all the excess boys born in the UK during and after World War I. While it remains unclear why taller soldiers are more likely to survive battle, I predict that the returning soldier effect will not happen in more recent and future wars.", "title": "" }, { "docid": "587f58f291732bfb8954e34564ba76fd", "text": "Blood pressure oscillometric waveforms behave as amplitude modulated nonlinear signals with frequency fluctuations. Their oscillating nature can be better analyzed by the digital Taylor-Fourier transform (DTFT), recently proposed for phasor estimation in oscillating power systems. Based on a relaxed signal model that includes Taylor components greater than zero, the DTFT is able to estimate not only the oscillation itself, as does the digital Fourier transform (DFT), but also its derivatives included in the signal model. In this paper, an oscillometric waveform is analyzed with the DTFT, and its zeroth and first oscillating harmonics are illustrated. The results show that the breathing activity can be separated from the cardiac one through the critical points of the first component, determined by the zero crossings of the amplitude derivatives estimated from the third Taylor order model. On the other hand, phase derivative estimates provide the fluctuations of the cardiac frequency and its derivative, new parameters that could improve the precision of the systolic and diastolic blood pressure assignment. The DTFT envelope estimates uniformly converge from K=3, substantially improving the harmonic separation of the DFT.", "title": "" }, { "docid": "1738a8ccb1860e5b85e2364f437d4058", "text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word error rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can lead to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses. Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimentally, this approach leads to a significant WER reduction in a large vocabulary recognition task.", "title": "" } ]
scidocsrr
0f346a1d1d04da6a96d60bf82899aaaf
Viewstamped Replication: A New Primary Copy Method to Support Highly-Available Distributed Systems
[ { "docid": "7530de11afdbb1e09c363644b0866bcb", "text": "The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher level algorithms made possible by our approach.", "title": "" } ]
[ { "docid": "8a6c3614d35b21a3e6c077d20309a0bd", "text": "A multitude of different probabilistic programming languages exists today, all extending a traditional programming language with primitives to support modeling of complex, structured probability distributions. Each of these languages employs its own probabilistic primitives, and comes with a particular syntax, semantics and inference procedure. This makes it hard to understand the underlying programming concepts and appreciate the differences between the different languages. To obtain a better understanding of probabilistic programming, we identify a number of core programming concepts underlying the primitives used by various probabilistic languages, discuss the execution mechanisms that they require and use these to position and survey state-of-the-art probabilistic languages and their implementation. While doing so, we focus on probabilistic extensions of logic programming languages such as Prolog, which have been considered for over 20 years.", "title": "" }, { "docid": "0fc69998ed7ef751abd62cae41495cf0", "text": "Hedonic systems represent a multibillion-dollar industry and play an important role in how people recreate, socialize, and even conduct business. A key goal of hedonic system design is to promote positive affect—a variable known to influence cognitive beliefs, trust, disclosure, adoption, and purchase intentions. Yet, little research has identified or explained how stimuli from design features lead to positive affect in hedonic systems. This article introduces a new theoretical model, the Hedonic Affect Model (HAM), which is a comprehensive and generalizable model explaining the causes of positive and negative affect in a hedonic software context. HAM outlines three stages that provide an explanation of how stimuli lead to positive affect in hedonic contexts. In stage 1, HAM specifies group and individual interaction inputs that are likely to play a role in users' hedonic evaluations of a system. Stage 2 explains how the interaction inputs and intrinsic motivation influence hedonic performance perceptions. Stage 3 explains how performance expectations and perceived performance lead to a positive disconfirmation and influence users' affect.", "title": "" }, { "docid": "3a2168e93c1f8025e93de1a7594e17d5", "text": "1 Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems Tim Bass ERIM International & Silk Road Ann Arbor, MI 48113 Abstract| Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few rst steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.", "title": "" }, { "docid": "755c4c452a535f30e53f0e9e77f71d20", "text": "Learning approaches have shown great success in the task of super-resolving an image given a low resolution input. Video superresolution aims for exploiting additionally the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. 
In this paper, we provide an end-to-end video superresolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We rather propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video superresolution can benefit from optical flow and we obtain state-of-the-art results on the popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy.", "title": "" }, { "docid": "c95e58c054855c60b16db4816c626ecb", "text": "Markerless tracking of human pose is a hard yet relevant problem. In this paper, we derive an efficient filtering algorithm for tracking human pose using a stream of monocular depth images. The key idea is to combine an accurate generative model — which is achievable in this setting using programmable graphics hardware — with a discriminative model that provides data-driven evidence about body part locations. In each filter iteration, we apply a form of local model-based search that exploits the nature of the kinematic chain. As fast movements and occlusion can disrupt the local search, we utilize a set of discriminatively trained patch classifiers to detect body parts. We describe a novel algorithm for propagating this noisy evidence about body part locations up the kinematic chain using the unscented transform. The resulting distribution of body configurations allows us to reinitialize the model-based search. We provide extensive experimental results on 28 real-world sequences using automatic ground-truth annotations from a commercial motion capture system.", "title": "" }, { "docid": "14fa72af2a1a4264b2e84e6c810df326", "text": "This paper presents a clustering approach that simultaneously identifies product features and groups them into aspect categories from online reviews. Unlike prior approaches that first extract features and then group them into categories, the proposed approach combines feature and aspect discovery instead of chaining them. In addition, prior work on feature extraction tends to require seed terms and focus on identifying explicit features, while the proposed approach extracts both explicit and implicit features, and does not require seed terms. We evaluate this approach on reviews from three domains. The results show that it outperforms several state-of-the-art methods on both tasks across all three domains.", "title": "" }, { "docid": "f266646478196476fb93ea507ea6e23e", "text": "The aim of this paper is to develop a human tracking system that is resistant to environmental changes and covers a wide area. Simply structured floor sensors are low-cost and can track people in a wide area. However, the sensor readings are discrete and sometimes missing; therefore, footsteps do not represent the precise location of a person. A Markov chain Monte Carlo method (MCMC) is a promising tracking algorithm for these kinds of signals. We applied two prediction models to the MCMC: a linear Gaussian model and a highly nonlinear bipedal model. The Gaussian model was efficient in terms of computational cost while the bipedal model discriminated people more accurately than the Gaussian model.
The Gaussian model can be used to track a number of people, and the bipedal model can be used in situations where more accurate tracking is required.", "title": "" }, { "docid": "8264e76269150c828d07439983ad3582", "text": "BACKGROUND\nSkin diseases caused by mites and insects living in domestic environments have been rarely systematically studied.\n\n\nOBJECTIVES\nTo study patients with dermatitis induced by arthropods in domestic environment describing their clinical features, isolating culprit arthropods and relating the clinical features to the parasitological data.\n\n\nMETHODS\nThe study was performed in 105 subjects with clinical and anamnestic data compatible with the differential diagnosis of ectoparasitoses in domestic environments. Clinical data and arthropods findings obtained by indoor dust direct examination were studied.\n\n\nRESULTS\nIndoor dust direct examination demonstrated possible arthropods infestation in 98 subjects (93.3%), more frequently mites (56.1%) (mainly Pyemotes ventricosus and Glycyphagus domesticus) than insects (43.9%) (mainly Formicidae and Bethylidae). Strophulus (46.9%) and urticaria-like eruption (36.7%) in upper limbs and trunk with severe extent were prevalent. Itch was mostly severe (66.3%) and continuous (55.1%). Ectoparasitoses occurred frequently with acute course in summer (44.9%) and spring (30.6%).\n\n\nCONCLUSIONS\nPossible correlation between clinical and aetiological diagnosis of arthropods ectoparasitoses in domestic environments needs the close cooperation between dermatologist and parasitologist. This is crucial to successfully and definitely resolve skin lesions by eradicating the factors favouring infestation.", "title": "" }, { "docid": "806de7b7ca26cff0dc8a69a290195ec2", "text": "Coplanar-waveguide (CPW)-fed microstrip bandpass filters are proposed with capacitive couplings suitably introduced at the input/output (I/O) ports, as well as between the resonators for spurious suppression. By adopting these capacitive couplings, several open stubs are established so that adjustable multiple transmission zeros may independently be created to suppress several unwanted spurious passbands, thereby extending the stopband and improving the rejection level. In this study, the capacitive couplings required at the I/O ports, as well as across the resonators, are realized by the broadside-coupled transition structures between the top microstrip layer and the bottom CPW layer so that the I/O ports may properly be matched and the spurious responses may effectively be suppressed. Specifically, a fifth-order bandpass filter, centered at f0=1.33 GHz with a stopband extended up to 8.67 GHz (6.52 f0) and a rejection level better than 30 dB, is implemented and carefully examined", "title": "" }, { "docid": "b439f7ebf1db9072c732df0fb77fcd67", "text": "A novel pseudo-continuous conduction mode (PCCM) boost power-factor-correction (PFC) converter and its corresponding control strategy are proposed in this paper. Connecting a power switch in parallel with the inductor makes the boost converter operate in PCCM, which provides an additional degree of control freedom to realize PFC control. Therefore, a simple and fast voltage control loop of the PCCM boost PFC converter can be designed to realize output-voltage regulation. 
The additional degree of control freedom introduced by inductor current freewheeling operation of the PCCM boost PFC converter has been exploited through the dead-zone control technique, which can be dynamically adjusted in accordance with the output-voltage ripple. Compared with the continuous conduction mode (CCM) boost PFC converter, the controller of the proposed PCCM boost PFC converter is much simpler. Moreover, the PCCM boost PFC converter benefits from reduced inductor-current ripple and improved power factor compared with the boost PFC converter operating in discontinuous conduction mode (DCM). Analysis, simulation, and a 400-W prototype of the PCCM boost PFC converter have been presented and compared with those of boost PFC converters operating in conventional CCM and DCM. The results show that the dynamic response of the PCCM boost PFC converter is significantly faster than that of the existing boost PFC converters.", "title": "" }, { "docid": "4ad09f27848c5f47de5bb58a522c28a3", "text": "The rapid development of deep learning is enabling plenty of novel applications such as image and speech recognition for embedded systems, robotics or smart wearable devices. However, typical deep learning models like deep convolutional neural networks (CNNs) consume so much on-chip storage and high-throughput compute resources that they cannot be easily handled by mobile or embedded devices with thrifty silicon and power budgets. In order to enable large CNN models in mobile or more cutting-edge devices for IoT or cyber-physical applications, we proposed an efficient on-chip memory architecture for CNN inference acceleration, and showed its application to our in-house general-purpose deep learning accelerator. The redesigned on-chip memory subsystem, Memsqueezer, includes an active weight buffer set and data buffer set that embrace specialized compression methods to reduce the footprint of the CNN weight and data sets, respectively. The Memsqueezer buffer can compress the data and weight set according to their distinct features, and it also includes a built-in redundancy detection mechanism that actively scans through the work-set of CNNs to boost their inference performance by eliminating the data redundancy. In our experiment, it is shown that the CNN accelerators with Memsqueezer buffers achieve more than 2x performance improvement and reduce energy consumption by 80% on average over the conventional buffer design with the same area budget.", "title": "" }, { "docid": "df897e1cc540c976b2fd109ce552ed58", "text": "This paper proposes the design and development of an integrated serial elastic actuator (SEA) which is used as the elbow joint in a soft humanoid arm. First, the requirements of the SEA are illustrated, and six different types of elastic elements (e.g., the discoid element and the cylindrical element) are analyzed using the design of experiments (DOE) method. Then, a discoid elastic element with larger flexibility is developed. Based on this, an integrated SEA containing the position sensors and the torque sensor is developed, and a 7-degree-of-freedom humanoid arm with joint flexibilities is designed. Finally, experiments on the SEA demonstrate that the stiffness of the prototype approximates that of the desired SEA.", "title": "" }, { "docid": "cbf5019b1363b20c15c284d6d76f3281", "text": "Matching articulated shapes represented by voxel-sets reduces to maximal sub-graph isomorphism when each set is described by a weighted graph.
Spectral graph theory can be used to map these graphs onto lower dimensional spaces and match shapes by aligning their embeddings in virtue of their invariance to change of pose. Classical graph isomorphism schemes relying on the ordering of the eigenvalues to align the eigenspaces fail when handling large data-sets or noisy data. We derive a new formulation that finds the best alignment between two congruent K-dimensional sets of points by selecting the best subset of eigenfunctions of the Laplacian matrix. The selection is done by matching eigenfunction signatures built with histograms, and the retained set provides a smart initialization for the alignment problem with a considerable impact on the overall performance. Dense shape matching casted into graph matching reduces then, to point registration of embeddings under orthogonal transformations; the registration is solved using the framework of unsupervised clustering and the EM algorithm. Maximal subset matching of non identical shapes is handled by defining an appropriate outlier class. Experimental results on challenging examples show how the algorithm naturally treats changes of topology, shape variations and different sampling densities.", "title": "" }, { "docid": "6f80ca376936dc6f682a3a16587d87b3", "text": "System Dynamics is often used to explore issues that are characterised by uncertainties. This paper discusses first of all different types of uncertainties that system dynamicists need to deal with and the tools they already use to deal with these uncertainties. From this discussion it is concluded that stand-alone System Dynamics is often not sufficient to deal with uncertainties. Then, two venues for improving the capacity of System Dynamics to deal with uncertainties are discussed, in both cases, by matching System Dynamics with other method(ologie)s: first with Multi-Attribute Multiple Criteria Decision Analysis, and finally with Exploratory Modelling.", "title": "" }, { "docid": "a5a53221aa9ccda3258223b9ed4e2110", "text": "Accurate and reliable inventory forecasting can save an organization from overstock, under-stock and no stock/stock-out situation of inventory. Overstocking leads to high cost of storage and its maintenance, whereas under-stocking leads to failure to meet the demand and losing profit and customers, similarly stock-out leads to complete halt of production or sale activities. Inventory transactions generate data, which is a time-series data having characteristic volume, speed, range and regularity. The inventory level of an item depends on many factors namely, current stock, stock-on-order, lead-time, annual/monthly target. In this paper, we present a perspective of treating Inventory management as a problem of Genetic Programming based on inventory transactions data. A Genetic Programming — Symbolic Regression (GP-SR) based mathematical model is developed and subsequently used to make forecasts using Holt-Winters Exponential Smoothing method for time-series modeling. The GP-SR model evolves based on RMSE as the fitness function. The performance of the model is measured in terms of RMSE and MAE. 
The estimated values of item demand from the GP-SR model are finally used to simulate a time series, and forecasts are generated for inventory required on a monthly time horizon.", "title": "" }, { "docid": "e8b0536f5d749b5f6f5651fe69debbe1", "text": "Current centralized cloud datacenters provide scalable computation and storage resources in a virtualized infrastructure and employ a use-based \"pay-as-you-go\" model. But current mobile devices and their resource-hungry applications (e.g., speech or face recognition) demand these resources on the spot, though a mobile device's intrinsic characteristic is its limited availability of resources (e.g., CPU, storage, bandwidth, energy). Thus, mobile cloud computing (MCC) was introduced to overcome these limitations by transparently making the apparently infinite cloud resources accessible to mobile devices and by allowing mobile applications to (elastically) expand into the cloud. However, MCC often relies on a stable and fast connection to the mobile devices' surrogate in the cloud, which is a rare case in mobile scenarios. Moreover, the increased latency and the limited bandwidth prevent the use of real-time applications such as cloud gaming. Instead, mobile edge computing (MEC) or fog computing tries to provide the necessary resources at the logical edge of the network by including infrastructure components to create ad-hoc mobile clouds. However, this approach requires the replication and management of the applications' business logic in an untrusted, unreliable and constantly changing environment. Consequently, this paper presents a novel approach to allow mobile app developers to easily benefit from the features of MEC. In particular, we present a programming model and framework that directly fit the common app developers' mindset to design elastic and scalable edge-based mobile applications.", "title": "" }, { "docid": "c1aa4f3fa007ced39cc92aeb28bcaf86", "text": "Personal cloud storage services such as Dropbox and OneDrive are popular among Internet users. They help in sharing content and backing up data by relying on the cloud to store files. The rise of mobile terminals and the presence of new providers question whether the usage of cloud storage is evolving. This knowledge is essential to understand the workload these services need to handle, their performance, and implications. In this paper we present a comprehensive characterization of personal cloud storage services. Relying on traces collected for one month in an operational network, we show that users of each service present distinct behaviors. Dropbox is now threatened by competitors, with OneDrive and Google Drive reaching large market shares. However, the popularity of the latter services seems to be driven by their integration into Windows and Android. Indeed, around 50% of their users do not produce any workload. Considering performance, providers show distinct trade-offs, with bottlenecks that hardly allow users to fully exploit their access line bandwidth. Finally, usage of cloud services is now ordinary among mobile users, thanks to the automatic backup of pictures and media files.", "title": "" }, { "docid": "83ee7b71813ead9656e2972e700ade24", "text": "In many visual domains (like fashion, furniture, etc.) the search for products on online platforms requires matching textual queries to image content.
For example, the user provides a search query in natural language (e.g., pink floral top) and the results obtained are of a different modality (e.g., the set of images of pink floral tops). Recent work on multimodal representation learning enables such cross-modal matching by learning a common representation space for text and image. While such representations ensure that the n-dimensional representation of pink floral top is very close to the representation of the corresponding images, they do not ensure that the first k1 (< n) dimensions correspond to color, the next k2 (< n) correspond to style and so on. In other words, they learn entangled representations where each dimension does not correspond to a specific attribute. We propose two simple variants which can learn disentangled common representations for the fashion domain wherein each dimension would correspond to a specific attribute (color, style, silhouette, etc.). Our proposed variants can be integrated with any existing multimodal representation learning method. We use a large fashion dataset of over 700K fashion items crawled from multiple fashion e-commerce portals to evaluate the learned representations on four different applications from the fashion domain, namely, cross-modal image retrieval, visual search, image tagging, and query expansion. Our experimental results show that the proposed variants lead to better performance for each of these applications while learning disentangled representations.", "title": "" }, { "docid": "16d6862cf891e5219aae10d5fcd6ce92", "text": "This paper describes the Power System Analysis Toolbox (PSAT), an open source Matlab and GNU/Octave-based software package for analysis and design of small to medium size electric power systems. PSAT includes power flow, continuation power flow, optimal power flow, small-signal stability analysis, and time-domain simulation, as well as several static and dynamic models, including nonconventional loads, synchronous and asynchronous machines, regulators, and FACTS. PSAT is also provided with a complete set of user-friendly graphical interfaces and a Simulink-based editor of one-line network diagrams. Basic features, algorithms, and a variety of case studies are presented in this paper to illustrate the capabilities of the presented tool and its suitability for educational and research purposes.", "title": "" }, { "docid": "0b507193ca68d05a3432a9e735df5d95", "text": "Capturing an image with a defocused background by using a large aperture is a widely used technique in digital single-lens reflex (DSLR) camera photography. It is also desirable to provide this function on smart phones. In this paper, a new algorithm is proposed to synthesize such an effect for a single portrait image. The foreground portrait is detected using a face prior based salient object detection algorithm. Then, with an improved gradient domain guided image filter, the details in the foreground are enhanced while the background pixels are blurred. In this way, the background objects are defocused and thus the foreground objects are emphasized. The resultant image looks similar to an image captured using a camera with a large aperture. The proposed algorithm can be adopted in smart phones, especially for the front cameras of smart phones.", "title": "" } ]
scidocsrr
5b88a4d27325f018f820df5efc8ad455
Self-tuned deep super resolution
[ { "docid": "4918abc325eae43369e9173c2c75706b", "text": "We propose a fast regression model for practical single image super-resolution based on in-place examples, by leveraging two fundamental super-resolution approaches- learning from an external database and learning from self-examples. Our in-place self-similarity refines the recently proposed local self-similarity by proving that a patch in the upper scale image have good matches around its origin location in the lower scale image. Based on the in-place examples, a first-order approximation of the nonlinear mapping function from low-to high-resolution image patches is learned. Extensive experiments on benchmark and real-world images demonstrate that our algorithm can produce natural-looking results with sharp edges and preserved fine details, while the current state-of-the-art algorithms are prone to visual artifacts. Furthermore, our model can easily extend to deal with noise by combining the regression results on multiple in-place examples for robust estimation. The algorithm runs fast and is particularly useful for practical applications, where the input images typically contain diverse textures and they are potentially contaminated by noise or compression artifacts.", "title": "" }, { "docid": "c0d794e7275e7410998115303bf0cf79", "text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.", "title": "" } ]
[ { "docid": "5cb4a7a6486eaba444b88b7a48e9cea8", "text": "UNLABELLED\nThis Guideline is an official statement of the European Society of Gastrointestinal Endoscopy (ESGE). The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system 1 2 was adopted to define the strength of recommendations and the quality of evidence.\n\n\nMAIN RECOMMENDATIONS\n1 ESGE recommends endoscopic en bloc resection for superficial esophageal squamous cell cancers (SCCs), excluding those with obvious submucosal involvement (strong recommendation, moderate quality evidence). Endoscopic mucosal resection (EMR) may be considered in such lesions when they are smaller than 10 mm if en bloc resection can be assured. However, ESGE recommends endoscopic submucosal dissection (ESD) as the first option, mainly to provide an en bloc resection with accurate pathology staging and to avoid missing important histological features (strong recommendation, moderate quality evidence). 2 ESGE recommends endoscopic resection with a curative intent for visible lesions in Barrett's esophagus (strong recommendation, moderate quality evidence). ESD has not been shown to be superior to EMR for excision of mucosal cancer, and for that reason EMR should be preferred. ESD may be considered in selected cases, such as lesions larger than 15 mm, poorly lifting tumors, and lesions at risk for submucosal invasion (strong recommendation, moderate quality evidence). 3 ESGE recommends endoscopic resection for the treatment of gastric superficial neoplastic lesions that possess a very low risk of lymph node metastasis (strong recommendation, high quality evidence). EMR is an acceptable option for lesions smaller than 10 - 15 mm with a very low probability of advanced histology (Paris 0-IIa). However, ESGE recommends ESD as treatment of choice for most gastric superficial neoplastic lesions (strong recommendation, moderate quality evidence). 4 ESGE states that the majority of colonic and rectal superficial lesions can be effectively removed in a curative way by standard polypectomy and/or by EMR (strong recommendation, moderate quality evidence). ESD can be considered for removal of colonic and rectal lesions with high suspicion of limited submucosal invasion that is based on two main criteria of depressed morphology and irregular or nongranular surface pattern, particularly if the lesions are larger than 20 mm; or ESD can be considered for colorectal lesions that otherwise cannot be optimally and radically removed by snare-based techniques (strong recommendation, moderate quality evidence).", "title": "" }, { "docid": "2c9ee5e88db62a4de92dbefb72bb61de", "text": "Surveys are probably the most commonly-used research method world-wide. Survey work is visible not only because we see many examples of it in software engineering research, but also because we are often asked to participate in surveys in our private capacity, as electors, consumers, or service users. This widespread use of surveys may give us the impression that surveybased research is straightforward, an easy option for researchers to gather important information about products, context, processes, workers and more. In our personal experience with applying and evaluating research methods and their results, we certainly did not expect to encounter major problems with a survey that we planned, to investigate issues associated with technology adoption. This article and subsequent ones in this series describe how wrong we were. 
We do not want to give the impression that there is any way of turning a bad survey into a good one; if a survey is a lemon, it stays a lemon. However, we believe that learning from our mistakes is the way to make lemonade from lemons. So this series of articles shares with you our lessons learned, in the hope of improving survey research in software engineering.", "title": "" }, { "docid": "e36d1db375d661820be1da3307b99fbb", "text": "The goal of this thesis is to establish a system for the automatic syntactic analysis of real-world text. Syntactic analysis in this thesis denotes computation of in-depth syntactic structures that are grounded in syntactic theories like Head-Driven Phrase Structure Grammar (HPSG). Since syntactic structures provide essential components for computing meanings of natural language sentences, the establishment of syntactic analyzers is a starting point for intelligent natural language processing. Syntactic analyzers are strongly demanded in natural language processing applications, including question answering, dialog systems, and text mining. To date, however, few syntactic analyzers can process naturally occurring sentences such as newswire texts. This task involves two significant obstacles. One is the scalability of a grammar to analyze realworld texts. Grammar theories that successfully worked in a toy system could not be applied to the analysis of real-world sentences. Despite intensive research on syntactic analysis, development of wide-coverage grammars is almost impractical. This is due to the inherent difficulty in scaling up a grammar; as a grammar becomes larger, the maintenance of the consistency of the grammar is more difficult. Modern syntactic theories, which are called lexicalized grammars, explain diverse syntactic structures with various combinations of lexical entries to express word-specific constraints and linguistic principles to represent generic syntactic regularities. However, grammar writers cannot simulate in their mind all possible combinations of lexical entries and linguistic principles. Notably, a number of lexical entries are required to treat real-world sentences, and the consistent expansion of lexical entries creates a bottleneck in the scaling up of lexicalized grammars. The problem is further deteriorated by the complicated data structures required in linguistic theories to express in-depth syntactic regularity. The first proposal of this thesis is a new methodology for the development of lexicalized grammars. The method is corpus-oriented, in the sense that the objective of the grammar development is the construction of an annotated corpus, i.e., a treebank, rather than a lexicon. This methodology supports an inexpensive development of lexicalized grammars owing to the systematic control of grammar inconsistencies and the reuse of existing linguistic resources. First, grammar developers define linguistic principles that conform to a target syntactic theory, i.e., HPSG in our case. Next, existing linguistic resources, such as Penn Treebank, are converted into an HPSG treebank. The major work of grammar developers is to maintain the conversion process with the help of consistency checking by principles. That is, because conflicts in a grammar are automatically detected as violations of principle applications to a treebank, grammar writers can easily identify sources of inconsistencies. When we have a sufficient treebank of HPSG, a lexicon is collected from terminal nodes of HPSG syntactic structures in the treebank. 
Lexicon collection is completely deterministic; that is, treebank construction theoretically subsumes lexicon development. The other obstacle is the modeling of preference of natural language syntax. Since linguistic research on syntax has focused on structural regularity, modeling of preference was not respected. However, it is indispensable for automatic syntactic analysis because applications usually require disambiguated or ranked parse results. Since probabilistic models attained great success in CFG", "title": "" }, { "docid": "1338d4ca40f05bebd978959719acd59a", "text": "The reliability of file systems depends in part on how well they propagate errors. We develop a static analysis technique, EDP, that analyzes how file systems and storage device drivers propagate error codes. Running our EDP analysis on all file systems and 3 major storage device drivers in Linux 2.6, we find that errors are often incorrectly propagated; 1153 calls (13%) drop an error code without handling it. We perform a set of analyses to rank the robustness of each subsystem based on the completeness of its error propagation; we find that many popular file systems are less robust than other available choices. We confirm that write errors are neglected more often than read errors. We also find that many violations are not cornercase mistakes, but perhaps intentional choices. Finally, we show that inter-module calls play a part in incorrect error propagation, but that chained propagations do not. In conclusion, error propagation appears complex and hard to perform correctly in modern systems.", "title": "" }, { "docid": "973fa990e13734f060ae13b138e99c39", "text": "Parallel algorithm for line and circle drawing that are based on J.E. Bresenham's line and circle algorithms (see Commun. ACM, vol.20, no.2, p.100-6 (1977)) are presented. The new algorithms are applicable on raster scan CRTs, incremental pen plotters, and certain types of printers. The line algorithm approaches a perfect speedup of P as the line length approaches infinity, and the circle algorithm approaches a speedup greater than 0.9P as the circle radius approaches infinity. It is assumed that the algorithm are run in a multiple-instruction-multiple-data (MIMD) environment, that the raster memory is shared, and that the processors are dedicated and assigned to the task (of line or circle drawing).<<ETX>>", "title": "" }, { "docid": "b77d297feeff92a2e7b03bf89b5f20db", "text": "Dependability evaluation main objective is to assess the ability of a system to correctly function over time. There are many possible approaches to the evaluation of dependability: in these notes we are mainly concerned with dependability evaluation based on probabilistic models. Starting from simple probabilistic models with very efficient solution methods we shall then come to the main topic of the paper: how Petri nets can be used to evaluate the dependability of complex systems.", "title": "" }, { "docid": "3de7dd15d2b8bb5d08eb548bf3f19230", "text": "Image compression has become an important process in today‟s world of information exchange. Image compression helps in effective utilization of high speed network resources. Medical Image Compression is very important in the present world for efficient archiving and transmission of images. In this paper two different approaches for lossless image compression is proposed. 
One uses a combination of the 2D-DWT and FELICS algorithms for lossy-to-lossless image compression and the other uses a combination of a prediction algorithm and the integer wavelet transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured and a comparison of both approaches is shown. We observed an increased compression ratio and higher PSNR values.", "title": "" }, { "docid": "672ac3cd042179cf797b97ac7359ed3e", "text": "Many time series data mining problems require subsequence similarity search as a subroutine. Dozens of similarity/distance measures have been proposed in the last decade and there is increasing evidence that Dynamic Time Warping (DTW) is the best measure across a wide range of domains. Given DTW's usefulness and ubiquity, there has been a large community-wide effort to mitigate its relative lethargy. Proposed speedup techniques include early abandoning strategies, lower-bound based pruning, indexing and embedding. In this work we argue that we are now close to exhausting all possible speedup from software, and that we must turn to hardware-based solutions. With this motivation, we investigate both GPU (Graphics Processing Unit) and FPGA (Field Programmable Gate Array) based acceleration of subsequence similarity search under the DTW measure. As we shall show, our novel algorithms allow GPUs to achieve two orders of magnitude speedup and FPGAs to produce four orders of magnitude speedup. We conduct detailed case studies on the classification of astronomical observations and demonstrate that our ideas allow us to tackle problems that would be untenable otherwise.", "title": "" }, { "docid": "f9afdab6f3cac70d6680b02b32f37b49", "text": "Marx generators can produce high voltage pulses using multiple identical stages that operate at a fraction of the total output voltage, without the need for a step-up transformer that limits the pulse risetimes and lowers the efficiency of the system. Each Marx stage includes a capacitor or pulse forming network, and a high voltage switch. Typically, these switches are spark gaps, resulting in Marx generators with low repetition rates and limited lifetimes. The development of economical, compact, high voltage, high di/dt, and fast turn-on solid-state switches makes it easy to build economical, long lifetime, high voltage Marx generators capable of high pulse repetition rates. We have constructed a Marx generator using our 24 kV thyristor based switches, which are capable of conducting 14 kA peak currents with ringing discharges at >25 kA/μs rates of current rise. The switches have short turn-on delays, less than 200 ns, low timing jitters, and are triggered by a single 10 V isolated trigger pulse. This paper will include a description of a 4-stage solid-state Marx and triggering system, as well as show data from operation at 15 kV charging voltage. The Marx was used to drive a one-stage argon ion accelerator.", "title": "" }, { "docid": "066ceafff23aef8c0c6101dcd367f018", "text": "We introduce a new scene graph generation method called image-level attentional context modeling (ILAC). Our model includes an attentional graph network that effectively propagates contextual information across the graph using image-level features. Whereas previous works use an object-centric context, we build an image-level context agent to encode the scene properties. The proposed method comprises a single-stream network that iteratively refines the scene graph with a nested graph neural network.
We demonstrate that our approach achieves competitive performance with the state-of-the-art for scene graph generation on the Visual Genome dataset, while requiring fewer parameters than other methods. We also show that ILAC can improve regular object detectors by incorporating relational image-level information.", "title": "" }, { "docid": "02bae85905793e75950acbe2adcc6a7b", "text": "The olfactory system is an essential part of human physiology, with a rich evolutionary history. Although humans are less dependent on chemosensory input than are other mammals (Niimura 2009, Hum. Genomics 4:107-118), olfactory function still plays a critical role in health and behavior. The detection of hazards in the environment, generating feelings of pleasure, promoting adequate nutrition, influencing sexuality, and maintenance of mood are described roles of the olfactory system, while other novel functions are being elucidated. A growing body of evidence has implicated a role for olfaction in such diverse physiologic processes as kin recognition and mating (Jacob et al. 2002a, Nat. Genet. 30:175-179; Horth 2007, Genomics 90:159-175; Havlicek and Roberts 2009, Psychoneuroendocrinology 34:497-512), pheromone detection (Jacob et al. 200b, Horm. Behav. 42:274-283; Wyart et al. 2007, J. Neurosci. 27:1261-1265), mother-infant bonding (Doucet et al. 2009, PLoS One 4:e7579), food preferences (Mennella et al. 2001, Pediatrics 107:E88), central nervous system physiology (Welge-Lüssen 2009, B-ENT 5:129-132), and even longevity (Murphy 2009, JAMA 288:2307-2312). The olfactory system, although phylogenetically ancient, has historically received less attention than other special senses, perhaps due to challenges related to its study in humans. In this article, we review the anatomic pathways of olfaction, from peripheral nasal airflow leading to odorant detection, to epithelial recognition of these odorants and related signal transduction, and finally to central processing. Olfactory dysfunction, which can be defined as conductive, sensorineural, or central (typically related to neurodegenerative disorders), is a clinically significant problem, with a high burden on quality of life that is likely to grow in prevalence due to demographic shifts and increased environmental exposures.", "title": "" }, { "docid": "ce282fba1feb109e03bdb230448a4f8a", "text": "The goal of two-sample tests is to assess whether two samples, SP ∼ P and SQ ∼ Q, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the n examples in SP with a positive label, and by pairing the m examples in SQ with a negative label. If the null hypothesis “P = Q” is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allow to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. 
Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.", "title": "" }, { "docid": "4aed26d5f35f6059f4afe8cc7225f6a8", "text": "The rapid and quick growth of smart mobile devices has caused users to demand pervasive mobile broadband services comparable to the fixed broadband Internet. In this direction, the research initiatives on 5G networks have gained accelerating momentum globally. 5G Networks will act as a nervous system of the digital society, economy, and everyday peoples life and will enable new future Internet of Services paradigms such as Anything as a Service, where devices, terminals, machines, also smart things and robots will become innovative tools that will produce and will use applications, services and data. However, future Internet will exacerbate the need for improved QoS/QoE, supported by services that are orchestrated on-demand and that are capable of adapt at runtime, depending on the contextual conditions, to allow reduced latency, high mobility, high scalability, and real time execution. A new paradigm called Fog Computing, or briefly Fog has emerged to meet these requirements. Fog Computing extends Cloud Computing to the edge of the network, reduces service latency, and improves QoS/QoE, resulting in superior user-experience. This paper provides a survey of 5G and Fog Computing technologies and their research directions, that will lead to Beyond-5G Network in the Fog.", "title": "" }, { "docid": "9a1e0edc4d5eb8a2cbf7fa0c6640f0bc", "text": "The classical SVM is an optimization problem minimizing the hinge losses of mis-classified samples with the regularization term. When the sample size is small or data has noise, it is possible that the classifier obtained with training data may not generalize well to population, since the samples may not accurately represent the true population distribution. We propose a distributionally-robust framework for Support Vector Machines (DR-SVMs). We build an ambiguity set for the population distribution based on samples using the Kantorovich metric. DR-SVMs search the classifier that minimizes the sum of regularization term and the hinge loss function for the worst-case population distribution among the ambiguity set. We provide semi-infinite programming formulation of the DR-SVMs and propose a cutting-plane algorithm to solve the problem. Computational results on simulated data and real data from University of California, Irvine Machine Learning Repository show that the DR-SVMs outperform the SVMs in terms of the Area Under Curve (AUC) measures on several test problems.", "title": "" }, { "docid": "c860c9006751ee614464eaa5737da843", "text": "We propose a simple yet effective approach to learning bilingual word embeddings (BWEs) from non-parallel document-aligned data (based on the omnipresent skip-gram model), and its application to bilingual lexicon induction (BLI). 
We demonstrate the utility of the induced BWEs in the BLI task by reporting on benchmarking BLI datasets for three language pairs: (1) We show that our BWE-based BLI models significantly outperform the MuPTM-based and context-counting models in this setting, and obtain the best reported BLI results for all three tested language pairs; (2) We also show that our BWE-based BLI models outperform other BLI models based on recently proposed BWEs that require parallel data for bilingual training.", "title": "" }, { "docid": "44f5908740475159c3b4da1de11fad04", "text": "Insufficient sleep, poor sleep quality and sleepiness are common problems in children and adolescents being related to learning, memory and school performance. The associations between sleep quality (k=16 studies, N=13,631), sleep duration (k=17 studies, N=15,199), sleepiness (k=17, N=19,530) and school performance were examined in three separate meta-analyses including influential factors (e.g., gender, age, parameter assessment) as moderators. All three sleep variables were significantly but modestly related to school performance. Sleepiness showed the strongest relation to school performance (r=-0.133), followed by sleep quality (r=0.096) and sleep duration (r=0.069). Effect sizes were larger for studies including younger participants which can be explained by dramatic prefrontal cortex changes during (early) adolescence. Concerning the relationship between sleep duration and school performance age effects were even larger in studies that included more boys than in studies that included more girls, demonstrating the importance of differential pubertal development of boys and girls. Longitudinal and experimental studies are recommended in order to gain more insight into the different relationships and to develop programs that can improve school performance by changing individuals' sleep patterns.", "title": "" }, { "docid": "113c07908c1f22c7671553c7f28c0b3f", "text": "Nearly 80% of children in the United States have at least 1 sibling, indicating that the birth of a baby sibling is a normative ecological transition for most children. Many clinicians and theoreticians believe the transition is stressful, constituting a developmental crisis for most children. Yet, a comprehensive review of the empirical literature on children's adjustment over the transition to siblinghood (TTS) has not been done for several decades. The current review summarizes research examining change in first borns' adjustment to determine whether there is evidence that the TTS is disruptive for most children. Thirty studies addressing the TTS were found, and of those studies, the evidence did not support a crisis model of developmental transitions, nor was there overwhelming evidence of consistent changes in firstborn adjustment. Although there were decreases in children's affection and responsiveness toward mothers, the results were more equivocal for many other behaviors (e.g., sleep problems, anxiety, aggression, regression). An inspection of the scientific literature indicated there are large individual differences in children's adjustment and that the TTS can be a time of disruption, an occasion for developmental advances, or a period of quiescence with no noticeable changes. The TTS may be a developmental turning point for some children that portends future psychopathology or growth depending on the transactions between children and the changes in the ecological context over time. 
A developmental ecological systems framework guided the discussion of how child, parent, and contextual factors may contribute to the prediction of firstborn children's successful adaptation to the birth of a sibling.", "title": "" }, { "docid": "41b17931c63d053bd0a339beab1c0cfc", "text": "The investigation and development of new methods from diverse perspectives to shed light on portfolio choice problems has never stagnated in financial research. Recently, multi-armed bandits have drawn intensive attention in various machine learning applications in online settings. The tradeoff between exploration and exploitation to maximize rewards in bandit algorithms naturally establishes a connection to portfolio choice problems. In this paper, we present a bandit algorithm for conducting online portfolio choices by effectually exploiting correlations among multiple arms. Through constructing orthogonal portfolios from multiple assets and integrating with the upper confidence bound bandit framework, we derive the optimal portfolio strategy that represents the combination of passive and active investments according to a risk-adjusted reward function. Compared with oft-quoted trading strategies in finance and machine learning fields across representative real-world market datasets, the proposed algorithm demonstrates superiority in both risk-adjusted return and cumulative wealth.", "title": "" }, { "docid": "6d13952afa196a6a77f227e1cc9f43bd", "text": "Spreadsheets contain valuable data on many topics, but they are difficult to integrate with other sources. Converting spreadsheet data to the relational model would allow relational integration tools to be used, but using manual methods to do this requires large amounts of work for each integration candidate. Automatic data extraction would be useful but it is very challenging: spreadsheet designs generally requires human knowledge to understand the metadata being described. Even if it is possible to obtain this metadata information automatically, a single mistake can yield an output relation with a huge number of incorrect tuples. We propose a two-phase semiautomatic system that extracts accurate relational metadata while minimizing user effort. Based on conditional random fields (CRFs), our system enables downstream spreadsheet integration applications. First, the automatic extractor uses hints from spreadsheets’ graphical style and recovered metadata to extract the spreadsheet data as accurately as possible. Second, the interactive repair component identifies similar regions in distinct spreadsheets scattered across large spreadsheet corpora, allowing a user’s single manual repair to be amortized over many possible extraction errors. Through our method of integrating the repair workflow into the extraction system, a human can obtain the accurate extraction with just 31% of the manual operations required by a standard classification based technique. We demonstrate and evaluate our system using two corpora: more than 1,000 spreadsheets published by the US government and more than 400,000 spreadsheets downloaded from the Web.", "title": "" }, { "docid": "f0ef7e240b794ffbab6c628ca2648ddd", "text": "One of the key technologies of future automobiles is the parking assist or automatic parking control. Control problems of a car-like vehicle are not easy because of nonholonomic velocity constraints. This paper proposes a parking control strategy which is composed of an open loop path planner and a feedback tracking controller. 
By employing a trajectory tracking controller for a two-wheeled robot, a car-like vehicle can be successfully controlled to the desired configuration. Experimental results with a radio-controlled model car clearly show that the proposed control scheme is practically useful.", "title": "" } ]
scidocsrr
5d6a4d76f598b0ae08c744b10ea1f5de
Experimental Verification of the Oscillating Paddling Gait for an ePaddle-EGM Amphibious Locomotion Mechanism
[ { "docid": "975bc281e14246e29da61495e1e5dae1", "text": "We have introduced the biomechanical research on snakes and developmental research on snake-like robots that we have been working on. We could not introduce everything we developed. There were also a smaller snake-like active endoscope; a large-sized snake-like inspection robot for nuclear reactor related facility, Koryu, 1 m in height, 3.5 m in length, and 350 kg in weight; and several other snake-like robots. Development of snake-like robots is still one of our latest research topics. We feel that the technical difficulties in putting snake-like robots into practice have almost been overcome by past research, so we believe that such practical use of snake-like robots can be realized soon.", "title": "" } ]
[ { "docid": "b06844c98f1b46e6d3bd583aacd76015", "text": "The task of network management and monitoring relies on an accurate characterization of network traffic generated by different applications and network protocols. We employ three supervisedmachine learning (ML) algorithms, BayesianNetworks, Decision Trees and Multilayer Perceptrons for the flow-based classification of six different types of Internet traffic including peer-to-peer (P2P) and content delivery (Akamai) traffic. The dependency of the traffic classification performance on the amount and composition of training data is investigated followed by experiments that show that ML algorithms such as Bayesian Networks and Decision Trees are suitable for Internet traffic flow classification at a high speed, and prove to be robust with respect to applications that dynamically change their source ports. Finally, the importance of correctly classified training instances is highlighted by an experiment that is conducted with wrongly labeled training data. © 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "cfec86113f10f5466ab4778d498eee94", "text": "The platooning of connected and automated vehicles (CAVs) is expected to have a transformative impact on road transportation, e.g., enhancing highway safety, improving traffic utility, and reducing fuel consumption. Requiring only local information, distributed control schemes are scalable approaches to the coordination of multiple CAVs without using centralized communication and computation. From the perspective of multi-agent consensus control, this paper introduces a decomposition framework to model, analyze, and design the platoon system. In this framework, a platoon is naturally decomposed into four interrelated components, i.e., 1) node dynamics, 2) information flow network, 3) distributed controller, and 4) geometry formation. The classic model of each component is summarized according to the results of the literature survey; four main performance metrics, i.e., internal stability, stability margin, string stability, and coherence behavior, are discussed in the same fashion. Also, the basis of typical distributed control techniques is presented, including linear consensus control, distributed robust control, distributed sliding mode control, and distributed model predictive control.", "title": "" }, { "docid": "ff8958f18c9ff2b4c97c1815deb4dd8c", "text": "The combination of a sedentary lifestyle and excess energy intake has led to an increased prevalence of obesity which constitutes a major risk factor for several co-morbidities including type 2 diabetes and cardiovascular diseases. Intensive research during the last two decades has revealed that a characteristic feature of obesity linking it to insulin resistance is the presence of chronic low-grade inflammation being indicative of activation of the innate immune system. Recent evidence suggests that activation of the innate immune system in the course of obesity is mediated by metabolic signals, such as free fatty acids (FFAs), being elevated in many obese subjects, through activation of pattern recognition receptors thereby leading to stimulation of critical inflammatory signaling cascades, like IκBα kinase/nuclear factor-κB (IKK/NF- κB), endoplasmic reticulum (ER) stress-induced unfolded protein response (UPR) and NOD-like receptor P3 (NLRP3) inflammasome pathway, that interfere with insulin signaling. 
Exercise is one of the main prescribed interventions in obesity management, improving insulin sensitivity and reducing obesity-induced chronic inflammation. This review summarizes current knowledge of the cellular recognition mechanisms for FFAs, the inflammatory signaling pathways triggered by excess FFAs in obesity and the counteractive effects of both acute and chronic exercise on obesity-induced activation of inflammatory signaling pathways. A deeper understanding of the effects of exercise on inflammatory signaling pathways in obesity is useful to optimize preventive and therapeutic strategies to combat the increasing incidence of obesity and its comorbidities.", "title": "" }, { "docid": "ac48dad2fd7798c670618b7917d023f5", "text": "In classification or prediction tasks, the data imbalance problem is frequently observed when most of the instances belong to one majority class. The data imbalance problem has received considerable attention in the machine learning community because it is one of the main causes that degrade the performance of classifiers or predictors. In this paper, we propose a geometric mean based boosting algorithm (GMBoost) to resolve the data imbalance problem. GMBoost enables learning with consideration of both majority and minority classes because it uses the geometric mean of both classes in error rate and accuracy calculation. To evaluate the performance of GMBoost, we have applied GMBoost to a bankruptcy prediction task. The results and their comparative analysis with AdaBoost and cost-sensitive boosting indicate that GMBoost has the advantages of high prediction power and robust learning capability on imbalanced as well as balanced data distributions. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ac37ca6b8bb12305ac6e880e6e7c336a", "text": "In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential to carry out any graph signal processing or machine learning task. To realize this, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem can be solved optimally and is based on simple rank ordering. Given the noisy data, we show that the joint sparse graph learning and denoising problem can be simplified to designing only the sparse edge selection vector, which can be solved using convex optimization.", "title": "" }, { "docid": "38fa8db9d32fd8cf7d43a9db62f7b8e1", "text": "This paper presents a methodology for the implementation of a Line Impedance Stabilization Network (LISN) of the commutable symmetric kind, as specified in Standard IEC CISPR 16-1, using low-cost, easily acquirable components available on the electro-electronics market. The Line Impedance Stabilization Network is used for conducted EMI tests on equipment whose current does not exceed 16 A.", "title": "" }, { "docid": "ff67540fcba29de05415c77744d3a21d", "text": "Using Youla Parametrization and Linear Matrix Inequalities (LMI), a Multiobjective Robust Control (MRC) design for continuous linear time invariant (LTI) systems with bounded uncertainties is described.
The design objectives can be a combination of H∞-, H2-performances, constraints on the control signal, etc.. Based on an initial stabilizing controller all stabilizing controllers for the uncertain system can be described by the Youla parametrization. Given this representation, all objectives can be formulated by independent Lyapunov functions, increasing the degree of freedom for the control design.", "title": "" }, { "docid": "1e6583ec7a290488cd8e672ab59158b9", "text": "Evidence-based guidelines for the management of patients with Lyme disease, human granulocytic anaplasmosis (formerly known as human granulocytic ehrlichiosis), and babesiosis were prepared by an expert panel of the Infectious Diseases Society of America. These updated guidelines replace the previous treatment guidelines published in 2000 (Clin Infect Dis 2000; 31[Suppl 1]:1-14). The guidelines are intended for use by health care providers who care for patients who either have these infections or may be at risk for them. For each of these Ixodes tickborne infections, information is provided about prevention, epidemiology, clinical manifestations, diagnosis, and treatment. Tables list the doses and durations of antimicrobial therapy recommended for treatment and prevention of Lyme disease and provide a partial list of therapies to be avoided. A definition of post-Lyme disease syndrome is proposed.", "title": "" }, { "docid": "65a87f693d78e69c01d812fef7e9e85a", "text": "MDPL has been proposed as a masked logic style that counteracts DPA attacks. Recently, it has been shown that the so-called “early propagation effect” might reduce the security of this logic style significantly. In the light of these findings, a 0.13 μm prototype chip that includes the implementation of an 8051-compatible microcontroller in MDPL has been analyzed. Attacks on the measured power traces of this implementation show a severe DPA leakage. In this paper, the results of a detailed analysis of the reasons for this leakage are presented. Furthermore, a proposal is made on how to improve MDPL with respect to the identified problems.", "title": "" }, { "docid": "4cc52c8b6065d66472955dff9200b71f", "text": "Over the past few years there has been an increasing focus on the development of features for resource management within the Linux kernel. The addition of the fair group scheduler has enabled the provisioning of proportional CPU time through the specification of group weights. Since the scheduler is inherently workconserving in nature, a task or a group can consume excess CPU share in an otherwise idle system. There are many scenarios where this extra CPU share can cause unacceptable utilization or latency. CPU bandwidth provisioning or limiting approaches this problem by providing an explicit upper bound on usage in addition to the lower bound already provided by shares. There are many enterprise scenarios where this functionality is useful. In particular are the cases of payper-use environments, and latency provisioning within non-homogeneous environments. This paper details the requirements behind this feature, the challenges involved in incorporating into CFS (Completely Fair Scheduler), and the future development road map for this feature. 1 CPU as a manageable resource Before considering the aspect of bandwidth provisioning let us first review some of the basic existing concepts currently arbitrating entity management within the scheduler. There are two major scheduling classes within the Linux CPU scheduler, SCHED_RT and SCHED_NORMAL. 
When runnable, entities from the former, the real-time scheduling class, will always be elected to run over those from the normal scheduling class. Prior to v2.6.24, the scheduler had no notion of any entity larger than that of single task1. The available management APIs reflected this and the primary control of bandwidth available was nice(2). In v2.6.24, the completely fair scheduler (CFS) was merged, replacing the existing SCHED_NORMAL scheduling class. This new design delivered weight based scheduling of CPU bandwidth, enabling arbitrary partitioning. This allowed support for group scheduling to be added, managed using cgroups through the CPU controller sub-system. This support allows for the flexible creation of scheduling groups, allowing the fraction of CPU resources received by a group of tasks to be arbitrated as a whole. The addition of this support has been a major step in scheduler development, enabling Linux to align more closely with enterprise requirements for managing this resouce. The hierarchies supported by this model are flexible, and groups may be nested within groups. Each group entity’s bandwidth is provisioned using a corresponding shares attribute which defines its weight. Similarly, the nice(2) API was subsumed to control the weight of an individual task entity. Figure 1 shows the hierarchical groups that might be created in a typical university server to differentiate CPU bandwidth between users such as professors, students, and different departments. One way to think about shares is that it provides lowerbound provisioning. When CPU bandwidth is scheduled at capacity, all runnable entities will receive bandwidth in accordance with the ratio of their share weight. It’s key to observe here that not all entities may be runnable 1Recall that under Linux any kernel-backed thread is considered individual task entity, there is no typical notion of a process in scheduling context.", "title": "" }, { "docid": "39fc05dfc0faeb47728b31b6053c040a", "text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.", "title": "" }, { "docid": "432852d6521a79dfd516d1bcf800d86e", "text": "TiNb6O17 and TiNb2O7 were synthesized using a solid-state method. 
The techniques were used to assess the electrochemical performance and lithium diffusion kinetics of TiNb6O17 related to the unit cell volume with TiNb2O7. The charge-discharge curves and cyclic voltammetry revealed TiNb6O17 to have a similar redox potential to TiNb2O7 as well as a high discharge capacity. The rate performance of TiNb6O17 was measured using a rate capability test. SSCV and EIS showed that TiNb6O17 had higher lithium diffusion coefficients during the charging. From GITT, the lithium diffusion coefficients at the phase transition region showed the largest increase from TiNb2O7 to TiNb6O17.", "title": "" }, { "docid": "270a3ea6a0546cc7e26d78c27dfe1343", "text": "This paper presents a new modulation strategy of the three-phase DC-DC Boost converter with high frequency isolation. This work has as its main objective to present a solution to deal with the forbidden region found in the study of the referred converter. The proposed modulation keeps the main characteristics of the original topology, which are: the voltage and current frequencies in the output filter are three times the switching frequency, input current ripple reduction, high frequency isolation, moreover, when the converter works within duty ratio up to 1/3, it makes it possible to precharge the output capacitor, reducing the start-up converter' currents. A theoretical analysis of the operating stages and the experimental results are presented to a prototype operating as a Flyback-Boost with switching frequency of 20 kHz, input voltage from 24 V to 120 V, output voltage from 52 V to 450 V and output power from 650 W to 2025 W.", "title": "" }, { "docid": "a0ee42eabf32de3b0307e9fbdfbaf857", "text": "To leverage modern hardware platforms to their fullest, more and more database systems embrace compilation of query plans to native code. In the research community, there is an ongoing debate about the best way to architect such query compilers. This is perceived to be a difficult task, requiring techniques fundamentally different from traditional interpreted query execution. \n We aim to contribute to this discussion by drawing attention to an old but underappreciated idea known as Futamura projections, which fundamentally link interpreters and compilers. Guided by this idea, we demonstrate that efficient query compilation can actually be very simple, using techniques that are no more difficult than writing a query interpreter in a high-level language. Moreover, we demonstrate how intricate compilation patterns that were previously used to justify multiple compiler passes can be realized in one single, straightforward, generation pass. Key examples are injection of specialized index structures, data representation changes such as string dictionaries, and various kinds of code motion to reduce the amount of work on the critical path.\n We present LB2: a high-level query compiler developed in this style that performs on par with, and sometimes beats, the best compiled query engines on the standard TPC-H benchmark.", "title": "" }, { "docid": "d323138667599e3035e47c1f1d4b60d4", "text": "Neuronal dynamics unfolding within the cerebral cortex exhibit complex spatial and temporal patterns even in the absence of external input. Here we use a computational approach in an attempt to relate these features of spontaneous cortical dynamics to the underlying anatomical connectivity. 
Simulating nonlinear neuronal dynamics on a network that captures the large-scale interregional connections of macaque neocortex, and applying information theoretic measures to identify functional networks, we find structure-function relations at multiple temporal scales. Functional networks recovered from long windows of neural activity (minutes) largely overlap with the underlying structural network. As a result, hubs in these long-run functional networks correspond to structural hubs. In contrast, significant fluctuations in functional topology are observed across the sequence of networks recovered from consecutive shorter (seconds) time windows. The functional centrality of individual nodes varies across time as interregional couplings shift. Furthermore, the transient couplings between brain regions are coordinated in a manner that reveals the existence of two anticorrelated clusters. These clusters are linked by prefrontal and parietal regions that are hub nodes in the underlying structural network. At an even faster time scale (hundreds of milliseconds) we detect individual episodes of interregional phase-locking and find that slow variations in the statistics of these transient episodes, contingent on the underlying anatomical structure, produce the transfer entropy functional connectivity and simulated blood oxygenation level-dependent correlation patterns observed on slower time scales.", "title": "" }, { "docid": "439bb6492a1be10c9e9a40fb914b2c5f", "text": "Data mining technology provides a user oriented approach to novel and hidden information in the data. Valuable knowledge can be discovered from application of data mining techniques in healthcare system. Data mining in healthcare medicine deals with learning models to predict patients’ disease. Data mining applications can greatly benefit all parties involved in the healthcare industry. For example, data mining can help healthcare insurers detect fraud and abuse, healthcare organizations make customer relationship management decisions, physicians identify effective treatments and best practices, and patients receive better and more affordable healthcare services. The huge amounts of data generated by healthcare transactions are too complex and voluminous to be processed and analyzed by traditional methods. Data mining provides the methodology and technology to transform these mounds of data into useful information for decision making. The main aim of this survey is, analysis of the uniqueness of medical data mining, overview of Healthcare Decision Support Systems currently used in medicine, identification and selection of the most common data mining algorithms implemented in the modern HDSS, comparison between different algorithms in Data mining.", "title": "" }, { "docid": "e9b3ddc114998e25932819e3281e2e0c", "text": "We study the problem of jointly aligning sentence constituents and predicting their similarities. While extensive sentence similarity data exists, manually generating reference alignments and labeling the similarities of the aligned chunks is comparatively onerous. This prompts the natural question of whether we can exploit easy-to-create sentence level data to train better aligners. In this paper, we present a model that learns to jointly align constituents of two sentences and also predict their similarities. 
By taking advantage of both sentence and constituent level data, we show that our model achieves state-of-the-art performance at predicting alignments and constituent similarities.", "title": "" }, { "docid": "3b9b49f8c2773497f8e05bff4a594207", "text": "SSD (Single Shot Detector) is one of the state-of-the-art object detection algorithms, and it combines high detection accuracy with real-time speed. However, it is widely recognized that SSD is less accurate in detecting small objects compared to large objects, because it ignores the context from outside the proposal boxes. In this paper, we present CSSD, a shorthand for context-aware single-shot multibox object detector. CSSD is built on top of SSD, with additional layers modeling multi-scale contexts. We describe two variants of CSSD, which differ in their context layers, using dilated convolution layers (DiCSSD) and deconvolution layers (DeCSSD), respectively. The experimental results show that the multi-scale context modeling significantly improves the detection accuracy. In addition, we study the relationship between effective receptive fields (ERFs) and the theoretical receptive fields (TRFs), particularly on a VGGNet. The empirical results further strengthen our conclusion that SSD coupled with context layers achieves better detection results especially for small objects (+3.2% AP@0.5 on MSCOCO compared to the newest SSD), while maintaining comparable runtime performance.", "title": "" }, { "docid": "5eaea95d0e1febd8ee37c3b92f962ca0", "text": "In the last few decades, there have been extensive studies on the analysis and investigation of disc brake vibrations by many researchers around the world, focusing on the possibility of eliminating brake vibration to improve vehicle users' comfort. Despite these efforts, still no general solution exists. Therefore, brake noise and vibration remain important issues that require detailed and in-depth investigation. Research on brake noise and vibration has been conducted using theoretical, experimental and numerical approaches. Experimental methods can provide real measured data and they are trustworthy. This paper focuses on experimental investigations and summarizes recent studies on automotive disc brake noise and vibration that measure unstable frequencies and mode shapes of the vibrating system and verify possible numerical solutions. Finally, the critical areas where further research is needed to reduce disc brake vibration are suggested in the conclusions.", "title": "" } ]
scidocsrr
cba477ae81d28d334ed6184c60b345d3
BoostClean: Automated Error Detection and Repair for Machine Learning
[ { "docid": "4fa6343567b96be083e342bf11ee093f", "text": "Data cleaning is frequently an iterative process tailored to the requirements of a specific analysis task. The design and implementation of iterative data cleaning tools presents novel challenges, both technical and organizational, to the community. In this paper, we present results from a user survey (N = 29) of data analysts and infrastructure engineers from industry and academia. We highlight three important themes: (1) the iterative nature of data cleaning, (2) the lack of rigor in evaluating the correctness of data cleaning, and (3) the disconnect between the analysts who query the data and the infrastructure engineers who design the cleaning pipelines. We conclude by presenting a number of recommendations for future work in which we envision an interactive data cleaning system that accounts for the observed challenges.", "title": "" }, { "docid": "4b90fefa981e091ac6a5d2fd83e98b66", "text": "This paper explores an analysis-aware data cleaning architecture for a large class of SPJ SQL queries. In particular, we propose QuERy, a novel framework for integrating entity resolution (ER) with query processing. The aim of QuERy is to correctly and efficiently answer complex queries issued on top of dirty data. The comprehensive empirical evaluation of the proposed solution demonstrates its significant advantage in terms of efficiency over the traditional techniques for the given problem settings.", "title": "" } ]
[ { "docid": "526a687b663b488b5c5cddc1107a0865", "text": "Ricin toxin-binding subunit B (RTB) is a galactosebinding lectin protein. In the present study, we investigated the effects of RTB on inducible nitric oxide (NO) synthase (iNOS), interleukin (IL)-6 and tumor necrosis factor (TNF)-α, as well as the signal transduction mechanisms involved in recombinant RTB-induced macrophage activation. RAW264.7 macrophages were treated with RTB. The results revealed that the mRNA and protein expression of iNOS was increased in the recombinant RTB-treated macrophages. TNF-α production was observed to peak at 20 h, whereas the production of IL-6 peaked at 24 h. In another set of cultures, the cells were co-incubated with RTB and the tyrosine kinase inhibitor, genistein, the phosphatidylinositol 3-kinase (PI3K) inhibitor, LY294002, the p42/44 inhibitor, PD98059, the p38 inhibitor, SB203580, the JNK inhibitor, SP600125, the protein kinase C (PKC) inhibitor, staurosporine, the JAK2 inhibitor, tyrphostin (AG490), or the NOS inhibitor, L-NMMA. The recombinant RTB-induced production of NO, TNF-α and IL-6 was inhibited in the macrophages treated with the pharmacological inhibitors genistein, LY294002, staurosporine, AG490, SB203580 and BAY 11-7082, indicating the possible involvement of protein tyrosine kinases, PI3K, PKC, JAK2, p38 mitogen-activated protein kinase (MAPK) and nuclear factor (NF)-κB in the above processes. A phosphoprotein analysis identified tyrosine phosphorylation targets that were uniquely induced by recombinant RTB and inhibited following treatment with genistein; some of these proteins are associated with the downstream cascades of activated JAK-STAT and NF-κB receptors. Our data may help to identify the most important target molecules for the development of novel drug therapies.", "title": "" }, { "docid": "9af4c955b7c08ca5ffbfabc9681f9525", "text": "The emergence of deep neural networks (DNNs) as a state-of-the-art machine learning technique has enabled a variety of artificial intelligence applications for image recognition, speech recognition and translation, drug discovery, and machine vision. These applications are backed by large DNN models running in serving mode on a cloud computing infrastructure to process client inputs such as images, speech segments, and text segments. Given the compute-intensive nature of large DNN models, a key challenge for DNN serving systems is to minimize the request response latencies. This paper characterizes the behavior of different parallelism techniques for supporting scalable and responsive serving systems for large DNNs. We identify and model two important properties of DNN workloads: 1) homogeneous request service demand and 2) interference among requests running concurrently due to cache/memory contention. These properties motivate the design of serving deep learning systems fast (SERF), a dynamic scheduling framework that is powered by an interference-aware queueing-based analytical model. To minimize response latency for DNN serving, SERF quickly identifies and switches to the optimal parallel configuration of the serving system by using both empirical and analytical methods. Our evaluation of SERF using several well-known benchmarks demonstrates its good latency prediction accuracy, its ability to correctly identify optimal parallel configurations for each benchmark, its ability to adapt to changing load conditions, and its efficiency advantage (by at least three orders of magnitude faster) over exhaustive profiling. 
We also demonstrate that SERF supports other scheduling objectives and can be extended to any general machine learning serving system with the similar parallelism properties as above.", "title": "" }, { "docid": "380380bd46d854febd0bf12e50ec540b", "text": "STUDY DESIGN\nExperimental laboratory study.\n\n\nOBJECTIVES\nTo quantify and compare electromyographic signal amplitude of the gluteus maximus and gluteus medius muscles during exercises of varying difficulty to determine which exercise most effectively recruits these muscles.\n\n\nBACKGROUND\nGluteal muscle weakness has been proposed to be associated with lower extremity injury. Exercises to strengthen the gluteal muscles are frequently used in rehabilitation and injury prevention programs without scientific evidence regarding their ability to activate the targeted muscles.\n\n\nMETHODS\nSurface electromyography was used to quantify the activity level of the gluteal muscles in 21 healthy, physically active subjects while performing 12 exercises. Repeated-measures analyses of variance were used to compare normalized mean signal amplitude levels, expressed as a percent of a maximum voluntary isometric contraction (MVIC), across exercises.\n\n\nRESULTS\nSignificant differences in signal amplitude among exercises were noted for the gluteus medius (F5,90 = 7.9, P<.0001) and gluteus maximus (F5,95 = 8.1, P<.0001). Gluteus medius activity was significantly greater during side-lying hip abduction (mean +/- SD, 81% +/- 42% MVIC) compared to the 2 types of hip clam (40% +/- 38% MVIC, 38% +/- 29% MVIC), lunges (48% +/- 21% MVIC), and hop (48% +/- 25% MVIC) exercises. The single-limb squat and single-limb deadlift activated the gluteus medius (single-limb squat, 64% +/- 25% MVIC; single-limb deadlift, 59% +/- 25% MVIC) and maximus (single-limb squat, 59% +/- 27% MVIC; single-limb deadlift, 59% +/- 28% MVIC) similarly. The gluteus maximus activation during the single-limb squat and single-limb deadlift was significantly greater than during the lateral band walk (27% +/- 16% MVIC), hip clam (34% +/- 27% MVIC), and hop (forward, 35% +/- 22% MVIC; transverse, 35% +/- 16% MVIC) exercises.\n\n\nCONCLUSION\nThe best exercise for the gluteus medius was side-lying hip abduction, while the single-limb squat and single-limb deadlift exercises led to the greatest activation of the gluteus maximus. These results provide information to the clinician about relative activation of the gluteal muscles during specific therapeutic exercises that can influence exercise progression and prescription. J Orthop Sports Phys Ther 2009;39(7):532-540, Epub 24 February 2009. doi:10.2519/jospt.2009.2796.", "title": "" }, { "docid": "9497731525a996844714d5bdbca6ae03", "text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. 
Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.", "title": "" }, { "docid": "a09d03e2de70774f443d2da88a32b555", "text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs) [1]. Brain-computer interfaces are devices that process a user’s brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted non-disabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming.", "title": "" }, { "docid": "dab84197dec153309bb45368ab730b12", "text": "Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the conditional relations is often a tedious and error-prone task. This article provides an overview of methods used to probe interaction effects and describes a unified collection of freely available online resources that researchers can use to obtain significance tests for simple slopes, compute regions of significance, and obtain confidence bands for simple slopes across the range of the moderator in the MLR, HLM, and LCA contexts. Plotting capabilities are also provided.", "title": "" }, { "docid": "8ccbf0f95df6d4d3c8eba33befc0f6b7", "text": "Tactile graphics play an essential role in knowledge transfer for blind people. The tactile exploration of these graphics is often challenging because of the cognitive load caused by physiological constraints and their complexity. The coupling of physical tactile graphics with electronic devices offers to support the tactile exploration by auditory feedback. Often, these systems have strict constraints regarding their mobility or the process of coupling both components. Additionally, visually impaired people cannot appropriately benefit from their residual vision. This article presents a concept for 3D printed tactile graphics, which offers to use audio-tactile graphics with usual smartphones or tablet-computers. By using capacitive markers, the coupling of the tactile graphics with the mobile device is simplified. These tactile graphics integrating these markers can be printed in one turn by off-the-shelf 3D printers without any post-processing and allows us to use multiple elevation levels for graphical elements. Based on the developed generic concept on visually augmented audio-tactile graphics, we presented a case study for maps. A prototypical implementation was tested by a user study with visually impaired people. All the participants were able to interact with the 3D printed tactile maps using a standard tablet computer. To study the effect of visual augmentation of graphical elements, we conducted another comprehensive user study. We tested multiple types of graphics and obtained evidence that visual augmentation may offer clear advantages for the exploration of tactile graphics. 
Even participants with a minor residual vision could solve the tasks with visual augmentation more quickly and accurately.", "title": "" }, { "docid": "89bcf5b0af2f8bf6121e28d36ca78e95", "text": "3 Relating modules to external clinical traits 2 3.a Quantifying module–trait associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership . . . . 2 3.c Intramodular analysis: identifying genes with high GS and MM . . . . . . . . . . . . . . . . . . . . . . 3 3.d Summary output of network analysis results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4", "title": "" }, { "docid": "3ce203d713a0060cc3c1466d62c9bd36", "text": "This paper describes successful applications of discriminative lexicon models to the statistical machine translation (SMT) systems into morphologically complex languages. We extend the previous work on discriminatively trained lexicon models to include more contextual information in making lexical selection decisions by building a single global log-linear model of translation selection. In offline experiments, we show that the use of the expanded contextual information, including morphological and syntactic features, help better predict words in three target languages with complex morphology (Bulgarian, Czech and Korean). We also show that these improved lexical prediction models make a positive impact in the end-to-end SMT scenario from English to these languages.", "title": "" }, { "docid": "39b5095283fd753013c38459a93246fd", "text": "OBJECTIVE\nTo determine whether cannabis use in adolescence predisposes to higher rates of depression and anxiety in young adulthood.\n\n\nDESIGN\nSeven wave cohort study over six years.\n\n\nSETTING\n44 schools in the Australian state of Victoria.\n\n\nPARTICIPANTS\nA statewide secondary school sample of 1601 students aged 14-15 followed for seven years.\n\n\nMAIN OUTCOME MEASURE\nInterview measure of depression and anxiety (revised clinical interview schedule) at wave 7.\n\n\nRESULTS\nSome 60% of participants had used cannabis by the age of 20; 7% were daily users at that point. Daily use in young women was associated with an over fivefold increase in the odds of reporting a state of depression and anxiety after adjustment for intercurrent use of other substances (odds ratio 5.6, 95% confidence interval 2.6 to 12). Weekly or more frequent cannabis use in teenagers predicted an approximately twofold increase in risk for later depression and anxiety (1.9, 1.1 to 3.3) after adjustment for potential baseline confounders. In contrast, depression and anxiety in teenagers predicted neither later weekly nor daily cannabis use.\n\n\nCONCLUSIONS\nFrequent cannabis use in teenage girls predicts later depression and anxiety, with daily users carrying the highest risk. Given recent increasing levels of cannabis use, measures to reduce frequent and heavy recreational use seem warranted.", "title": "" }, { "docid": "c188731b9047bbbe70c35690a5a584ab", "text": "Resource Managers like YARN and Mesos have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low level. 
This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault tolerance, task scheduling and coordination) and reimplement common mechanisms (e.g., caching, bulk-data transfers). This article presents REEF, a development framework that provides a control plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource reuse for data caching and state management abstractions that greatly ease the development of elastic data processing pipelines on cloud platforms that support a Resource Manager service. We illustrate the power of REEF by showing applications built atop: a distributed shell application, a machine-learning framework, a distributed in-memory caching system, and a port of the CORFU system. REEF is currently an Apache top-level project that has attracted contributors from several institutions and it is being used to develop several commercial offerings such as the Azure Stream Analytics service.", "title": "" }, { "docid": "d2c36f67971c22595bc483ebb7345404", "text": "Resistive-switching random access memory (RRAM) devices utilizing a crossbar architecture represent a promising alternative for Flash replacement in high-density data storage applications. However, RRAM crossbar arrays require the adoption of diodelike select devices with high on-off current ratio and with sufficient endurance. To avoid the use of select devices, one should develop passive arrays where the nonlinear characteristic of the RRAM device itself provides self-selection during read and write. This paper discusses the complementary switching (CS) in hafnium oxide RRAM, where the logic bit can be encoded in two high-resistance levels, thus being immune from leakage currents and related sneak-through effects in the crossbar array. The CS physical mechanism is described through simulation results by an ion-migration model for bipolar switching. Results from pulsed-regime characterization are shown, demonstrating that CS can be operated at least in the 10-ns time scale. The minimization of the reset current is finally discussed.", "title": "" }, { "docid": "f571329b93779ae073184d9d63eb0c6c", "text": "Retailers are now the dominant partners in most supply systems and have used their positions to re-engineer operations and partnerships with suppliers and other logistic service providers. No longer are retailers the passive recipients of manufacturer allocations, but instead are the active channel controllers organizing supply in anticipation of, and reaction to consumer demand. This paper reflects on the ongoing transformation of retail supply chains and logistics. It considers this transformation through an examination of the fashion, grocery and selected other retail supply chains, drawing on practical illustrations. Current and future challenges are then discussed. Introduction Retailers were once the passive recipients of products allocated to stores by manufacturers in the hope of purchase by consumers and replenished only at the whim and timing of the manufacturer. Today, retailers are the controllers of product supply in anticipation of, and reaction to, researched, understood, and real-time customer demand. Retailers now control, organise, and manage the supply chain from production to consumption. 
This is the essence of the retail logistics and supply chain transformation that has taken place since the latter part of the twentieth century. Retailers have become the channel captains and set the pace in logistics. Having extended their channel control and focused on corporate efficiency and effectiveness, retailers have", "title": "" }, { "docid": "8670b853d3991a8244add8aeb38f8e54", "text": "TOPLESS are tetrameric plant corepressors of the conserved Tup1/Groucho/TLE (transducin-like enhancer of split) family. We show that they interact through their TOPLESS domains (TPDs) with two functionally important ethylene response factor–associated amphiphilic repression (EAR) motifs of the rice strigolactone signaling repressor D53: the universally conserved EAR-3 and the monocot-specific EAR-2. We present the crystal structure of the monocot-specific EAR-2 peptide in complex with the TOPLESS-related protein 2 (TPR2) TPD, in which the EAR-2 motif binds the same TPD groove as jasmonate and auxin signaling repressors but makes additional contacts with a second TPD site to mediate TPD tetramer-tetramer interaction. We validated the functional relevance of the two TPD binding sites in reporter gene assays and in transgenic rice and demonstrate that EAR-2 binding induces TPD oligomerization. Moreover, we demonstrate that the TPD directly binds nucleosomes and the tails of histones H3 and H4. Higher-order assembly of TPD complexes induced by EAR-2 binding markedly stabilizes the nucleosome-TPD interaction. These results establish a new TPD-repressor binding mode that promotes TPD oligomerization and TPD-nucleosome interaction, thus illustrating the initial assembly of a repressor-corepressor-nucleosome complex.", "title": "" }, { "docid": "66ba9c32c29e905a018aab3a25733fd1", "text": "Information environments have the power to affect people's perceptions and behaviors. In this paper, we present the results of studies in which we characterize the gender bias present in image search results for a variety of occupations. We experimentally evaluate the effects of bias in image search results on the images people choose to represent those careers and on people's perceptions of the prevalence of men and women in each occupation. We find evidence for both stereotype exaggeration and systematic underrepresentation of women in search results. We also find that people rate search results higher when they are consistent with stereotypes for a career, and shifting the representation of gender in image search results can shift people's perceptions about real-world distributions. We also discuss tensions between desires for high-quality results and broader societal goals for equality of representation in this space.", "title": "" }, { "docid": "f7a1eaa86a81b104a9ae62dc87c495aa", "text": "In the Internet of Things, the extreme heterogeneity of sensors, actuators and user devices calls for new tools and design models able to translate the user's needs in machine-understandable scenarios. The scientific community has proposed different solution for such issue, e.g., the MQTT (MQ Telemetry Transport) protocol introduced the topic concept as “the key that identifies the information channel to which payload data is published”. This study extends the topic approach by proposing the Web of Topics (WoX), a conceptual model for the IoT. A WoX Topic is identified by two coordinates: (i) a discrete semantic feature of interest (e.g. temperature, humidity), and (ii) a URI-based location. 
An IoT entity defines its role within a Topic by specifying its technological and collaborative dimensions. By this approach, it is easier to define an IoT entity as a set of couples Topic-Role. In order to prove the effectiveness of the WoX approach, we developed the WoX APIs on top of an EPCglobal implementation. Then, 10 developers were asked to build a WoX-based application supporting a physics lab scenario at school. They also filled out an ex-ante and an ex-post questionnaire. A set of qualitative and quantitative metrics allowed measuring the model's outcome.", "title": "" }, { "docid": "b19fb7f7471d3565e79dbaab3572bb4d", "text": "Self-enucleation or oedipism is a specific manifestation of psychiatric illness distinct from the milder forms of self-inflicted ocular injury. In this article, we discuss the previously unreported medical complication of subarachnoid hemorrhage accompanying self-enucleation. The diagnosis was suspected from the patient's history and was confirmed by computed tomographic scan of the head. This complication may be easily missed in the overtly psychotic patient. Specific steps in the medical management of self-enucleation are discussed, and medical complications of self-enucleation are reviewed.", "title": "" }, { "docid": "18da4e2cd0745e400002d24117834fd8", "text": "This paper examines the possible influence of podcasting on the traditional lecture in higher education. Firstly, it explores some of the benefits and limitations of the lecture as one of the dominant forms of teaching in higher education. The review then moves to explore the emergence of podcasting in education and the purpose of its use, before examining recent relevant literature about podcasting for supporting, enhancing, and indeed replacing the traditional lecture. The review identifies three broad types of use of podcasting: substitutional, supplementary and creative use. Podcasting appears to be most commonly used to provide recordings of past lectures to students for the purposes of review and revision (substitutional use). The second most common use was in providing additional material, often in the form of study guides and summary notes, to broaden and deepen students’ understanding (supplementary use). The third and least common use reported in the literature involved the creation of student generated podcasts (creative use). The review examines three key questions: What are the educational uses of podcasting in teaching and learning in higher education? Can podcasting facilitate more flexible and mobile learning? In what ways will podcasting influence the traditional lecture? These questions are discussed in the final section of the paper, with reference to future policies and practices.", "title": "" }, { "docid": "007634725171f426691246c419f067ad", "text": "A flexible multidelay block frequency domain (MDF) adaptive filter is presented. The distinct feature of the MDF adaptive filter is to allow one to choose the size of an FFT tailored to the efficient use of a hardware, rather than the requirement of a specific application. The MDF adaptive filter also requires less memory and so reduces the requirement and cost of a hardware. In performance, the MDF adaptive filter introduces smaller block delay and is faster,.ideal for a time-varying system such as modeling an acoustic path in a teleconference room. This is achieved by using smaller block size, updating the weight vectors more often, and reducing the total execution time of the adaptive process. 
The MDF adaptive filter compares favorably to other frequency domain adaptive filters when its adaptation speed and misadjustment are tested in computer simulations.", "title": "" } ]
scidocsrr
de2559aeee6553227557fe5226813203
Hybrid Particle Swarm Optimization-Firefly algorithm (HPSOFF) for combinatorial optimization of non-slicing VLSI floorplanning
[ { "docid": "d57491b0ba1e68597ce2937534983c92", "text": "Inspired by the natural features of the variable size of the population, we present a variable population-size genetic (VPGA) by introducing the “dying probab ility” for the i ndividuals and the “war/disease pro cess” for the population. Based o the VPGA and the particle swarm optimization (PSO) algor ithms, a novel PSO-GA-based hybrid algorithm (PGHA) is a proposed in this paper. Simulation results show that both VPGA and PGHA are effective for the optimization problems  2004 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "81b24cc33a54dcd6ca4af6264ad24a9a", "text": "In many envisioned drone-based applications, drones will communicate with many different smart objects, such as sensors and embedded devices. Securing such communications requires an effective and efficient encryption key establishment protocol. However, the design of such a protocol must take into account constrained resources of smart objects and the mobility of drones. In this paper, a secure communication protocol between drones and smart objects is presented. To support the required security functions, such as authenticated key agreement, non-repudiation, and user revocation, we propose an efficient Certificateless Signcryption Tag Key Encapsulation Mechanism (eCLSC-TKEM). eCLSC-TKEM reduces the time required to establish a shared key between a drone and a smart object by minimizing the computational overhead at the smart object. Also, our protocol improves drone's efficiency by utilizing dual channels which allows many smart objects to concurrently execute eCLSC-TKEM. We evaluate our protocol on commercially available devices, namely AR.Drone2.0 and TelosB, by using a parking management testbed. Our experimental results show that our protocol is much more efficient than other protocols.", "title": "" }, { "docid": "4625d09122eb2e42a201503405f7abfa", "text": "OBJECTIVE\nTo summarize 16 years of National Collegiate Athletic Association (NCAA) injury surveillance data for 15 sports and to identify potential modifiable risk factors to target for injury prevention initiatives.\n\n\nBACKGROUND\nIn 1982, the NCAA began collecting standardized injury and exposure data for collegiate sports through its Injury Surveillance System (ISS). This special issue reviews 182 000 injuries and slightly more than 1 million exposure records captured over a 16-year time period (1988-1989 through 2003-2004). Game and practice injuries that required medical attention and resulted in at least 1 day of time loss were included. An exposure was defined as 1 athlete participating in 1 practice or game and is expressed as an athlete-exposure (A-E).\n\n\nMAIN RESULTS\nCombining data for all sports, injury rates were statistically significantly higher in games (13.8 injuries per 1000 A-Es) than in practices (4.0 injuries per 1000 A-Es), and preseason practice injury rates (6.6 injuries per 1000 A-Es) were significantly higher than both in-season (2.3 injuries per 1000 A-Es) and postseason (1.4 injuries per 1000 A-Es) practice rates. No significant change in game or practice injury rates was noted over the 16 years. More than 50% of all injuries were to the lower extremity. Ankle ligament sprains were the most common injury over all sports, accounting for 15% of all reported injuries. Rates of concussions and anterior cruciate ligament injuries increased significantly (average annual increases of 7.0% and 1.3%, respectively) over the sample period. These trends may reflect improvements in identification of these injuries, especially for concussion, over time. 
Football had the highest injury rates for both practices (9.6 injuries per 1000 A-Es) and games (35.9 injuries per 1000 A-Es), whereas men's baseball had the lowest rate in practice (1.9 injuries per 1000 A-Es) and women's softball had the lowest rate in games (4.3 injuries per 1000 A-Es).\n\n\nRECOMMENDATIONS\nIn general, participation in college athletics is safe, but these data indicate modifiable factors that, if addressed through injury prevention initiatives, may contribute to lower injury rates in collegiate sports.", "title": "" }, { "docid": "a1670c2db7a933b5778d009032e444ff", "text": "BACKGROUND\nExcess bodyweight, expressed as increased body-mass index (BMI), is associated with the risk of some common adult cancers. We did a systematic review and meta-analysis to assess the strength of associations between BMI and different sites of cancer and to investigate differences in these associations between sex and ethnic groups.\n\n\nMETHODS\nWe did electronic searches on Medline and Embase (1966 to November 2007), and searched reports to identify prospective studies of incident cases of 20 cancer types. We did random-effects meta-analyses and meta-regressions of study-specific incremental estimates to determine the risk of cancer associated with a 5 kg/m2 increase in BMI.\n\n\nFINDINGS\nWe analysed 221 datasets (141 articles), including 282,137 incident cases. In men, a 5 kg/m2 increase in BMI was strongly associated with oesophageal adenocarcinoma (RR 1.52, p<0.0001) and with thyroid (1.33, p=0.02), colon (1.24, p<0.0001), and renal (1.24, p <0.0001) cancers. In women, we recorded strong associations between a 5 kg/m2 increase in BMI and endometrial (1.59, p<0.0001), gallbladder (1.59, p=0.04), oesophageal adenocarcinoma (1.51, p<0.0001), and renal (1.34, p<0.0001) cancers. We noted weaker positive associations (RR <1.20) between increased BMI and rectal cancer and malignant melanoma in men; postmenopausal breast, pancreatic, thyroid, and colon cancers in women; and leukaemia, multiple myeloma, and non-Hodgkin lymphoma in both sexes. Associations were stronger in men than in women for colon (p<0.0001) cancer. Associations were generally similar in studies from North America, Europe and Australia, and the Asia-Pacific region, but we recorded stronger associations in Asia-Pacific populations between increased BMI and premenopausal (p=0.009) and postmenopausal (p=0.06) breast cancers.\n\n\nINTERPRETATION\nIncreased BMI is associated with increased risk of common and less common malignancies. For some cancer types, associations differ between sexes and populations of different ethnic origins. These epidemiological observations should inform the exploration of biological mechanisms that link obesity with cancer.", "title": "" }, { "docid": "42b5d245a0f18cbb532e7f2f890a0de4", "text": "A natural evaluation metric for statistical topic models is the probability of held-out documents given a trained model. While exact computation of this probability is intractable, several estimators for this probability have been used in the topic modeling literature, including the harmonic mean method and empirical likelihood method. 
In this paper, we demonstrate experimentally that commonly-used methods are unlikely to accurately estimate the probability of held-out documents, and propose two alternative methods that are both accurate and efficient.", "title": "" }, { "docid": "734ca5ac095cc8339056fede2a642909", "text": "The value of depth-first search or \"backtracking\" as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and an algorithm for finding the biconnected components of an undirected graph are presented. The space and time requirements of both algorithms are bounded by k1V + k2E + k3 for some constants k1, k2, and k3, where V is the number of vertices and E is the number of edges of the graph being examined.", "title": "" }, { "docid": "aff3f2e70cb7f6dbff9dad0881e3e86f", "text": "Knowledge graphs holistically integrate information about entities from multiple sources. A key step in the construction and maintenance of knowledge graphs is the clustering of equivalent entities from different sources. Previous approaches for such an entity clustering suffer from several problems, e.g., the creation of overlapping clusters or the inclusion of several entities from the same source within clusters. We therefore propose a new entity clustering algorithm CLIP that can be applied both to create entity clusters and to repair entity clusters determined with another clustering scheme. In contrast to previous approaches, CLIP not only uses the similarity between entities for clustering but also further features of entity links such as the so-called link strength. To achieve a good scalability we provide a parallel implementation of CLIP based on Apache Flink. Our evaluation for different datasets shows that the new approach can achieve substantially higher cluster quality than previous approaches.", "title": "" }, { "docid": "c9bfd3b31a8a95898d45819037341307", "text": "OBJECTIVE\nInvestigation of the effect of a green tea-caffeine mixture on weight maintenance after body weight loss in moderately obese subjects in relation to habitual caffeine intake.\n\n\nRESEARCH METHODS AND PROCEDURES\nA randomized placebo-controlled double blind parallel trial in 76 overweight and moderately obese subjects, (BMI, 27.5 +/- 2.7 kg/m2) matched for sex, age, BMI, height, body mass, and habitual caffeine intake was conducted. A very low energy diet intervention during 4 weeks was followed by 3 months of weight maintenance (WM); during the WM period, the subjects received a green tea-caffeine mixture (270 mg epigallocatechin gallate + 150 mg caffeine per day) or placebo.\n\n\nRESULTS\nSubjects lost 5.9 +/-1.8 (SD) kg (7.0 +/- 2.1%) of body weight (p < 0.001). At baseline, satiety was positively, and in women, leptin was inversely, related to subjects' habitual caffeine consumption (p < 0.01). High caffeine consumers reduced weight, fat mass, and waist circumference more than low caffeine consumers; resting energy expenditure was reduced less and respiratory quotient was reduced more during weight loss (p < 0.01). In the low caffeine consumers, during WM, green tea still reduced body weight, waist, respiratory quotient and body fat, whereas resting energy expenditure was increased compared with a restoration of these variables with placebo (p < 0.01). 
In the high caffeine consumers, no effects of the green tea-caffeine mixture were observed during WM.\n\n\nDISCUSSION\nHigh caffeine intake was associated with weight loss through thermogenesis and fat oxidation and with suppressed leptin in women. In habitual low caffeine consumers, the green tea-caffeine mixture improved WM, partly through thermogenesis and fat oxidation.", "title": "" }, { "docid": "fec5391c20850ceea7b470c9a9faa09c", "text": "When rewards are sparse and action spaces large, Q-learning with ε-greedy exploration can be inefficient. This poses problems for otherwise promising applications such as task-oriented dialogue systems, where the primary reward signal, indicating successful completion of a task, requires a complex sequence of appropriate actions. Under these circumstances, a randomly exploring agent might never stumble upon a successful outcome in reasonable time. We present two techniques that significantly improve the efficiency of exploration for deep Q-learning agents in dialogue systems. First, we introduce an exploration technique based on Thompson sampling, drawing Monte Carlo samples from a Bayes-by-backprop neural network, demonstrating marked improvement over common approaches such as ε-greedy and Boltzmann exploration. Second, we show that spiking the replay buffer with experiences from a small number of successful episodes, as are easy to harvest for dialogue tasks, can make Q-learning feasible when it might otherwise fail.", "title": "" }, { "docid": "b7b2f1c59dfc00ab6776c6178aff929c", "text": "Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the networks edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. 
We close by offering some conclusions and recommendations for future investment and policy review.", "title": "" }, { "docid": "07310c30b78d74a1e237af4dd949d68e", "text": "The vulnerability of face, fingerprint and iris recognition systems to attacks based on morphed biometric samples has been established in the recent past. However, so far a reliable detection of morphed biometric samples has remained an unsolved research challenge. In this work, we propose the first multi-algorithm fusion approach to detect morphed facial images. The FRGCv2 face database is used to create a set of 4,808 morphed and 2,210 bona fide face images which are divided into a training and test set. From a single cropped facial image features are extracted using four types of complementary feature extraction algorithms, including texture descriptors, keypoint extractors, gradient estimators and a deep learning-based method. By performing a score-level fusion of comparison scores obtained by four different types of feature extractors, a detection equal error rate (D-EER) of 2.8% is achieved. Compared to the best single algorithm approach achieving a D-EER of 5.5%, the D-EER of the proposed multi-algorithm fusion system is al- most twice as low, confirming the soundness of the presented approach.", "title": "" }, { "docid": "9595f31effb10fbf5b8dbd9f058e2e6a", "text": "In this work we present a novel approach to recover objects 3D position and occupancy in a generic scene using only 2D object detections from multiple view images. The method reformulates the problem as the estimation of a quadric (ellipsoid) in 3D given a set of 2D ellipses fitted to the object detection bounding boxes in multiple views. We show that a closed-form solution exists in the dual-space using a minimum of three views while a solution with two views is possible through the use of non-linear optimisation and object constraints on the size of the object shape. In order to make the solution robust toward inaccurate bounding boxes, a likely occurrence in object detection methods, we introduce a data preconditioning technique and a non-linear refinement of the closed form solution based on implicit subspace constraints. Results on synthetic tests and on different real datasets, involving challenging scenarios, demonstrate the applicability and potential of our method in several realistic scenarios.", "title": "" }, { "docid": "6fab26c4c8fa05390aa03998a748f87d", "text": "Click prediction is one of the fundamental problems in sponsored search. Most of existing studies took advantage of machine learning approaches to predict ad click for each event of ad view independently. However, as observed in the real-world sponsored search system, user’s behaviors on ads yield high dependency on how the user behaved along with the past time, especially in terms of what queries she submitted, what ads she clicked or ignored, and how long she spent on the landing pages of clicked ads, etc. Inspired by these observations, we introduce a novel framework based on Recurrent Neural Networks (RNN). Compared to traditional methods, this framework directly models the dependency on user’s sequential behaviors into the click prediction process through the recurrent structure in RNN. 
Large scale evaluations on the click-through logs from a commercial search engine demonstrate that our approach can significantly improve the click prediction accuracy, compared to sequence-independent approaches.", "title": "" }, { "docid": "7014a3c3fa78e0d610388dc08733478e", "text": "The demands placed on today’s organizations and their managers suggest that we have to develop pedagogies combining analytic reasoning with a more exploratory skill set that design practitioners have embraced and business schools have traditionally neglected. Design thinking is an iterative, exploratory process involving visualizing, experimenting, creating, and prototyping of models, and gathering feedback. It is a particularly apt method for addressing innovation and messy, ill-structured situations. We discuss key characteristics of design thinking, link design-thinking characteristics to recent studies of cognition, and note how the repertoire of skills and methods that embody design thinking can address deficits in business school education. ........................................................................................................................................................................", "title": "" }, { "docid": "1947a704719aa9fe5311eccdea52aecc", "text": "Based on the observation that the correlation between observed traffic at two measurement points or traffic stations may be time-varying, attributable to the time-varying speed which subsequently causes variations in the time required to travel between the two points, in this paper, we develop a modified Space-Time Autoregressive Integrated Moving Average (STARIMA) model with time-varying lags for short-term traffic flow prediction. Particularly, the temporal lags in the modified STARIMA change with the time-varying speed at different time of the day or equivalently change with the (time-varying) time required to travel between two measurement points. Firstly, a technique is developed to evaluate the temporal lag in the STARIMA model, where the temporal lag is formulated as a function of the spatial lag (spatial distance) and the average speed. Secondly, an unsupervised classification algorithm based on ISODATA algorithm is designed to classify different time periods of the day according to the variation of the speed. The classification helps to determine the appropriate time lag to use in the STARIMA model. Finally, a STARIMA-based model with time-varying lags is developed for short-term traffic prediction. Experimental results using real traffic data show that the developed STARIMA-based model with time-varying lags has superior accuracy compared with its counterpart developed using the traditional cross-correlation function and without employing time-varying lags.", "title": "" }, { "docid": "5efb42ac41cbe3283d5791e8177bd86d", "text": "Past work has documented and described major patterns of adaptive and maladaptive behavior: the mastery.oriented and the helpless patterns. In this article, we present a research-based model that accounts for these patterns in terms of underlying psychological processes. The model specifies how individuals' implicit theories orient them toward particular goals and how these goals set up the different patterns. Indeed, we show how each feature (cognitive, affective, and behavioral) of the adaptive and maladaptive patterns can be seen to follow directly from different goals. We then examine the generality of the model and use it to illuminate phenomena in a wide variety of domains. 
Finally, we place the model in its broadest context and examine its implications for our understanding of motivational and personality processes.", "title": "" }, { "docid": "ee81c38d65c6ff2988c5519c77ffb13e", "text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i", "title": "" }, { "docid": "b9467dabbac0ef26cd6e56bdae8a66e5", "text": "This paper analyses the relation between economic inequality at the macro-level and the political representation of poor citizens in a comparative perspective. More specifically it addresses the research question: Does the level of economic inequality at the time of the election affect how well citizens belonging to the two lowest quintiles of the income distribution are represented by the party system and governments as compared to richer citizens? Using survey data for citizens’ policy preferences and expert placement of political parties, we find that in economically more unequal societies the party system represents relatively poor citizens worse than in more equal societies. This moderating effect of economic inequality is also found for policy congruence between citizens and governments, albeit slightly less clear-cut. ∗ Jan Rosset is a doctoral student at the Swiss Foundation for Research in Social Sciences, c/o University of Lausanne, CH-1015 Lausanne, jan.rosset@fors.unil.ch. Dr Nathalie Giger is a post-doc research fellow at the Mannheim Centre for European Social Research, University of Mannheim, D68131 Mannheim, nathalie.giger@mzes.uni-mannheim.de. Julian Bernauer is a doctoral student at the Department of Politics and Management, University of Konstanz, D-78457 Konstanz, julian.bernauer@uni-konstanz.de. The authors gratefully acknowledge the financial support provided by the EUROCORES Programme of the European Science Foundation. Julian Bernauer has received support from the Heinrich Böll Foundation. We would like to thank Anna Walsdorf for excellent research assistance.", "title": "" }, { "docid": "28a9a05c3074d50f3a6ea401ceac8cd8", "text": "The demand for broadband services has driven research on millimeter-wave frequency band communications for wireless access network due its spectrum availability, and compact size of radio frequency devices. The millimeter-wave signals are affected by losses along the transmission as well as atmospheric attenuation. One of the solution to overcome these problems is by using low-attenuation, electromagnetic interference-free optical fiber. Radio-over-Fiber (ROF) is considered to be cost-effective, practical and relatively flexible system configuration for long-haul transport of mill metric frequency band wireless signals using multicarrier modulation OFDM.", "title": "" }, { "docid": "daf1be97c0e1f6d133b58ca899fbd5af", "text": "Predicting traffic conditions has been recently explored as a way to relieve traffic congestion. Several pioneering approaches have been proposed based on traffic observations of the target location as well as its adjacent regions, but they obtain somewhat limited accuracy due to lack of mining road topology. To address the effect attenuation problem, we propose to take account of the traffic of surrounding locations. We propose an end-to-end framework called DeepTransport, in which Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are utilized to obtain spatial-temporal traffic information within a transport network topology. In addition, attention mechanism is introduced to align spatial and temporal information. 
Moreover, we constructed and released a real-world large traffic condition dataset with 5-minute resolution. Our experiments on this dataset demonstrate our method captures the complex relationship in both temporal and spatial domain. It significantly outperforms traditional statistical methods and a state-of-the-art deep learning method.", "title": "" }, { "docid": "c612ee4ad1b4daa030e86a59543ca53b", "text": "The dominant approach for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which are very successful in computer vision. We present a new architecture for text processing which operates directly on the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with the depth: using up to 29 convolutional layers, we report significant improvements over the state-of-the-art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to NLP.", "title": "" } ]
scidocsrr
26b68d8992c035be3c28b5dab4476ec1
Large-scale FFT on GPU clusters
[ { "docid": "e2d670f48f8c627b8e195bbed32b0692", "text": "GPUs have recently attracted the attention of many application developers as commodity data-parallel coprocessors. The newest generations of GPU architecture provide easier programmability and increased generality while maintaining the tremendous memory bandwidth and computational power of traditional GPUs. This opportunity should redirect efforts in GPGPU research from ad hoc porting of applications to establishing principles and strategies that allow efficient mapping of computation to graphics hardware. In this work we discuss the GeForce 8800 GTX processor's organization, features, and generalized optimization strategies. Key to performance on this platform is using massive multithreading to utilize the large number of cores and hide global memory latency. To achieve this, developers face the challenge of striking the right balance between each thread's resource usage and the number of simultaneously active threads. The resources to manage include the number of registers and the amount of on-chip memory used per thread, number of threads per multiprocessor, and global memory bandwidth. We also obtain increased performance by reordering accesses to off-chip memory to combine requests to the same or contiguous memory locations and apply classical optimizations to reduce the number of executed operations. We apply these strategies across a variety of applications and domains and achieve between a 10.5X to 457X speedup in kernel codes and between 1.16X to 431X total application speedup.", "title": "" } ]
[ { "docid": "3bdc2c6a67976108942efd708af8cb2d", "text": "This contribution introduces a new transmission scheme for multiple-input multiple-output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) systems. The new scheme is efficient and suitable especially for symmetric channels such as the link between two base stations or between two antennas on radio beam transmission. This Thesis presents the performance analysis of V-BLAST based multiple input multiple output orthogonal frequency division multiplexing (MIMO-OFDM) system with respect to bit error rate per signal to noise ratio (BER/SNR) for various detection techniques. A 2X2 MIMO-OFDM system is used for the performance evaluation. The simulation results shows that the performance of V-BLAST based detection techniques is much better than the conventional methods. Alamouti Space Time Block Code (STBC) scheme is used with orthogonal designs over multiple antennas which showed simulated results are identical to expected theoretical results. With this technique both Bit Error Rate (BER) and maximum diversity gain are achieved by increasing number of antennas on either side. This scheme is efficient in all the applications where system capacity is limited by multipath fading.", "title": "" }, { "docid": "42050d2d11a30e003b9d35fad12daa5e", "text": "Document is unavailable: This DOI was registered to an article that was not presented by the author(s) at this conference. As per section 8.2.1.B.13 of IEEE's \"Publication Services and Products Board Operations Manual,\" IEEE has chosen to exclude this article from distribution. We regret any inconvenience.", "title": "" }, { "docid": "190f22731f5fcae8109307e5cec6162b", "text": "PURPOSE\nThe purpose of this study was to critically analyze important hygienic/secondary prophylactic and biomechanical aspects of removable partial denture (RPD) design.\n\n\nMATERIALS AND METHODS\nThe literature related to traditional biomechanical design and open/hygienic design of RPDs was discussed by the authors at a 2.5-day workshop. The written report was circulated among the authors until a consensus was reached.\n\n\nRESULTS\nThere is little scientific support for most of the traditional design principles of RPDs, nor has patient satisfaction shown any correlation with design factors. However, there is evidence that an open/hygienic design is more important than biomechanical aspects for long-term oral health. The biomechanical importance of some components is questioned, e.g., indirect retention and guiding planes. Alternative connector designs that reduce risks of tissue injury are described. Direct retainers and pontics are discussed in relation to the possibilities they offer for gingival relief.\n\n\nCONCLUSION\nGreater attention should be paid to RPD design principles that minimize the risks of tissue injury and plaque accumulation in accordance with modern concepts of preventive dentistry.", "title": "" }, { "docid": "21139973d721956c2f30e07ed1ccf404", "text": "Representing words into vectors in continuous space can form up a potentially powerful basis to generate high-quality textual features for many text mining and natural language processing tasks. Some recent efforts, such as the skip-gram model, have attempted to learn word representations that can capture both syntactic and semantic information among text corpus. 
However, they still lack the capability of encoding the properties of words and the complex relationships among words very well, since text itself often contains incomplete and ambiguous information. Fortunately, knowledge graphs provide a golden mine for enhancing the quality of learned word representations. In particular, a knowledge graph, usually composed by entities (words, phrases, etc.), relations between entities, and some corresponding meta information, can supply invaluable relational knowledge that encodes the relationship between entities as well as categorical knowledge that encodes the attributes or properties of entities. Hence, in this paper, we introduce a novel framework called RC-NET to leverage both the relational and categorical knowledge to produce word representations of higher quality. Specifically, we build the relational knowledge and the categorical knowledge into two separate regularization functions, and combine both of them with the original objective function of the skip-gram model. By solving this combined optimization problem using back propagation neural networks, we can obtain word representations enhanced by the knowledge graph. Experiments on popular text mining and natural language processing tasks, including analogical reasoning, word similarity, and topic prediction, have all demonstrated that our model can significantly improve the quality of word representations.", "title": "" }, { "docid": "6b878f3084bd74d963f25b3fd87d0a34", "text": "Cooperative behavior planning for automated vehicles is getting more and more attention in the research community. This paper introduces two dimensions to structure cooperative driving tasks. The authors suggest to distinguish driving tasks by the used communication channels and by the hierarchical level of cooperative skills and abilities. In this manner, this paper presents the cooperative behavior skills of \"Jack\", our automated vehicle driving from Stanford to Las Vegas in January 2015.", "title": "" }, { "docid": "4abae313432bbc338b096275bf3d7816", "text": "Phase change materials (PCM) take advantage of latent heat that can be stored or released from a material over a narrow temperature range. PCM possesses the ability to change their state with a certain temperature range. These materials absorb energy during the heating process as phase change takes place and release energy to the environment in the phase change range during a reverse cooling process. Insulation effect reached by the PCM depends on temperature and time. Recently, the incorporation of PCM in textiles by coating or encapsulation to make thermo-regulated smart textiles has grown interest to the researcher. Therefore, an attempt has been taken to review the working principle of PCM and their applications for smart temperature regulated textiles. Different types of phase change materials are introduced. This is followed by an account of incorporation of PCM in the textile structure are summarized. Concept of thermal comfort, clothing for cold environment, phase change materials and clothing comfort are discussed in this review paper. Some recent applications of PCM incorporated textiles are stated. Finally, the market of PCM in textiles field and some challenges are mentioned in this review paper. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0444b38c0d20c999df4cb1294b5539c3", "text": "Decimal hardware arithmetic units have recently regained popularity, as there is now a high demand for high performance decimal arithmetic. 
We propose a novel method for carry-free addition of decimal numbers, where each equally weighted decimal digit pair of the two operands is partitioned into two weighted bit-sets. The arithmetic values of these bit-sets are evaluated, in parallel, for fast computation of the transfer digit and interim sum. In the proposed fully redundant adder (VS semi-redundant ones such as decimal carry-save adders) both operands and sum are redundant decimal numbers with overloaded decimal digit set [0, 15]. This adder is shown to improve upon the latest high performance similar works and outperform all the previous alike adders. However, there is a drawback that the adder logic cannot be efficiently adapted for subtraction. Nevertheless, this adder and its restricted-input varieties are shown to efficiently fit in the design of a parallel decimal multiplier. The two-to-one partial product reduction ratio that is attained via the proposed adder has led to a VLSI-friendly recursive partial product reduction tree. Two alternative architectures for decimal multipliers are presented; one is slower, but area-improved, and the other one consumes more area, but is delay-improved. However, both are faster in comparison with previously reported parallel decimal multipliers. The area and latency comparisons are based on logical effort analysis under the same assumptions for all the evaluated adders and multipliers. Moreover, performance correctness of all the adders is checked via running exhaustive tests on the corresponding VHDL codes. For more reliable evaluation, we report the result of synthesizing these adders by Synopsys Design Compiler using TSMC 0.13 μm standard CMOS process under various time constraints. © 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6e185c8e285611930a91fe9dda44ec9b", "text": "An all-digital dynamically adaptive clock distribution mitigates the impact of high-frequency supply voltage (VCC) droops on microprocessor performance and energy efficiency. The design integrates a tunable-length delay prior to the global clock distribution to prolong the clock-data delay compensation in critical paths during a VCC droop. The tunable-length delay prevents critical-path timing-margin degradation for multiple cycles after the VCC droop occurs, thus allowing a sufficient response time for dynamic adaptation. An on-die dynamic variation monitor detects the onset of the VCC droop to proactively gate the clock at the end of the tunable-length delay to eliminate the clock edges that would otherwise degrade critical-path timing margin. In comparison to a conventional clock distribution, silicon measurements from a 22 nm test chip demonstrate simultaneous throughput gains and energy reductions of 14% and 3% at 1.0 V, 18% and 5% at 0.8 V, and 31% and 15% at 0.6 V, respectively, for a 10% VCC droop.", "title": "" }, { "docid": "273bf17fa1e6ad901a1bf7dbb540ba76", "text": "BAHARAV, AND ARIEH BORUT. Running in cheetahs, gazelles, and goats: energy cost and limb configuration. Am. J. Physiol. 227(4) : 848-850. 1974.-Functional anatomists have argued that an animal can be built to run cheaply by lightening the distal parts of the limbs and/or by concentrating the muscle mass of the limbs around their pivot points. These arguments assume that much of the energy expended as animals run at a constant speed goes into alternately accelerating and decelerating the limbs. 
Gazelles, goats, and cheetahs offer a nice gradation of limb configurations in animals of similar total mass and limb length and, therefore, provide the opportunity to quantify the effect of limb design on the energy cost of running. We found that, despite large differences in limb configuration, the energetic cost of running in cheetahs, gazelles, and goats of about the same mass was nearly identical over a wide range of speeds. Also, the observed energetic cost of running was almost the same as that predicted on the basis of body weight for all three species: cheetah, 0.14 ml O2 (g·km)^-1 observed vs. 0.13 ml O2 (g·km)^-1 predicted; gazelle, 0.16 ml O2 (g·km)^-1 observed vs. 0.15 ml O2 (g·km)^-1 predicted; and goat, 0.18 ml O2 (g·km)^-1 observed vs. 0.14 ml O2 (g·km)^-1 predicted. Thus the relationship between body weight and energetic cost of running apparently applies to animals with very different limb configurations and is more general than anticipated. This suggests that most of the energy expended in running at a constant speed is not used to accelerate and decelerate the limbs.", "title": "" }, { "docid": "accbfd3c4caade25329a2a5743559320", "text": "PURPOSE\nThe purpose of this investigation was to assess the frequency of complications of third molar surgery, both intraoperatively and postoperatively, specifically for patients 25 years of age or older.\n\n\nMATERIALS AND METHODS\nThis prospective study evaluated 3,760 patients, 25 years of age or older, who were to undergo third molar surgery by oral and maxillofacial surgeons practicing in the United States. The predictor variables were categorized as demographic (age, gender), American Society of Anesthesiologists classification, chronic conditions and medical risk factors, and preoperative description of third molars (present or absent, type of impaction, abnormalities or association with pathology). Outcome variables were intraoperative and postoperative complications, as well as quality of life issues (days of work missed or normal activity curtailed). Frequencies for the data collected were tabulated.\n\n\nRESULTS\nThe sample was provided by 63 surgeons, and was composed of 3,760 patients with 9,845 third molars who were 25 years of age or older, of which 8,333 third molars were removed. Alveolar osteitis was the most frequently encountered postoperative problem (0.2% to 12.7%). Postoperative inferior alveolar nerve anesthesia/paresthesia occurred with a frequency of 1.1% to 1.7%, while lingual nerve anesthesia/paresthesia was calculated as 0.3%. All other complications also occurred with a frequency of less than 1%.\n\n\nCONCLUSION\nThe findings of this study indicate that third molar surgery in patients 25 years of age or older is associated with minimal morbidity, a low incidence of postoperative complications, and minimal impact on the patients' quality of life.", "title": "" }, { "docid": "6310989ad025f88412dc5d4ba7ad01af", "text": "The mobile network plays an important role in the evolution of humanity and society. However, due to the increase in users as well as in mobile applications, the current mobile network architecture faces many challenges. In this paper we describe V-Core, a new architecture for the mobile packet core network which is based on Software Defined Networking and Network Function Virtualization. Then, we introduce a MobileVisor, which is a machine to slice the above mobile packet core network into different control platforms according to either different mobile operators or different technologies (e.g. 
3G or 4G). With our architecture, the mobile network operators can reduce their costs for deployment and operation as well as use network resources efficiently.", "title": "" }, { "docid": "a2d7fc045b1c8706dbfe3772a8f6ef70", "text": "This paper is concerned with the problem of domain adaptation with multiple sources from a causal point of view. In particular, we use causal models to represent the relationship between the features X and class label Y, and consider possible situations where different modules of the causal model change with the domain. In each situation, we investigate what knowledge is appropriate to transfer and find the optimal target-domain hypothesis. This gives an intuitive interpretation of the assumptions underlying certain previous methods and motivates new ones. We finally focus on the case where Y is the cause for X with changing PY and PX|Y, that is, PY and PX|Y change independently across domains. Under appropriate assumptions, the availability of multiple source domains allows a natural way to reconstruct the conditional distribution on the target domain; we propose to model PX|Y (the process to generate effect X from cause Y) on the target domain as a linear mixture of those on source domains, and estimate all involved parameters by matching the target-domain feature distribution. Experimental results on both synthetic and real-world data verify our theoretical results. Traditional machine learning relies on the assumption that both training and test data are from the same distribution. In practice, however, training and test data are probably sampled under different conditions, thus violating this assumption, and the problem of domain adaptation (DA) arises. Consider remote sensing image classification as an example. Suppose we already have several data sets on which the class labels are known; they are called source domains here. For a new data set, or a target domain, it is usually difficult to find the ground truth reference labels, and we aim to determine the labels by making use of the information from the source domains. Note that those domains are usually obtained in different areas and time periods, and that the corresponding data distribution varies due to the change in illumination conditions, physical factors related to ground (e.g., different soil moisture or composition), vegetation, and atmospheric conditions. Other well-known instances of this situation include sentiment data analysis (Blitzer, Dredze, and Pereira 2007) and flow cytometry data analysis (Blanchard, Lee, and Scott 2011). DA approaches have many applications in various areas including natural language processing, computer vision, and biology. For surveys on DA, see, e.g., (Jiang 2008; Pan and Yang 2010; Candela et al. 2009). In this paper, we consider the situation with n source domains on which both the features X and label Y are given, i.e., we are given (x^(i), y^(i)) = {(x_k^(i), y_k^(i))}_{k=1}^{m_i}, where i = 1, ..., n, and m_i is the sample size of the ith source domain. Our goal is to find the classifier for the target domain, on which only the features x = (x_k)_{k=1}^{m} are available. Here we are concerned with a difficult scenario where no labeled point is available in the target domain, known as unsupervised domain adaptation. Since PXY changes across domains, we have to find what knowledge in the source domains should be transferred to the target one. 
Previous work in domain adaptation has usually assumed that PX changes but PY |X remain the same, i.e., the covariate shift situation; see, e.g., (Shimodaira 2000; Huang et al. 2007; Sugiyama et al. 2008; Ben-David, Shalev-Shwartz, and Urner 2012). It is also known as sample selection bias (particularly on the features X) in (Zadrozny 2004). In practice it is very often that both PX and PY |X change simultaneously across domains. For instance, both of them are likely to change over time and location for a satellite image classification system. If the data distribution changes arbitrarily across domains, clearly knowledge from the sources may not help in predicting Y on the target domain (Rosenstein et al. 2005). One has to find what type of information should be transferred from sources to the target. One possibility is to assume the change in both PX and PY |X is due to the change in PY , while PX|Y remains the same, as known as prior probability shift (Storkey 2009; Plessis and Sugiyama 2012) or target shift (Zhang et al. 2013). The latter further models the change in PX|Y caused by a location-scale (LS) transformation of the features for each class. The constraint of the LS transformation renders PX|Y on the target domain, denoted by P t X|Y , identifiable; however, it might be too restrictive. Fortunately, the availability of multiple source domains provides more hints as to find P t X|Y , as well as P t Y |X . Several algorithms have been proposed to combine knowledge from multiple source domains. For instance, (Mansour, Mohri, and Rostamizadeh 2008) proposed to form the target hypothesis by combining source hypotheses with a distribution weighted rule. (Gao et al. 2008), (Duan et al. 2009), and (Chattopadhyay et al. 2011) combine the predictions made by the source hypotheses, with the weights determined in different ways. An intuitive interpretation of the assumptions underlying those algorithms would facilitate choosing or developing DA methods for the problem at hand. To the best of our knowledge, however, it is still missing in the literature. One of our contributions in this paper is to provide such an interpretation. This paper studies the multi-source DA problem from a causal point of view where we consider the underlying data generating process behind the observed domains. We are particularly interested in what types of information stay the same, what types of information change, and how they change across domains. This enables us to construct the optimal hypothesis for the target domain in various situations. To this end, we use causal models to represent the relationship between X and Y , because they provide a compact description of the properties of the change in the data distribution.1 They, for instance, help characterize transportability of experimental findings (Pearl and Bareinboim 2011) or recoverability from selection bias (Bareinboim, Tian, and Pearl 2014). As another contribution, we further focus on a typical DA scenario where both PY and PX|Y (or the causal mechanism to generate effect X from cause Y ) change across domains, but their changes are independent from each other, as implied by the causal model Y → X . We assume that the source domains contains rich information such that for each class, P t X|Y can be approximated by a linear mixture of PX|Y on source domains. Together with other mild conditions on PX|Y , we then show that P t X|Y , as well as P t Y , is identifiable (or can be uniquely recovered). 
We present a computationally efficient method to estimate the involved parameters based on kernel mean distribution embedding (Smola et al. 2007; Gretton et al. 2007), followed by several approaches to constructing the target classifier using those parameters. One might wonder how to find the causal information underlying the data to facilitate domain adaptation. We note that in practice, background causal knowledge is usually available, helping to formulate how to transfer the knowledge from source domains to the target. Even if this is not the case, multiple source domains with different data distributions may allow one to identify the causal structure, since the causal knowledge can be seen from the change in data distributions; see, e.g., (Tian and Pearl 2001). 1 Possible DA Situations and Their Solutions. DA can be considered as a learning problem in nonstationary environments (Sugiyama and Kawanabe 2012). It is helpful to find how the data distribution changes; it provides the clues for finding the learning machine for the target domain. The causal model also describes how the components of the joint distribution are related to each other, which, for instance, gives a causal explanation of the behavior of semi-supervised learning (Schölkopf et al. 2012). Table 1: Notation used in this paper.", "title": "" }, { "docid": "f6985bb539d71f569028bf31a87e4a90", "text": "Tinea capitis favosa, a chronic inflammatory dermatophyte infection of the scalp, affects over 90% of patients with anthropophilic Trichophyton schoenleinii. T. violaceum, T. verrucosum, zoophilic T. mentagrophytes (referred to as ‘var. quinckeanum’), Microsporum canis, and geophilic M. gypseum have also been recovered from favic lesions. Favus is typically a childhood disease, yet adult cases are not uncommon. Interestingly, favus is less contagious than other dermatophytoses, although intrafamilial infections are reported and have been widely discussed in the literature. Clinical presentation of T. schoenleinii infections is variable: this fungus can be isolated from tinea capitis lesions that appear as gray patches, but symptom-free colonization of the scalp also occurs. Although in the past T. schoenleinii was the dominant fungus recovered from dermatophytic scalp lesions, worldwide the incidence has decreased except in China, Nigeria, and Iran. Favus of the glabrous skin and nails is reported less frequently than favus of the scalp. This review discusses the clinical features of favus, as well as the etiological agents, global epidemiology, laboratory diagnosis, and a short history of medical mycology.", "title": "" }, { "docid": "a8cad81570a7391175acdcf82bc9040b", "text": "A model of a Convolutional Fuzzy Neural Network for real-world object and scene image classification is proposed. The Convolutional Fuzzy Neural Network consists of convolutional, pooling and fully-connected layers and a Fuzzy Self-Organization Layer. The model combines the power of convolutional neural networks and fuzzy logic and is capable of handling uncertainty and impreciseness in the input pattern representation. 
The Training of The Convolutional Fuzzy Neural Network consists of three independent steps for three components of the net.", "title": "" }, { "docid": "6c2ac0d096c1bcaac7fd70bd36a5c056", "text": "The purpose of this review is to illustrate the ways in which molecular neurobiological investigations will contribute to an improved understanding of drug addiction and, ultimately, to the development of more effective treatments. Such molecular studies of drug addiction are needed to establish two general types of information: (1) mechanisms of pathophysiology, identification of the changes that drugs of abuse produce in the brain that lead to addiction; and (2) mechanisms of individual risk, identification of specific genetic and environmental factors that increase or decrease an individual's vulnerability for addiction. This information will one day lead to fundamentally new approaches to the treatment and prevention of addictive disorders.", "title": "" }, { "docid": "24bb26da0ce658ff075fc89b73cad5af", "text": "Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather lowdimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction.", "title": "" }, { "docid": "acdc89ac9c70d26d1711e392294184df", "text": "Robinow syndrome is a short-limbed dwarfism characterized by abnormal morphogenesis of the face and external genitalia, and vertebral segmentation. The recessive form of Robinow syndrome (RRS; OMIM 268310), particularly frequent in Turkey, has a high incidence of abnormalities of the vertebral column such as hemivertebrae and rib fusions, which is not seen in the dominant form. Some patients have cardiac malformations or facial clefting. We have mapped a gene for RRS to 9q21–q23 in 11 families. Haplotype sharing was observed between three families from Turkey, which localized the gene to a 4.9-cM interval. The gene ROR2, which encodes an orphan membrane-bound tyrosine kinase, maps to this region. Heterozygous (presumed gain of function) mutations in ROR2 were previously shown to cause dominant brachydactyly type B (BDB; ref. 7). In contrast, Ror2−/− mice have a short-limbed phenotype that is more reminiscent of the mesomelic shortening observed in RRS. We detected several homozygous ROR2 mutations in our cohort of RRS patients that are located upstream from those previously found in BDB. The ROR2 mutations present in RRS result in premature stop codons and predict nonfunctional proteins.", "title": "" }, { "docid": "3207a4b3d199db8f43d96f1096e8eb81", "text": "Recently, a branch of machine learning algorithms called deep learning gained huge attention to boost up accuracy of a variety of sensing applications. However, execution of deep learning algorithm such as convolutional neural network on mobile processor is non-trivial due to intensive computational requirements. In this paper, we present our early design of DeepSense - a mobile GPU-based deep convolutional neural network (CNN) framework. 
For its design, we first explored the differences between server-class and mobile-class GPUs, and studied effectiveness of various optimization strategies such as branch divergence elimination and memory vectorization. Our results show that DeepSense is able to execute a variety of CNN models for image recognition, object detection and face recognition in soft real time with no or marginal accuracy tradeoffs. Experiments also show that our framework is scalable across multiple devices with different GPU architectures (e.g. Adreno and Mali).", "title": "" }, { "docid": "054c2e8fa9421c77939091e5adfc07e5", "text": "Visualization is a powerful paradigm for exploratory data analysis. Visualizing large graphs, however, often results in excessive edges crossings and overlapping nodes. We propose a new scalable approach called FACETS that helps users adaptively explore large million-node graphs from a local perspective, guiding them to focus on nodes and neighborhoods that are most subjectively interesting to users. We contribute novel ideas to measure this interestingness in terms of how surprising a neighborhood is given the background distribution, as well as how well it matches what the user has chosen to explore. FACETS uses Jensen-Shannon divergence over information-theoretically optimized histograms to calculate the subjective user interest and surprise scores. Participants in a user study found FACETS easy to use, easy to learn, and exciting to use. Empirical runtime analyses demonstrated FACETS’s practical scalability on large real-world graphs with up to 5 million edges, returning results in fewer than 1.5 seconds.", "title": "" }, { "docid": "959b487a51ae87b2d993e6f0f6201513", "text": "The two-wheel differential drive mobile robots, are one of the simplest and most used structures in mobile robotics applications, it consists of a chassis with two fixed and in-line with each other electric motors. This paper presents new models for differential drive mobile robots and some considerations regarding design, modeling and control solutions. The presented models are to be used to help in facing the two top challenges in developing mechatronic mobile robots system; early identifying system level problems and ensuring that all design requirements are met, as well as, to simplify and accelerate Mechatronics mobile robots design process, including proper selection, analysis, integration and verification of the overall system and sub-systems performance throughout the development process.", "title": "" } ]
scidocsrr
a2dc4d239c4116d9fac9ab82878724d3
ZINC: A Free Tool to Discover Chemistry for Biology
[ { "docid": "f752f66cbd7a43c3d45940a8fbec0dbf", "text": "ChEMBL is an Open Data database containing binding, functional and ADMET information for a large number of drug-like bioactive compounds. These data are manually abstracted from the primary published literature on a regular basis, then further curated and standardized to maximize their quality and utility across a wide range of chemical biology and drug-discovery research problems. Currently, the database contains 5.4 million bioactivity measurements for more than 1 million compounds and 5200 protein targets. Access is available through a web-based interface, data downloads and web services at: https://www.ebi.ac.uk/chembldb.", "title": "" } ]
[ { "docid": "e8ef1683247fddbd844437c7b27b978f", "text": "Inductive Power Transfer (IPT) is well-established for applications with biomedical implants and radio-frequency identification systems. Recently, also systems for the charging of the batteries of consumer electronic devices and of electric and hybrid electric vehicles have been developed. The efficiency η of the power transfer of IPT systems is given by the inductor quality factor Q and the magnetic coupling k of the transmission coils. In this paper, the influence of the transmission frequency on the inductor quality factor and the efficiency is analyzed taking also the admissible field emissions as limited by standards into account. Aspects of an optimization of the magnetic design with respect to a high magnetic coupling and a high quality factor are discussed for IPT at any power level. It is shown that the magnetic coupling mainly depends on the area enclosed by the coils and that their exact shape has only a minor influence. The results are verified with an experimental prototype.", "title": "" }, { "docid": "e578bafcfef89e66cd77f6ee41c1fd1e", "text": "Quadruped robot is expected to serve in complex conditions such as mountain road, grassland, etc., therefore we desire a walking pattern generation that can guarantee both the speed and the stability of the quadruped robot. In order to solve this problem, this paper focuses on the stability for the tort pattern and proposes trot pattern generation for quadruped robot on the basis of ZMP stability margin. The foot trajectory is first designed based on the work space limitation. Then the ZMP and stability margin is computed to achieve the optimal trajectory of the midpoint of the hip joint of the robot. The angles of each joint are finally obtained through the inverse kinematics calculation. Finally, the effectiveness of the proposed method is demonstrated by the results from the simulation and the experiment on the quadruped robot in BIT.", "title": "" }, { "docid": "126e75d1873d094db5a67d6de425425a", "text": "Exosomes are small extracellular vesicles that are thought to participate in intercellular communication. Recent work from our laboratory suggests that, in normal and cystic liver, exosome-like vesicles accumulate in the lumen of intrahepatic bile ducts, presumably interacting with cholangiocyte cilia. However, direct evidence for exosome-ciliary interaction is limited and the physiological relevance of such interaction remains unknown. Thus, in this study, we tested the hypothesis that biliary exosomes are involved in intercellular communication by interacting with cholangiocyte cilia and inducing intracellular signaling and functional responses. Exosomes were isolated from rat bile by differential ultracentrifugation and characterized by scanning, transmission, and immunoelectron microscopy. The exosome-ciliary interaction and its effects on ERK1/2 signaling, expression of the microRNA, miR-15A, and cholangiocyte proliferation were studied on ciliated and deciliated cultured normal rat cholangiocytes. Our results show that bile contains vesicles identified as exosomes by their size, characteristic \"saucer-shaped\" morphology, and specific markers, CD63 and Tsg101. When NRCs were exposed to isolated biliary exosomes, the exosomes attached to cilia, inducing a decrease of the phosphorylated-to-total ERK1/2 ratio, an increase of miR-15A expression, and a decrease of cholangiocyte proliferation. 
All these effects of biliary exosomes were abolished by the pharmacological removal of cholangiocyte cilia. Our findings suggest that bile contains exosomes functioning as signaling nanovesicles and influencing intracellular regulatory mechanisms and cholangiocyte proliferation through interaction with primary cilia.", "title": "" }, { "docid": "10a3bb2de2abc34c07a975bf6da5e266", "text": "Main-tie-main (MTM) transfer schemes increase reliability in a power system by switching a load bus to a secondary power source when a power interruption occurs on the primary source. Traditionally, the large number of physical I/O lines required makes main-tie-main schemes expensive to design and implement. Using Ethernet-based IEC 61850, these hardwired I/O lines can be removed and replaced with generic object-oriented substation event (GOOSE) messages. Adjusting the scheme for optimal performance is done via software which saves redesign time and rewiring time. Special attention is paid to change-of-state GOOSE only; no analog GOOSE messages are used, making the scheme fast and easy to configure, maintain, and troubleshoot. Applications such as fast motor-bus transfer are discussed with synchronization remaining at each breaker relay. Simulation test results recorded GOOSE message latencies on a system configured for a main-tie-main scheme. This paper presents details of open and closed, manual and automatic transfers.", "title": "" }, { "docid": "44d96985132b956f809d4f03fbb07415", "text": "We propose a method for extracting very accurate masks of hands in egocentric views. Our method is based on a novel Deep Learning architecture: In contrast with current Deep Learning methods, we do not use upscaling layers applied to a low-dimensional representation of the input image. Instead, we extract features with convolutional layers and map them directly to a segmentation mask with a fully connected layer. We show that this approach, when applied in a multi-scale fashion, is both accurate and efficient enough for real-time. We demonstrate it on a new dataset made of images captured in various environments, from the outdoors to offices.", "title": "" }, { "docid": "5c4f313482543223306be014cff0cc2e", "text": "Transformer inrush currents are high-magnitude, harmonic rich currents generated when transformer cores are driven into saturation during energization. These currents have undesirable effects, including potential damage or loss-of-life of transformer, protective relay miss operation, and reduced power quality on the system. This paper explores the theoretical explanations of inrush currents and explores different factors that have influences on the shape and magnitude of those inrush currents. PSCAD/EMTDC is used to investigate inrush currents phenomena by modeling a practical power system circuit for single phase transformer", "title": "" }, { "docid": "d16053590115de26743945649a682878", "text": "This chapter addresses various subjects, including some open questions related to energy dissipation, information, and noise, that are relevant for nanoand molecular electronics. The object is to give a brief and coherent presentation of the results of a number of recent studies of ours. 1 Energy Dissipation and Miniaturization It has been observed, in the context of Moore’s law, that the power density dissipation of microprocessors keeps growing with increasing miniaturization [1–4], and quantum computing schemes are not principally different [5, 6] for general-purpose computing applications. 
However, as we point out in Sect. 2 and seemingly in contrast with the above statements, the fundamental lower limit of energy dissipation of a single-bit-flip event (or switching event) is independent of the size of the system. Therefore, the increasing power dissipation may stem from the following practical facts [1–4]: • A larger number of transistors on the chip, contributing to a higher number of switching events per second; • lower relaxation time constants with smaller elements, allowing higher clock frequency and the resulting increased number of switching events per second; • increasing electrical field and current density, because the power supply voltage is not scaled back to the same extent as the device size; and • enhanced leakage current and related excess power dissipation, caused by an exponentially increasing tunneling effect associated with decreased insulator thickness and increased electrical field. It is clearly up to future technology to approach the fundamental limits of energy dissipation as much as possible. It is our goal in this chapter to address some of the basic, yet often controversial, aspects of the fundamental limits for nano- and molecular electronics. Specifically, we deal with the following issues: • The fundamental limit of energy dissipation for writing a bit of information. This energy is always positive and characterized by Brillouin's negentropy formula and our refinement for longer bit operations [7–10]. • The fundamental limits of energy dissipation for erasing a bit of information [7–12]. This energy can be zero or negative; we also present a simple proof of the non-validity of Landauer's principle of erasure dissipation [11, 12]. • Thermal noise in the low-temperature and/or high-frequency limit, i.e., in the quantum regime (referred to as “zero-point noise”). It is easy to show that both the quantum theory of the fluctuation–dissipation theorem and Nyquist's seminal formula are incorrect and dependent on the experimental situation [13, 14], which implies that further studies are needed to clarify the properties of zero-point fluctuations in resistors in electronics-based information processors operating in the quantum limit. 2 Fundamental Lower Limits of Energy Dissipation for Writing an Information Bit [7–10]. Szilard [15] (in 1929, in an incorrect way) and Brillouin [16] (in 1953, correctly) concluded that the minimum energy dissipation H1 due to changing a bit of information in a system at absolute temperature T is given as", "title": "" }, { "docid": "1fc0453124edd46f7f662a01431884e4", "text": "The paper presents mathematical models of the key components of the onboard autonomous power supply system (ASE) of a \"conventional\" aircraft and of the more electric aircraft (MEA, \"More Electric Aircraft\"). 
The subject, and also the aim of this work is the analysis and simulation of selected components of the power system of modern aircraft (synchronous generator, integrated starter/generator, voltage regulator, as device control and regulation of the generator and power electronic rectifiers multi-pulse). Simulation tests of key components confirmed the results obtained by the analytical and based on their mathematical models. The final part of the paper presents the main conclusions of the analysis and simulation based on the mathematical models of the selected components of the ASE system in accordance with the concept of a more electric aircraft.", "title": "" }, { "docid": "5e9d63bfc3b4a66e0ead79a2d883adfe", "text": "Cloud computing is becoming a major trend for delivering and accessing infrastructure on demand via the network. Meanwhile, the usage of FPGAs (Field Programmable Gate Arrays) for computation acceleration has made significant inroads into multiple application domains due to their ability to achieve high throughput and predictable latency, while providing programmability, low power consumption and time-to-value. Many types of workloads, e.g. databases, big data analytics, and high performance computing, can be and have been accelerated by FPGAs. As more and more workloads are being deployed in the cloud, it is appropriate to consider how to make FPGAs and their capabilities available in the cloud. However, such integration is non-trivial due to issues related to FPGA resource abstraction and sharing, compatibility with applications and accelerator logics, and security, among others. In this paper, a general framework for integrating FPGAs into the cloud is proposed and a prototype of the framework is implemented based on OpenStack, Linux-KVM and Xilinx FPGAs. The prototype enables isolation between multiple processes in multiple VMs, precise quantitative acceleration resource allocation, and priority-based workload scheduling. Experimental results demonstrate the effectiveness of this prototype, an acceptable overhead, and good scalability when hosting multiple VMs and processes.", "title": "" }, { "docid": "74acfe91e216c8494b7304cff03a8c66", "text": "Diagnostic accuracy of the talar tilt test is not well established in a chronic ankle instability (CAI) population. Our purpose was to determine the diagnostic accuracy of instrumented and manual talar tilt tests in a group with varied ankle injury history compared with a reference standard of self-report questionnaire. Ninety-three individuals participated, with analysis occurring on 88 (39 CAI, 17 ankle sprain copers, and 32 healthy controls). Participants completed the Cumberland Ankle Instability Tool, arthrometer inversion talar tilt tests (LTT), and manual medial talar tilt stress tests (MTT). The ability to determine CAI status using the LTT and MTT compared with a reference standard was performed. The sensitivity (95% confidence intervals) of LTT and MTT was low [LTT = 0.36 (0.23-0.52), MTT = 0.49 (0.34-0.64)]. Specificity was good to excellent (LTT: 0.72-0.94; MTT: 0.78-0.88). Positive likelihood ratio (+ LR) values for LTT were 1.26-6.10 and for MTT were 2.23-4.14. Negative LR for LTT were 0.68-0.89 and for MTT were 0.58-0.66. Diagnostic odds ratios ranged from 1.43 to 8.96. Both clinical and arthrometer laxity testing appear to have poor overall diagnostic value for evaluating CAI as stand-alone measures. 
Laxity testing to assess CAI may only be useful to rule in the condition.", "title": "" }, { "docid": "5e07328bf13a9dd2486e9dddbe6a3d8f", "text": "We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer’s functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.", "title": "" }, { "docid": "59c4b8a66a6cf6add26000cb2475ffe6", "text": "Intelligent transport systems are the rising technology in the near future to build cooperative vehicular networks in which a variety of different ITS applications are expected to communicate with a variety of different units. Therefore, the demand for highly customized communication channel for each or sets of similar ITS applications is increased. This article explores the capabilities of available wireless communication technologies in order to produce a win-win situation while selecting suitable carrier( s) for a single application or a profile of similar applications. Communication requirements for future ITS applications are described to select the best available communication interface for the target application(s).", "title": "" }, { "docid": "f99f522836431aae3e3f98564bcfc125", "text": "Malaysia is a developing country and government’s urbanization policy in 1980s has encouraged migration of rural population to urban centres, consistent with the shift of economy orientation from agriculture base to industrial base. At present about 60% Malaysian live in urban areas. Live demands and labour shortage in industrial sector have forced mothers to join labour force. At present there are about 65% mothers with children below 15 years of age working fulltime outside homes. Issues related to parenting and children’s development becomes crucial especially in examination oriented society like Malaysia. Using 200 families as sample this study attempted to examine effects of parenting styles of dual-earner families on children behaviour and school achievement. Results of the study indicates that for mothers and fathers authoritative style have positive effects on children behaviour and school achievement. In contrast, the permissive and authoritarian styles have negative effects on children behaviour and school achievement. Effects of findings on children development are discussed.", "title": "" }, { "docid": "de99ebecca6a9c3e6539ba00fd91feba", "text": "In previous lectures, we have analyzed random forms of optimization problems, in which the randomness was injected (via random projection) for algorithmic reasons. On the other hand, in statistical problems—even without considering approximations—the starting point is a random instance of an optimization problem. To be more concrete, suppose that we are interested in estimating some parameter θ * ∈ R d based on a set of n samples, say {Z 1 ,. .. , Z n }. 
Many estimators of θ* are based on solving the (random) optimization problem θ̂ ∈ argmin_{θ∈C} L_n(θ), where C ⊂ R^d is some subset of R^d, and L_n(θ) = (1/n) ∑_{i=1}^{n} ℓ_i(θ; Z_i) decomposes as a sum of terms, one for each data point. Our interest will be in analyzing the sequence {θ^t}_{t=0}^{∞} generated by some optimization algorithm. A traditional analysis in (deterministic) optimization involves bounding the optimization error θ^t − θ̂, measuring the distance between the iterates and (some) optimum. On the other hand, the population version of this problem is defined in terms of the averaged function L̄(θ) := E[L_n(θ)]. If the original problem has been constructed in a reasonable way, then it should be the case that θ* ∈ argmin_{θ∈C} L̄(θ), meaning that the quantity of interest is a global minimizer of the population function.", "title": "" }, { "docid": "af56806a30f708cb0909998266b4d8c1", "text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-ml is an open source library, targeted at both engineers and research scientists, which aims to provide a similarly rich environment for developing machine learning software in the C++ language. Towards this end, dlib-ml contains an extensible linear algebra toolkit with built-in BLAS support. It also houses implementations of algorithms for performing inference in Bayesian networks and kernel-based methods for classification, regression, clustering, anomaly detection, and feature ranking. To enable easy use of these tools, the entire library has been developed with contract programming, which provides complete and precise documentation as well as powerful debugging tools.", "title": "" }, { "docid": "f02587ac75edc7a7880131a4db077bb2", "text": "Single-unit recordings in monkeys have revealed neurons in the lateral prefrontal cortex that increase their firing during a delay between the presentation of information and its later use in behavior. Based on monkey lesion and neurophysiology studies, it has been proposed that a dorsal region of lateral prefrontal cortex is necessary for temporary storage of spatial information whereas a more ventral region is necessary for the maintenance of nonspatial information. Functional neuroimaging studies, however, have not clearly demonstrated such a division in humans. We present here an analysis of all reported human functional neuroimaging studies plotted onto a standardized brain. This analysis did not find evidence for a dorsal/ventral subdivision of prefrontal cortex depending on the type of material held in working memory, but a hemispheric organization was suggested (i.e., left-nonspatial; right-spatial). We also performed functional MRI studies in 16 normal subjects during two tasks designed to probe either nonspatial or spatial working memory, respectively. A group and subgroup analysis revealed similarly located activation in right middle frontal gyrus (Brodmann's area 46) in both spatial and nonspatial [working memory-control] subtractions. Based on another model of prefrontal organization [M. Petrides, Frontal lobes and behavior, Curr. Opin. 
Neurobiol., 4 (1994) 207-211], a reconsideration of the previous imaging literature data suggested that a dorsal/ventral subdivision of prefrontal cortex may depend upon the type of processing performed upon the information held in working memory.", "title": "" }, { "docid": "ee0d858955c3c45ac3d990d3ad9d56ed", "text": "Survival analysis is a subfield of statistics where the goal is to analyze and model data where the outcome is the time until an event of interest occurs. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. This so-called censoring can be handled most effectively using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome the issue of censoring. In addition, many machine learning algorithms have been adapted to deal with such censored data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the statistical methods typically used and the machine learning techniques developed for survival analysis, along with a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and describe several successful applications in a variety of real-world application domains. We hope that this article will give readers a more comprehensive understanding of recent advances in survival analysis and offer some guidelines for applying these approaches to solve new problems arising in applications involving censored data.", "title": "" }, { "docid": "02d5abb55d737fe47da98b55fccfbc8e", "text": "Existing biometric fingerprint devices show numerous reliability problems such as wet or fake fingers. In this letter, a secured method using the internal structures of the finger (papillary layer) for fingerprint identification is presented. With a frequency-domain optical coherence tomography (FD-OCT) system, a 3-D image of a finger is acquired and the information of the internal fingerprint extracted. The right index fingers of 51 individuals were recorded three times. Using a commercial fingerprint identification program, 95% of internal fingerprint images were successfully recognized. These results demonstrate that OCT imaging of internal fingerprints can be used for accurate and reliable fingerprint recognition.", "title": "" }, { "docid": "5fbb54e63158066198cdf59e1a8e9194", "text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. 
Simulations show that our approach achieves higher fairness in data rate than the state-of-art in almost all network configurations.", "title": "" }, { "docid": "1a8d7f9766de483fc70bb98d2925f9b1", "text": "Data mining applications are becoming a more common tool in understanding and solving educational and administrative problems in higher education. In general, research in educational mining focuses on modeling student's performance instead of instructors' performance. One of the common tools to evaluate instructors' performance is the course evaluation questionnaire to evaluate based on students' perception. In this paper, four different classification techniques - decision tree algorithms, support vector machines, artificial neural networks, and discriminant analysis - are used to build classifier models. Their performances are compared over a data set composed of responses of students to a real course evaluation questionnaire using accuracy, precision, recall, and specificity performance metrics. Although all the classifier models show comparably high classification performances, C5.0 classifier is the best with respect to accuracy, precision, and specificity. In addition, an analysis of the variable importance for each classifier model is done. Accordingly, it is shown that many of the questions in the course evaluation questionnaire appear to be irrelevant. Furthermore, the analysis shows that the instructors' success based on the students' perception mainly depends on the interest of the students in the course. The findings of this paper indicate the effectiveness and expressiveness of data mining models in course evaluation and higher education mining. Moreover, these findings may be used to improve the measurement instruments.", "title": "" } ]
scidocsrr
851c9f69a5acaceef297d6abfd393c33
Adaptive eye gaze patterns in interactions with human and artificial agents
[ { "docid": "0d723c344ab5f99447f7ad2ff72c0455", "text": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.", "title": "" } ]
[ { "docid": "f5b500c143fd584423ee8f0467071793", "text": "Drug-Drug Interactions (DDIs) are major causes of morbidity and treatment inefficacy. The prediction of DDIs for avoiding adverse effects is an important issue. There are many drug-drug interaction pairs, so it is impossible to do in vitro or in vivo experiments for all the possible pairs; the main limitation of DDI research is the high cost. Many drug interactions are due to alterations in drug metabolism by enzymes. The most common among these enzymes are cytochrome P450 enzymes (CYP450). Drugs can be substrates, inhibitors or inducers of CYP450, which will affect the metabolism of other drugs. This paper proposes enzyme action crossing attribute creation for DDI prediction. Machine learning techniques, k-Nearest Neighbor (k-NN), Neural Networks (NNs), and Support Vector Machine (SVM), were used to find DDIs for simvastatin based on enzyme action crossing. SVM performed the best, providing predictions at an accuracy of 70.40% and of 81.85% with balanced and unbalanced class label datasets respectively. The enzyme action crossing method provided a new attribute that can be used to predict drug-drug interactions.", "title": "" }, { "docid": "e7f1e8f82c91c7afd4d58c9987f3e95e", "text": "A level set method for capturing the interface between two fluids is combined with a variable density projection method to allow for computation of a two-phase flow where the interface can merge/break and the flow can have a high Reynolds number. A distance function formulation of the level set method enables us to compute flows with large density ratios (1000/1) and flows that are surface tension driven, with no emotional involvement. Recent work has improved the accuracy of the distance function formulation and the accuracy of the advection scheme. We compute flows involving air bubbles and water drops, among others. We validate our code against experiments and theory. In Ref. [1] an Eulerian scheme was described for computing incompressible two-fluid flow where the density ratio across the interface is large (e.g. air/water) and both surface tension and viscous effects are included. In this paper, we modify our scheme improving both the accuracy and efficiency of the algorithm. We use a level set function to 'capture' the air/water interface thus allowing us to efficiently compute flows with complex interfacial structure. In Ref. [1], a new iterative process was devised in order to maintain the level set function as the signed distance from the air/water interface. Since we know the distance from the interface at any point in the domain, we can give the interface a thickness of size O(h); this allows us to compute with stiff surface tension effects and steep density gradients. We have since imposed a new 'constraint' on the iterative process improving the accuracy of this process. We have also upgraded our scheme to using higher order ENO for spatial derivatives, and high order Runge-Kutta for the time discretization (see Ref. [2]). An example of the problems we wish to solve is illustrated in Fig. 1. An air bubble rises up to the water surface and then 'bursts', emitting a jet of water that eventually breaks up into satellite drops. It is a very difficult problem involving much interfacial complexity and stiff surface tension effects. The density ratio at the interface is ca. 1000/1. In Ref. [3], the boundary integral method was used to compute the 'bubble-burst' problem and compared with experimental results. 
The boundary integral method is a very good method for inviscid air/water problems because, as a Lagrangian based scheme, only points on the interface need to be discretized. Unfortunately, if one wants to include the merging and breaking …", "title": "" }, { "docid": "301fb951bb2720ebc71202ee7be37be2", "text": "This work incorporates concepts from the behavioral confirmation tradition, self tradition, and interdependence tradition to identify an interpersonal process termed the Michelangelo phenomenon. The Michelangelo phenomenon describes the means by which the self is shaped by a close partner's perceptions and behavior. Specifically, self movement toward the ideal self is described as a product of partner affirmation, or the degree to which a partner's perceptions of the self and behavior toward the self are congruent with the self's ideal. The results of 4 studies revealed strong associations between perceived partner affirmation and self movement toward the ideal self, using a variety of participant populations and measurement methods. In addition, perceived partner affirmation--particularly perceived partner behavioral affirmation--was strongly associated with quality of couple functioning and stability in ongoing relationships.", "title": "" }, { "docid": "b59e90e5d1fa3f58014dedeea9d5b6e4", "text": "The results of vitrectomy in 240 consecutive cases of ocular trauma were reviewed. Of these cases, 71.2% were war injuries. Intraocular foreign bodies were present in 155 eyes, of which 74.8% were metallic and 61.9% ferromagnetic. Multivariate analysis identified the prognostic factors predictive of poor visual outcome, which included: (1) presence of an afferent pupillary defect; (2) double perforating injuries; and (3) presence of intraocular foreign bodies. Association of vitreous hemorrhage with intraocular foreign bodies was predictive of a poor prognosis. Eyes with foreign bodies retained in the anterior segment and vitreous had a better prognosis than those with foreign bodies embedded in the retina. Timing of vitrectomy and type of trauma had no significant effect on the final visual results. Prophylactic scleral buckling reduced the incidence of retinal detachment after surgery. Injuries confined to the cornea had a better prognosis than scleral injuries.", "title": "" }, { "docid": "c32d61da51308397d889db143c3e6f9d", "text": "Children’s neurological development is influenced by their experiences. Early experiences and the environments in which they occur can alter gene expression and affect long-term neural development. Today, discretionary screen time, often involving multiple devices, is the single main experience and environment of children. Various screen activities are reported to induce structural and functional brain plasticity in adults. However, childhood is a time of significantly greater changes in brain anatomical structure and connectivity. There is empirical evidence that extensive exposure to videogame playing during childhood may lead to neuroadaptation and structural changes in neural regions associated with addiction. Digital natives exhibit a higher prevalence of screen-related ‘addictive’ behaviour that reflect impaired neurological rewardprocessing and impulse-control mechanisms. Associations are emerging between screen dependency disorders such as Internet Addiction Disorder and specific neurogenetic polymorphisms, abnormal neural tissue and neural function. 
Although abnormal neural structural and functional characteristics may be a precondition rather than a consequence of addiction, there may also be a bidirectional relationship. As is the case with substance addictions, it is possible that intensive routine exposure to certain screen activities during critical stages of neural development may alter gene expression resulting in structural, synaptic and functional changes in the developing brain leading to screen dependency disorders, particularly in children with predisposing neurogenetic profiles. There may also be compound/secondary effects on neural development. Screen dependency disorders, even at subclinical levels, involve high levels of discretionary screen time, inducing greater child sedentary behaviour thereby reducing vital aerobic fitness, which plays an important role in the neurological health of children, particularly in brain structure and function. Child health policy must therefore adhere to the principle of precaution as a prudent approach to protecting child neurological integrity and well-being. This paper explains the basis of current paediatric neurological concerns surrounding screen dependency disorders and proposes preventive strategies for child neurology and allied professions.", "title": "" }, { "docid": "18aa98d42150adb110632b20118909e4", "text": "In recent times, 60 GHz millimeter wave systems have become increasingly attractive due to the escalating demand for multi-Gb/s wireless communication. Recent works have demonstrated the ability to realize a 60 GHz transceiver by means of a cost-effective CMOS process. This paper aims to give the most up-to-date status of the 60 GHz wireless transceiver development, with an emphasis on realizing low power consumption and small form factor that is applicable for mobile terminals. To make 60 GHz wireless more robust and ease of use in various applications, broadband propagation and interference characteristics are measured at the 60 GHz band in an application-oriented office environment, considering the concurrent use of multiple frequency channels and multiple terminals. Moreover, this paper gives an overview of future millimeter wave systems.", "title": "" }, { "docid": "2cbf6e7fcf3beb005610e3b76d443427", "text": "The following series of experiments explore the effect of static peripheral stimulation on the perception of distance and spatial scale in a typical head-mounted virtual environment. It was found that applying constant white light in an observers far periphery enabled the observer to more accurately judge distances using blind walking. An effect of similar magnitude was also found when observers estimated the size of a virtual space using a visual scale task. The presence of the effect across multiple psychophysical tasks provided confidence that a perceptual change was, in fact, being invoked by the addition of the peripheral stimulation. These results were also compared to observer performance in a very large field of view virtual environment and in the real world. The subsequent findings raise the possibility that distance judgments in virtual environments might be considerably more similar to those in the real world than previous work has suggested.", "title": "" }, { "docid": "20f1a40e7f352085c04709e27c1a2aa2", "text": "Automatic speech recognition (ASR) outputs often contain various disfluencies. It is necessary to remove these disfluencies before processing downstream tasks. 
In this paper, an efficient disfluency detection approach based on right-to-left transitionbased parsing is proposed, which can efficiently identify disfluencies and keep ASR outputs grammatical. Our method exploits a global view to capture long-range dependencies for disfluency detection by integrating a rich set of syntactic and disfluency features with linear complexity. The experimental results show that our method outperforms state-of-the-art work and achieves a 85.1% f-score on the commonly used English Switchboard test set. We also apply our method to in-house annotated Chinese data and achieve a significantly higher f-score compared to the baseline of CRF-based approach.", "title": "" }, { "docid": "626c274978a575cd06831370a6590722", "text": "The honeypot has emerged as an effective tool to provide insights into new attacks and exploitation trends. However, a single honeypot or multiple independently operated honeypots only provide limited local views of network attacks. Coordinated deployment of honeypots in different network domains not only provides broader views, but also create opportunities of early network anomaly detection, attack correlation, and global network status inference. Unfortunately, coordinated honeypot operation require close collaboration and uniform security expertise across participating network domains. The conflict between distributed presence and uniform management poses a major challenge in honeypot deployment and operation. To address this challenge, we present Collapsar, a virtual machine-based architecture for network attack capture and detention. A Collapsar center hosts and manages a large number of high-interaction virtual honeypots in a local dedicated network. To attackers, these honeypots appear as real systems in their respective production networks. Decentralized logical presence of honeypots provides a wide diverse view of network attacks, while the centralized operation enables dedicated administration and convenient event correlation, eliminating the need for honeypot expertise in every production network domain. Collapsar realizes the traditional honeyfarm vision as well as our new reverse honeyfarm vision, where honeypots act as vulnerable clients exploited by real-world malicious servers. We present the design, implementation, and evaluation of a Collapsar prototype. Our experiments with a number of real-world attacks demonstrate the effectiveness and practicality of Collapsar. © 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "271faf6ddbd7ade4fb602609fddfb53c", "text": "Most of the existing approaches for RGB-D indoor scene labeling employ hand-crafted features for each modality independently and combine them in a heuristic manner. There has been some attempt on directly learning features from raw RGB-D data, but the performance is not satisfactory. In this paper, we adapt the unsupervised feature learning technique for RGB-D labeling as a multi-modality learning problem. Our learning framework performs feature learning and feature encoding simultaneously which significantly boosts the performance. By stacking basic learning structure, higher-level features are derived and combined with lower-level features for better representing RGB-D data. 
Experimental results on the benchmark NYU depth dataset show that our method achieves competitive performance, compared with state-of-theart.", "title": "" }, { "docid": "e58b15d705923a519fe52688c951ee99", "text": "Automatic glasses detection on real face images is a challenging problem due to different appearance variations. Nevertheless, glasses detection on face images has not been thoroughly investigated. In this paper, an innovative algorithm for automatic glasses detection based on Robust Local Binary Pattern and robust alignment is proposed. Firstly, images are preprocessed and normalized in order to deal with scale and rotation. Secondly, eye glasses region is detected considering that the nosepiece of the glasses is usually placed at the same level as the center of the eyes in both height and width. Thirdly, Robust Local Binary Pattern is built to describe the eyes region, and finally, support vector machine is used to classify the LBP features. This algorithm can be applied as the first step of a glasses removal algorithm due to its robustness and speed. The proposed algorithm has been tested over the Labeled Faces in the Wild database showing a 98.65 % recognition rate. Influences of the resolution, the alignment of the normalized images and the number of divisions in the LBP operator are also investigated.", "title": "" }, { "docid": "6521ae2b4592fccdb061f1e414774024", "text": "The development of the Job Satisfaction Survey (JSS), a nine-subscale measure of employee job satisfaction applicable specifically to human service, public, and nonprofit sector organizations, is described. The item selection, item analysis, and determination of the final 36-item scale are also described, and data on reliability and validity and the instrument's norms are summarized. Included are a multitrait-multimethod analysis of the JSS and the Job Descriptive Index (JDI), factor analysis of the JSS, and scale intercorrelations. Correlation of JSS scores with criteria of employee perceptions and behaviors for multiple samples were consistent with findings involving other satisfaction scales and with findings from the private sector. The strongest correlations were with perceptions of the job and supervisor, intention of quitting, and organizational commitment. More modest correlations were found with salary, age, level, absenteeism, and turnover.", "title": "" }, { "docid": "5305e147b2aa9646366bc13deb0327b0", "text": "This longitudinal case-study aimed at examining whether purposely teaching for the promotion of higher order thinking skills enhances students’ critical thinking (CT), within the framework of science education. Within a pre-, post-, and post–post experimental design, high school students, were divided into three research groups. The experimental group (n=57) consisted of science students who were exposed to teaching strategies designed for enhancing higher order thinking skills. Two other groups: science (n=41) and non-science majors (n=79), were taught traditionally, and acted as control. By using critical thinking assessment instruments, we have found that the experimental group showed a statistically significant improvement on critical thinking skills components and disposition towards critical thinking subscales, such as truth-seeking, open-mindedness, self-confidence, and maturity, compared with the control groups. 
Our findings suggest that if teachers purposely and persistently practice higher order thinking strategies for example, dealing in class with real-world problems, encouraging open-ended class discussions, and fostering inquiry-oriented experiments, there is a good chance for a consequent development of critical thinking capabilities.", "title": "" }, { "docid": "1f44c8d792b961649903eb1ab2612f3c", "text": "Teeth segmentation is an important step in human identification and Content Based Image Retrieval (CBIR) systems. This paper proposes a new approach for teeth segmentation using morphological operations and watershed algorithm. In Cone Beam Computer Tomography (CBCT) and Multi Slice Computer Tomography (MSCT) each tooth is an elliptic shape region that cannot be separated only by considering their pixels' intensity values. For segmenting a tooth from the image, some enhancement is necessary. We use morphological operators such as image filling and image opening to enhance the image. In the proposed algorithm, a Maximum Intensity Projection (MIP) mask is used to separate teeth regions from black and bony areas. Then each tooth is separated using the watershed algorithm. Anatomical constraints are used to overcome the over segmentation problem in watershed method. The results show a high accuracy for the proposed algorithm in segmenting teeth. Proposed method decreases time consuming by considering only one image of CBCT and MSCT for segmenting teeth instead of using all slices.", "title": "" }, { "docid": "70745e8cdf957b1388ab38a485e98e60", "text": "Network studies of large-scale brain connectivity have begun to reveal attributes that promote the segregation and integration of neural information: communities and hubs. Network communities are sets of regions that are strongly interconnected among each other while connections between members of different communities are less dense. The clustered connectivity of network communities supports functional segregation and specialization. Network hubs link communities to one another and ensure efficient communication and information integration. This review surveys a number of recent reports on network communities and hubs, and their role in integrative processes. An emerging focus is the shifting balance between segregation and integration over time, which manifest in continuously changing patterns of functional interactions between regions, circuits and systems.", "title": "" }, { "docid": "df9c6dc1d6d1df15b78b7db02f055f70", "text": "The robotic grasp detection is a great challenge in the area of robotics. Previous work mainly employs the visual approaches to solve this problem. In this paper, a hybrid deep architecture combining the visual and tactile sensing for robotic grasp detection is proposed. We have demonstrated that the visual sensing and tactile sensing are complementary to each other and important for the robotic grasping. A new THU grasp dataset has also been collected which contains the visual, tactile and grasp configuration information. The experiments conducted on a public grasp dataset and our collected dataset show that the performance of the proposed model is superior to state of the art methods. 
The results also indicate that the tactile data could help to enable the network to learn better visual features for the robotic grasp detection task.", "title": "" }, { "docid": "1b6c6f7a24d1e44bce19cfb38b32e023", "text": "The purpose of this study was to explore the ways in which audiences build parasocial relationships with media characters via reality TV and social media, and its implications for celebrity endorsement and purchase intentions. Using an online survey, this study collected 401 responses from the Korean Wave fans in Singapore. The results showed that reality TV viewing and SNS use to interact with media characters were positively associated with parasocial relationships between media characters and viewers. Parasocial relationships, in turn, were positively associated with the viewers' perception of endorser and brand credibility, and purchase intention of the brand endorsed by favorite media characters. The results also indicated that self-disclosure played an important role in forming parasocial relationships and in mediating the effectiveness of celebrity endorsement. This study specifies the links between an emerging media genre, a communication technology, and audiences' interaction with the mediated world.", "title": "" }, { "docid": "4917cda3a34111202ca76c2d6cd7d5db", "text": "Monolayer tungsten disulfides (WS2) constitute a high quantum yield two-dimensional (2D) system, and can be synthesized on a large area using chemical vapor deposition (CVD), suggesting promising nanophotonics applications. However, spatially nonuniform photoluminescence (PL) intensities and peak wavelengths observed in single WS2 grains have puzzled researchers, with the origins of variation in relative contributions of excitons, trions, and biexcitons to the PL emission not well understood. Here, we present nanoscale PL and Raman spectroscopy images of triangular CVD-grown WS2 monolayers of different sizes, with these images obtained under different temperatures and values of excitation power. Intense PL emissions were observed around the edges of individual WS2 grains and the grain boundaries between partly merged WS2 grains. The predominant origin of the main PL emission from these regions changed from neutral excitons to trions and biexcitons with increasing laser excitation power, with biexcitons completely dominating the PL emission for the high-power condition. The intense PL emission and the preferential formation of biexcitons in the edges and grain boundaries of monolayer WS2 were attributed to larger population of charge carriers caused by the excessive incorporation of growth promoters during the CVD, suggesting positive roles of excessive carriers in the PL efficiency of TMD monolayers. Our comprehensive nanoscale spectroscopic investigation sheds light on the dynamic competition between exciton complexes occurring in monolayer WS2, suggesting a rich variety of ways to engineer new nanophotonic functions using 2D transition metal dichalcogenide monolayers.", "title": "" }, { "docid": "6cc99565a0e9081a94e82be93a67482e", "text": "The existing shortage of therapists and caregivers assisting physically disabled individuals at home is expected to increase and become serious problem in the near future. The patient population needing physical rehabilitation of the upper extremity is also constantly increasing. Robotic devices have the potential to address this problem as noted by the results of recent research studies. 
However, the availability of these devices in clinical settings is limited, leaving plenty of room for improvement. The purpose of this paper is to document a review of robotic devices for upper limb rehabilitation, including those in the development phase, in order to provide a comprehensive reference about existing solutions and facilitate the development of new and improved devices. In particular, the following issues are discussed: application field, target group, type of assistance, mechanical design, control strategy and clinical evaluation. This paper also includes a comprehensive, tabulated comparison of technical solutions implemented in various systems.", "title": "" }, { "docid": "545b41a21edb2fa08fd6680d3d20afaf", "text": "This paper demonstrates how Gaussian Markov random fields (conditional autoregressions) can be sampled quickly using numerical techniques for sparse matrices. The algorithm is general, surprisingly efficient, and expands easily to various forms of conditional simulation and evaluation of normalisation constants. I demonstrate its use in Markov chain Monte Carlo algorithms for disease mapping, space-varying regression models, spatial non-parametrics, hierarchical space-time modelling and Bayesian imaging.", "title": "" } ]
scidocsrr
c188cbe4aeaa70ab6acf1d7dcda486f0
Saliency Prediction in the Deep Learning Era: An Empirical Investigation
[ { "docid": "6a72b09ce61635254acb0affb1d5496e", "text": "We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual [15, 8]. Datasets have been developed for movies [11] and sports [12], but, these actions and scene conditions do not apply effectively to surveillance videos. Our dataset consists of many outdoor scenes with actions occurring naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances for 23 event types distributed throughout 29 hours of video. This data is accompanied by detailed annotations which include both moving object tracks and event examples, which will provide solid basis for large-scale evaluation. Additionally, we propose different types of evaluation modes for visual recognition tasks and evaluation metrics along with our preliminary experimental results. We believe that this dataset will stimulate diverse aspects of computer vision research and help us to advance the CVER tasks in the years ahead.", "title": "" }, { "docid": "ef6adbe1c2a0863eb6447cebffaaf0fe", "text": "How best to evaluate a saliency model's ability to predict where humans look in images is an open research question. The choice of evaluation metric depends on how saliency is defined and how the ground truth is represented. Metrics differ in how they rank saliency models, and this results from how false positives and false negatives are treated, whether viewing biases are accounted for, whether spatial deviations are factored in, and how the saliency maps are pre-processed. In this paper, we provide an analysis of 8 different evaluation metrics and their properties. With the help of systematic experiments and visualizations of metric computations, we add interpretability to saliency scores and more transparency to the evaluation of saliency models. Building off the differences in metric properties and behaviors, we make recommendations for metric selections under specific assumptions and for specific applications.", "title": "" }, { "docid": "a972fb96613715b1d17ac69fdd86c115", "text": "Saliency detection has been widely studied to predict human fixations, with various applications in computer vision and image processing. For saliency detection, we argue in this paper that the state-of-the-art High Efficiency Video Coding (HEVC) standard can be used to generate the useful features in compressed domain. Therefore, this paper proposes to learn the video saliency model, with regard to HEVC features. First, we establish an eye tracking database for video saliency detection, which can be downloaded from https://github.com/remega/video_database. Through the statistical analysis on our eye tracking database, we find out that human fixations tend to fall into the regions with large-valued HEVC features on splitting depth, bit allocation, and motion vector (MV). In addition, three observations are obtained with the further analysis on our eye tracking database. Accordingly, several features in HEVC domain are proposed on the basis of splitting depth, bit allocation, and MV. Next, a kind of support vector machine is learned to integrate those HEVC features together, for video saliency detection. 
Since almost all video data are stored in compressed form, our method avoids both the computational cost of decoding and the storage cost of raw data. More importantly, experimental results show that the proposed method is superior to other state-of-the-art saliency detection methods, in either the compressed or uncompressed domain.", "title": "" } ]
[ { "docid": "bf89c380e3ce667f4be2e12685f3d583", "text": "Prosocial behaviors are an aspect of adolescents’ positive development that has gained greater attention in the developmental literature since the 1990s. In this article, the authors review the literature pertaining to prosocial behaviors during adolescence. The authors begin by defining prosocial behaviors as prior theory and empirical studies have done. They describe antecedents to adolescents’ prosocial behaviors with a focus on two primary factors: socialization and cultural orientations. Accordingly, the authors review prior literature on prosocial behaviors among different ethnic/cultural groups throughout this article. As limited studies have examined prosocial behaviors among some specific ethnic groups, the authors conclude with recommendations for future research. Adolescence is a period of human development marked by several biological, cognitive, and social transitions. Physical changes, such as the onset of puberty and rapid changes in body composition (e.g., height, weight, and sex characteristics) prompt adolescents to engage in greater self-exploration (McCabe and Ricciardelli, 2003). Enhanced cognitive abilities permit adolescents to engage in more symbolic thinking and to contemplate abstract concepts, such as the self and one’s relationship to others (Kuhn, 2009; Steinberg, 2005). Furthermore, adolescence is marked with increased responsibilities at home and in the school context, opportunities for caregiving within the family, and mutuality in peer relationships (American Psychological Association, 2008). Moreover, society demands a greater level of psychosocial maturity and expects greater adherence to social norms from adolescents compared to children (Eccles et al., 2008). Therefore, adolescence presents itself as a time of major life transitions. In light of these myriad transitions, adolescents are further developing prosocial behaviors. Although the emergence of prosocial behaviors (e.g., expressed behaviors that are intended to benefit others) begins in early childhood, the developmental transitions described above allow adolescents to become active agents in their own developmental process. Behavior that is motivated by adolescents’ concern for others is thought to reflect optimal social functioning or prosocial behaviors (American Psychological Association, 2008). While the early literature focused primarily on prosocial behaviors among young children (e.g., Garner, 2006; Garner et al., 2008; Iannotti, 1985) there are several reasons to track prosocial development into adolescence. First and foremost, individuals develop cognitive abilities that allow them to better phenomenologically process and psychologically mediate life experiences that may facilitate (e.g., completing household chores and caring for siblings) or hinder (e.g., interpersonal conflict and perceptions of institutional discrimination) prosocial development (e.g., Brown and Bigler, 2005). Adolescents express more intentionality in which activities they will engage in and become selective in where they choose to devote their energies (Mahoney et al., 2009). Finally, adolescents are afforded more opportunities to express helping behaviors in other social spheres beyond the family context, such as in schools, communities, and civic society (Yates and Youniss, 1996). 
Origins and Definitions of Prosocial Behaviors Since the turn of the twenty-first century, there has been growing interest in understanding the relationships that exist between the strengths of individuals and resources within communities (e.g., person ↔ context) in order to identify pathways for healthy development, or to understand how adolescents' thriving can be promoted. This line of thinking is commonly described as the positive youth development perspective (e.g., Lerner et al., 2009). Although the adolescent literature still predominantly focuses on problematic development (e.g., delinquency and risk-taking behaviors), studies on adolescents' prosocial development have increased substantially since the 1990s (Eisenberg et al., 2009a), paralleling the paradigm shift from a deficit-based model of development to one focusing on positive attributes of youth (e.g., Benson et al., 2006; Lerner, 2005). Generally described as the expression of voluntary behaviors with the intention to benefit others (Carlo, 2006; Eisenberg, 2006; see full review by Eisenberg et al., 2009a), prosocial behavior is one aspect among others of positive adolescent development that is gaining greater attention in the literature. Theory on prosocial development is rooted in the literature on moral development, which includes cognitive aspects of moral reasoning (e.g., how individuals decide between moral dilemmas; Kohlberg, 1978), moral behaviors (e.g., expression of behaviors that benefit society; Eisenberg and Fabes, 1998), and emotions (e.g., empathy; Eisenberg and Fabes, 1990). Empirical studies on adolescents' prosocial development have found that different types of prosocial behaviors may exist. For example, Carlo and colleagues (e.g., Carlo et al., 2010; Carlo and Randall, 2002) found six types of prosocial tendencies (intentions to help others): compliant, dire, emotional, altruistic, anonymous, and public. Compliant helping refers to an individual's intent to assist when asked. Emotional helping refers to helping in emotionally evocative situations (e.g., witnessing another individual crying). Dire helping refers to", "title": "" }, { "docid": "894a050eabcb0dafa255b4e5558ba6df", "text": "A wide range of methods for analysis of airborne and satellite-derived imagery continues to be proposed and assessed. In this paper, we review remote sensing implementations of support vector machines (SVMs), a promising machine learning methodology. This review is timely due to the exponentially increasing number of works published in recent years. SVMs are particularly appealing in the remote sensing field due to their ability to generalize well even with limited training samples, a common limitation for remote sensing applications. However, they also suffer from parameter assignment issues that can significantly affect obtained results. A summary of empirical results is provided for various applications of over one hundred published works (as of April, 2010). It is our hope that this survey will provide guidelines for future applications of SVMs and possible areas of algorithm enhancement. © 2010 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V.
All rights reserved.", "title": "" }, { "docid": "67bbd10e1ed9201fb589e16c58ae76ce", "text": "Author name disambiguation has been one of the hardest problems faced by digital libraries since their early days. Historically, supervised solutions have empirically outperformed those based on heuristics, but with the burden of having to rely on manually labeled training sets for the learning process. Moreover, most supervised solutions just apply some type of generic machine learning solution and do not exploit specific knowledge about the problem. In this article, we follow a similar reasoning, but in the opposite direction. Instead of extending an existing supervised solution, we propose a set of carefully designed heuristics and similarity functions, and apply supervision only to optimize such parameters for each particular dataset. As our experiments show, the result is a very effective, efficient and practical author name disambiguation method that can be used in many different scenarios. In fact, we show that our method can beat state-of-the-art supervised methods in terms of effectiveness in many situations while being orders of magnitude faster. It can also run without any training information, using only default parameters, and still be very competitive when compared to these supervised methods (beating several of them) and better than most existing unsupervised author name disambiguation solutions.", "title": "" }, { "docid": "80019f4fc88cb2b8c2d07f5d8e4c79e9", "text": "Persistence of latently infected cells in presence of Anti-Retroviral Therapy presents the main obstacle to HIV-1 eradication. Much effort is thus placed on identification of compounds capable of HIV-1 latency reversal in order to render infected cells susceptible to viral cytopathic effects and immune clearance. We identified the BAF chromatin remodeling complex as a key player required for maintenance of HIV-1 latency, highlighting its potential as a molecular target for inhibition in latency reversal. Here, we screened a recently identified panel of small molecule inhibitors of BAF (BAFi's) for potential to activate latent HIV-1. Latency reversal was strongly induced by BAFi's Caffeic Acid Phenethyl Ester and Pyrimethamine, two molecules previously characterized for clinical application. BAFi's reversed HIV-1 latency in cell line based latency models, in two ex vivo infected primary cell models of latency, as well as in HIV-1 infected patient's CD4 + T cells, without inducing T cell proliferation or activation. BAFi-induced HIV-1 latency reversal was synergistically enhanced upon PKC pathway activation and HDAC-inhibition. Therefore BAFi's constitute a promising family of molecules for inclusion in therapeutic combinatorial HIV-1 latency reversal.", "title": "" }, { "docid": "1cbf55610014ef23e4015c07f5846619", "text": "Variation of the system parameters and external disturbances always happen in the CNC servo system. With a traditional PID controller, it will cause large overshoot or poor stability. In this paper, a fuzzy-PID controller is proposed in order to improve the performance of the servo system. The proposed controller incorporates the advantages of PID control which can eliminate the steady-state error, and the advantages of fuzzy logic such as simple design, no need of an accurate mathematical model and some adaptability to nonlinearity and time-variation. The fuzzy-PID controller accepts the error (e) and error change(ec) as inputs ,while the parameters kp, ki, kd as outputs. 
Control rules of the controller are established based on experience so that self-regulation of the values of PID parameters is achieved. A simulation model of position servo system is constructed in Matlab/Simulink module based on a high-speed milling machine researched in our institute. By comparing the traditional PID controller and the fuzzy-PID controller, the simulation results show that the system has stronger robustness and disturbance rejection capability with the latter controller which can meet the performance requirements of the CNC position servo system better", "title": "" }, { "docid": "40e74f062a6d4c969d87e57e7566bc9e", "text": "Bullying is a serious public health concern that is associated with significant negative mental, social, and physical outcomes. Technological advances have increased adolescents' use of social media, and online communication platforms have exposed adolescents to another mode of bullying- cyberbullying. Prevention and intervention materials, from websites and tip sheets to classroom curriculum, have been developed to help youth, parents, and teachers address cyberbullying. While youth and parents are willing to disclose their experiences with bullying to their health care providers, these disclosures need to be taken seriously and handled in a caring manner. Health care providers need to include questions about bullying on intake forms to encourage these disclosures. The aim of this article is to examine the current status of cyberbullying prevention and intervention. Research support for several school-based intervention programs is summarised. Recommendations for future research are provided.", "title": "" }, { "docid": "1611448ce90278a329b1afe8fe598ba9", "text": "This paper is devoted to some mathematical considerations on the geometrical ideas contained in PNK, CN and, successively, in PR. Mainly, we will emphasize that these ideas give very promising suggestions for a modern point-free foundation of geometry. 1. Introduction Recently the researches in point-free geometry received an increasing interest in different areas. As an example, we can quote computability theory, lattice theory, computer science. Now, the basic ideas of point-free geometry were firstly formulated by A. N. Whitehead in PNK and CN where the extension relation between events is proposed as a primitive. The points, the lines and all the \" abstract \" geometrical entities are defined by suitable abstraction processes. As a matter of fact, as observed in Casati and Varzi 1997, the approach proposed in these books is a basis for a \"mereology\" (i.e. an investigation about the part-whole relation) rather than for a point-free geometry. Indeed , the inclusion relation is set-theoretical and not topological in nature and this generates several difficulties. As an example, the definition of point is unsatisfactory (see Section 6). So, it is not surprising that some years later the publication of PNK and CN, Whitehead in PR proposed a different approach in which the primitive notion is the one of connection relation. This idea was suggested in de Laguna 1922. The aim of this paper is not to give a precise account of geometrical ideas contained in these books but only to emphasize their mathematical potentialities. So, we translate the analysis of Whitehead into suitable first order theories and we examine these theories from a logical point of view. 
Also, we argue that multi-valued logic is a promising tool to reformulate the approach in PNK and CN.", "title": "" }, { "docid": "c09448b80effbde3ec159c2c3e04ecb0", "text": "It is easy for today's students and researchers to believe that modern bioinformatics emerged recently to assist next-generation sequencing data analysis. However, the very beginnings of bioinformatics occurred more than 50 years ago, when desktop computers were still a hypothesis and DNA could not yet be sequenced. The foundations of bioinformatics were laid in the early 1960s with the application of computational methods to protein sequence analysis (notably, de novo sequence assembly, biological sequence databases and substitution models). Later on, DNA analysis also emerged due to parallel advances in (i) molecular biology methods, which allowed easier manipulation of DNA, as well as its sequencing, and (ii) computer science, which saw the rise of increasingly miniaturized and more powerful computers, as well as novel software better suited to handle bioinformatics tasks. In the 1990s through the 2000s, major improvements in sequencing technology, along with reduced costs, gave rise to an exponential increase of data. The arrival of 'Big Data' has laid out new challenges in terms of data mining and management, calling for more expertise from computer science into the field. Coupled with an ever-increasing amount of bioinformatics tools, biological Big Data had (and continues to have) profound implications on the predictive power and reproducibility of bioinformatics results. To overcome this issue, universities are now fully integrating this discipline into the curriculum of biology students. Recent subdisciplines such as synthetic biology, systems biology and whole-cell modeling have emerged from the ever-increasing complementarity between computer science and biology.", "title": "" }, { "docid": "7fab7940321a606b10225d14df46ce65", "text": "Domain adaptation aims to learn models on a supervised source domain that perform well on an unsupervised target. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e. static data sets. However, with large-scale or dynamic data sources, data from a defined domain is not usually available all at once. For instance, in a streaming data scenario, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming unsupervised streaming data batches. This enables improvements over several batches without the need for any additionally annotated data. To demonstrate the effectiveness of our proposed framework, we modify associative domain adaptation to work well on source and target data batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only for the currently adapted batch, but also when applied on future stream batches. Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.", "title": "" }, { "docid": "f2aff84f10b59cbc127dab6266cee11c", "text": "This paper extends the Argument Interchange Format to enable it to represent dialogic argumentation. 
One of the challenges is to tie together the rules expressed in dialogue protocols with the inferential relations between premises and conclusions. The extensions are founded upon two important analogies which minimise the extra ontological machinery required. First, locutions in a dialogue are analogous to AIF Inodes which capture propositional data. Second, steps between locutions are analogous to AIF S-nodes which capture inferential movement. This paper shows how these two analogies combine to allow both dialogue protocols and dialogue histories to be represented alongside monologic arguments in a single coherent system.", "title": "" }, { "docid": "c4e8dbd875e35e5bd9bd55ca24cdbfc2", "text": "In this paper, we introduce a new framework for recognizing textual entailment which depends on extraction of the set of publiclyheld beliefs – known as discourse commitments – that can be ascribed to the author of a text or a hypothesis. Once a set of commitments have been extracted from a t-h pair, the task of recognizing textual entailment is reduced to the identification of the commitments from a t which support the inference of the h. Promising results were achieved: our system correctly identified more than 80% of examples from the RTE-3 Test Set correctly, without the need for additional sources of training data or other web-based resources.", "title": "" }, { "docid": "d68f1d3762de6db8bf8d67556d4c72ec", "text": "With the emerging technologies and all associated devices, it is predicted that massive amount of data will be created in the next few years – in fact, as much as 90% of current data were created in the last couple of years – a trend that will continue for the foreseeable future. Sustainable computing studies the process by which computer engineer/scientist designs computers and associated subsystems efficiently and effectively with minimal impact on the environment. However, current intelligent machine-learning systems are performance driven – the focus is on the predictive/classification accuracy, based on known properties learned from the training samples. For instance, most machine-learning-based nonparametric models are known to require high computational cost in order to find the global optima. With the learning task in a large dataset, the number of hidden nodes within the network will therefore increase significantly, which eventually leads to an exponential rise in computational complexity. This paper thus reviews the theoretical and experimental data-modeling literature, in large-scale data-intensive fields, relating to: (1) model efficiency, including computational requirements in learning, and data-intensive areas’ structure and design, and introduces (2) new algorithmic approaches with the least memory requirements and processing to minimize computational cost, while maintaining/improving its predictive/classification accuracy and stability.", "title": "" }, { "docid": "73e4f4c946a1c76dfe029d691ecf7acc", "text": "Although cost-effective at-home blood pressure monitors are available, a complementary mobile solution can ease the burden of measuring BP at critical points throughout the day. In this work, we developed and evaluated a smartphone-based BP monitoring application called textitSeismo. The technique relies on measuring the time between the opening of the aortic valve and the pulse later reaching a periphery arterial site. 
It uses the smartphone's accelerometer to measure the vibration caused by the heart valve movements and the smartphone's camera to measure the pulse at the fingertip. The system was evaluated in a nine-participant longitudinal BP perturbation study. Each participant took part in four sessions that involved stationary biking at multiple intensities. The Pearson correlation coefficient of the blood pressure estimation across participants is 0.20-0.77 ($\mu$=0.55, $\sigma$=0.19), with an RMSE of 3.3-9.2 mmHg ($\mu$=5.2, $\sigma$=2.0).", "title": "" }, { "docid": "c37945f894ce4f8a142dd1b4e1255443", "text": "A non-invasive glucose measurement system based on the method of metabolic heat conformation (MHC) is presented in this paper. This system consists of three temperature sensors, two humidity sensors, an infrared sensor and an optical measurement device. The glucose level can be deduced from the quantity of heat dissipation, the blood flow rate of local tissue and the degree of blood oxygen saturation. The methodology of the data processing and the measurement error are also analyzed. The system is applied in a primary clinical test. Compared with the results of a commercial automated chemistry analyzer, the correlation coefficient of the collected data from the system is 0.856. Results show that the correlation coefficient improves when the factor of heat dissipated by evaporation of the skin is added in. A non-invasive method of measuring the blood flow rate of local tissue by heat transmission between the skin and a contacted conductor is also introduced. Theoretical derivation and numerical simulation are completed as well. The so-called normalized difference mean (NDM) is chosen to express the quantity of the blood flow rate. The correlation coefficient between the blood flow rates by this method and the results of a Doppler blood flow meter is equal to 0.914.", "title": "" }, { "docid": "e15e5896c21018de65653b3c96640ef5", "text": "This paper presents a single-ended Class-E power amplifier for wireless power transfer systems. The power amplifier is designed with a low-cost power MOSFET and a high-Q inductor. It adopts a second harmonic filter in the output matching network. The proposed Class-E power amplifier has a low second harmonic level thanks to the second harmonic filter. Also, we designed an input driver with a single supply voltage for driving the Class-E power amplifier. The implemented Class-E power amplifier delivers an output power of 40.8 dBm and a high efficiency of 90.3% for the 6.78 MHz input signal. Index Terms — Class-E power amplifier, High efficiency amplifier, wireless power transfer, harmonic filter", "title": "" }, { "docid": "06fca2fd3cdaab1029d447f0e0823184", "text": "The purpose of the present study was to experimentally assess the effect of cognitive strategies of association and dissociation while running on central nervous activation. A total of 30 long-distance runners volunteered for the study. The study protocol consisted of three sessions (scheduled on three different days): (1) a maximal incremental treadmill test, (2) an associative task session, and (3) a dissociative task session. The order of sessions 2 and 3 was counterbalanced. During sessions 2 and 3, participants performed a 55 min treadmill run at moderate intensity. Both associative and dissociative task responses were monitored and recorded in real time through dynamic measurement tools. Consequently, it was possible to have objective control of the attentional.
Results showed a positive session (exercise+attentional task) effect for central nervous activation. The benefits of aerobic exercise at moderate intensity for the performance of self-regulation cognitive tasks are highlighted. The methodology used is proposed as a valid and dynamic option to study cognitions while running in order to overcome the retrospective approach.", "title": "" }, { "docid": "fe318971645b171929188b091425a8ac", "text": "Metal interconnections are expected to become the limiting factor for the performance of electronic systems as transistors continue to shrink in size. Replacing them by optical interconnections, at different levels ranging from rack-to-rack down to chip-to-chip and intra-chip interconnections, could provide the low power dissipation, low latencies and high bandwidths that are needed. The implementation of optical interconnections relies on the development of micro-optical devices that are integrated with the microelectronics on chips. Recent demonstrations of silicon low-loss waveguides, light emitters, amplifiers and lasers approach this goal, but a small silicon electro-optic modulator with a size small enough for chip-scale integration has not yet been demonstrated. Here we experimentally demonstrate a high-speed electro-optical modulator in compact silicon structures. The modulator is based on a resonant light-confining structure that enhances the sensitivity of light to small changes in refractive index of the silicon and also enables high-speed operation. The modulator is 12 micrometres in diameter, three orders of magnitude smaller than previously demonstrated. Electro-optic modulators are one of the most critical components in optoelectronic integration, and decreasing their size may enable novel chip architectures.", "title": "" }, { "docid": "e38f8080cf1ad8db5fbe186bd7b318f5", "text": "This paper reports the performance of shallow word-level convolutional neural networks (CNN), our earlier work (2015) [3, 4], on the eight datasets with relatively large training data that were used for testing the very deep character-level CNN in Conneau et al. (2016) [1]. Our findings are as follows. The shallow word-level CNNs achieve better error rates than the error rates reported in [1], though the results should be interpreted with some consideration due to the unique pre-processing of [1]. The shallow word-level CNN uses more parameters and therefore requires more storage than the deep character-level CNN; however, the shallow word-level CNN computes much faster.", "title": "" }, { "docid": "b07ae3888b52faa598893bbfbf04eae2", "text": "This paper presents a compliant locomotion framework for torque-controlled humanoids using model-based whole-body control. In order to stabilize the centroidal dynamics during locomotion, we compute linear momentum rate of change objectives using a novel time-varying controller for the Divergent Component of Motion (DCM). Task-space objectives, including the desired momentum rate of change, are tracked using an efficient quadratic program formulation that computes optimal joint torque setpoints given frictional contact constraints and joint position / torque limits. In order to validate the effectiveness of the proposed approach, we demonstrate push recovery and compliant walking using THOR, a 34 DOF humanoid with series elastic actuation.
We discuss details leading to the successful implementation of optimization-based whole-body control on our hardware platform, including the design of a “simple” joint impedance controller that introduces inner-loop velocity feedback into the actuator force controller.", "title": "" }, { "docid": "5d527ad4493860a8d96283a5c58c3979", "text": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem-finding a vector x from y, A, where y = |ATx| and |z| denotes a vector of element-wise magnitudes of z-under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.", "title": "" } ]
scidocsrr
db93dfb7cc18d8256679930d7c972511
CNN architectures for large-scale audio classification
[ { "docid": "9787d99954114de7ddd5a58c18176380", "text": "This paper presents a system for acoustic event detection in recordings from real life environments. The events are modeled using a network of hidden Markov models; their size and topology is chosen based on a study of isolated events recognition. We also studied the effect of ambient background noise on event classification performance. On real life recordings, we tested recognition of isolated sound events and event detection. For event detection, the system performs recognition and temporal positioning of a sequence of events. An accuracy of 24% was obtained in classifying isolated sound events into 61 classes. This corresponds to the accuracy of classifying between 61 events when mixed with ambient background noise at 0dB signal-to-noise ratio. In event detection, the system is capable of recognizing almost one third of the events, and the temporal positioning of the events is not correct for 84% of the time.", "title": "" }, { "docid": "afee419227629f8044b5eb0addd65ce3", "text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.", "title": "" } ]
[ { "docid": "e9e7cb42ed686ace9e9785fafd3c72f8", "text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).", "title": "" }, { "docid": "45a92ab90fabd875a50229921e99dfac", "text": "This paper describes an empirical study of the problems encountered by 32 blind users on the Web. Task-based user evaluations were undertaken on 16 websites, yielding 1383 instances of user problems. The results showed that only 50.4% of the problems encountered by users were covered by Success Criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). For user problems that were covered by WCAG 2.0, 16.7% of websites implemented techniques recommended in WCAG 2.0 but the techniques did not solve the problems. These results show that few developers are implementing the current version of WCAG, and even when the guidelines are implemented on websites there is little indication that people with disabilities will encounter fewer problems. The paper closes by discussing the implications of this study for future research and practice. In particular, it discusses the need to move away from a problem-based approach towards a design principle approach for web accessibility.", "title": "" }, { "docid": "45252c6ffe946bf0f9f1984f60ffada6", "text": "Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In this work we reparameterize discrete variational auto-encoders using the Gumbel-Max perturbation model that represents the Gibbs distribution using the arg max of randomly perturbed encoder. We subsequently apply the direct loss minimization technique to propagate gradients through the reparameterized arg max. The resulting gradient is estimated by the difference of the encoder gradients that are evaluated in two arg max predictions.", "title": "" }, { "docid": "74141327edf56eb5a198f446d12998a0", "text": "Intramuscular myxomas of the hand are rare entities. Primarily found in the myocardium, these lesions also affect the bone and soft tissues in other parts of the body. This article describes a case of hypothenar muscles myxoma treated with local surgical excision after frozen section biopsy with tumor-free margins. Radiographic images of the axial and appendicular skeleton were negative for fibrous dysplasia, and endocrine studies were within normal limits. The 8-year follow-up period has been uneventful, with no complications. The patient is currently recurrence free, with normal intrinsic hand function.", "title": "" }, { "docid": "2f138f030565d85e4dcd9f90585aecb0", "text": "One of the central questions in neuroscience is how particular tasks, or computations, are implemented by neural networks to generate behavior. The prevailing view has been that information processing in neural networks results primarily from the properties of synapses and the connectivity of neurons within the network, with the intrinsic excitability of single neurons playing a lesser role. 
As a consequence, the contribution of single neurons to computation in the brain has long been underestimated. Here we review recent work showing that neuronal dendrites exhibit a range of linear and nonlinear mechanisms that allow them to implement elementary computations. We discuss why these dendritic properties may be essential for the computations performed by the neuron and the network and provide theoretical and experimental examples to support this view.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "33cd162dc2c0132dbd4153775a569c5d", "text": "The question whether preemptive systems are better than non-preemptive systems has been debated for a long time, but only partial answers have been provided in the real-time literature and still some issues remain open. In fact, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. In particular, limiting preemptions allows increasing program locality, making timing analysis more predictable with respect to the fully preemptive case. In this paper, we integrate the features of both preemptive and non-preemptive scheduling by considering that each task can switch to non-preemptive mode, at any time, for a bounded interval. Three methods (with different complexity and performance) are presented to calculate the longest non-preemptive interval that can be executed by each task, under fixed priorities, without degrading the schedulability of the task set, with respect to the fully preemptive case. The methods are also compared by simulations to evaluate their effectiveness in reducing the number of preemptions.", "title": "" }, { "docid": "cc1ce5471be55747faaa14e28e6eb814", "text": "A meta-analysis was performed to quantify the association between antisocial behavior (ASB) and performance on executive functioning (EF) measures. The meta-analysis expanded on Morgan and Lilienfeld's (2000) meta-analysis of the same topic by including studies published between 1997 and 2008 and by examining a wider range of EF measures. A total of 42 studies (2,595 participants) were included in the present meta-analysis. Overall, the mean effect size indicated that antisocial groups performed 0.47 standard deviations worse on EF measures compared to control groups. This effect size was in the medium range, compared to the medium to large 0.62 average weighted mean effect size produced by Morgan and Lilienfeld. There was significant variation in calculated effect sizes across studies, indicating that the overall mean effect size was not representative of the association between ASB and EF. Effect size magnitude varied according to ASB groups and measures of EF. Cognitive impairments in ASB were not specific to EF.
Other methodological issues in the research literature and implications of the meta-analysis results are discussed and directions for future research are proposed.", "title": "" }, { "docid": "fcf46a98f9e77c83e4946bc75fb97849", "text": "Recent work on sequence to sequence translation using Recurrent Neural Networks (RNNs) based on Long Short Term Memory (LSTM) architectures has shown great potential for learning useful representations of sequential data. A one-to-many encoder-decoder(s) scheme allows for a single encoder to provide representations serving multiple purposes. In our case, we present an LSTM encoder network able to produce representations used by two decoders: one that reconstructs, and one that classifies if the training sequence has an associated label. This allows the network to learn representations that are useful for both discriminative and reconstructive tasks at the same time. This paradigm is well suited for semi-supervised learning with sequences and we test our proposed approach on an action recognition task using motion capture (MOCAP) sequences. We find that semi-supervised feature learning can improve state-of-the-art movement classification accuracy on the HDM05 action dataset. Further, we find that even when using only labeled data and a primarily discriminative objective the addition of a reconstructive decoder can serve as a form of regularization that reduces over-fitting and improves test set accuracy.", "title": "" }, { "docid": "a458f7a0aabee005db091e6b527032b9", "text": "Formal verification has seen much success in several domains of hardware and software design. For example, in hardware verification there has been much work in the verification of microprocessors (e.g. [1]) and memory systems (e.g. [2]). Similarly, software verification has seen success in device-drivers (e.g. [3]) and concurrent software (e.g. [4]). The area of network verification, which consists of both hardware and software components, has received relatively less attention. Traditionally, the focus in this domain has been on performance and security, with less emphasis on functional correctness. However, increasing complexity is resulting in increasing functional failures and thus prompting interest in verification of key correctness properties. This paper reviews the formal verification techniques that have been used here thus far, with the goal of understanding the characteristics of the problem domain that are helpful for each of the techniques, as well as those that pose specific challenges. Finally, it highlights some interesting research challenges that need to be addressed in this important emerging domain.", "title": "" }, { "docid": "55ea00ff6c707aed1342938784ac00f8", "text": "The i.Drive Lab has developed inter-disciplinary methodology for the analysis and modelling of behavioral and physiological responses related to the interaction between driver, vehicle, infrastructure, and virtual environment. The present research outlines the development of a validation study for the combination of virtual and real-life research methodologies. i.Drive driving simulator was set up to replicate the data acquisition of environmental and physiological information coming from an equipped i.Drive electric vehicle with same sensors. i.Drive tests are focused on the identification of driver's affective states that are able to define recurring situations and psychophysical conditions that are relevant for road-safety and drivers' comfort. 
Results show that it is possible to combine different research paradigms to collect low-level vehicle control behavior and higher-level cognitive measures, in order to develop data collection and elaboration for future mobility challenges.", "title": "" }, { "docid": "6c7bf63f9394bf5432f67b5e554743ae", "text": "A team from APL has been using model-based systems engineering (MBSE) methods within a conceptual modeling process to support and unify activities related to system-of-systems architecture development; modeling, simulation, and analysis efforts; and system capability trade studies. These techniques have been applied to support analysis of complex systems, particularly in the net-centric operations and warfare domain, which has proven particularly challenging to the modeling, simulation, and analysis community because of its complexity, information richness, and broad scope. In particular, the APL team has used MBSE techniques to provide structured models of complex systems incorporating input from multiple diverse stakeholders. Model-based systems engineering techniques facilitate complex system design and documentation processes. A rigorous, iterative conceptual development process based on the Unified Modeling Language (UML) or the Systems Modeling Language (SysML) and consisting of domain modeling, use case development, and behavioral and structural modeling supports design, architecting, analysis, modeling and simulation, test and evaluation, and program management activities. The resulting model is more useful than traditional documentation because it represents structure, data, and functions, along with associated documentation, in a multidimensional, navigable format. Beyond benefits to project documentation and stakeholder communication, UML- and SysML-based models also support direct analysis methods, such as functional thread extraction. The APL team is continuing to develop analysis techniques using conceptual models to reduce the risk of design and test errors, reduce costs, and improve the quality of analysis and supporting modeling and simulation activities in the development of complex systems.", "title": "" }, { "docid": "b6da971f13c1075ce1b4aca303e7393f", "text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.", "title": "" }, { "docid": "b41c0a4e2a312d74d9a244e01fc76d66", "text": "There is a growing interest in studying the adoption of m-payments but literature on the subject is still in its infancy and no empirical research relating to this has been conducted in the context of the UK to date. 
The aim of this study is to unveil the current situation in m-payment adoption research and provide future research direction through the development of a research model for the examination of factors affecting m-payment adoption in the UK context. Following an extensive search of the literature, this study finds that 186 relationships between independent and dependent variables have been analysed by 32 existing empirical m-payment and m-banking adoption studies. From analysis of these relationships the most significant factors found to influence adoption are uncovered and an extension of UTAUT2 with the addition of perceived risk and trust is proposed to increase the applicability of UTAUT2 to the m-payment context.", "title": "" }, { "docid": "1c90adf8ec68ff52e777b2041f8bf4c4", "text": "In many situations we have some measurement of confidence on “positiveness” for a binary label. The “positiveness” is a continuous value whose range is a bounded interval. It quantifies the affiliation of each training data to the positive class. We propose a novel learning algorithm called expectation loss SVM (eSVM) that is devoted to the problems where only the “positiveness” instead of a binary label of each training sample is available. Our e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision where the exact positiveness value of each training example is unobserved. In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. the pixel-level annotations are available) and the weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. Besides, we further validate this method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves the state-of-the-art object detection performance on PASCAL VOC 2007 dataset.", "title": "" }, { "docid": "c1aa687c4a48cfbe037fe87ed4062dab", "text": "This paper deals with the modelling and control of a single sided linear switched reluctance actuator. This study provide a presentation of modelling and proposes a study on open and closed loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on PID regulator is employed to upgrade the dynamic behavior of the motor. The simulation results in closed loop show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for sliding door application.", "title": "" }, { "docid": "ef785a3eadaa01a7b45d978f63583513", "text": "This paper presents a laparoscopic grasping tool for minimally invasive surgery with the capability of multiaxis force sensing. The tool is able to sense three-axis Cartesian manipulation force and a single-axis grasping force. The forces are measured by a wrist force sensor located at the distal end of the tool, and two torque sensors at the tool base, respectively. We propose an innovative design of a miniature force sensor achieving structural simplicity and potential cost effectiveness. 
A prototype is manufactured and experiments are conducted in a simulated surgical environment by using an open platform for surgical robot research, called Raven-II.", "title": "" }, { "docid": "d569902303b93274baf89527e666adc0", "text": "We present a novel sparse representation based approach for the restoration of clipped audio signals. In the proposed approach, the clipped signal is decomposed into overlapping frames and the declipping problem is formulated as an inverse problem, per audio frame. This problem is further solved by a constrained matching pursuit algorithm, that exploits the sign pattern of the clipped samples and their maximal absolute value. Performance evaluation with a collection of music and speech signals demonstrate superior results compared to existing algorithms, over a wide range of clipping levels.", "title": "" }, { "docid": "23e6d97e1b7b224daf72efc254939d0c", "text": "In this study, the effects of ploidy level and culture medium were studied on the production of tropane alkaloids. We have successfully produced stable tetraploid hairy root lines of Hyoscyamus muticus and their ploidy stability was confirmed 30 months after transformation. Tetraploidy affected the growth rate and alkaloid accumulation in plants and transformed root cultures of Egyptian henbane. Although tetraploid plants could produce 200% higher scopolamine than their diploid counterparts, this result was not observed for corresponding induced hairy root cultures. Culture conditions did not only play an important role for biomass production, but also significantly affected tropane alkaloid accumulation in hairy root cultures. In spite of its lower biomass production, tetraploid clone could produce more scopolamine than the diploid counterpart under similar growth conditions. The highest yields of scopolamine (13.87 mg l−1) and hyoscyamine (107.7 mg 1−1) were obtained when diploid clones were grown on medium consisting of either Murashige and Skoog with 60 g/l sucrose or Gamborg’s B5 with 40 g/l sucrose, respectively. Although the hyoscyamine is the main alkaloid in the H. muticus plants, manipulation of ploidy level and culture conditions successfully changed the scopolamine/hyoscyamine ratio towards scopolamine. The fact that hyoscyamine is converted to scopolamine is very important due to the higher market value of scopolamine.", "title": "" }, { "docid": "d9b7636d566d82f9714272f1c9f83f2f", "text": "OBJECTIVE\nFew studies have investigated the association between religion and suicide either in terms of Durkheim's social integration hypothesis or the hypothesis of the regulative benefits of religion. The relationship between religion and suicide attempts has received even less attention.\n\n\nMETHOD\nDepressed inpatients (N=371) who reported belonging to one specific religion or described themselves as having no religious affiliation were compared in terms of their demographic and clinical characteristics.\n\n\nRESULTS\nReligiously unaffiliated subjects had significantly more lifetime suicide attempts and more first-degree relatives who committed suicide than subjects who endorsed a religious affiliation. Unaffiliated subjects were younger, less often married, less often had children, and had less contact with family members. Furthermore, subjects with no religious affiliation perceived fewer reasons for living, particularly fewer moral objections to suicide. 
In terms of clinical characteristics, religiously unaffiliated subjects had more lifetime impulsivity, aggression, and past substance use disorder. No differences in the level of subjective and objective depression, hopelessness, or stressful life events were found.\n\n\nCONCLUSIONS\nReligious affiliation is associated with less suicidal behavior in depressed inpatients. After other factors were controlled, it was found that greater moral objections to suicide and lower aggression level in religiously affiliated subjects may function as protective factors against suicide attempts. Further study about the influence of religious affiliation on aggressive behavior and how moral objections can reduce the probability of acting on suicidal thoughts may offer new therapeutic strategies in suicide prevention.", "title": "" } ]
scidocsrr
f0ee456f13048f1fe2a1314c18aa5e69
A Frequency-Reconfigurable Quasi-Yagi Dipole Antenna
[ { "docid": "6661cc34d65bae4b09d7c236d0f5400a", "text": "In this letter, we present a novel coplanar waveguide fed quasi-Yagi antenna with broad bandwidth. The uniqueness of this design is due to its simple feed selection and despite this, its achievable bandwidth. The 10 dB return loss bandwidth of the antenna is 44% covering X-band. The antenna is realized on a high dielectric constant substrate and is compatible with microstrip circuitry and active devices. The gain of the antenna is 7.4 dBi, the front-to-back ratio is 15 dB and the nominal efficiency of the radiator is 95%.", "title": "" } ]
[ { "docid": "d86969ab9471333c6eca4af5092b64b6", "text": "We investigate the problem of sequential linear prediction for real life big data applications. The second order algorithms, i.e., Newton-Raphson Methods, asymptotically achieve the performance of the ”best” possible linear predictor much faster compared to the first order algorithms, e.g., Online Gradient Descent. However, implementation of these methods is not usually feasible in big data applications because of the extremely high computational needs. To this end, we introduce a highly efficient implementation reducing the computational complexity of the second order methods from quadratic to linear scale. We do not rely on any statistical assumptions, hence, lose no information. We demonstrate the computational efficiency of our algorithm on a real life sequential big dataset.", "title": "" }, { "docid": "89652309022bc00c7fd76c4fe1c5d644", "text": "In first encounters people quickly form impressions of each other’s personality and interpersonal attitude. We conducted a study to investigate how this transfers to first encounters between humans and virtual agents. In the study, subjects’ avatars approached greeting agents in a virtual museum rendered in both first and third person perspective. Each agent exclusively exhibited nonverbal immediacy cues (smile, gaze and proximity) during the approach. Afterwards subjects judged its personality (extraversion) and interpersonal attitude (hostility/friendliness). We found that within only 12.5 seconds of interaction subjects formed impressions of the agents based on observed behavior. In particular, proximity had impact on judgments of extraversion whereas smile and gaze on friendliness. These results held for the different camera perspectives. Insights on how the interpretations might change according to the user’s own personality are also provided.", "title": "" }, { "docid": "f6592e6495527a8e8df9bede4e983e12", "text": "All Internet facing systems and applications carry security risks. Security professionals across the globe generally address these security risks by Vulnerability Assessment and Penetration Testing (VAPT). The VAPT is an offensive way of defending the cyber assets of an organization. It consists of two major parts, namely Vulnerability Assessment (VA) and Penetration Testing (PT). Vulnerability assessment, includes the use of various automated tools and manual testing techniques to determine the security posture of the target system. In this step all the breach points and loopholes are found. These breach points/loopholes if found by an attacker can lead to heavy data loss and fraudulent intrusion activities. In Penetration testing the tester simulates the activities of a malicious attacker who tries to exploit the vulnerabilities of the target system. In this step the identified set of vulnerabilities in VA is used as input vector. This process of VAPT helps in assessing the effectiveness of the security measures that are present on the target system. In this paper we have described the entire process of VAPT, along with all the methodologies, models and standards. A shortlisted set of efficient and popular open source/free tools which are useful in conducting VAPT and the required list of precautions is given. 
A case study of a VAPT test conducted on a bank system using the shortlisted tools is also discussed.", "title": "" }, { "docid": "acb3aaaf79ebc3fc65724e92e4d076aa", "text": "Lay dispositionism refers to lay people's tendency to use traits as the basic unit of analysis in social perception (L. Ross & R. E. Nisbett, 1991). Five studies explored the relation between the practices indicative of lay dispositionism and people's implicit theories about the nature of personal attributes. As predicted, compared with those who believed that personal attributes are malleable (incremental theorists), those who believed in fixed traits (entity theorists) used traits or trait-relevant information to make stronger future behavioral predictions (Studies 1 and 2) and made stronger trait inferences from behavior (Study 3). Moreover, the relation between implicit theories and lay dispositionism was found in both the United States (a more individualistic culture) and Hong Kong (a more collectivistic culture), suggesting this relation to be generalizable across cultures (Study 4). Finally, an experiment in which implicit theories were manipulated provided preliminary evidence for the possible causal role of implicit theories in lay dispositionism (Study 5).", "title": "" }, { "docid": "5dba3258382d9781287cdcb6b227153c", "text": "Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.", "title": "" }, { "docid": "8e6debae3b3d3394e87e671a14f8819e", "text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.", "title": "" }, { "docid": "2b3c9b9f92582af41fcde0186c9bd0f6", "text": "Person re-identification is a challenging task mainly due to factors such as background clutter, pose, illumination and camera point of view variations. 
These elements hinder the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. To improve the representation learning, usually local features from human body parts are extracted. However, the common practice for such a process has been based on bounding box part detection. In this paper, we propose to adopt human semantic parsing which, due to its pixel-level accuracy and capability of modeling arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing in person re-identification and not only considerably outperforms its counter baseline, but achieves state-of-the-art performance. We also show that, by employing a simple yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification, while operating solely on full image, can dramatically outperform current state-of-the-art. Our proposed methods improve state-of-the-art person re-identification on: Market-1501 [48] by ~17% in mAP and ~6% in rank-1, CUHK03 [24] by ~4% in rank-1 and DukeMTMC-reID [50] by ~24% in mAP and ~10% in rank-1.", "title": "" }, { "docid": "a433f47a3c7c06a409a8fc0d98e955be", "text": "The local-dimming backlight has recently been presented for use in LCD TVs. However, the image resolution is low, particularly at weak edges. In this work, a local-dimming backlight is developed to improve the image contrast and reduce power dissipation. The algorithm enhances low-level edge information to improve the perceived image resolution. Based on the algorithm, a 42-in backlight module with white light-emitting diode (LED) devices was driven by a local dimming control core. The block-wise register approach substantially reduced the number of required line-buffers and shortened the latency time. The measurements made in the laboratory indicate that the backlight system reduces power dissipation by an average of 48% and exhibits no visible distortion compared relative to the fixed backlighting system. The system was successfully demonstrated in a 42-in LCD TV, and the contrast ratio was greatly improved by a factor of 100.", "title": "" }, { "docid": "80c1f7e845e21513fc8eaf644b11bdc5", "text": "We describe survey results from a representative sample of 1,075 U. S. social network users who use Facebook as their primary network. Our results show a strong association between low engagement and privacy concern. Specifically, users who report concerns around sharing control, comprehension of sharing practices or general Facebook privacy concern, also report consistently less time spent as well as less (self-reported) posting, commenting and \"Like\"ing of content. The limited evidence of other significant differences between engaged users and others suggests that privacy-related concerns may be an important gate to engagement. Indeed, privacy concern and network size are the only malleable attributes that we find to have significant association with engagement. We manually categorize the privacy concerns finding that many are nonspecific and not associated with negative personal experiences. 
Finally, we identify some education and utility issues associated with low social network activity, suggesting avenues for increasing engagement amongst current users.", "title": "" }, { "docid": "37f55e03f4d1ff3b9311e537dc7122b5", "text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.", "title": "" }, { "docid": "0ef77e74b310e7bac2584a3e49d63ce1", "text": "We focus on named entity recognition (NER) for Chinese social media. With massive unlabeled text and quite limited labelled corpus, we propose a semisupervised learning model based on BLSTM neural network. To take advantage of traditional methods in NER such as CRF, we combine transition probability with deep learning in our model. To bridge the gap between label accuracy and F-score of NER, we construct a model which can be directly trained on F-score. When considering the instability of Fscore driven method and meaningful information provided by label accuracy, we propose an integrated method to train on both F-score and label accuracy. Our integrated model yields substantial improvement over previous state-of-the-art result.", "title": "" }, { "docid": "a059fc50eb0e4cab21b04a75221b3160", "text": "This paper presents the design of an X-band active antenna self-oscillating down-converter mixer in substrate integrated waveguide technology (SIW). Electromagnetic analysis is used to design a SIW cavity backed patch antenna with resonance at 9.9 GHz used as the receiving antenna, and subsequently harmonic balance analysis combined with optimization techniques are used to synthesize a self-oscillating mixer with oscillating frequency of 6.525 GHz. The conversion gain is optimized for the mixing product involving the second harmonic of the oscillator and the RF input signal, generating an IF frequency of 3.15 GHz to have conversion gain in at least 600 MHz bandwidth around the IF frequency. 
The active antenna circuit finds application in compact receiver front-end modules as well as active self-oscillating mixer arrays.", "title": "" }, { "docid": "aa30fc0f921509b1f978aeda1140ffc0", "text": "Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.", "title": "" }, { "docid": "2bd090c2604b94e24e8f9814549c4a95", "text": "Density estimation forms a critical component of many analytics tasks including outlier detection, visualization, and statistical testing. These tasks often seek to classify data into high and low-density regions of a probability distribution. Kernel Density Estimation (KDE) is a powerful technique for computing these densities, offering excellent statistical accuracy but quadratic total runtime. In this paper, we introduce a simple technique for improving the performance of using a KDE to classify points by their density (density classification). Our technique, thresholded kernel density classification (tKDC), applies threshold-based pruning to spatial index traversal to achieve asymptotic speedups over naïve KDE, while maintaining accuracy guarantees. Instead of exactly computing each point's exact density for use in classification, tKDC iteratively computes density bounds and short-circuits density computation as soon as bounds are either higher or lower than the target classification threshold. On a wide range of dataset sizes and dimensions, tKDC demonstrates empirical speedups of up to 1000x over alternatives.", "title": "" }, { "docid": "9978f33847a09c651ccce68c3b88287f", "text": "We propose a method for discovering the dependency relationships between the topics of documents shared in social networks using the latent social interactions, attempting to answer the question: given a seemingly new topic, from where does this topic evolve? In particular, we seek to discover the pair-wise probabilistic dependency in topics of documents which associate social actors from a latent social network, where these documents are being shared. By viewing the evolution of topics as a Markov chain, we estimate a Markov transition matrix of topics by leveraging social interactions and topic semantics. Metastable states in a Markov chain are applied to the clustering of topics. Applied to the CiteSeer dataset, a collection of documents in academia, we show the trends of research topics, how research topics are related and which are stable. We also show how certain social actors, authors, impact these topics and propose new ways for evaluating author impact.", "title": "" }, { "docid": "38c922ff8763d1a03b8beb37cc7bd4bb", "text": "As the number of devices connected to the Internet has been exponentially increasing, the degree of threats to those devices and networks has been also increasing. Various network scanning tools, which use fingerprinting techniques, have been developed to make the devices and networks secure by providing the information on its status. 
However, the tools may be used for malicious purposes. Using network scanning tools, attackers can not only obtain the information of devices such as the name of OS, version, and sessions but also find its vulnerabilities which can be used for further cyber-attacks. In this paper, we compare and analyze the performances of widely used network scanning tools such as Nmap and Nessus. The existing researches on the network scanning tools analyzed a specific scanning tools and they assumed there are only small number of network devices. In this paper, we compare and analyze the performances of several tools in practical network environments with the number of devices more than 40. The results of this paper provide the direction to prevent possible attacks when they are utilized as attack tools as well as the practical understanding of the threats by network scanning tools and fingerprinting techniques.", "title": "" }, { "docid": "7ac1249e901e558443bc8751b11c9427", "text": "Despite the growing popularity of leasing as an alternative to purchasing a vehicle, there is very little research on how consumers choose among various leasing and financing (namely buying) contracts and how this choice affects the brand they choose. In this paper therefore, we develop a structural model of the consumer's choice of automobile brand and the related decision of whether to lease or buy it. We conceptualize the leasing and buying of the same vehicle as two different goods, each with its own costs and benefits. The differences between the two types of contracts are summarized along three dimensions: (i) the \"net price\" or financial cost of the contract, (ii) maintenance and repair costs and (iii) operating costs, which depend on the consumer's driving behavior. Based on consumer utility maximization, we derive a nested logit of brand and contract choice that captures the tradeoffs among all three costs. The model is estimated on a dataset of new car purchases from the near luxury segment of the automobile market. The optimal choice of brand and contract is determined by the consumer's implicit interest rate and the number of miles she expects to drive, both of which are estimated as parameters of the model. The empirical results yield several interesting findings. We find that (i) cars that deteriorate faster are more likely to be leased than bought, (ii) the estimated implicit interest rate is higher than the market rate, which implies that consumers do not make efficient tradeoffs between the net price and operating costs and may often incorrectly choose to lease and (iii) the estimate of the annual expected mileage indicates that most consumers would incur substantial penalties if they lease, which explains why buying or financing continues to be more popular than leasing. This research also provides several interesting managerial insights into the effectiveness of various promotional instruments. We examine this issue by looking at (i) sales response to a promotion, (ii) the ability of the promotion to draw sales from other brands and (iii) its overall profitability. We find, for example that although the sales response to a cash rebate on a lease is greater than an equivalent increase in the residual value, under certain conditions and for certain brands, a residual value promotion yields higher profits. 
These findings are of particular value to manufacturers in the prevailing competitive environment, which is marked by the extensive use of large rebates and 0% APR offers.", "title": ""}, { "docid": "ac4683be3ffc119f6eb64c4f295ffe2d", "text": "As data rates in electrical links rise to 56Gb/s, standards are gravitating towards PAM-4 modulation to achieve higher spectral efficiency. Such approaches are not without drawbacks, as PAM-4 signaling results in reduced vertical margins as compared to NRZ. This makes data recovery more susceptible to residual, or uncompensated, intersymbol interference (ISI) when the PAM-4 waveform is sampled by the receiver. To overcome this, existing standards such as OIF CEI 56Gb/s very short reach (VSR) require forward error correction to meet the target link BER of 1E-15. This comes at the expense of higher latency, which is undesirable for chip-to-chip VSR links in compute applications. Therefore, different channel equalization strategies should be considered for PAM-4 electrical links. Employing ½-UI (T/2) tap delays in an FFE extends the filter bandwidth as compared to baud- or T-spaced taps [1], resulting in improved timing margins and lower residual ISI for 56Gb/s PAM-4 data sent across VSR channels. While T/2-spaced FFEs have been reported in optical receivers for dispersion compensation [2], the analog delay techniques used are not conducive to designing dense I/O and cannot support a wide range of data rates. This work demonstrates a 56Gb/s PAM-4 transmitter with a T/2-spaced FFE using high-speed clocking techniques to produce well-controlled tap delays that are data-rate agile. The transmitter also supports T-spaced tap delays, ensuring compatibility with existing standards.", "title": "" }, { "docid": "73e24b2743efb3eead62cb1d8cc4c74d", "text": "Enterprise Resource Planning (ERP) systems have been implemented globally and their implementation has been extensively studied during the past decade. However, many organizations are still struggling to derive benefits from the implemented ERP systems. Therefore, ensuring post-implementation success has become the focus of the current ERP research. This study develops an integrative model to explain the post-implementation success of ERP, based on the Technology–Organization–Environment (TOE) theory. We posit that ERP implementation quality (the technological aspect) consisting of project management and system configuration, organizational readiness (the organizational aspect) consisting of leadership involvement and organizational fit, and external support (the environmental aspect) will positively affect the post-implementation success of ERP. An empirical test was conducted in the Chinese retail industry. The results show that both ERP implementation quality and organizational readiness significantly affect post-implementation success, whereas external support does not. The theoretical and practical implications of the findings are discussed. © 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "be5b0dd659434e77ce47034a51fd2767", "text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. 
Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks", "title": "" } ]
scidocsrr
8c0197c9f805f0af926f9bf8d55d04fb
Efficacy of 2LPAPI ® , a Micro-Immunotherapy Drug, in Patients with High-Risk Papillomavirus Genital Infection
[ { "docid": "a52673140d86780db6c73787e5f53139", "text": "Human papillomavirus (HPV) is the most important etiological factor for cervical cancer. A recent study demonstrated that more than 20 HPV types were thought to be oncogenic for uterine cervical cancer. Notably, more than one-half of women show cervical HPV infections soon after their sexual debut, and about 90 % of such infections are cleared within 3 years. Immunity against HPV might be important for elimination of the virus. The innate immune responses involving macrophages, natural killer cells, and natural killer T cells may play a role in the first line of defense against HPV infection. In the second line of defense, adaptive immunity via cytotoxic T lymphocytes (CTLs) targeting HPV16 E2 and E6 proteins appears to eliminate cells infected with HPV16. However, HPV can evade host immune responses. First, HPV does not kill host cells during viral replication and therefore neither presents viral antigen nor induces inflammation. HPV16 E6 and E7 proteins downregulate the expression of type-1 interferons (IFNs) in host cells. The lack of co-stimulatory signals by inflammatory cytokines including IFNs during antigen recognition may induce immune tolerance rather than the appropriate responses. Moreover, HPV16 E5 protein downregulates the expression of HLA-class 1, and it facilitates evasion of CTL attack. These mechanisms of immune evasion may eventually support the establishment of persistent HPV infection, leading to the induction of cervical cancer. Considering such immunological events, prophylactic HPV16 and 18 vaccine appears to be the best way to prevent cervical cancer in women who are immunized in adolescence.", "title": "" } ]
[ { "docid": "f3641cacf284444ac45f0e085c7214bf", "text": "Recognition that the entire central nervous system (CNS) is highly plastic, and that it changes continually throughout life, is a relatively new development. Until very recently, neuroscience has been dominated by the belief that the nervous system is hardwired and changes at only a few selected sites and by only a few mechanisms. Thus, it is particularly remarkable that Sir John Eccles, almost from the start of his long career nearly 80 years ago, focused repeatedly and productively on plasticity of many different kinds and in many different locations. He began with muscles, exploring their developmental plasticity and the functional effects of the level of motor unit activity and of cross-reinnervation. He moved into the spinal cord to study the effects of axotomy on motoneuron properties and the immediate and persistent functional effects of repetitive afferent stimulation. In work that combined these two areas, Eccles explored the influences of motoneurons and their muscle fibers on one another. He studied extensively simple spinal reflexes, especially stretch reflexes, exploring plasticity in these reflex pathways during development and in response to experimental manipulations of activity and innervation. In subsequent decades, Eccles focused on plasticity at central synapses in hippocampus, cerebellum, and neocortex. His endeavors extended from the plasticity associated with CNS lesions to the mechanisms responsible for the most complex and as yet mysterious products of neuronal plasticity, the substrates underlying learning and memory. At multiple levels, Eccles' work anticipated and helped shape present-day hypotheses and experiments. He provided novel observations that introduced new problems, and he produced insights that continue to be the foundation of ongoing basic and clinical research. This article reviews Eccles' experimental and theoretical contributions and their relationships to current endeavors and concepts. It emphasizes aspects of his contributions that are less well known at present and yet are directly relevant to contemporary issues.", "title": "" }, { "docid": "320f9f3c7a94e8a72ba0695fc526dc41", "text": "PURPOSE\nTo estimate the prevalence of secondhand smoke (SHS) exposure among never-smoking adolescents and identify key factors associated with such exposure.\n\n\nMETHODS\nData were obtained from nationally representative Global Youth Tobacco Surveys conducted in 168 countries during 1999-2008. SHS exposure was ascertained in relation to the location-exposure inside home, outside home, and both inside and outside home, respectively. Independent variables included parental and/or peer smoking, knowledge about smoke harm, attitudes toward smoking ban, age, sex, and World Health Organization region. Simple and multiple logistic regression analyses were conducted.\n\n\nRESULTS\nOf 356,414 never-smoking adolescents included in the study, 30.4%, 44.2%, and 23.2% were exposed to SHS inside home, outside home, and both, respectively. Parental smoking, peer smoking, knowledge about smoke harm, and positive attitudes toward smoke ban were significantly associated with increased odds of SHS exposure. Approximately 14% of adolescents had both smoking parents and peers. 
Compared with never-smoking adolescents who did not have both smoking parents and peers, those who had both smoking parents and peers had 19 (adjusted odds ratio [aOR], 19.0; 95% confidence interval [CI], 16.86-21.41), eight (aOR, 7.71; 95% CI, 7.05-8.43), and 23 times (aOR, 23.16; 95% CI, 20.74-25.87) higher odds of exposure to SHS inside, outside, and both inside and outside home, respectively.\n\n\nCONCLUSIONS\nApproximately one third and two fifths of never-smoking adolescents were exposed to SHS inside or outside home, and smoking parents and/or peers are the key factors. Study findings highlight the need to develop and implement comprehensive smoke-free policies consistent with the World Health Organization Framework Convention on Tobacco Control.", "title": "" }, { "docid": "a90be1b83ad475a50dcb82ae616d4f23", "text": "Historically, lower eyelid blepharoplasty has been a challenging surgery fraught with many potential complications, ranging from ocular irritation to full-blown lower eyelid malposition and a poor cosmetic outcome. The prevention of these complications requires a detailed knowledge of lower eyelid anatomy and a focused examination of the factors that may predispose to poor outcome. A thorough preoperative evaluation of lower eyelid skin, muscle, tone, laxity, fat prominence, tear trough deformity, and eyelid vector are critical for surgical planning. When these factors are analyzed appropriately, a natural and aesthetically pleasing outcome is more likely to occur. I have found that performing lower eyelid blepharoplasty in a bilamellar fashion (transconjunctivally to address fat prominence and transcutaneously for skin excision only), along with integrating contemporary concepts of volume preservation/augmentation, canthal eyelid support, and eyelid vector analysis, has been an integral part of successful surgery. In addition, this approach has significantly increased my confidence in attaining more consistent and reproducible results.", "title": "" }, { "docid": "8069410a94a5039305b45fbd7c8ec809", "text": "Deep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results in practice, a large number of trainable parameters are often required. Here, we introduce a network architecture based on using dilated convolutions to capture features at different image scales and densely connecting all feature maps with each other. The resulting architecture is able to achieve accurate results with relatively few parameters and consists of a single set of operations, making it easier to implement, train, and apply in practice, and automatically adapts to different problems. We compare results of the proposed network architecture with popular existing architectures for several segmentation problems, showing that the proposed architecture is able to achieve accurate results with fewer parameters, with a reduced risk of overfitting the training data.", "title": "" }, { "docid": "f0f47ce0fc361740aedf17d6d2061e03", "text": "In supervised learning scenarios, feature selection has been studied widely in the literature. Selecting features in unsupervised learning scenarios is a much harder problem, due to the absence of class label that would guide the search for relevant information. 
And, almost all of previous unsupervised feature selection methods are “wrapper” techniques that require a learning algorithm to evaluate the candidate feature subsets. In this paper, we propose a “filter” method for feature selection which is independent of any learning algorithm. Our method can be performed in either supervised or unsupervised fashion. The proposed method is based on the observation that, in many real world classification problems, data from the same class are often close to each other. The importance of a feature is evaluated by its power of locality preserving, or, Laplacian Score. We compare our method with data variance (unsupervised) and Fisher score (supervised) on two data sets. Experimental results demonstrate the effectiveness and efficiency of our algorithm.", "title": "" }, { "docid": "06a241bc0483a910a3fecef8e7e7883a", "text": "Linear programming duality yields efficient algorithms for solving inverse linear programs. We show that special classes of conic programs admit a similar duality and, as a consequence, establish that the corresponding inverse programs are efficiently solvable. We discuss applications of inverse conic programming in portfolio optimization and utility function identification. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8b64f95e3677c3d3e693210f64cf193a", "text": "Identifying terms in specialized corpora is a central task in terminological work (compilation of domain-specific dictionaries), but is labour-intensive, especially when the corpora are voluminous which is often the case nowadays. For the past decade, terminologists and specialized lexicographers have been able to rely on term-extraction tools to assist them in the selection of terms. However, most term-extractors focus on the identification of complex terms. Although complex terms (cellular telephone) are central to terminology processing, retrieval of uniterms (telephone) is still a major challenge. This paper evaluates the usefulness of a corpora comparison approach in order to find pinpoint corpus specific words in order to identify uniterms in the field of telecommunications.", "title": "" }, { "docid": "f86fdc743f665e5f6fe13696f4502de4", "text": "The Web is rapidly transforming from a pure document collection to the largest connected public data space. Semantic annotations of web pages make it notably easier to extract and reuse data and are increasingly used by both search engines and social media sites to provide better search experiences through rich snippets, faceted search, task completion, etc. In our work, we study the novel problem of crawling structured data embedded inside HTML pages. We describe Anthelion, the first focused crawler addressing this task. We propose new methods of focused crawling specifically designed for collecting data-rich pages with greater efficiency. In particular, we propose a novel combination of online learning and bandit-based explore/exploit approaches to predict data-rich web pages based on the context of the page as well as using feedback from the extraction of metadata from previously seen pages. 
We show that these techniques significantly outperform state-of-the-art approaches for focused crawling, measured as the ratio of relevant pages and non-relevant pages collected within a given budget.", "title": "" }, { "docid": "3e8adf9643ff91ae1ed846d9fc6be72e", "text": "Durable responses and encouraging survival have been demonstrated with immune checkpoint inhibitors in small-cell lung cancer (SCLC), but predictive markers are unknown. We used whole exome sequencing to evaluate the impact of tumor mutational burden on efficacy of nivolumab monotherapy or combined with ipilimumab in patients with SCLC from the nonrandomized or randomized cohorts of CheckMate 032. Patients received nivolumab (3 mg/kg every 2 weeks) or nivolumab plus ipilimumab (1 mg/kg plus 3 mg/kg every 3 weeks for four cycles, followed by nivolumab 3 mg/kg every 2 weeks). Efficacy of nivolumab ± ipilimumab was enhanced in patients with high tumor mutational burden. Nivolumab plus ipilimumab appeared to provide a greater clinical benefit than nivolumab monotherapy in the high tumor mutational burden tertile.", "title": "" }, { "docid": "c8869d9a481a3d7100397788d4ced1fb", "text": "E-commerce (electronic commerce or EC) is the buying and selling of goods and services, or the transmitting of funds or data online. E-commerce platforms come in many kinds, with global players such as Amazon, Airbnb, Alibaba, eBay, JD.com and platforms targeting specific markets such as Bol.com and Booking.com. Information retrieval has a natural role to play in e-commerce, especially in connecting people to goods and services. Information discovery in e-commerce concerns different types of search (exploratory search vs. lookup tasks), recommender systems, and natural language processing in e-commerce portals. Recently, the explosive popularity of e-commerce sites has made research on information discovery in e-commerce more important and more popular. There is increased attention for e-commerce information discovery methods in the community as witnessed by an increase in publications and dedicated workshops in this space. Methods for information discovery in e-commerce largely focus on improving the performance of e-commerce search and recommender systems, on enriching and using knowledge graphs to support e-commerce, and on developing innovative question-answering and bot-based solutions that help to connect people to goods and services. Below we describe why we believe that the time is right for an introductory tutorial on information discovery in e-commerce, the objectives of the proposed tutorial, its relevance, as well as more practical details, such as the format, schedule and support materials.", "title": "" }, { "docid": "3a98dd611afcfd6d51c319bde3b84cc9", "text": "This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/3, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.", "title": "" }, { "docid": "0ba15705fcd12cb3efa17a6878c43606", "text": "Voice has become an increasingly popular User Interaction (UI) channel, mainly contributing to the current trend of wearables, smart vehicles, and home automation systems. 
Voice assistants such as Alexa, Siri, and Google Now, have become our everyday fixtures, especially when/where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. The open nature of the voice channel makes voice assistants difficult to secure, and hence exposed to various threats as demonstrated by security researchers. To defend against these threats, we present VAuth, the first system that provides continuous authentication for voice assistants. VAuth is designed to fit in widely-adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches it with the speech signal received by the voice assistant's microphone. VAuth guarantees the voice assistant to execute only the commands that originate from the voice of the owner. We have evaluated VAuth with 18 users and 30 voice commands and find it to achieve 97% detection accuracy and less than 0.1% false positive rate, regardless of VAuth's position on the body and the user's language, accent or mobility. VAuth successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also incurs low energy and latency overheads and is compatible with most voice assistants.", "title": "" }, { "docid": "4d87c091246b3cbb43444a59187efc94", "text": "A fully-integrated FMCW radar system for automotive applications operating at 77 GHz has been proposed. Utilizing a fractional- synthesizer as the FMCW generator, the transmitter linearly modulates the carrier frequency across a range of 700 MHz. The receiver together with an external baseband processor detects the distance and relative speed by conducting an FFT-based algorithm. Millimeter-wave PA and LNA are incorporated on chip, providing sufficient gain, bandwidth, and sensitivity. Fabricated in 65-nm CMOS technology, this prototype provides a maximum detectable distance of 106 meters for a mid-size car while consuming 243 mW from a 1.2-V supply.", "title": "" }, { "docid": "1a27a4575f59b6ba20e9250c0c77187d", "text": "Model-based anomaly detection in technical systems is an important application field of artificial intelligence. We consider discrete event systems, which is a system class to which a wide range of relevant technical systems belong and for which no comprehensive model-based anomaly detection approach exists so far. The original contributions of this paper are threefold: First, we identify the types of anomalies that occur in discrete event systems and we propose a tailored behavior model that captures all anomaly types, called probabilistic deterministic timed-transition automata (PDTTA). Second, we present a new algorithm to learn a PDTTA from sample observations of a system. Third, we describe an approach to detect anomalies based on a learned PDTTA. An empirical evaluation in a practical application, namely ATM fraud detection, shows promising results.", "title": "" }, { "docid": "a8a483db765f791a6bd27e066eee20b0", "text": "Autoerotic death by hanging or ligature is a method of autoeroticism well known by forensic pathologists. In order to analyze autoerotic deaths of nonclassic hanging or ligature type, this paper reviews all published cases of autoerotic deaths from 1954 to 2004, with the exclusion of homicide cases or cases in which the autoerotic activity was not solitary. These articles were obtained through a systematic Medline database search. 
A total of 408 cases of such deaths have been reported in 57 articles. For each case, the following characteristics are presented here: sex, age, race, method of autoerotic activity, cause of death, and location where the body was found. Autoerotic death practitioners were predominantly Caucasian males. Victims were aged from 9 to 77 years and were mainly found in various indoor locations. Most cases were asphyxia by hanging, ligature, plastic bags, chemical substances, or a mixture of these. Still, atypical methods of autoerotic activity leading to death accounted for about 10.3% of cases in the literature and are classified here into five broad categories: electrocution (3.7%), overdressing/body wrapping (1.5%), foreign body insertion (1.2%), atypical asphyxia method (2.9%), and miscellaneous (1.0%). All these atypical methods are further discussed individually.", "title": "" }, { "docid": "8c89db0cd8c5dc666d7d6b244d35326b", "text": "Cervical cancer, as the fourth most common cause of death from cancer among women, has no symptoms in the early stage. There are few methods to diagnose cervical cancer precisely at present. A support vector machine (SVM) approach is introduced in this paper for cervical cancer diagnosis. Two improved SVM methods, support vector machine-recursive feature elimination and support vector machine-principal component analysis (SVM-PCA), are further proposed to diagnose the malignant cancer samples. The cervical cancer data are represented by 32 risk factors and 4 target variables: Hinselmann, Schiller, Cytology, and Biopsy. All four targets have been diagnosed and classified by the three SVM-based approaches, respectively. Subsequently, we make the comparison among these three methods and compare our ranking result of risk factors with the ground truth. It is shown that the SVM-PCA method is superior to the others.", "title": "" }, { "docid": "eb7990a677cd3f96a439af6620331400", "text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.", "title": "" }, { "docid": "4ecd2e3d009351ee9e1328614784c4ba", "text": "The vast supply of different smartphone makes and models, along with their accompanying operating systems, increases the demand for an all-in-one development solution. Quite a few approaches to solving this problem have cropped up over the years, ranging from purely web-oriented solutions to something more akin to a native application. React Native and Progressive Web App development are two different approaches, both new and promising, on this spectrum.
This thesis evaluates these approaches in a standardized way using the ISO 25010 Product Quality Model to gain insight into these types of cross-platform development as well as how well such an evaluation works in this context. The results show that, while not a perfect fit, a standardized evaluation brings forward less obvious aspects of the development process and contributes a helpful structure to the evaluation process.", "title": "" }, { "docid": "feafd64c9f81b07f7f616d2e36e15e0c", "text": "Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job, and is defined by the three dimensions of exhaustion, cynicism, and inefficacy. The past 25 years of research has established the complexity of the construct, and places the individual stress experience within a larger organizational context of people's relation to their work. Recently, the work on burnout has expanded internationally and has led to new conceptual models. The focus on engagement, the positive antithesis of burnout, promises to yield new perspectives on interventions to alleviate burnout. The social focus of burnout, the solid research basis concerning the syndrome, and its specific ties to the work domain make a distinct and valuable contribution to people's health and well-being.", "title": "" }, { "docid": "e2009f56982f709671dcfe43048a8919", "text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.", "title": "" } ]
scidocsrr
9f243d3d12f8eaeff1bb4406fdcb517e
Experimental exploration of the performance of binary networks
[ { "docid": "b0bd9a0b3e1af93a9ede23674dd74847", "text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "title": "" }, { "docid": "b7d13c090e6d61272f45b1e3090f0341", "text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as a regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "title": "" }, { "docid": "b9aa1b23ee957f61337e731611a6301a", "text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural networks on such hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts.
For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.", "title": "" } ]
[ { "docid": "2f58e94218fb0a46b9f654c1141b192d", "text": "How can the development of ideas in a scientific field be studied over time? We apply unsupervised topic modeling to the ACL Anthology to analyze historical trends in the field of Computational Linguistics from 1978 to 2006. We induce topic clusters using Latent Dirichlet Allocation, and examine the strength of each topic over time. Our methods find trends in the field including the rise of probabilistic methods starting in 1988, a steady increase in applications, and a sharp decline of research in semantics and understanding between 1978 and 2001, possibly rising again after 2001. We also introduce a model of the diversity of ideas, topic entropy, using it to show that COLING is a more diverse conference than ACL, but that both conferences as well as EMNLP are becoming broader over time. Finally, we apply Jensen-Shannon divergence of topic distributions to show that all three conferences are converging in the topics they cover.", "title": "" }, { "docid": "72d47983c009c7892155fc3c491c9f52", "text": "To improve the stability accuracy of stable platform of unmanned aerial vehicle (UAV), a line-of-sight stabilized control system is developed by using an inertial and optical-mechanical (fast steering mirror) combined method in a closed loop with visual feedback. The system is based on Peripheral Component Interconnect (PCI), including an image-deviation-obtained system and a combined controller using a PQ method. The method changes the series-wound structure to the shunt-wound structure of dual-input/single-output (DISO), and decouples the actuator range and frequency of inertial stabilization and fast steering mirror stabilization. Test results show the stability accuracy improves from 20μrad of inertial method to 5μrad of inertial and optical-mechanical combined method, and prove the effectiveness of the combined line-of-sight stabilization control system.", "title": "" }, { "docid": "f0ced6c4641ebdc46e1f5efe4c3080ce", "text": "This paper summarizes results of the 1st Contest on Semantic Description of Human Activities (SDHA), in conjunction with ICPR 2010. SDHA 2010 consists of three types of challenges, High-level Human Interaction Recognition Challenge, Aerial View Activity Classification Challenge, and Wide-Area Activity Search and Recognition Challenge. The challenges are designed to encourage participants to test existing methodologies and develop new approaches for complex human activity recognition scenarios in realistic environments. We introduce three new public datasets through these challenges, and discuss results of state-of-the-art activity recognition systems designed and implemented by the contestants. A methodology using a spatio-temporal voting [19] successfully classified segmented videos in the UT-Interaction datasets, but had difficulty correctly localizing activities from continuous videos. Both the method using local features [10] and the HMM based method [18] recognized actions from low-resolution videos (i.e. UT-Tower dataset) successfully. We compare their results in this paper.", "title": "" }, { "docid": "e9684914bb38ad30ffc623668f6b6cfe", "text": "The Glasgow Coma Scale (GCS) has been widely adopted. Failure to assess the verbal score in intubated patients and the inability to test brainstem reflexes are shortcomings. We devised a new coma score, the FOUR (Full Outline of UnResponsiveness) score.
It consists of four components (eye, motor, brainstem, and respiration), and each component has a maximal score of 4. We prospectively studied the FOUR score in 120 intensive care unit patients and compared it with the GCS score using neuroscience nurses, neurology residents, and neurointensivists. We found that the interrater reliability was excellent with the FOUR score (kappa(w) = 0.82) and good to excellent for physician rater pairs. The agreement among raters was similar with the GCS (kappa(w) = 0.82). Patients with the lowest GCS score could be further distinguished using the FOUR score. We conclude that the agreement among raters was good to excellent. The FOUR score provides greater neurological detail than the GCS, recognizes a locked-in syndrome, and is superior to the GCS due to the availability of brainstem reflexes, breathing patterns, and the ability to recognize different stages of herniation. The probability of in-hospital mortality was higher for the lowest total FOUR score when compared with the lowest total GCS score.", "title": "" }, { "docid": "3c635de0cc71f3744b3496069633bdd2", "text": "Where malaria prospers most, human societies have prospered least. The global distribution of per-capita gross domestic product shows a striking correlation between malaria and poverty, and malaria-endemic countries also have lower rates of economic growth. There are multiple channels by which malaria impedes development, including effects on fertility, population growth, saving and investment, worker productivity, absenteeism, premature mortality and medical costs.", "title": "" }, { "docid": "34380f2c5427aeac7677c2ef7ed35627", "text": "An electrocardiography SoC is integrated into a form-fitting textile along with flexible electrodes, battery and antenna. Clinically standard 12-lead ECG is recorded from this “smart shirt.” The data is encrypted and wirelessly transmitted via an on-chip ISM band radio and flexible antenna allowing secure, continuous cardiac monitoring on a smartphone while dissipating less than 1mW.", "title": "" }, { "docid": "a1389b49a11508c33d462d28b1b3d93e", "text": "Glaucoma is one of the most common causes of blindness in the world. The vision lost due to glaucoma cannot be regained. Early detection of glaucoma is thus very important. The Optic Disk(OD), Optic Cup(OC) and Neuroretinal Rim(NRR) are among the important features of a retinal image that can be used in the detection of glaucoma. In this paper, a computer-assisted method for the detection of glaucoma based on the ISNT rule is presented. The OD and OC are segmented using watershed transformation. The NRR area in the ISNT quadrants is obtained from the segmented OD and OC. The method is applied on the publicly available databases HRF, Messidor, DRIONS-DB, RIM-ONE and a local hospital database consisting of both normal and glaucomatous images. The proposed method is simple, computationally efficient and achieves a sensitivity of 91.82% and an overall accuracy of 94.14%.", "title": "" }, { "docid": "617e92bba5d9bd93eaae1718c1da276c", "text": "This paper describes MAISE, an embedded linear circuit simulator for use mainly within timing and noise analysis tools. MAISE achieves the fastest possible analysis performance over a wide range of circuit sizes and topologies by an adaptive architecture that allows applying the most efficient combination of model reduction algorithms and linear solvers for each class of circuits. 
The main pillar of adaptability in MAISE is a novel nodal-analysis formulation (PNA) which permits the use of symmetric, positive-definite Cholesky solvers for all circuit topologies. Moreover, frequently occurring special cases, e.g., inductor-resistor tree structures result in particular types of matrices that are solved by an even faster linear time algorithm. Model order reduction algorithms employed in MAISE exploit symmetry and positive-definiteness whenever available and use symmetric-Lanczos iteration and nonstandard inner-products for generating the Krylov subspace basis. The efficiency of the new simulator is supported by a wide range of industrial examples.", "title": "" }, { "docid": "89c9ac66eaa0371bb6fe22c822edfc42", "text": "A study of email responsiveness was conducted to understand how the timing of email responses conveys important information. Interviews and observations explored users’ perceptions of how they responded to email and formed expectations of others’ responses to them. We identified ways in which users maintain and cultivate a responsiveness image for projecting expectations about their email response. We also discuss other ways people discover contextual cues for responsiveness, which include using tools such as the calendar and phone, accounting for the amount of work time overlap available, and establishing a pacing between email correspondents. These cues help users develop a sense of when to expect a response and when breakdown has occurred, requiring further action. Anyone who uses email regularly has sent a message and wondered, “When will I get a response to this email?” Or, “How long should I wait for a response to this message before taking further action?” Beyond the content of email messages, the timing of when email is sent, when it is read, and when a response is received are all examples of rhythms of email activity that help users coordinate their email correspondence. Previous work has demonstrated that people have rhythmic temporal patterns of activity in the workplace, and that these rhythms can help coordinate interaction (Begole et al., 2002). We wanted to extend this work by exploring what meaningful temporal patterns occur in the usage of email. Email is clearly a crucial and ubiquitous tool for office workers, and understanding the types of rhythms that govern its use will help guide design, improving the overall effectiveness of email services. Specifically, we explored the following questions about email rhythms: • How do individuals decide when to read and respond to a message? • How do individuals form expectations about how long it will take others to respond to an email? • How do these expectations affect email behaviors?", "title": "" }, { "docid": "a3034cc659f433317109d9157ea53302", "text": "Cyberbullying is an emerging form of bullying that takes place through contemporary information and communication technologies. Building on past research on the psychosocial risk factors for cyberbullying in this age group, the present study assessed a theory-driven, school-based preventive intervention that targeted moral disengagement, empathy and social cognitive predictors of cyberbullying. Adolescents (N = 355) aged between 16 and 18 years were randomly assigned into the intervention and the control group. 
Both groups completed anonymous structured questionnaires about demographics, empathy, moral disengagement and cyberbullying-related social cognitive variables (attitudes, actor prototypes, social norms, and behavioral expectations) before the intervention, post-intervention and 6 months after the intervention. The intervention included awareness-raising and interactive discussions about cyberbullying with intervention group students. Analysis of covariance (ANCOVA) showed that, after controlling for baseline measurements, there were significant differences at post-intervention measures in moral disengagement scores, and in favorability of actor prototypes. Further analysis on the specific mechanisms of moral disengagement showed that significant differences were observed in distortion of consequences and attribution of blame. The implications of the intervention are discussed, and guidelines for future school-based interventions against cyberbullying are provided.", "title": "" }, { "docid": "2ecfc909301dcc6241bec2472b4d4135", "text": "Previous work on text mining has almost exclusively focused on a single stream. However, we often have available multiple text streams indexed by the same set of time points (called coordinated text streams), which offer new opportunities for text mining. For example, when a major event happens, all the news articles published by different agencies in different languages tend to cover the same event for a certain period, exhibiting a correlated bursty topic pattern in all the news article streams. In general, mining correlated bursty topic patterns from coordinated text streams can reveal interesting latent associations or events behind these streams. In this paper, we define and study this novel text mining problem. We propose a general probabilistic algorithm which can effectively discover correlated bursty patterns and their bursty periods across text streams even if the streams have completely different vocabularies (e.g., English vs Chinese). Evaluation of the proposed method on a news data set and a literature data set shows that it can effectively discover quite meaningful topic patterns from both data sets: the patterns discovered from the news data set accurately reveal the major common events covered in the two streams of news articles (in English and Chinese, respectively), while the patterns discovered from two database publication streams match well with the major research paradigm shifts in database research. Since the proposed method is general and does not require the streams to share vocabulary, it can be applied to any coordinated text streams to discover correlated topic patterns that burst in multiple streams in the same period.", "title": "" }, { "docid": "735d6103002a568de50bd2c591d23a89", "text": "This paper presents a new, simple and bandwidth-efficient distributed routing protocol to support mobile computing in a conference size ad-hoc mobile network environment. Unlike the conventional approaches such as link-state and distance-vector distributed routing algorithms, our protocol does not attempt to consistently maintain routing information in every node. In an ad-hoc mobile network where mobile hosts (MHs) are acting as routers and where routes are made inconsistent by MHs’ movement, we employ an associativity-based routing scheme where a route is selected based on nodes having associativity states that imply periods of stability. 
In this manner, the routes selected are likely to be long-lived and hence there is no need to restart frequently, resulting in higher attainable throughput. Route requests are broadcast on a per need basis. The association property also allows the integration of ad-hoc routing into a BS-oriented Wireless LAN (WLAN) environment, providing the fault tolerance in times of base stations (BSs) failures. To discover shorter routes and to shorten the route recovery time when the association property is violated, the localised-query and quick-abort mechanisms are respectively incorporated into the protocol. To further increase cell capacity and lower transmission power requirements, a dynamic cell size adjustment scheme is introduced. The protocol is free from loops, deadlock and packet duplicates and has scalable memory requirements. Simulation results obtained reveal that shorter and better routes can be discovered during route re-constructions.", "title": "" }, { "docid": "c9ea1bdac0e4b4a5bc079a48316114af", "text": "We report clinical, radiological and anthropological findings from the first Czech patient with Kniest dysplasia whose radio-clinical diagnosis was confirmed by DNA studies. Kniest dysplasia is an inherited disorder associated with defects in type of collagen II with specific clinical and characteristic radiographic findings. Our affected girl had dysmorphic and radiographic features consistent with Kniest disease: cleft palate, hip dysplasia, dysmorphic flat face, short trunk and extremities, spine deformity, platyspondyly, short and broad femoral necks. Mental development was normal. Body height was below norm (-3.2 SD) and muscular hypotrophy of the extremities and trunk was noticeable. Molecular studies supported the diagnosis of Kniest disease by identification of the COL2A1 mutation (c.1023+1G>A) in intron 16.", "title": "" }, { "docid": "8e3eec62b02a9cf7a56803775757925f", "text": "Emotional states of individuals, also known as moods, are central to the expression of thoughts, ideas and opinions, and in turn impact attitudes and behavior. As social media tools are increasingly used by individuals to broadcast their day-to-day happenings, or to report on an external event of interest, understanding the rich ‘landscape’ of moods will help us better interpret and make sense of the behavior of millions of individuals. Motivated by literature in psychology, we study a popular representation of human mood landscape, known as the ‘circumplex model’ that characterizes affective experience through two dimensions: valence and activation. We identify more than 200 moods frequent on Twitter, through mechanical turk studies and psychology literature sources, and report on four aspects of mood expression: the relationship between (1) moods and usage levels, including linguistic diversity of shared content (2) moods and the social ties individuals form, (3) moods and amount of network activity of individuals, and (4) moods and participatory patterns of individuals such as link sharing and conversational engagement. Our results provide at-scale naturalistic assessments and extensions of existing conceptualizations of human mood in social media contexts.", "title": "" }, { "docid": "52d5dc571b13d47cc281504a0b890a67", "text": "We replicated a controlled experiment first run in the early 1980’s to evaluate the effectiveness and efficiency of 50 student subjects who used three defect-detection techniques to observe failures and isolate faults in small C programs. 
The three techniques were code reading by stepwise abstraction, functional (black-box) testing, and structural (white-box) testing. Two internal replications showed that our relatively inexperienced subjects were similarly effective at observing failures and isolating faults with all three techniques. However, our subjects were most efficient at both tasks when they used functional testing. Some significant differences among the techniques in their effectiveness at isolating faults of different types were seen. These results suggest that inexperienced subjects can apply a formal verification technique (code reading) as effectively as an execution-based validation technique, but they are most efficient when using functional testing.", "title": "" }, { "docid": "7d6c0cbff9e16a08a8f8d27a7fc72547", "text": "We examined the relationships among social comparisons (i.e., body, eating, and exercise), body surveillance, and body dissatisfaction in the natural environment. Participants were 232 college women who completed a daily diary protocol for 2 weeks, responding to online surveys 3 times per day. When the contemporaneous relationships among these variables were examined in a single model, results indicated that comparing one's body, eating, or exercise to others or engaging in body surveillance was associated with elevated body dissatisfaction in the same short-term assessment period. Additionally, individuals with high trait-like engagement in body comparisons or body surveillance experienced higher levels of body dissatisfaction. Trait-like eating and exercise comparison tendencies did not predict unique variance in body dissatisfaction. When examined prospectively in a single model, trait-like body comparison and body surveillance remained predictors of body dissatisfaction, but the only more state-like behavior predictive of body dissatisfaction at the next assessment was eating comparison. Results provide support for the notion that naturalistic body dissatisfaction is predicted by both state- and trait-like characteristics. In particular, social comparisons (i.e., body, eating, and exercise) and body surveillance may function as proximal triggers for contemporaneous body dissatisfaction, with eating comparisons emerging as an especially important predictor of body dissatisfaction over time. Regarding trait-like predictors, general tendencies to engage in body comparisons and body surveillance may be more potent distal predictors of body dissatisfaction than general eating or exercise comparison tendencies.", "title": "" }, { "docid": "a7caa527f1eacda37b30a1686309d7b4", "text": "We offer a perspective on the performance of current OCR systems by illustrating and explaining actual OCR errors made by three commercial devices. After discussing briefly the character recognition abilities of humans and computers, we present illustrated examples of recognition errors. The top level of our taxonomy of the causes of errors consists of Imaging Defects, Similar Symbols, Punctuation, and Typography. The analysis of a series of \"snippets\" from this perspective provides insight into the strengths and weaknesses of current systems, and perhaps a road map to future progress. The examples were drawn from the large-scale tests conducted by the authors at the Information Science Research Institute of the University of Nevada, Las Vegas. By way of conclusion, we point to possible approaches for improving the accuracy of today's systems. 
The talk is based on our eponymous monograph, recently published in The Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, 1999.", "title": "" }, { "docid": "1deeae749259ff732ad3206dc4a7e621", "text": "In traditional active learning, there is only one labeler that always returns the ground truth of queried labels. However, in many applications, multiple labelers are available to offer diverse qualities of labeling with different costs. In this paper, we perform active selection on both instances and labelers, aiming to improve the classification model most with the lowest cost. While the cost of a labeler is proportional to its overall labeling quality, we also observe that different labelers usually have diverse expertise, and thus it is likely that labelers with a low overall quality can provide accurate labels on some specific instances. Based on this fact, we propose a novel active selection criterion to evaluate the cost-effectiveness of instance-labeler pairs, which ensures that the selected instance is helpful for improving the classification model, and meanwhile the selected labeler can provide an accurate label for the instance with a relative low cost. Experiments on both UCI and real crowdsourcing data sets demonstrate the superiority of our proposed approach on selecting cost-effective queries.", "title": "" }, { "docid": "5a1b49162856c8f2b59fec0e063246e9", "text": "Supply chain network design (SCND) is one of the most crucial planning problems in supply chain management (SCM). Nowadays, design decisions should be viable enough to function well under complex and uncertain business environments for many years or decades. Therefore, it is essential to make these decisions in the presence of uncertainty, as over the last two decades, a large number of relevant publications have emphasized its importance. The aim of this paper is to provide a comprehensive review of studies in the fields of SCND and reverse logistics network design under uncertainty. The paper is organized in two main parts to investigate the basic features of these studies. In the first part, planning decisions, network structure, paradigms and aspects related to SCM are discussed. In the second part, existing optimization techniques for dealing with uncertainty such as recourse-based stochastic programming, risk-averse stochastic programming, robust optimization, and fuzzy mathematical programming are explored in terms of mathematical modeling and solution approaches. Finally, the drawbacks and missing aspects of the related literature are highlighted and a list of potential issues for future research directions is recommended. © 2017 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license. ( http://creativecommons.org/licenses/by/4.0/ )", "title": "" }, { "docid": "f923f1a0b2e6748cc5aef14a17036461", "text": "In the early days of email, widely-used conventions for indicating quoted reply content and email signatures made it easy to segment email messages into their functional parts. Today, the explosion of different email formats and styles, coupled with the ad hoc ways in which people vary the structure and layout of their messages, means that simple techniques for identifying quoted replies that used to yield 95% accuracy now find less than 10% of such content. In this paper, we describe Zebra, an SVM-based system for segmenting the body text of email messages into nine zone types based on graphic, orthographic and lexical cues. 
Zebra performs this task with an accuracy of 87.01%; when the number of zones is abstracted to two or three zone classes, this increases to 93.60% and 91.53% respectively.", "title": "" } ]
scidocsrr
3c7f9fcfa011211679523349532ec37d
Bridging belief function theory to modern machine learning
[ { "docid": "70e6148316bd8915afd8d0908fb5ab0d", "text": "We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. This paper is to appear in the Proceedings of the Conference on Computational Learning Theory. This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR. INTRODUCTION: In many machine learning settings, unlabeled examples are significantly easier to come by than labeled ones. One example of this is web page classification. Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us, such as all the CS faculty member pages or all the course home pages at some university. To train such a system to automatically classify web pages, one would typically rely on hand-labeled web pages. These labeled examples are fairly expensive to obtain because they require human effort. In contrast, the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler. Therefore, we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible. This web page learning problem has an interesting feature: each example in this domain can naturally be described using several different kinds of information. One kind of information about a web page is the text appearing on the document itself. A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web. The two problem characteristics mentioned above (availability of both labeled and unlabeled data, and the availability of two different kinds of information about examples) suggest the following learning strategy. Using an initial small set of labeled examples, find weak predictors based on each kind of information; for instance, we might find that the phrase "research interests" on a web page is a weak indicator that the page is a faculty home page, and we might find that the phrase "my advisor" on a link is an indicator that the page being pointed to is a faculty page. Then attempt to bootstrap from these weak predictors using unlabeled data. For instance, we could search for pages pointed to with links having the phrase "my advisor" and use them as probably positive examples to further train a learning algorithm based on the words on the text page, and vice versa. We call this type of bootstrapping co-training, and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting (see for instance). The question this raises is: is there any reason to believe co-training will help? Our goal is to address this question by developing a PAC-style theoretical framework to better understand the issues involved in this approach. We also give some preliminary empirical results on classifying university web pages (see Section) that are encouraging in this context. More broadly, the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC assumptions. We address this issue by proposing a notion of compatibility between a data distribution and a target function (Section) and discuss how this relates to other approaches to combining labeled and unlabeled data (Section).", "title": "" }, { "docid": "dfa3b26c56900f3c2ff066bda4082a55", "text": "Procedures of statistical inference are described which generalize Bayesian inference in specific ways. Probability is used in such a way that in general only bounds may be placed on the probabilities of given events, and probability systems of this kind are suggested both for sample information and for prior information. These systems are then combined using a specified rule. Illustrations are given for inferences about trinomial probabilities, and for inferences about a monotone sequence of binomial p_i. Finally, some comments are made on the general class of models which produce upper and lower probabilities, and on the specific models which underlie the suggested inference procedures.", "title": "" }, { "docid": "c18cec45829e4aec057443b9da0eeee5", "text": "This paper presents a synthesis on the application of fuzzy integral as an innovative tool for criteria aggregation in decision problems. The main point is that fuzzy integrals are able to model interaction between criteria in a flexible way. The methodology has been elaborated mainly in Japan, and has been applied there successfully in various fields such as design, reliability, evaluation of goods, etc. It seems however that this technique is still very little known in Europe. It is one of the aim of this review to disseminate this emerging technology in many industrial fields.", "title": "" } ]
[ { "docid": "dfd11eacd79e58d01876390710a6f34e", "text": "Do metaphors shape people’s emotional states and mindsets for dealing with hardship? Natural language metaphors may act as frames that encourage people to reappraise an emotional situation, changing the way they respond to it. Recovery from cancer is one type of adversity that many people face, and it can be mediated by the mindset people adopt. We investigate whether two common metaphors for describing a cancer experience – the battle and the journey – encourage people to make different inferences about the patient’s emotional state. After being exposed to the battle metaphor participants inferred that the patient would feel more guilt if he didn’t recover, while after being exposed to the journey metaphor participants felt that he had a better chance of making peace with his situation. We discuss implications of this work for investigations of metaphor and emotion, mindsets, and recovery.", "title": "" }, { "docid": "b4b2c5f66c948cbd4c5fbff7f9062f12", "text": "China is taking major steps to improve Beijing’s air quality for the 2008 Olympic Games. However, concentrations of fine particulate matter and ozone in Beijing often exceed healthful levels in the summertime. Based on the US EPA’s Models-3/CMAQ model simulation over the Beijing region, we estimate that about 34% of PM2.5 on average and 35–60% of ozone during high ozone episodes at the Olympic Stadium site can be attributed to sources outside Beijing. Neighboring Hebei and Shandong Provinces and the Tianjin Municipality all exert significant influence on Beijing’s air quality. During sustained wind flow from the south, Hebei Province can contribute 50–70% of Beijing’s PM2.5 concentrations and 20–30% of ozone. Controlling only local sources in Beijing will not be sufficient to attain the air quality goal set for the Beijing Olympics. There is an urgent need for regional air quality management studies and new emission control strategies to ensure that the air quality goals for 2008 are met. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "acb3aaaf79ebc3fc65724e92e4d076aa", "text": "Lay dispositionism refers to lay people's tendency to use traits as the basic unit of analysis in social perception (L. Ross & R. E. Nisbett, 1991). Five studies explored the relation between the practices indicative of lay dispositionism and people's implicit theories about the nature of personal attributes. As predicted, compared with those who believed that personal attributes are malleable (incremental theorists), those who believed in fixed traits (entity theorists) used traits or trait-relevant information to make stronger future behavioral predictions (Studies 1 and 2) and made stronger trait inferences from behavior (Study 3). Moreover, the relation between implicit theories and lay dispositionism was found in both the United States (a more individualistic culture) and Hong Kong (a more collectivistic culture), suggesting this relation to be generalizable across cultures (Study 4). Finally, an experiment in which implicit theories were manipulated provided preliminary evidence for the possible causal role of implicit theories in lay dispositionism (Study 5).", "title": "" }, { "docid": "6d699c8c41db2bd702002765b0342a31", "text": "This paper aims to describe different approaches for studying the overall diet with advantages and limitations. 
Studies of the overall diet have emerged because the relationship between dietary intake and health is very complex with all kinds of interactions. These cannot be captured well by studying single dietary components. Three main approaches to study the overall diet can be distinguished. The first method is researcher-defined scores or indices of diet quality. These are usually based on guidelines for a healthy diet or on diets known to be healthy. The second approach, using principal component or cluster analysis, is driven by the underlying dietary data. In principal component analysis, scales are derived based on the underlying relationships between food groups, whereas in cluster analysis, subgroups of the population are created with people that cluster together based on their dietary intake. A third approach includes methods that are driven by a combination of biological pathways and the underlying dietary data. Reduced rank regression defines linear combinations of food intakes that maximally explain nutrient intakes or intermediate markers of disease. Decision tree analysis identifies subgroups of a population whose members share dietary characteristics that influence (intermediate markers of) disease. It is concluded that all approaches have advantages and limitations and essentially answer different questions. The third approach is still more in an exploration phase, but seems to have great potential with complementary value. More insight into the utility of conducting studies on the overall diet can be gained if more attention is given to methodological issues.", "title": "" }, { "docid": "564c71ca08e39063f5de01fa5c8e74a3", "text": "The Internet of Things (IoT) is a latest concept of machine-to-machine communication, that also gave birth to several information security problems. Many traditional software solutions fail to address these security issues such as trustworthiness of remote entities. Remote attestation is a technique given by  Trusted Computing Group (TCG) to monitor and verify this trustworthiness. In this regard, various remote validation methods have been proposed. However, static techniques cannot provide resistance to recent attacks e.g. the latest Heartbleed bug, and the recent high profile glibc attack on Linux operating system. In this research, we have designed and implemented a lightweight Linux kernel security module for IoT devices that is  scalable enough to monitor multiple applications in the kernel space. The newly built technique can measure and report multiple application’s static and dynamic behavior simultaneously. Verification of behavior of applications is performed via machine learning techniques. The result shows that deviating behavior can be detected successfully by the verifier.", "title": "" }, { "docid": "5495aeaa072a1f8f696298ebc7432045", "text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. 
Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitude in efficiency improvements over software, without having to lock into a fixed ASIC solution.", "title": "" }, { "docid": "45071a33abbf7b33ed69d610936a6af7", "text": "Graphene is a wonder material with many superlatives to its name. It is the thinnest known material in the universe and the strongest ever measured. Its charge carriers exhibit giant intrinsic mobility, have zero effective mass, and can travel for micrometers without scattering at room temperature. Graphene can sustain current densities six orders of magnitude higher than that of copper, shows record thermal conductivity and stiffness, is impermeable to gases, and reconciles such conflicting qualities as brittleness and ductility. Electron transport in graphene is described by a Dirac-like equation, which allows the investigation of relativistic quantum phenomena in a benchtop experiment. This review analyzes recent trends in graphene research and applications, and attempts to identify future directions in which the field is likely to develop.", "title": "" }, { "docid": "f63da8e7659e711bcb7a148ea12a11f2", "text": "We have presented two CCA-based approaches for data fusion and group analysis of biomedical imaging data and demonstrated their utility on fMRI, sMRI, and EEG data. The results show that CCA and M-CCA are powerful tools that naturally allow the analysis of multiple data sets. The data fusion and group analysis methods presented are completely data driven, and use simple linear mixing models to decompose the data into their latent components. Since CCA and M-CCA are based on second-order statistics, they provide a relatively less constrained solution as compared to methods based on higher-order statistics such as ICA. While this can be advantageous, the flexibility also tends to lead to solutions that are less sparse than those obtained using assumptions of non-Gaussianity, in particular super-Gaussianity, at times making the results more difficult to interpret. Thus, it is important to note that both approaches provide complementary perspectives, and hence it is beneficial to study the data using different analysis techniques.", "title": "" }, { "docid": "f3727bfc3965bcb49d8897f144ac13a3", "text": "Presenteeism refers to attending work while ill. Although it is a subject of intense interest to scholars in occupational medicine, relatively few organizational scholars are familiar with the concept. This article traces the development of interest in presenteeism, considers its various conceptualizations, and explains how presenteeism is typically measured. Organizational and occupational correlates of attending work when ill are reviewed, as are medical correlates of resulting productivity loss. It is argued that presenteeism has important implications for organizational theory and practice, and a research agenda for organizational scholars is presented.
Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "765c5ce51bc7c50ae0fa09bc4f04d851", "text": "Following the success of deep convolutional networks in various vision and speech related tasks, researchers have started investigating generalizations of the well-known technique for graph-structured data. A recently-proposed method called Graph Convolutional Networks has been able to achieve state-of-the-art results in the task of node classification. However, since the proposed method relies on localized first-order approximations of spectral graph convolutions, it is unable to capture higher-order interactions between nodes in the graph. In this work, we propose a motif-based graph attention model, called Motif Convolutional Networks, which generalizes past approaches by using weighted multi-hop motif adjacency matrices to capture higher-order neighborhoods. A novel attention mechanism is used to allow each individual node to select the most relevant neighborhood to apply its filter. Experiments show that our proposed method is able to achieve state-of-the-art results on the semi-supervised node classification task.", "title": "" }, { "docid": "2eff2b22b7ed1a23613399ee39535ccf", "text": "Despite wishing to return to productive activity, many individuals with schizophrenia enter rehabilitation with severe doubts about their abilities. Negative beliefs in schizophrenia have been linked with poorer employment outcome. Accordingly, in this paper, we describe efforts to synthesize vocational and cognitive behavior therapy interventions into a 6-month manualized program to assist persons with schizophrenia spectrum disorders overcome negative beliefs and meet vocational goals. This program, the Indianapolis Vocational Intervention Program (IVIP), includes weekly group and individual interventions and is intended as an adjunct to work therapy programs. The IVIP was initially developed over a year of working with 20 participants with Structured Clinical Interview for the Diagnostic and Statistical Manual-I (SCID-I) confirmed diagnoses of schizophrenia or schizoaffective disorder who were actively engaged in 20 hours per week of work activity. For this paper, we explain the development of the treatment manual and the group and individual interventions and present case examples that illustrate how persons with severe mental illness might utilize the manualized intervention.", "title": "" }, { "docid": "441a6a879e0723c00f48796fd4bb1a91", "text": "Recent research on Low Power Wide Area Network (LPWAN) technologies which provide the capability of serving massive low power devices simultaneously has been very attractive. The LoRaWAN standard is one of the most successful developments. Commercial pilots are seen in many countries around the world. However, the feasibility of large scale deployments, for example, for smart city applications need to be further investigated. This paper provides a comprehensive case study of LoRaWAN to show the feasibility, scalability, and reliability of LoRaWAN in realistic simulated scenarios, from both technical and economic perspectives. We develop a Matlab based LoRaWAN simulator to offer a software approach of performance evaluation. A practical LoRaWAN network covering Greater London area is implemented. Its performance is evaluated based on two typical city monitoring applications. 
We further present an economic analysis and develop business models for such networks, in order to provide a guideline for commercial network operators, IoT vendors, and city planners to investigate future deployments of LoRaWAN for smart city applications.", "title": "" }, { "docid": "e9e19edc17e284932e4a09a97a603947", "text": "In this paper we analyze the process of hypermedia applications design and implementation, focusing in particular on two critical aspects of these applications: the navigational and interface structure. We discuss the way in which we build the navigation and abstract interface models using the Object-Oriented Hypermedia Design Method (OOHDM); we show which concerns must be taken into account for each task by giving examples from a real project we are developing, the Portinari Project. We show which implementation concerns must be considered when defining interface behavior, discussing both a Toolbook and a HTML implementation of the example application.", "title": "" }, { "docid": "b12984acfb3d48040d0fad8818606355", "text": "Facial expressions of a person representing similar emotion are not always unique. Naturally, the facial features of a subject taken from different instances of the same emotion have wide variations. In the presence of two or more facial features, the variation of the attributes together makes the emotion recognition problem more complicated. This variation is the main source of uncertainty in the emotion recognition problem, which has been addressed here in two steps using type-2 fuzzy sets. First a type-2 fuzzy face space is constructed with the background knowledge of facial features of different subjects for different emotions. Second, the emotion of an unknown facial expression is determined based on the consensus of the measured facial features with the fuzzy face space. Both interval and general type-2 fuzzy sets (GT2FS) have been used separately to model the fuzzy face space. The interval type-2 fuzzy set (IT2FS) involves primary membership functions for m facial features obtained from n-subjects, each having l-instances of facial expressions for a given emotion. The GT2FS in addition to employing the primary membership functions mentioned above also involves the secondary memberships for individual primary membership curve, which has been obtained here by formulating and solving an optimization problem. The optimization problem here attempts to minimize the difference between two decoded signals: the first one being the type-1 defuzzification of the average primary membership functions obtained from the n-subjects, while the second one refers to the type-2 defuzzified signal for a given primary membership function with secondary memberships as unknown. The uncertainty management policy adopted using GT2FS has resulted in a classification accuracy of 98.333% in comparison to 91.667% obtained by its interval type-2 counterpart. A small improvement (approximately 2.5%) in classification accuracy by IT2FS has been attained by pre-processing measurements using the well-known interval approach.", "title": "" }, { "docid": "6403b543937832f641d98b9212d2428e", "text": "The information age and the third millennium have given rise to many revolutions. Business organizations that emphasize information systems try to gather the information they need for decision making. Because of comprehensive changes in the business environment and the emergence of computers and the Internet, business structures and information needs have changed, and competitiveness, a major factor in the survival of organizations in the information age, is prey to information technology challenges. In this article we review the information systems literature and discuss the concept of the information system as a strategic tool.", "title": "" }, { "docid": "a393ee2f132cf445e61837bd449a33c6", "text": "A major cyber-security concern to date for webservers is Distributed Denial of Service (DDoS) attacks. Previously we proposed a novel overlay-based method consisting of a distributed network of public servers (PS) for preparation, and access nodes (AN) for actual communication. The AN's performance is evaluated under difficult-to-detect HTTP(S)-DDoS attacks. Yet, attackers may attempt service denial by attacking the PS instead. The focus in this paper is on mitigating complex slow-requesting HTTP-DDoS attacks that target the PS. A proof-of-concept prototype is implemented with simplified countermeasures and tested. We report on the results of two experiments. Results suggest that the simple PS role can enable high mitigation factors of both high-rate and low-rate attack traffic per source, even with 10,000 unique attack sources per target PS, acting as a second layer of defense with the AN, though at the cost of a longer time to load the requested resource file in comparison to direct access.", "title": "" }, { "docid": "97aab9c46e7eb7a3d332f18a6a00411a", "text": "Identifying mathematical relations expressed in text is essential to understanding a broad range of natural language text from election reports, to financial news, to sport commentaries to mathematical word problems. This paper focuses on identifying and understanding mathematical relations described within a single sentence. We introduce the problem of Equation Parsing – given a sentence, identify noun phrases which represent variables, and generate the mathematical equation expressing the relation described in the sentence. We introduce the notion of projective equation parsing and provide an efficient algorithm to parse text to projective equations. Our system makes use of a high precision lexicon of mathematical expressions and a pipeline of structured predictors, and generates correct equations in 70% of the cases. In 60% of the time, it also identifies the correct noun phrase → variables mapping, significantly outperforming baselines. We also release a new annotated dataset for task evaluation.", "title": "" }, { "docid": "916a76aa0c4209567a6309885e0b9b32", "text": "The term "Industry 4.0" symbolizes new forms of technology and artificial intelligence within production technologies. Smart robots are going to be the game changers within the factories of the future and will work with humans in indispensable teams within many processes. With this fourth industrial revolution, classical production lines are going through comprehensive modernization, e.g. in terms of in-the-box manufacturing, where humans and machines work side by side in so-called "hybrid teams". Questions about how to prepare for newly needed engineering competencies for the age of Industry 4.0, how to assess them and how to teach and train e.g. human-robot-teams have to be tackled in future engineering education.
The paper presents theoretical aspects and empirical results of a series of studies, carried out to investigate the competencies of virtual collaboration and joint problem solving in virtual worlds.", "title": "" } ]
scidocsrr
4cb845d12a0ecd0faf247d87f71c031b
An Analysis of Factors Affecting on Online Shopping Behavior of Consumers
[ { "docid": "59a49feef4e3a79c5899fede208a183c", "text": "This study proposed and tested a model of consumer online buying behavior. The model posits that consumer online buying behavior is affected by demographics, channel knowledge, perceived channel utilities, and shopping orientations. Data were collected by a research company using an online survey of 999 U.S. Internet users, and were cross-validated with other similar national surveys before being used to test the model. Findings of the study indicated that education, convenience orientation, Página 1 de 20 Psychographics of the Consumers in Electronic Commerce 11/10/01 http://www.ascusc.org/jcmc/vol5/issue2/hairong.html experience orientation, channel knowledge, perceived distribution utility, and perceived accessibility are robust predictors of online buying status (frequent online buyer, occasional online buyer, or non-online buyer) of Internet users. Implications of the findings and directions for future research were discussed.", "title": "" }, { "docid": "61ba52f205c8b497062995498816b60f", "text": "The past century experienced a proliferation of retail formats in the marketplace. However, as a new century begins, these retail formats are being threatened by the emergence of a new kind of store, the online or Internet store. From being almost a novelty in 1995, online retailing sales were expected to reach $7 billion by 2000 [9]. In this increasngly timeconstrained world, Internet stores allow consumers to shop from the convenience of remote locations. Yet most of these Internet stores are losing money [6]. Why is such counterintuitive phenomena prevailing? The explanation may lie in the risks associated with Internet shopping. These risks may arise because consumers are concerned about the security of transmitting credit card information over the Internet. Consumers may also be apprehensive about buying something without touching or feeling it and being unable to return it if it fails to meet their approval. Having said this, however, we must point out that consumers are buying goods on the Internet. This is reflected in the fact that total sales on the Internet are on the increase [8, 11]. Who are the consumers that are patronizing the Internet? Evidently, for them the perception of the risk associated with shopping on the Internet is low or is overshadowed by its relative convenience. This article attempts to determine why certain consumers are drawn to the Internet and why others are not. Since the pioneering research done by Becker [3], it has been accepted that the consumer maximizes his utility subject to not only income constraints but also time constraints. A consumer seeks out his best decision given that he has a limited budget of time and money. While purchasing a product from a store, a consumer has to expend both money and time. Therefore, the consumer patronizes the retail store where his total costs or the money and time spent in the entire process are the least. Since the util-", "title": "" } ]
[ { "docid": "04d3d9ebbde32b70d2125a88896667ba", "text": "We formulate and study distributed estimation algorithms based on diffusion protocols to implement cooperation among individual adaptive nodes. The individual nodes are equipped with local learning abilities. They derive local estimates for the parameter of interest and share information with their neighbors only, giving rise to peer-to-peer protocols. The resulting algorithm is distributed, cooperative and able to respond in real time to changes in the environment. It improves performance in terms of transient and steady-state mean-square error, as compared with traditional noncooperative schemes. Closed-form expressions that describe the network performance in terms of mean-square error quantities are derived, presenting a very good match with simulations.", "title": "" }, { "docid": "1274ab286b1e3c5701ebb73adc77109f", "text": "In this paper, we propose the first real time rumor debunking algorithm for Twitter. We use cues from 'wisdom of the crowds', that is, the aggregate 'common sense' and investigative journalism of Twitter users. We concentrate on identification of a rumor as an event that may comprise of one or more conflicting microblogs. We continue monitoring the rumor event and generate real time updates dynamically based on any additional information received. We show using real streaming data that it is possible, using our approach, to debunk rumors accurately and efficiently, often much faster than manual verification by professionals.", "title": "" }, { "docid": "7e7314256a28deb2250377e9e74c5413", "text": "After stress, the brain is exposed to waves of stress mediators, including corticosterone (in rodents) and cortisol (in humans). Corticosteroid hormones affect neuronal physiology in two time-domains: rapid, non-genomic actions primarily via mineralocorticoid receptors; and delayed genomic effects via glucocorticoid receptors. In parallel, cognitive processing is affected by stress hormones. Directly after stress, emotional behaviour involving the amygdala is strongly facilitated with cognitively a strong emphasis on the \"now\" and \"self,\" at the cost of higher cognitive processing. This enables the organism to quickly and adequately respond to the situation at hand. Several hours later, emotional circuits are dampened while functions related to the prefrontal cortex and hippocampus are promoted. This allows the individual to rationalize the stressful event and place it in the right context, which is beneficial in the long run. The brain's response to stress depends on an individual's genetic background in interaction with life events. Studies in rodents point to the possibility to prevent or reverse long-term consequences of early life adversity on cognitive processing, by normalizing the balance between the two receptor types for corticosteroid hormones at a critical moment just before the onset of puberty.", "title": "" }, { "docid": "fd0dccac0689390e77a0cc1fb14e5a34", "text": "Chromatin remodeling is a complex process shaping the nucleosome landscape, thereby regulating the accessibility of transcription factors to regulatory regions of target genes and ultimately managing gene expression. The SWI/SNF (switch/sucrose nonfermentable) complex remodels the nucleosome landscape in an ATP-dependent manner and is divided into the two major subclasses Brahma-associated factor (BAF) and Polybromo Brahma-associated factor (PBAF) complex. 
Somatic mutations in subunits of the SWI/SNF complex have been associated with different cancers, while germline mutations have been associated with autism spectrum disorder and the neurodevelopmental disorders Coffin–Siris (CSS) and Nicolaides–Baraitser syndromes (NCBRS). CSS is characterized by intellectual disability (ID), coarsening of the face and hypoplasia or absence of the fifth finger- and/or toenails. So far, variants in five of the SWI/SNF subunit-encoding genes ARID1B, SMARCA4, SMARCB1, ARID1A, and SMARCE1 as well as variants in the transcription factor-encoding gene SOX11 have been identified in CSS-affected individuals. ARID2 is a member of the PBAF subcomplex, which until recently had not been linked to any neurodevelopmental phenotypes. In 2015, mutations in the ARID2 gene were associated with intellectual disability. In this study, we report on two individuals with private de novo ARID2 frameshift mutations. Both individuals present with a CSS-like phenotype including ID, coarsening of facial features, other recognizable facial dysmorphisms and hypoplasia of the fifth toenails. Hence, this study identifies mutations in the ARID2 gene as a novel and rare cause for a CSS-like phenotype and enlarges the list of CSS-like genes.", "title": "" }, { "docid": "63d26f3336960c1d92afbd3a61a9168c", "text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.", "title": "" }, { "docid": "96a979cd63c6155ea2d5ca39d729d0bd", "text": "A millimeter-wave rotary-wave oscillator (RWO) is presented that hybridizes standing- and traveling-wave behavior. This paper presents an analysis of the phase noise of this RWO. The multiphase voltage-controlled RWO is implemented in a 0.12-μm SiGe BiCMOS process using only nMOS devices for the oscillator core. The measured frequency of operation is 45 GHz with 6.5% tuning range and has a phase noise of -91 dBc/Hz at 1 MHz and -112 dBc/Hz at 10 MHz. 
The power consumption of the oscillator core is 13.8 mW from a supply voltage of 1.2 V.", "title": "" }, { "docid": "190cecfd31f1c269ef5a24babaa71371", "text": "This paper introduces the active learning of Pareto fronts (ALP) algorithm, a novel approach to recover the Pareto front of a multiobjective optimization problem. ALP casts the identification of the Pareto front into a supervised machine learning task. This approach enables an analytical model of the Pareto front to be built. The computational effort in generating the supervised information is reduced by an active learning strategy. In particular, the model is learned from a set of informative training objective vectors. The training objective vectors are approximated Pareto-optimal vectors obtained by solving different scalarized problem instances. The experimental results show that ALP achieves an accurate Pareto front approximation with a lower computational effort than state-of-the-art estimation of distribution algorithms and widely known genetic techniques.", "title": "" }, { "docid": "e0a2031394922edec46eaac60c473358", "text": "In-wheel-motor drive electric vehicle (EV) is an innovative configuration, in which each wheel is driven individually by an electric motor. It is possible to use an electronic differential (ED) instead of the heavy mechanical differential because of the fast response time of the motor. A new ED control approach for a two-in-wheel-motor drive EV is devised based on the fuzzy logic control method. The fuzzy logic method employs to estimate the slip rate of each wheel considering the complex and nonlinear of the system. Then, the ED system distributes torque and power to each motor according to requirements. The effectiveness and validation of the proposed control method are evaluated in the Matlab/Simulink environment. Simulation results show that the new ED control system can keep the slip rate within the optimized range, ensuring the stability of the vehicle either in a straight or a curve lane.", "title": "" }, { "docid": "e68fc0a0522f7cd22c7071896263a1f4", "text": "OBJECTIVES\nThe aim of this study was to evaluate the costs of subsidized care for an adult population provided by private and public sector dentists.\n\n\nMETHODS\nA sample of 210 patients was drawn systematically from the waiting list for nonemergency dental treatment in the city of Turku. Questionnaire data covering sociodemographic background, dental care utilization and marginal time cost estimates were combined with data from patient registers on treatment given. Information was available on 104 patients (52 from each of the public and the private sectors).\n\n\nRESULTS\nThe overall time taken to provide treatment was 181 days in the public sector and 80 days in the private sector (P<0.002). On average, public sector patients had significantly (P < 0.01) more dental visits (5.33) than private sector patients (3.47), which caused higher visiting fees. In addition, patients in the public sector also had higher other out-of-pocket costs than in the private sector. Those who needed emergency dental treatment during the waiting time for comprehensive care had significantly more costly treatment and higher total costs than the other patients. Overall time required for dental visits significantly increased total costs. The total cost of dental care in the public sector was slightly higher (P<0.05) than in the private sector.\n\n\nCONCLUSIONS\nThere is no direct evidence of moral hazard on the provider side from this study. 
The observed cost differences between the two sectors may indicate that private practitioners could manage their publicly funded patients more quickly than their private paying patients. On the other hand, private dentists providing more treatment per visit could be explained by private dentists providing more than is needed by increasing the content per visit.", "title": "" }, { "docid": "696069ce14bb37713421a01686555a92", "text": "We propose a Bayesian trajectory prediction and criticality assessment system that allows to reason about imminent collisions of a vehicle several seconds in advance. We first infer a distribution of high-level, abstract driving maneuvers such as lane changes, turns, road followings, etc. of all vehicles within the driving scene by modeling the domain in a Bayesian network with both causal and diagnostic evidences. This is followed by maneuver-based, long-term trajectory predictions, which themselves contain random components due to the immanent uncertainty of how drivers execute specific maneuvers. Taking all uncertain predictions of all maneuvers of every vehicle into account, the probability of the ego vehicle colliding at least once within a time span is evaluated via Monte-Carlo simulations and given as a function of the prediction horizon. This serves as the basis for calculating a novel criticality measure, the Time-To-Critical-Collision-Probability (TTCCP) - a generalization of the common Time-To-Collision (TTC) in arbitrary, uncertain, multi-object driving environments and valid for longer prediction horizons. The system is applicable from highly-structured to completely non-structured environments and additionally allows the prediction of vehicles not behaving according to a specific maneuver class.", "title": "" }, { "docid": "49e52c99226766f626dca492fd22ce70", "text": "Recurrent neural networks (RNNs) have shown excellent performance in processing sequence data. However, they are both complex and memory intensive due to their recursive nature. These limitations make RNNs difficult to embed on mobile devices requiring real-time processes with limited hardware resources. To address the above issues, we introduce a method that can learn binary and ternary weights during the training phase to facilitate hardware implementations of RNNs. As a result, using this approach replaces all multiply-accumulate operations by simple accumulations, bringing significant benefits to custom hardware in terms of silicon area and power consumption. On the software side, we evaluate the performance (in terms of accuracy) of our method using long short-term memories (LSTMs) on various sequential models including sequence classification and language modeling. We demonstrate that our method achieves competitive results on the aforementioned tasks while using binary/ternary weights during the runtime. On the hardware side, we present custom hardware for accelerating the recurrent computations of LSTMs with binary/ternary weights. Ultimately, we show that LSTMs with binary/ternary weights can achieve up to 12× memory saving and 10× inference speedup compared to the full-precision implementation on an ASIC platform.", "title": "" }, { "docid": "d32c13d9d2338cdfd63686ce0adf1960", "text": "Mobility has always been a big challenge in cellular networks, because it is responsible for traffic fluctuations that eventually result into inconstant resource usage and the need for proper Quality of Service management. 
When applications get deployed at the network edge, the challenges even grow because software is harder to hand-over than traffic streams. Cloud technologies have been designed with different specifications, and should be properly revised to balance efficiency and effectiveness in distributed and capillary infrastructures. In this paper, we propose some extensions to OpenStack for power management and Quality of Service. Our framework provides additional APIs for setting the service level and interacting with power-saving mechanisms. It is designed to be easily integrated with modern software orchestration tools and workload consolidation algorithms. We report real measurements from an experimental proof-of-concept.", "title": "" }, { "docid": "cfeb97c3be1c697fb500d54aa43af0e1", "text": "The development of accurate and robust palmprint verification algorithms is a critical issue in automatic palmprint authentication systems. Among various palmprint verification approaches, the orientation based coding methods, such as competitive code (CompCode), palmprint orientation code (POC) and robust line orientation code (RLOC), are state-of-the-art ones. They extract and code the locally dominant orientation as features and could match the input palmprint in real-time and with high accuracy. However, using only one dominant orientation to represent a local region may lose some valuable information because there are cross lines in the palmprint. In this paper, we propose a novel feature extraction algorithm, namely binary orientation co-occurrence vector (BOCV), to represent multiple orientations for a local region. The BOCV can better describe the local orientation features and it is more robust to image rotation. Our experimental results on the public palmprint database show that the proposed BOCV outperforms the CompCode, POC and RLOC by reducing the equal error rate (EER) significantly. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "648f4e6997fe289e56f4b2729c2ecb80", "text": "A substantial thread of recent work on latent tree learning has attempted to develop neural network models with parse-valued latent variables and train them on non-parsing tasks, in the hope of having them discover interpretable tree structure. In a recent paper, Shen et al. (2018) introduce such a model and report nearstate-of-the-art results on the target task of language modeling, and the first strong latent tree learning result on constituency parsing. In an attempt to reproduce these results, we discover issues that make the original results hard to trust, including tuning and even training on what is effectively the test set. Here, we attempt to reproduce these results in a fair experiment and to extend them to two new datasets. We find that the results of this work are robust: All variants of the model under study outperform all latent tree learning baselines, and perform competitively with symbolic grammar induction systems. We find that this model represents the first empirical success for latent tree learning, and that neural network language modeling warrants further study as a setting for grammar induction.", "title": "" }, { "docid": "4a39ad1bac4327a70f077afa1d08c3f0", "text": "Machine learning plays a role in many aspects of modern IR systems, and deep learning is applied in all of them. The fast pace of modern-day research has given rise to many approaches to many IR problems. 
The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. The aim of this full- day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR.", "title": "" }, { "docid": "f8dc1eb09fb8f13b02f8e17734190b9f", "text": "The aim of this paper is to show the possibility to harvest RF energy to supply wireless sensor networks in an outdoor environment. In those conditions, the number of existing RF bands is unpredictable. The RF circuit has to harvest all the potential RF energy present and cannot be designed for a single RF tone. In this paper, the designed RF harvester adds powers coming from an unlimited number of sub-frequency bands. The harvester's output voltage ratios increase with the number of RF bands. As an application example, a 4-RF band rectenna is designed. The system harvests energy from GSM900 (Global System for Mobile Communications), GSM1800, UMTS (Universal Mobile Telecommunications System) and WiFi bands simultaneously. RF-to-dc conversion efficiency is measured at 62% for a cumulative -10-dBm input power homogeneously widespread over the four RF bands and reaches 84% at 5.8 dBm. The relative error between the measured dc output power with all four RF bands on and the ideal sum of each of the four RF bands power contribution is less than 3%. It is shown that the RF-to-dc conversion efficiency is more than doubled compared to that measured with a single RF source, thanks to the proposed rectifier architecture.", "title": "" }, { "docid": "f55c7be1b44d7870627122ab192483db", "text": "The spontaneous tendency to synchronize our facial expressions with those of others is often termed emotional contagion. It is unclear, however, whether emotional contagion depends on visual awareness of the eliciting stimulus and which processes underlie the unfolding of expressive reactions in the observer. It has been suggested either that emotional contagion is driven by motor imitation (i.e., mimicry), or that it is one observable aspect of the emotional state arising when we see the corresponding emotion in others. Emotional contagion reactions to different classes of consciously seen and \"unseen\" stimuli were compared by presenting pictures of facial or bodily expressions either to the intact or blind visual field of two patients with unilateral destruction of the visual cortex and ensuing phenomenal blindness. Facial reactions were recorded using electromyography, and arousal responses were measured with pupil dilatation. Passive exposure to unseen expressions evoked faster facial reactions and higher arousal compared with seen stimuli, therefore indicating that emotional contagion occurs also when the triggering stimulus cannot be consciously perceived because of cortical blindness. Furthermore, stimuli that are very different in their visual characteristics, such as facial and bodily gestures, induced highly similar expressive responses. This shows that the patients did not simply imitate the motor pattern observed in the stimuli, but resonated to their affective meaning. 
Emotional contagion thus represents an instance of truly affective reactions that may be mediated by visual pathways of old evolutionary origin bypassing cortical vision while still providing a cornerstone for emotion communication and affect sharing.", "title": "" }, { "docid": "50e66cf5c5e2f379d346c33c360e3b6d", "text": "This manuscript presents a double-sided flat-type permanent magnetic linear energy harvester to scavenge kinetic power from linear motions. The dynamic system model of the linear generator is analytically driven and analyzed by an innovative state-based approach. The analytical equations on maximum power generation specific for non-resonant applications such as human foot motion are derived. Both magnetic circuit analysis and the finite-element analysis simulation are carried out on the linear machine to investigate its performance. Under the typical horizontal foot motion velocity of 4.5 m/s, the proposed energy harvester generates 674 mW electric power with the power density as high as 3.4 mW/cm3. A macro-scale proof-of-concept machine is prototyped to verify the performance of the linear machine and validate the analysis, simulation, and the proposed modeling approach.", "title": "" }, { "docid": "144d1ad172d5dd2ca7b3fc93a83b5942", "text": "This paper extends the recently introduced approach to the modeling and control design in the framework of model predictive control of the dc-dc boost converter to the dc-dc parallel interleaved boost converter. Based on the converter's model a constrained optimal control problem is formulated and solved. This allows the controller to achieve (a) the regulation of the output voltage to a predefined reference value, despite changes in the input voltage and the load, and (b) the load current balancing to the converter's individual legs, by regulating the currents of the circuit's inductors to proper references, set by an outer loop based on an observer. Simulation results are provided to illustrate the merits of the proposed control scheme.", "title": "" }, { "docid": "b7914e542be8aeb5755106525916e86d", "text": "Waymo's self-driving cars contain a broad set of technologies that enable our cars to sense the vehicle surroundings, perceive and understand what is happening in the vehicle vicinity, and determine the safe and efficient actions that the vehicle should take. Many of these technologies are rooted in advanced semiconductor technologies, e.g. faster transistors that enable more compute or low noise designs that enable the faintest sensor signals to be perceived. This paper summarizes a few areas where semiconductor technologies have proven to be fundamentally enabling to self-driving capabilities. The paper also lays out some of the challenges facing advanced semiconductors in the automotive context, as well as some of the opportunities for future innovation.", "title": "" } ]
scidocsrr
074441ec90fbdfcc8f349e7c1b9e4e10
A feature study for classification-based speech separation at very low signal-to-noise ratio
[ { "docid": "1cd45a4f897ea6c473d00c4913440836", "text": "What is the computational goal of auditory scene analysis? This is a key issue to address in the Marrian information-processing framework. It is also an important question for researchers in computational auditory scene analysis (CASA) because it bears directly on how a CASA system should be evaluated. In this chapter I discuss different objectives used in CASA. I suggest as a main CASA goal the use of the ideal time-frequency (T-F) binary mask whose value is one for a T-F unit where the target energy is greater than the interference energy and is zero otherwise. The notion of the ideal binary mask is motivated by the auditory masking phenomenon. Properties of the ideal binary mask are discussed, including their relationship to automatic speech recognition and human speech intelligibility. This CASA goal has led to algorithms that directly estimate the ideal binary mask in monaural and binaural conditions, and these algorithms have substantially advanced the state-of-the-art performance in speech separation.", "title": "" } ]
[ { "docid": "47afccb5e7bcdade764666f3b5ab042e", "text": "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions.", "title": "" }, { "docid": "2ebb21cb1c6982d2d3839e2616cac839", "text": "In order to reduce micromouse dashing time in complex maze, and improve micromouse’s stability in high speed dashing, diagonal dashing method was proposed. Considering the actual dashing trajectory of micromouse in diagonal path, the path was decomposed into three different trajectories; Fully consider turning in and turning out of micromouse dashing action in diagonal, leading and passing of the every turning was used to realize micromouse posture adjustment, with the help of accelerometer sensor ADXL202, rotation angle error compensation was done and the micromouse realized its precise position correction; For the diagonal dashing, front sensor S1,S6 and accelerometer sensor ADXL202 were used to ensure micromouse dashing posture. Principle of new diagonal dashing method is verified by micromouse based on STM32F103. Experiments of micromouse dashing show that diagonal dashing method can greatly improve its stability, and also can reduce its dashing time in complex maze.", "title": "" }, { "docid": "0be66cf5af756aa7bc37e4b452419c45", "text": "Fact checking has captured the attention of the media and the public alike; it has also recently received strong attention from the computer science community, in particular from data and knowledge management, natural language processing and information retrieval; we denote these together under the term “content management”. In this paper, we identify the fact checking tasks which can be performed with the help of content management technologies, and survey the recent research works in this area, before laying out some perspectives for the future. We hope our work will provide interested researchers, journalists and fact checkers with an entry point in the existing literature as well as help develop a roadmap for future research and development work.", "title": "" }, { "docid": "280e83986138daf0237e7502747b8a50", "text": "E-government adoption is the focus of many research studies. However, few studies have compared the adoption factors to identify the most salient predictors of e-government use. This study compares popular adoption constructs to identify the most influential. A survey was administered to elicit citizen perceptions of e-government services. The results of stepwise regression indicate perceived usefulness, trust of the internet, previous use of an e-government service and perceived ease of use all have a significant impact on one’s intention to use an e-government service. 
The implications for research and practice are discussed below.", "title": "" }, { "docid": "c23008c36f0bca7a1faf405c5f3083ff", "text": "The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.", "title": "" }, { "docid": "2ec37b57a75c70e9edeb9603b0dac5e0", "text": "In this paper, different analysis and design techniques are used to analyze the drive motor in the 2004 Prius hybrid vehicle and to examine alternative spoke-type magnet rotor (buried magnets with magnetization which is orthogonal to the radial direction) and induction motor arrangements. These machines are characterized by high transient torque requirement, compactness, and forced cooling. While rare-earth magnet machines are commonly used in these applications, there is an increasing interest in motors without magnets, hence the investigation of an induction motor. This paper illustrates that the machines operate under highly saturated conditions at high torque and that care should be taken when selecting the correct analysis technique. This is illustrated by divergent results when using I-Psi loops and dq techniques to calculate the torque.", "title": "" }, { "docid": "5e6209b4017039a809f605d0847a57af", "text": "Bag-of-ngrams (BoN) models are commonly used for representing text. One of the main drawbacks of traditional BoN is the ignorance of n-gram’s semantics. In this paper, we introduce the concept of Neural Bag-of-ngrams (Neural-BoN), which replaces sparse one-hot n-gram representation in traditional BoN with dense and rich-semantic n-gram representations. We first propose context guided n-gram representation by adding n-grams to word embeddings model. However, the context guided learning strategy of word embeddings is likely to miss some semantics for text-level tasks. Text guided ngram representation and label guided n-gram representation are proposed to capture more semantics like topic or sentiment tendencies. Neural-BoN with the latter two n-gram representations achieve state-of-the-art results on 4 documentlevel classification datasets and 6 semantic relatedness categories. They are also on par with some sophisticated DNNs on 3 sentence-level classification datasets. Similar to traditional BoN, Neural-BoN is efficient, robust and easy to implement. We expect it to be a strong baseline and be used in more real-world applications.", "title": "" }, { "docid": "151cb6f067634d915f24865c16425277", "text": "We describe a framework for using analytics to proactively tackle voluntary attrition of employees. 
This is especially important in organizations with large services arms where unplanned departures of key employees can lead to big losses by way of lost productivity, delayed or missed deadlines, and hiring costs of replacements. By proactively identifying top talent at a high risk of voluntarily leaving, an organization can take appropriate action in time to actually affect such employee departures, thereby avoiding financial and knowledge losses. The main retention action we study in this paper is that of proactive salary raises to at-risk employees. Our approach uses data mining for identifying employees at risk of attrition and balances the cost of attrition/replacement of an employee against the cost of retaining that employee (by way of increased salary) to enable the optimal use of limited funds that may be available for this purpose, thereby allowing the action to be targeted towards employees with the highest potential returns on investment. This approach has been used to do a proactive retention action for several thousand employees across several geographies and business units for a large, Fortune 500 multinational company. We discuss this action and discuss the results to date that show a significant reduction in voluntary resignations of the targeted groups.", "title": "" }, { "docid": "5419504f65f3ae634f064f692f38f38f", "text": "Part-of-speech tagging is an important preprocessing step in many natural language processing applications. Despite much work already carried out in this field, there is still room for improvement, especially in Portuguese. We experiment here with an architecture based on neural networks and word embeddings, and that has achieved promising results in English. We tested our classifier in different corpora: a new revision of the Mac-Morpho corpus, in which we merged some tags and performed corrections and two previous versions of it. We evaluate the impact of using different types of word embeddings and explicit features as input. We compare our tagger’s performance with other systems and achieve state-of-the-art results in the new corpus. We show how different methods for generating word embeddings and additional features differ in accuracy. The work reported here contributes with a new revision of the Mac-Morpho corpus and a state-of-the-art new tagger available for use out-of-the-box.", "title": "" }, { "docid": "aa58cb2b2621da6260aeb203af1bd6f1", "text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. The main goal of all of the proposed methods is extracting aspects and/or estimating aspect ratings. Recent works, which are often based on Latent Dirichlet Allocation (LDA), consider both tasks simultaneously. These models are normally trained at the item level, i.e., a model is learned for each item separately. Learning a model per item is fine when the item has been reviewed extensively and has enough training data. However, in real-life data sets such as those from Epinions.com and Amazon.com more than 90% of items have less than 10 reviews, so-called cold start items. State-of-the-art LDA models for aspect-based opinion mining are trained at the item level and therefore perform poorly for cold start items due to the lack of sufficient training data. In this paper, we propose a probabilistic graphical model based on LDA, called Factorized LDA (FLDA), to address the cold start problem. 
The underlying assumption of FLDA is that aspects and ratings of a review are influenced not only by the item but also by the reviewer. It further assumes that both items and reviewers can be modeled by a set of latent factors which represent their aspect and rating distributions. Different from state-of-the-art LDA models, FLDA is trained at the category level and learns the latent factors using the reviews of all the items of a category, in particular the non cold start items, and uses them as prior for cold start items. Our experiments on three real-life data sets demonstrate the improved effectiveness of the FLDA model in terms of likelihood of the held-out test set. We also evaluate the accuracy of FLDA based on two application-oriented measures.", "title": "" }, { "docid": "918bf13ef0289eb9b78309c83e963b26", "text": "For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.", "title": "" }, { "docid": "dc817bc11276d76f8d97f67e4b1b2155", "text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.", "title": "" }, { "docid": "472e9807c2f4ed6d1e763dd304f22c64", "text": "Commercial analytical database systems suffer from a high \"time-to-first-analysis\": before data can be processed, it must be modeled and schematized (a human effort), transferred into the database's storage layer, and optionally clustered and indexed (a computational effort). For many types of structured data, this upfront effort is unjustifiable, so the data are processed directly over the file system using the Hadoop framework, despite the cumulative performance benefits of processing this data in an analytical database system. In this paper we describe a system that achieves the immediate gratification of running MapReduce jobs directly over a file system, while still making progress towards the long-term performance benefits of database systems. 
The basic idea is to piggyback on MapReduce jobs, leverage their parsing and tuple extraction operations to incrementally load and organize tuples into a database system, while simultaneously processing the file system data. We call this scheme Invisible Loading, as we load fractions of data at a time at almost no marginal cost in query latency, but still allow future queries to run much faster.", "title": "" }, { "docid": "7b7f1f029e13008b1578c87c7319b645", "text": "This paper presents the design and manufacturing processes of a new piezoactuated XY stage with integrated parallel, decoupled, and stacked kinematics structure for micro-/nanopositioning application. The flexure-based XY stage is composed of two decoupled prismatic-prismatic limbs which are constructed by compound parallelogram flexures and compound bridge-type displacement amplifiers. The two limbs are assembled in a parallel and stacked manner to achieve a compact stage with the merits of parallel kinematics. Analytical models for the mechanical performance assessment of the stage in terms of kinematics, statics, stiffness, load capacity, and dynamics are derived and verified with finite element analysis. A prototype of the XY stage is then fabricated, and its decoupling property is tested. Moreover, the Bouc-Wen hysteresis model of the system is identified by resorting to particle swarm optimization, and a control scheme combining the inverse hysteresis model-based feedforward with feedback control is employed to compensate for the plant nonlinearity and uncertainty. Experimental results reveal that a submicrometer accuracy single-axis motion tracking and biaxial contouring can be achieved by the micropositioning system, which validate the effectiveness of the proposed mechanism and controller designs as well.", "title": "" }, { "docid": "1d15d5e8176aea14713a7f7b426d41aa", "text": "In this work we present a deep learning framework for video compressive sensing. The proposed formulation enables recovery of video frames in a few seconds at significantly improved reconstruction quality compared to previous approaches. Our investigation starts by learning a linear mapping between video sequences and corresponding measured frames which turns out to provide promising results. We then extend the linear formulation to deep fully-connected networks and explore the performance gains using deeper architectures. Our analysis is always driven by the applicability of the proposed framework on existing compressive video architectures. Extensive simulations on several video sequences document the superiority of our approach both quantitatively and qualitatively. Finally, our analysis offers insights into understanding how dataset sizes and number of layers affect reconstruction performance while raising a few points for future investigation.", "title": "" }, { "docid": "e9f8bf1d0a1ffaf97da66578779a5c4e", "text": "preadsheets have proven highly successful for interacting with numerical data, such as applying algebraic operations, defining data propagation relationships, manipulating rows or columns, and exploring \" what-if \" scenarios. Spread-sheet techniques have recently been extended from numeric domains to other domains. 1,2 Here we present a spreadsheet approach to displaying and exploring information visu-alizations, with large, abstract, multidimensional data sets that are visually represented in multiple ways. 
We illustrate how spread-sheet techniques provide a struc-tured, intuitive, and powerful interface for investigating information visualizations. An earlier version of this article appeared in the proceedings of the 1997 Information Visualization Symposium. 3 Here we refocus the discussion to illustrate principles that make the spreadsheet approach powerful. These principles show how we can perform many user tasks easily in the visu-alization spreadsheet that prove much more difficult using other approaches. The visualization spreadsheet's benefit comes from enabling users to build multiple visual representations of several data sets, perform operations on these visu-alizations together or separately, and compare and contrast them visually. These operations are becoming ever more important as we realize certain interaction capabilities are critical, such as exploring different views of the data interactively, applying operations like rotation or data filtering to a group of views, and comparing two or more related data sets. These operations fit naturally into a spreadsheet environment. These benefits derive from the way spreadsheets span a range of user interactions. On the one hand, spreadsheets directly benefit end users, because the direct manipulation interface makes it easy to view, navigate, and interact with the data. On the other hand, spreadsheets provide a flexible and easy-to-learn environment for user programming. The success of spreadsheet–based structured interaction eliminates many of the stumbling blocks in traditional programming environments. Spreadsheet developers create templates that enable end users to reliably repeat often-needed computations without the effort of redevelopment or coding. Users do not have to worry about the data dependencies between data sets or memory management. These programming idiosyncrasies are taken care of automatically. By providing a natural environment to explore and apply operations on data, visualization spreadsheets easily enable the exploration of data sets. What is a visualization spreadsheet? Based on our experiences and drawing on others' past work, 1-3 we define the spreadsheet paradigm's characteristics as follows: s The tabular layout lets users view collections of visu-alizations simultaneously. Cells can handle large data sets instead of a few numbers. s …", "title": "" }, { "docid": "48af87459dedc417c1ad090fc72ee3d1", "text": "Four studies examined English-speaking children's productivity with word order and verb morphology. Two- and 3-year-olds were taught novel transitive verbs with experimentally controlled argument structures. The younger children neither used nor comprehended word order with these verbs; older children comprehended and used word order correctly to mark agents and patients of the novel verbs. Children as young as 2 years 1 month added -ing but not -ed to verb stems; older children were productive with both inflections. These studies demonstrate that the present progressive inflection is used productively before the regular past tense marker and suggest that productivity with word order may be independent of developments in verb morphology. The findings are discussed in terms of M. Tomasello's (1992a) Verb Island hypothesis and M. 
Rispoli's (1991) notion of the mosaic acquisition of grammatical relations.", "title": "" }, { "docid": "c2f620287606a2e233e2d3654c64c016", "text": "Urban terrain is complex and they present a very challenging and difficult environment for simulating virtual forces as well as for rendering. The objective of this work is to research on Binary Space Partition technique (BSP) for modeling urban terrain environments. BSP is a method for recursively subdividing a space into convex sets by hyper-planes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree. Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.", "title": "" }, { "docid": "0873dd0181470d722f0efcc8f843eaa6", "text": "Compared to traditional service, the characteristics of the customer behavior in electronic service are personalized demand, convenient consumed circumstance and perceptual consumer behavior. Therefore, customer behavior is an important factor to facilitate online electronic service. The purpose of this study is to explore the key success factors affecting customer purchase intention of electronic service through the behavioral perspectives of customers. Based on the theory of technology acceptance model (TAM) and self service technology (SST), the study proposes a theoretical model for the empirical examination of the customer intention for purchasing electronic services. A comprehensive survey of online customers having e-shopping experiences is undertaken. Then this model is tested by means of the statistical analysis method of structure equation model (SEM). The empirical results indicated that perceived usefulness and perceived assurance have a significant impact on purchase in e-service. Discussion and implication are presented in the end.", "title": "" }, { "docid": "83580c373e9f91b021d90f520011a5da", "text": "Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths/plans, no crossing/meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and/or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.", "title": "" } ]
scidocsrr
a4a87cd46717129b8d9ea63046db2f4e
Survey on Various Gesture Recognition Techniques for Interfacing Machines Based on Ambient Intelligence
[ { "docid": "9d0b7f84d0d326694121a8ba7a3094b4", "text": "Passive sensing of human hand and limb motion is important for a wide range of applications from human-computer interaction to athletic performance measurement. High degree of freedom articulated mechanisms like the human hand are di cult to track because of their large state space and complex image appearance. This article describes a model-based hand tracking system, called DigitEyes, that can recover the state of a 27 DOF hand model from ordinary gray scale images at speeds of up to 10 Hz.", "title": "" } ]
[ { "docid": "d7305a95bb305a00d92ac94b67687f5c", "text": "In the past decade, we have witnessed explosive growth in the number of low-power embedded and Internet-connected devices, reinforcing the new paradigm, Internet of Things (IoT). The low power wide area network (LPWAN), due to its long-range, low-power and low-cost communication capability, is actively considered by academia and industry as the future wireless communication standard for IoT. However, despite the increasing popularity of `mobile IoT', little is known about the suitability of LPWAN for those mobile IoT applications in which nodes have varying degrees of mobility. To fill this knowledge gap, in this paper, we conduct an experimental study to evaluate, analyze, and characterize LPWAN in both indoor and outdoor mobile environments. Our experimental results indicate that the performance of LPWAN is surprisingly susceptible to mobility, even to minor human mobility, and the effect of mobility significantly escalates as the distance to the gateway increases. These results call for development of new mobility-aware LPWAN protocols to support mobile IoT.", "title": "" }, { "docid": "76dd20f0464ff42badc5fd4381eed256", "text": "C therapy (CBT) approaches are rooted in the fundamental principle that an individual’s cognitions play a significant and primary role in the development and maintenance of emotional and behavioral responses to life situations. In CBT models, cognitive processes, in the form of meanings, judgments, appraisals, and assumptions associated with specific life events, are the primary determinants of one’s feelings and actions in response to life events and thus either facilitate or hinder the process of adaptation. CBT includes a range of approaches that have been shown to be efficacious in treating posttraumatic stress disorder (PTSD). In this chapter, we present an overview of leading cognitive-behavioral approaches used in the treatment of PTSD. The treatment approaches discussed here include cognitive therapy/reframing, exposure therapies (prolonged exposure [PE] and virtual reality exposure [VRE]), stress inoculation training (SIT), eye movement desensitization and reprocessing (EMDR), and Briere’s selftrauma model (1992, 1996, 2002). In our discussion of each of these approaches, we include a description of the key assumptions that frame the particular approach and the main strategies associated with the treatment. In the final section of this chapter, we review the growing body of research that has evaluated the effectiveness of cognitive-behavioral treatments for PTSD.", "title": "" }, { "docid": "8decac4ff789460595664a38e7527ed6", "text": "Unit selection synthesis has shown itself to be capable of producing high quality natural sounding synthetic speech when constructed from large databases of well-recorded, well-labeled speech. However, the cost in time and expertise of building such voices is still too expensive and specialized to be able to build individual voices for everyone. The quality in unit selection synthesis is directly related to the quality and size of the database used. As we require our speech synthesizers to have more variation, style and emotion, for unit selection synthesis, much larger databases will be required. As an alternative, more recently we have started looking for parametric models for speech synthesis, that are still trained from databases of natural speech but are more robust to errors and allow for better modeling of variation. 
This paper presents the CLUSTERGEN synthesizer which is implemented within the Festival/FestVox voice building environment. As well as the basic technique, three methods of modeling dynamics in the signal are presented and compared: a simple point model, a basic trajectory model and a trajectory model with overlap and add.", "title": "" }, { "docid": "9c25084d690dcd1a654289f9817105bb", "text": "The authors describe a behavioral theory of the dynamics of insider-threat risks. Drawing on data related to information technology security violations and on a case study created to explain the dynamics observed in that data, the authors constructed a system dynamics model of a theory of the development of insider-threat risks and conducted numerical simulations to explore the parameter and response spaces of the model. By examining several scenarios in which attention to events, increased judging capabilities, better information, and training activities are simulated, the authors theorize about why information technology security effectiveness changes over time. The simulation results argue against the common presumption that increased security comes at the cost of reduced production.", "title": "" }, { "docid": "cd5210231c5fa099be6b858a3069414d", "text": "Fat grafting to the aging face has become an integral component of esthetic surgery. However, the amount of fat to inject to each area of the face is not standardized and has been based mainly on the surgeon’s experience. The purpose of this study was to perform a systematic review of injected fat volume to different facial zones. A systematic review of the literature was performed through a MEDLINE search using keywords “facial,” “fat grafting,” “lipofilling,” “Coleman technique,” “autologous fat transfer,” and “structural fat grafting.” Articles were then sorted by facial subunit and analyzed for: author(s), year of publication, study design, sample size, donor site, fat preparation technique, average and range of volume injected, time to follow-up, percentage of volume retention, and complications. Descriptive statistics were performed. Nineteen articles involving a total of 510 patients were included. Rhytidectomy was the most common procedure performed concurrently with fat injection. The mean volume of fat injected to the forehead is 6.5 mL (range 4.0–10.0 mL); to the glabellar region 1.4 mL (range 1.0–4.0 mL); to the temple 5.9 mL per side (range 2.0–10.0 mL); to the eyebrow 5.5 mL per side; to the upper eyelid 1.7 mL per side (range 1.5–2.5 mL); to the tear trough 0.65 mL per side (range 0.3–1.0 mL); to the infraorbital area (infraorbital rim to lower lid/cheek junction) 1.4 mL per side (range 0.9–3.0 mL); to the midface 1.4 mL per side (range 1.0–4.0 mL); to the nasolabial fold 2.8 mL per side (range 1.0–7.5 mL); to the mandibular area 11.5 mL per side (range 4.0–27.0 mL); and to the chin 6.7 mL (range 1.0–20.0 mL). Data on exactly how much fat to inject to each area of the face in facial fat grafting are currently limited and vary widely based on different methods and anatomical terms used. This review offers the ranges and the averages for the injected volume in each zone. This journal requires that authors assign a level of evidence to each article. 
For a full description of these Evidence-Based Medicine ratings please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.", "title": "" }, { "docid": "4d4bf7b06c88fba54b794921ee67109f", "text": "This article provides surgical pathologists an overview of health information systems (HISs): what they are, what they do, and how such systems relate to the practice of surgical pathology. Much of this article is dedicated to the electronic medical record. Information, in how it is captured, transmitted, and conveyed, drives the effectiveness of such electronic medical record functionalities. So critical is information from pathology in integrated clinical care that surgical pathologists are becoming gatekeepers of not only tissue but also information. Better understanding of HISs can empower surgical pathologists to become stakeholders who have an impact on the future direction of quality integrated clinical care.", "title": "" }, { "docid": "d034e1b08f704c7245a50bb383206001", "text": "Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: There are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceeds human design ability, making it a compelling domain for evolutionary optimization. Using the existing state of the art soft ordering architecture as the starting point, methods for evolving the modules of this architecture and for evolving the overall topology or routing between modules are evaluated in this paper. A synergetic approach of evolving custom routings with evolved, shared modules for each task is found to be very powerful, significantly improving the state of the art in the Omniglot multitask, multialphabet character recognition domain. This result demonstrates how evolution can be instrumental in advancing deep neural network and complex system design in general.", "title": "" }, { "docid": "3058eddad0052470b7b74cb6a4142ffa", "text": "With ever-increasing advancements in technology, neuroscientists are able to collect data in greater volumes and with finer resolution. The bottleneck in understanding how the brain works is consequently shifting away from the amount and type of data we can collect and toward what we actually do with the data. There has been a growing interest in leveraging this vast volume of data across levels of analysis, measurement techniques, and experimental paradigms to gain more insight into brain function. Such efforts are visible at an international scale, with the emergence of big data neuroscience initiatives, such as the BRAIN initiative (Bargmann et al., 2014), the Human Brain Project, the Human Connectome Project, and the National Institute of Mental Health's Research Domain Criteria initiative. With these large-scale projects, much thought has been given to data-sharing across groups (Poldrack and Gorgolewski, 2014; Sejnowski et al., 2014); however, even with such data-sharing initiatives, funding mechanisms, and infrastructure, there still exists the challenge of how to cohesively integrate all the data. 
At multiple stages and levels of neuroscience investigation, machine learning holds great promise as an addition to the arsenal of analysis tools for discovering how the brain works.", "title": "" }, { "docid": "db7bc8bbfd7dd778b2900973f2cfc18d", "text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.", "title": "" }, { "docid": "4dcd3f6b631458707153a4369ccfd269", "text": "Smart grids are electric networks that employ advanced monitoring, control, and communication technologies to deliver reliable and secure energy supply, enhance operation efficiency for generators and distributors, and provide flexible choices for prosumers. Smart grids are a combination of complex physical network systems and cyber systems that face many technological challenges. In this paper, we will first present an overview of these challenges in the context of cyber-physical systems. We will then outline potential contributions that cyber-physical systems can make to smart grids, as well as the challenges that smart grids present to cyber-physical systems. Finally, implications of current technological advances to smart grids are outlined.", "title": "" }, { "docid": "1a1c9b8fa2b5fc3180bc1b504def5ea1", "text": "Wireless sensor networks can be deployed in attended or unattended environments such as environmental monitoring, agriculture, military and health care, where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and lightweight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a lightweight user authentication and key agreement protocol for accessing the services of the WSN environment and claimed that the protocol is more efficient in terms of security and complexity than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. protocol is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have first designed a novel architecture for the WSN environment, based upon which a new scheme for user authentication and key agreement has been presented. The security validation of the proposed protocol has been done using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement securely between the entities involved. Moreover, the proposed scheme has been simulated using the widely used AVISPA security tool, whose simulation results show that the protocol is SAFE under the OFMC and CL-AtSe models. Besides, an informal analysis of several security issues confirms that the proposed protocol is well protected against relevant security attacks, including the above mentioned security pitfalls. The proposed protocol not only resists the above mentioned security weaknesses, but also achieves the complete set of security requirements, including energy efficiency, user anonymity, mutual authentication and a user-friendly password change phase. The performance comparison section shows that the protocol is relatively efficient in terms of complexity. The security and performance analysis indicates that the proposed protocol is efficient enough to be implemented in real-life applications. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6a798f74dab69594c790a088fdec6491", "text": "ECG records for four patient classes from Internet databases are clustered and classified using the Weka system. Patient classes include normal, atrial arrhythmia, supraventricular arrhythmia and CHF. Chaos features are extracted automatically by using the ECG Chaos Extractor platform and recorded in Arff files. The list of features includes: correlation dimension, central tendency measure, spatial filling index and approximate entropy. Both ECG signal files and ECG annotation files are analyzed. The results show that chaos features can successfully cluster and classify the ECG annotation records by using standard and efficient algorithms such as EM and C4.5.", "title": "" }, { "docid": "8474b5b3ed5838e1d038e73579168f40", "text": "For the first time to the best of our knowledge, this paper provides an overview of millimeter-wave (mmWave) 5G antennas for cellular handsets. Practical design considerations and solutions related to the integration of mmWave phased-array antennas with beam switching capabilities are investigated in detail. To experimentally examine the proposed methodologies, two types of mesh-grid phased-array antennas featuring reconfigurable horizontal and vertical polarizations are designed, fabricated, and measured at the 60 GHz spectrum. Afterward the antennas are integrated with the rest of the 60 GHz RF and digital architecture to create integrated mmWave antenna modules and implemented within fully operating cellular handsets under plausible user scenarios. The effectiveness, current limitations, and required future research areas regarding the presented mmWave 5G antenna design technologies are studied using mmWave 5G system benchmarks.", "title": "" }, { "docid": "90bf404069bd3dfff1e6b108dafffe4c", "text": "To illustrate the differing thoughts and emotions involved in guiding habitual and nonhabitual behavior, 2 diary studies were conducted in which participants provided hourly reports of their ongoing experiences. When participants were engaged in habitual behavior, defined as behavior that had been performed almost daily in stable contexts, they were likely to think about issues unrelated to their behavior, presumably because they did not have to consciously guide their actions.
When engaged in nonhabitual behavior, or actions performed less often or in shifting contexts, participants' thoughts tended to correspond to their behavior, suggesting that thought was necessary to guide action. Furthermore, the self-regulatory benefits of habits were apparent in the lesser feelings of stress associated with habitual than nonhabitual behavior.", "title": "" }, { "docid": "edb99b9884679b54a4db70cfa367ffa5", "text": "Smart cities are nowadays expanding and flourishing worldwide with the Internet of Things (IoT), i.e. smart things like sensors and actuators, and mobile device applications and installations which change the citizens' and authorities' everyday life. Smart cities produce huge daily streams of sensor data while citizens interact with Web and/or mobile devices utilizing social networks. In such a smart city context, new approaches to integrate big data streams from both sensors and social networks are needed to exploit big data production and circulation towards offering innovative solutions and applications. The SmartSantander infrastructure (EU FP7 project) has offered the ground for the SEN2SOC experiment which has integrated sensor and social data streams. This presentation outlines its research and industrial perspective and potential impact.", "title": "" }, { "docid": "17ab4797666afed3a37a8761fcbb0d1e", "text": "In this paper, we propose a CPW fed triple band notch UWB antenna array with EBG structure. The major consideration in the antenna array design is the mutual coupling effect that exists within the elements. The use of Electromagnetic Band Gap structures in the antenna arrays can limit the coupling by suppressing the surface waves. The triple band notch antenna consists of three slots which act as notch resonators for a specific band of frequencies: the C shape slot at the main radiator (WiMax-3.5GHz), a pair of CSRR structures at the ground plane (WLAN-5.8GHz) and an inverted U shaped slot in the center of the patch (Satellite Service bands-8.2GHz). The main objective is to reduce mutual coupling, which in turn improves the peak realized gain and directivity.", "title": "" }, { "docid": "7105302557aa312e3dedbc7d7cc6e245", "text": "a Canisius College, Richard J. Wehle School of Business, Department of Management and Marketing, 2001 Main Street, Buffalo, NY 14208-1098, United States b Clemson University, College of Business and Behavioral Science, Department of Marketing, 245 Sirrine Hall, Clemson, SC 29634-1325, United States c University of Alabama at Birmingham, School of Business, Department of Marketing, Industrial Distribution and Economics, 1150 10th Avenue South, Birmingham, AL 35294, United States d Vlerick School of Management Reep 1, BE-9000 Ghent Belgium", "title": "" }, { "docid": "57e5d801778711f2ab9a152f08ae53e8", "text": "A modular multilevel converter (MMC) is one of the next-generation multilevel PWM converters intended for high- or medium-voltage power conversion without transformers. The MMC consists of a cascade connection of multiple bidirectional PWM chopper-cells and floating dc capacitors per leg, thus requiring voltage-balancing control of its chopper-cells. However, no paper has explicitly discussed voltage-balancing control with theoretical and experimental verifications. This paper deals with two types of modular multilevel PWM converters with focus on their circuit configurations and voltage-balancing control. The combination of averaging and balancing controls enables the MMCs to achieve voltage balancing without any external circuit. The viability of the MMCs as well as the effectiveness of the PWM control method is confirmed by simulation and experiment.", "title": "" }, { "docid": "f05b001f03e00bf2d0807eb62d9e2369", "text": "Since the hydraulic actuating suspension system has nonlinear and time-varying behavior, it is difficult to establish an accurate model for designing a model-based controller. Here, an adaptive fuzzy sliding mode controller is proposed to suppress the sprung mass position oscillation due to road surface variation. This intelligent control strategy combines an adaptive rule with fuzzy and sliding mode control algorithms. It has online learning ability to deal with the system's time-varying and nonlinear uncertainty behaviors, and adjusts the control rule parameters. Only eleven fuzzy rules are required for this active suspension system and these fuzzy control rules can be established and modified continuously by online learning. The experimental results show that this intelligent control algorithm effectively suppresses the oscillation amplitude of the sprung mass with respect to various road surface disturbances.", "title": "" }, { "docid": "abe375d47dc0344467d41f6a0c13f885", "text": "The brain and the gastrointestinal (GI) tract are intimately connected to form a bidirectional neurohumoral communication system. The communication between gut and brain, known as the gut-brain axis, is so well established that the functional status of the gut is always related to the condition of the brain. Research on the gut-brain axis has traditionally focused on the psychological status affecting the function of the GI tract. However, recent evidence has shown that gut microbiota communicates with the brain via the gut-brain axis to modulate brain development and behavioral phenotypes. These recent findings on the new role of gut microbiota in the gut-brain axis imply that gut microbiota could be associated with brain functions as well as neurological diseases via the gut-brain axis. To elucidate the role of gut microbiota in the gut-brain axis, precise identification of the composition of microbes constituting gut microbiota is an essential step. However, identification of the microbes constituting gut microbiota remains the main technological challenge due to the massive number of intestinal microbes and the difficulty of culturing gut microbes. Current methods for identification of the microbes constituting gut microbiota depend on omics analysis methods using advanced high-tech equipment. Here, we review the association of gut microbiota with the gut-brain axis, including the pros and cons of the current high-throughput methods for identification of the microbes constituting gut microbiota, to elucidate the role of gut microbiota in the gut-brain axis.", "title": "" } ]
scidocsrr
dfd633d8d54f866d069cc6db87a652c7
Design of lane keeping assist system for autonomous vehicles
[ { "docid": "71c81eb75f55ad6efaf8977b93e6dbef", "text": "Autonomous vehicle navigation is challenging since various types of road scenarios in real urban environments have to be considered, particularly when only perception sensors are used, without position information. This paper presents a novel real-time optimal-drivable-region and lane detection system for autonomous driving based on the fusion of light detection and ranging (LIDAR) and vision data. Our system uses a multisensory scheme to cover the most drivable areas in front of a vehicle. We propose a feature-level fusion method for the LIDAR and vision data and an optimal selection strategy for detecting the best drivable region. Then, a conditional lane detection algorithm is selectively executed depending on the automatic classification of the optimal drivable region. Our system successfully handles both structured and unstructured roads. The results of several experiments are provided to demonstrate the reliability, effectiveness, and robustness of the system.", "title": "" } ]
[ { "docid": "396f6b6c09e88ca8e9e47022f1ae195b", "text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.", "title": "" }, { "docid": "d49e6b7c6da44fae798e94dcb3a90c88", "text": "Given a photo collection of “unconstrained” face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, following by using a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. The evaluation of reconstruction performance is through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.", "title": "" }, { "docid": "65289003b014d86eed03baad6aa1ed83", "text": "Camera calibration is one of the long existing research issues in computer vision domain. Typical calibration methods take two steps for the procedure: control points localization and camera parameters computation. In practical situation, control points localization is a time-consuming task because the localization puts severe assumption that the calibration object should be visible in all images. To satisfy the assumption, users may avoid moving the calibration object near the image boundary. As a result, we estimate poor quality parameters. In this paper, we aim to solve this partial occlusion problem of the calibration object. To solve the problem, we integrate a planar marker tracking algorithm that can track its target marker even with partial occlusion. Specifically, we localize control points by a RANdom DOts Markers (RANDOM) tracking algorithm that uses markers with randomly distributed circle dots. Once the control points are localized, they are used to estimate the camera parameters. 
The proposed method is validated with both synthetic and real world experiments. The experimental results show that the proposed method realizes camera calibration from image on which part of the calibration object is visible.", "title": "" }, { "docid": "c70e11160c90bd67caa2294c499be711", "text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.", "title": "" }, { "docid": "e0b85ff6cd78f1640f25215ede3a39e6", "text": "Grammatical error diagnosis is an important task in natural language processing. This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system in the NLP-TEA-3 shared task for CGED. The CGED system can diagnose four types of grammatical errors which are redundant words (R), missing words (M), bad word selection (S) and disordered words (W). We treat the CGED task as a sequence labeling task and describe three models, including a CRFbased model, an LSTM-based model and an ensemble model using stacking. We also show in details how we build and train the models. Evaluation includes three levels, which are detection level, identification level and position level. On the CGED-HSK dataset of NLP-TEA-3 shared task, our system presents the best F1-scores in all the three levels and also the best recall in the last two levels.", "title": "" }, { "docid": "d8472e56a4ffe5d6b0cb0c902186d00b", "text": "In C. S. Peirce, as well as in the work of many biosemioticians, the semiotic object is sometimes described as a physical “object” with material properties and sometimes described as an “ideal object” or mental representation. I argue that to the extent that we can avoid these types of characterizations we will have a more scientific definition of sign use and will be able to better integrate the various fields that interact with biosemiotics. In an effort to end Cartesian dualism in semiotics, which has been the main obstacle to a scientific biosemiotics, I present an argument that the “semiotic object” is always ultimately the objective of self-affirmation (of habits, physical or mental) and/or self-preservation. Therefore, I propose a new model for the sign triad: response-sign-objective. With this new model it is clear, as I will show, that self-mistaking (not self-negation as others have proposed) makes learning, creativity and purposeful action possible via signs. I define an “interpretation” as a response to something as if it were a sign, but whose semiotic objective does not, in fact, exist. If the response-as-interpretation turns out to be beneficial for the system after all, there is biopoiesis. 
When the response is not “interpretive,” but self-confirming in the usual way, there is biosemiosis. While the conditions conducive to fruitful misinterpretation (e.g., accidental similarity of non-signs to signs and/or contiguity of non-signs to self-sustaining processes) might be artificially enhanced, according to this theory, the outcomes would be, by nature, more or less uncontrollable and unpredictable. Nevertheless, biosemiotics could be instrumental in the manipulation and/or artificial creation of purposeful systems insofar as it can describe a formula for the conditions under which new objectives and novel purposeful behavior may emerge, however unpredictably.", "title": "" }, { "docid": "d4cf47c898268ffe01dc9aab75810d7c", "text": "In this paper, a new robust fault detection and isolation (FDI) methodology for an unmanned aerial vehicle (UAV) is proposed. The fault diagnosis scheme is constructed based on observer-based techniques according to fault models corresponding to each component (actuator, sensor, and structure). The proposed fault diagnosis method takes advantage of the structural perturbation of the UAV model due to the icing (the main structural fault in aircraft), sensor, and actuator faults to reduce the error of observers that are used in the FDI module in addition to distinguishing among faults in different components. Moreover, the accuracy of the FDI module is increased by considering the structural perturbation of the UAV linear model due to wind disturbances which is the major environmental disturbance affecting an aircraft. Our envisaged FDI strategy is capable of diagnosing recurrent faults through properly designed residuals with different responses to different types of faults. Simulation results are provided to illustrate and demonstrate the effectiveness of our proposed FDI approach due to faults in sensors, actuators, and structural components of unmanned aerial vehicles.", "title": "" }, { "docid": "ad6bb165620dafb7dcadaca91c9de6b0", "text": "This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate (HR)), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation on the relationship between playing the violent game (VG) and aggression. The participants--148 undergraduate students--were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament), and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression.", "title": "" }, { "docid": "5946378b291a1a0e1fb6df5cd57d716f", "text": "Robots are being deployed in an increasing variety of environments for longer periods of time. 
As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. This algorithm is instantiated in two forms: 1) PLASTIC–Model – which builds models of previous teammates’ behaviors and plans behaviors online using these models and 2) PLASTIC–Policy – which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. While PLASTIC assumes that there is some degree of similarity between the current and past teammates’ behaviors, no steps are taken in the experimental setup to make sure this assumption holds.", "title": "" }, { "docid": "0fc08886411f225a3e5e767be3b6fd39", "text": "To realize the promise of ubiquitous embedded deep network inference, it is essential to seek limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. Here, for the first time, we demonstrate ResNet-18, ResNet-34, ResNet-50, ResNet152, Inception-v3, densenet-161, and VGG-16bn networks on the ImageNet classification benchmark that, at 8-bit precision exceed the accuracy of the full-precision baseline networks after one epoch of finetuning, thereby leveraging the availability of pretrained models. We also demonstrate for the first time ResNet-18, ResNet-34, and ResNet-50 4-bit models that match the accuracy of the full-precision baseline networks. Surprisingly, the weights of the low-precision networks are very close (in cosine similarity) to the weights of the corresponding baseline networks, making training from scratch unnecessary.
The number of iterations required by stochastic gradient descent to achieve a given training error is related to the square of (a) the distance of the initial solution from the final plus (b) the maximum variance of the gradient estimates. By drawing inspiration from this observation, we (a) reduce solution distance by starting with pretrained fp32 precision baseline networks and fine-tuning, and (b) combat noise introduced by quantizing weights and activations during training, by using larger batches along with matched learning rate annealing. Together, these two techniques offer a promising heuristic to discover low-precision networks, if they exist, close to fp32 precision baseline networks.", "title": "" }, { "docid": "9c3aed8548b61b70ae35be98050fb4bf", "text": "In the present work, a widely tunable high-Q air filled evanescent cavity bandpass filter is created in an LTCC substrate. A low loss Rogers Duroidreg flexible substrate forms the top of the filter, acting as a membrane for a tunable parasitic capacitor that allows variable frequency loading. A commercially available piezoelectric actuator is mounted on the Duroidreg substrate for precise electrical tuning of the filter center frequency. The filter is tuned from 2.71 to 4.03 GHz, with insertion losses ranging from 1.3 to 2.4 dB across the range for a 2.5% bandwidth filter. Secondarily, an exceptionally narrow band filter is fabricated to show the potential for using the actuators to fine tune the response to compensate for fabrication tolerances. While most traditional machining techniques would not allow for such narrow band filtering, the high-Q and the sensitive tuning combine to allow for near channel selection for a front-end receiver. For further analysis, a widely tunable resonator is also created with a 100% tunable frequency range, from 2.3 to 4.6 GHz. The resonator analysis gives unloaded quality factors ranging from 360 to 700 with a maximum frequency loading of 89%. This technique shows a lot of promise for tunable RF filtering applications.", "title": "" }, { "docid": "0e8dbf7567f183c314b55890cad98050", "text": "Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. 
Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.", "title": "" }, { "docid": "61165fc9e404ef0fdf3c2525845cf032", "text": "The automated comparison of points of view between two politicians is a very challenging task, due not only to the lack of annotated resources, but also to the different dimensions participating to the definition of agreement and disagreement. In order to shed light on this complex task, we first carry out a pilot study to manually annotate the components involved in detecting agreement and disagreement. Then, based on these findings, we implement different features to capture them automatically via supervised classification. We do not focus on debates in dialogical form, but we rather consider sets of documents, in which politicians may express their position with respect to different topics in an implicit or explicit way, like during an electoral campaign. We create and make available three different datasets.", "title": "" }, { "docid": "d026b12bedce1782a17654f19c7dcdf7", "text": "The millions of movies produced in the human history are valuable resources for computer vision research. However, learning a vision model from movie data would meet with serious difficulties. A major obstacle is the computational cost – the length of a movie is often over one hour, which is substantially longer than the short video clips that previous study mostly focuses on. In this paper, we explore an alternative approach to learning vision models from movies. Specifically, we consider a framework comprised of a visual module and a temporal analysis module. Unlike conventional learning methods, the proposed approach learns these modules from different sets of data – the former from trailers while the latter from movies. This allows distinctive visual features to be learned within a reasonable budget while still preserving long-term temporal structures across an entire movie. We construct a large-scale dataset for this study and define a series of tasks on top. Experiments on this dataset showed that the proposed method can substantially reduce the training time while obtaining highly effective features and coherent temporal structures.", "title": "" }, { "docid": "2509b427f650c7fc54cdb5c38cdb2bba", "text": "Inbreeding depression on female fertility and calving ease in Spanish dairy cattle was studied by the traditional inbreeding coefficient (F) and an alternative measurement indicating the inbreeding rate (DeltaF) for each animal. Data included records from 49,497 and 62,134 cows for fertility and calving ease, respectively. Both inbreeding measurements were included separately in the routine genetic evaluation models for number of insemination to conception (sequential threshold animal model) and calving ease (sire-maternal grandsire threshold model). The F was included in the model as a categorical effect, whereas DeltaF was included as a linear covariate. Inbred cows showed impaired fertility and tended to have more difficult calvings than low or noninbred cows. Pregnancy rate decreased by 1.68% on average for cows with F from 6.25 to 12.5%. This amount of inbreeding, however, did not seem to increase dystocia incidence. Inbreeding depression was larger for F greater than 12.5%. Cows with F greater than 25% had lower pregnancy rate and higher dystocia rate (-6.37 and 1.67%, respectively) than low or noninbred cows. 
The DeltaF had a significant effect on female fertility. A DeltaF = 0.01, corresponding to an inbreeding coefficient of 5.62% for the average equivalent generations in the data used (5.68), lowered pregnancy rate by 1.5%. However, the posterior estimate for the effect of DeltaF on calving ease was not significantly different from zero. Although similar patterns were found with both F and DeltaF, the latter detected a lowered pregnancy rate at an equivalent F, probably because it may consider the known depth of the pedigree. The inbreeding rate might be an alternative choice to measure inbreeding depression.", "title": "" }, { "docid": "15a079037d3dbb1b08591c0a3c8e0804", "text": "The paper offers an introduction and a road map to the burgeoning literature on two-sided markets. In many industries, platforms court two (or more) sides that use the platform to interact with each other. The platforms’ usage or variable charges impact the two sides’ willingness to trade, and thereby their net surpluses from potential interactions; the platforms’ membership or fixed charges in turn determine the end-users’ presence on the platform. The platforms’ fine design of the structure of variable and fixed charges is relevant only if the two sides do not negotiate away the corresponding usage and membership externalities. The paper first focuses on usage charges and provides conditions for the allocation of the total usage charge (e.g., the price of a call or of a payment card transaction) between the two sides not to be neutral; the failure of the Coase theorem is necessary but not sufficient for two-sidedness. Second, the paper builds a canonical model integrating usage and membership externalities. This model allows us to unify and compare the results obtained in the two hitherto disparate strands of the literature emphasizing either form of externality; and to place existing membership (or indirect) externalities models on a stronger footing by identifying environments in which these models can accommodate usage pricing. We also obtain general results on usage pricing of independent interest. Finally, the paper reviews some key economic insights on platform price and non-price strategies.", "title": "" }, { "docid": "1afdefb31d7b780bb78b59ca8b0d3d8a", "text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.", "title": "" }, { "docid": "949a5da7e1a8c0de43dbcb7dc589851c", "text": "Silicon photonics devices offer promising solution to meet the growing bandwidth demands of next-generation interconnects. This paper presents a 5 × 25 Gb/s carrier-depletion microring-based wavelength-division multiplexing (WDM) transmitter in 65 nm CMOS. 
An AC-coupled differential driver is proposed to realize 4 × VDD output swing as well as tunable DC-biasing. The proposed transmitter incorporates 2-tap asymmetric pre-emphasis to effectively cancel the optical nonlinearity of the ring modulator. An average-power-based dynamic wavelength stabilization loop is also demonstrated to compensate for thermal induced resonant wavelength drift. At 25 Gb/s operation, each transmitter channel consumes 113.5 mW and maintains 7 dB extinction ratio with a 4.4 V pp-diff output swing in the presence of thermal fluctuations.", "title": "" }, { "docid": "61f079cb59505d9bf1de914330dd852e", "text": "Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators.
The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.", "title": "" }, { "docid": "83692fd5290c7c2a43809e1e2014566d", "text": "Humans have a biological predisposition to form attachment to social partners, and they seem to form attachment even toward non-human and inanimate targets. Attachment styles influence not only interpersonal relationships, but interspecies and object attachment as well. We hypothesized that young people form attachment toward their mobile phone, and that people with higher attachment anxiety use the mobile phone more likely as a compensatory attachment target. We constructed a scale to observe people's attachment to their mobile and we assessed their interpersonal attachment style. In this exploratory study we found that young people readily develop attachment toward their phone: they seek the proximity of it and experience distress on separation. People's higher attachment anxiety predicted higher tendency to show attachment-like features regarding their mobile. Specifically, while the proximity of the phone proved to be equally important for people with different attachment styles, the constant contact with others through the phone was more important for anxiously attached people. We conclude that attachment to recently emerged artificial objects, like the mobile may be the result of cultural co-option of the attachment system. People with anxious attachment style may face challenges as the constant contact and validation the computer-mediated communication offers may deepen their dependence on others. © 2016 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
7a2cb3cc0324bf60ace084a4fef2a353
DSD: Regularizing Deep Neural Networks with Dense-Sparse-Dense Training Flow
[ { "docid": "065ca3deb8cb266f741feb67e404acb5", "text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet", "title": "" }, { "docid": "1b625a1136bec100f459a39b9b980575", "text": "This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k non-zero components. We propose a simple yet effective solution called truncated power method that can approximately solve the underlying nonconvex optimization problem. A strong sparse recovery result is proved for the truncated power method, and this theory is our key motivation for developing the new algorithm. The proposed method is tested on applications such as sparse principal component analysis and the densest k-subgraph problem. Extensive experiments on several synthetic and real-world data sets demonstrate the competitive empirical performance of our method.", "title": "" } ]
[ { "docid": "ad48ca7415808c4337c0b6eb593005d6", "text": "Neuroscience is experiencing a data revolution in which many hundreds or thousands of neurons are recorded simultaneously. Currently, there is little consensus on how such data should be analyzed. Here we introduce LFADS (Latent Factor Analysis via Dynamical Systems), a method to infer latent dynamics from simultaneously recorded, single-trial, high-dimensional neural spiking data. LFADS is a sequential model based on a variational auto-encoder. By making a dynamical systems hypothesis regarding the generation of the observed data, LFADS reduces observed spiking to a set of low-dimensional temporal factors, per-trial initial conditions, and inferred inputs. We compare LFADS to existing methods on synthetic data and show that it significantly out-performs them in inferring neural firing rates and latent dynamics.", "title": "" }, { "docid": "752996e9527e6f9830cf32ae32662c37", "text": "This paper presents our investigation of the effect of language resources on the performance of Amharic speech recognition. We have used language model training text of different sizes and seen the effect on word error rate (WER) reduction. Moreover, we have investigated the effect of handling language issues (germination, epenthetic vowel insertion and glottal stop consonant pronunciation) on the performance of speech recognition systems using data-driven phone-level transcriptions. The results of our experiments show that only slight reduction in WER can be obtained by increasing language model training text. However, proper transcription of gemination, the epenthetic vowel and the glottal stop consonant did not bring performance improvement for Amharic speech recognition. This can be attributed to the larger number of phone HMM acoustic models (62 compared to 37 phone set of the grapheme-based phone-level transcriptions) trained with a small (5 hrs) training speech.", "title": "" }, { "docid": "74850471591f7de174c8e57c413461bc", "text": "As computer graphics technique rises to the challenge of rendering lifelike performers, more lifelike performance is required. The techniques used to animate robots, arthropods, and suits of armor, have been extended to flexible surfaces of fur and flesh. Physical models of muscle and skin have been devised. But more complex databases and sophisticated physical modeling do not directly address the performance problem. The gestures and expressions of a human actor are not the solution to a dynamic system. This paper describes a means of acquiring the expressions of real faces, and applying them to computer-generated faces. Such an \"electronic mask\" offers a means for the traditional talents of actors to be flexibly incorporated in digital animations. Efforts in a similar spirit have resulted in servo-controlled \"animatrons,\" hightechnology puppets, and CG puppetry [1]. 
The manner in which the skills of actors and puppeteers as well as animators are accommodated in such systems may point the way for a more general incorporation of human nuance into our emerging computer media. The ensuing description is divided into two major subjects: the construction of a highly-resolved human head model with photographic texture mapping, and the concept demonstration of a system to animate this model by tracking and applying the expressions of a human performer.", "title": "" }, { "docid": "469c17aa0db2c70394f081a9a7c09be5", "text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.", "title": "" }, { "docid": "c95980f3f1921426c20757e6020f62c2", "text": "Recent successes of deep learning have been largely driven by the ability to train large models on vast amounts of data. We believe that High Performance Computing (HPC) will play an increasingly important role in helping deep learning achieve the next level of innovation fueled by neural network models that are orders of magnitude larger and trained on commensurately more training data. We are targeting the unique capabilities of both current and upcoming HPC systems to train massive neural networks and are developing the Livermore Big Artificial Neural Network (LBANN) toolkit to exploit both model and data parallelism optimized for large scale HPC resources. This paper presents our preliminary results in scaling the size of model that can be trained with the LBANN toolkit.", "title": "" }, { "docid": "c4ee2810b5a799a16e2ea66073719050", "text": "Recently, Neural Networks have been proven extremely effective in many natural language processing tasks such as sentiment analysis, question answering, or machine translation. Aiming to exploit such advantages in the Ontology Learning process, in this technical report we present a detailed description of a Recurrent Neural Network based system to be used to pursue such goal.", "title": "" }, { "docid": "1be4284ecc83855ecb2fee27dd8b12ac", "text": "This paper describes a new strategy for real-time cooperative localization of autonomous vehicles. The strategy aims to improve the vehicles' localization accuracy and reduce the impact of the computing time of multi-sensor data fusion algorithms and vehicle-to-vehicle communication on parallel architectures. The method aims to solve localization issues in a cluster of autonomous vehicles equipped with low-cost navigation systems in an unknown environment.
It stands on multiple forms of the Kalman filter derivatives to estimate the vehicles' nonlinear model vector state, named local fusion node. The vehicles exchange their local state estimate and Covariance Intersection algorithm for merging the local vehicles' state estimate in the second node (named global data fusion node). This strategy simultaneously exploits the proprioceptive and sensors -a Global Positioning System, and a vehicle-to-vehicle transmitter and receiver- and an exteroceptive sensor, range finder, to sense their surroundings for more accurate and reliable collaborative localization.", "title": "" }, { "docid": "d9c5ba7a4321e72b0506ec1e85c54e3c", "text": "The design and implementation of a compact, flexible and lightweight X-band transmitter (Tx) module based on high-power gallium nitride (GaN) transistor technology and a low-cost organic package made from liquid crystal polymer (LCP) is presented. In-package measurements of the power amplifier (PA) at 8 GHz show a P.A.E.max of >31%, P1dB of 20 dBm and gain of 11.42 dB. A 4×1 patch antenna array was also fabricated on the same platform. Though no thermal management was used, an effective isotropically radiated power (EIRP) in excess of 20 dBm at 10 GHz was measured for the transmitter module, consisting of only a single-stage PA and antenna array, thus demonstrating that even greater performance can be achieved in the future.", "title": "" }, { "docid": "21a68f76ed6d18431f446398674e4b4e", "text": "With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have been recently found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool DNNs in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying DNNs in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.", "title": "" }, { "docid": "e2f8ecd3b325a3f067e53e9beb087919", "text": "This paper presents a seven-dimensional ordinary differential equation modelling the transmission of Plasmodium falciparum malaria between humans and mosquitoes with non-linear forces of infection in form of saturated incidence rates. These incidence rates produce antibodies in response to the presence of parasite-causing malaria in both human and mosquito populations.The existence of region where the model is epidemiologically feasible is established. Stability analysis of the disease-free equilibrium is investigated via the threshold parameter (reproduction number R0) obtained using the next generation matrix technique. The model results show that the disease-free equilibrium is asymptotically stable at threshold parameter less than unity and unstable at threshold parameter greater than unity. The existence of the unique endemic equilibrium is also determined under certain conditions. 
Numerical simulations are carried out to confirm the analytic results and explore the possible behavior of the formulated model. AMS Subject Classification: 92B05, 93A30", "title": "" }, { "docid": "f0d8d6d1adaa765153f2ec93266889a3", "text": "We present a new approach to localize extensive facial landmarks with a coarse-to-fine convolutional network cascade. Deep convolutional neural networks (DCNN) have been successfully utilized in facial landmark localization for two-fold advantages: 1) geometric constraints among facial points are implicitly utilized, 2) huge amount of training data can be leveraged. However, in the task of extensive facial landmark localization, a large number of facial landmarks (more than 50 points) are required to be located in a unified system, which poses great difficulty in the structure design and training process of traditional convolutional networks. In this paper, we design a four-level convolutional network cascade, which tackles the problem in a coarse-to-fine manner. In our system, each network level is trained to locally refine a subset of facial landmarks generated by previous network levels. In addition, each level predicts explicit geometric constraints (the position and rotation angles of a specific facial component) to rectify the inputs of the current network level. The combination of coarse-to-fine cascade and geometric refinement enables our system to locate extensive facial landmarks (68 points) accurately in the 300-W facial landmark localization challenge.", "title": "" }, { "docid": "eb0a5d496dd9a427ab7d52416f70aab3", "text": "Progress in habit theory can be made by distinguishing habit from frequency of occurrence, and using independent measures for these constructs. This proposition was investigated in three studies using a longitudinal, cross-sectional and experimental design on eating, mental habits and word processing, respectively. In Study 1, snacking habit and past snacking frequency independently predicted later snacking behaviour, while controlling for the theory of planned behaviour variables. Habit fully mediated the effect of past on later behaviour. In Study 2, habitual negative self-thinking and past frequency of negative self-thoughts independently predicted self-esteem and the presence of depressive and anxiety symptoms. In Study 3, habit varied as a function of experimentally manipulated task complexity, while behavioural frequency was held constant. Taken together, while repetition is necessary for habits to develop, these studies demonstrate that habit should not be equated with frequency of occurrence, but rather should be considered as a mental construct involving features of automaticity, such as lack of awareness, difficulty to control and mental efficiency.", "title": "" }, { "docid": "cf519fe3098fab7a394a42f947ad48d1", "text": "In this paper we introduce a new technique for blind source separation of speech signals. We focus on the temporal structure of the signals in contrast to most other major approaches to this problem. The idea is to apply the decorrelation method proposed by Molgedey and Schuster in the time-frequency domain. We show some results of experiments with both artificially controlled data and speech data recorded in the real environment.", "title": "" }, { "docid": "87a04076b2137b67d6f04172e7def48b", "text": "An architecture for low-noise spatial cancellation of co-channel interferer (CCI) at RF in a digital beamforming (DBF)/MIMO receiver (RX) array is presented. 
The proposed RF cancellation can attenuate CCI prior to the ADC in a DBF/MIMO RX array while preserving a field-of-view (FoV) in each array element, enabling subsequent DSP for multi-beamforming. A novel hybrid-coupler/polyphase-filter based input coupling scheme that simplifies spatial selection of CCI and enables low-noise cancellation is described. A 4-element 10GHz prototype is implemented in 65nm CMOS that achieves >20dB spatial cancellation of CCI while adding <;1.5dB output noise.", "title": "" }, { "docid": "6e8f02cfdab45ed1277e8649bd73c6cf", "text": "Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.", "title": "" }, { "docid": "562cf2d0bc59f0fde4d7377f1d5058a2", "text": "The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.", "title": "" }, { "docid": "14562d79675d2808da07bbbe3201dd5b", "text": "Many happened-before-based detectors for debugging multithreaded programs implement vector clocks to incrementally track the casual relations among synchronization events produced by concurrent threads and generate trace logs. They update the vector clocks via vector-based comparison and content assignment in every case. We observe that many such tracking comparison and assignment operations are removable in part or in whole, which if identified and used properly, have the potential to reduce the log traces thus produced. 
This paper presents our analysis to identify such removable tracking operations and shows how they could be used to reduce log traces. We implement our analysis result as a technique entitled LOFT. We evaluate LOFT on the well-studied PARSEC benchmarking suite and five large-scale real-world applications. The main experimental result shows that on average, LOFT identifies 63.9 percent of all synchronization operations incurred by the existing approach as removable and does not compromise the efficiency of the latter.", "title": "" }, { "docid": "cb29a1fc5a8b70b755e934c9b3512a36", "text": "The problem of pedestrian detection in image and video frames has been extensively investigated in the past decade. However, the low performance in complex scenes shows that it remains an open problem. In this paper, we propose to cascade simple Aggregated Channel Features (ACF) and rich Deep Convolutional Neural Network (DCNN) features for efficient and effective pedestrian detection in complex scenes. The ACF based detector is used to generate candidate pedestrian windows and the rich DCNN features are used for fine classification. Experiments show that the proposed approach achieved leading performance in the INRIA dataset and comparable performance to the state-of-the-art in the Caltech and ETH datasets.", "title": "" }, { "docid": "6459493643eb7ff011fa0d8873382911", "text": "This paper is about the effectiveness of qualitative easing; a government policy that is designed to mitigate risk through central bank purchases of privately held risky assets and their replacement by government debt, with a return that is guaranteed by the taxpayer. Policies of this kind have recently been carried out by national central banks, backed by implicit guarantees from national treasuries. I construct a general equilibrium model where agents have rational expectations and there is a complete set of financial securities, but where agents are unable to participate in financial markets that open before they are born. I show that a change in the asset composition of the central bank’s balance sheet will change equilibrium asset prices. Further, I prove that a policy in which the central bank stabilizes fluctuations in the stock market is Pareto improving and is costless to implement.", "title": "" }, { "docid": "cbce30ed2bbdcd25fb708394dff1b7b6", "text": "Current syntactic accounts of English resultatives are based on the assumption that result XPs are predicated of underlying direct objects. This assumption has helped to explain the presence of reflexive pronouns with some intransitive verbs but not others and the apparent lack of result XPs predicated of subjects of transitive verbs. We present problems for and counterexamples to some of the basic assumptions of the syntactic approach, which undermine its explanatory power. We develop an alternative account that appeals to principles governing the well-formedness of event structure and the event structure-to-syntax mapping. This account covers the data on intransitive verbs and predicts the distribution of subject-predicated result XPs with transitive verbs.*", "title": "" } ]
scidocsrr
64ff18a646ed38c407f7af2a484fe4f2
Modeling Local Coherence: An Entity-Based Approach
[ { "docid": "02936143b0da0a789fc1c645e30c7e50", "text": "We describe a robust accurate domain-independent approach t statistical parsing incorporated into the new release of the ANLT toolkit, and publicly available as a research tool. The system has bee n used to parse many well known corpora in order to produce dat a for lexical acquisition efforts; it has also been used as a component in a open-domain question answering project. The performance of the system is competitive with that of statistical parsers using highl y lexicalised parse selection models. However, we plan to ex end the system to improve parse coverage, depth and accuracy.", "title": "" } ]
[ { "docid": "d31a8eb1c3e13d2057ad0b242200eb59", "text": "BACKGROUND\nCharacterization of the insertion site anatomy in anterior cruciate ligament reconstruction has recently received increased attention in the literature, coinciding with a growing interest in anatomic reconstruction. The purpose of this study was to visualize and quantify the position of anatomic anteromedial and posterolateral bone tunnels in anterior cruciate ligament reconstruction with use of novel methods applied to three-dimensional computed tomographic reconstruction images.\n\n\nMETHODS\nCareful arthroscopic dissection and anatomic double-bundle anterior cruciate ligament tunnel drilling were performed with use of topographical landmarks in eight cadaver knees. Computed tomography scans were performed on each knee, and three-dimensional models were created and aligned into an anatomic coordinate system. Tibial tunnel aperture centers were measured in the anterior-to-posterior and medial-to-lateral directions on the tibial plateau. The femoral tunnel aperture centers were measured in anatomic posterior-to-anterior and proximal-to-distal directions and with the quadrant method (relative to the femoral notch).\n\n\nRESULTS\nThe centers of the tunnel apertures for the anteromedial and posterolateral tunnels were located at a mean (and standard deviation) of 25% +/- 2.8% and 46.4% +/- 3.7%, respectively, of the anterior-to-posterior tibial plateau depth and at a mean of 50.5% +/- 4.2% and 52.4% +/- 2.5% of the medial-to-lateral tibial plateau width. On the medial wall of the lateral femoral condyle in the anatomic posterior-to-anterior direction, the anteromedial and posterolateral tunnels were located at 23.1% +/- 6.1% and 15.3% +/- 4.8%, respectively. The proximal-to-distal locations were at 28.2% +/- 5.4% and 58.1 +/- 7.1%, respectively. With the quadrant method, anteromedial and posterolateral tunnels were measured at 21.7% +/- 2.5% and 35.1% +/- 3.5%, respectively, from the proximal condylar surface (parallel to the Blumensaat line), and at 33.2% +/- 5.6% and 55.3% +/- 5.3% from the notch roof (perpendicular to the Blumensaat line). Intraobserver and interobserver reliability was high, with small standard errors of measurement.\n\n\nCONCLUSIONS\nThis cadaver study provides reference data against which tunnel position in anterior cruciate ligament reconstruction can be compared in future clinical trials.", "title": "" }, { "docid": "8e2407e6fc3e3b3e5f0aeb64eb842712", "text": "Visual programming in 3D sounds much more appealing than programming in 2D, but what are its benefits? Here, University of Colorado Boulder educators discuss the differences between 2D and 3D regarding three concepts connecting computer graphics to computer science education: ownership, spatial thinking, and syntonicity.", "title": "" }, { "docid": "4160267cb2de92621edb5634a3bb985e", "text": "This paper reports the results of a study carried out to assess the benefits, impediments and major critical success factors in adopting business to consumer e-business solutions. A case study method of investigation was used, and the experiences of six online companies and two bricks and mortar companies were documented. The major impediments identified are: leadership issues, operational issues, technology, and ineffective solution design. 
The critical success factors in the adoption of e-business are identified as: combining e-business knowledge, value proposition and delivery measurement, customer satisfaction and retention, monitoring internal processes and competitor activity, and finally building trust. Findings suggest that above all, adoption of e-business should be appropriate, relevant, value adding, and operationally as well as strategically viable for an organization instead of being a result of apprehensive compliance. q 2004 Published by Elsevier Ltd.", "title": "" }, { "docid": "79cec2bfe95ae81b6dedf5c693f2acf0", "text": "Impedance of blood relatively affected by blood-glucose concentration. Blood electrical impedance value is varied with the content of blood glucose in a human body. This characteristic between glucose and electrical impedance has been proven by using four electrode method's measurement. The bioelectrical voltage output shows a difference between fasting and non-fasting blood glucose measured by using designed four tin lead alloy electrode. 10 test subjects ages between 20-25 years old are UniMAP student has been participated in this experiment and measurement of blood glucose using current clinical measurement and designed device is obtained. Preliminary study using the developed device, has shown that glucose value in the range of 4-5mol/Liter having the range of 0.500V to -1.800V during fasting, and 0.100V or less during normal glucose condition, 5 to 11 mol/liter. On the other hand, It also shows that prediction of blood glucose using this design device could achieve relevant for measurement accuracy compared to gold standard measurement, the hand prick invasive measurement. This early result has support that there is an ample scope in blood electrical study for the non-invasive blood glucose measurement.", "title": "" }, { "docid": "bc018ef7cbcf7fc032fe8556016d08b1", "text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CURet and UIUC. 
Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.", "title": "" }, { "docid": "97de6efcdba528f801cbfa087498ab3f", "text": "Abstract: Educational Data Mining refers to techniques, tools, and research designed for automatically extracting meaning from large repositories of data generated by or related to people' learning activities in educational settings.[1] It is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.[2]", "title": "" }, { "docid": "c03265e4a7d7cc14e6799c358a4af95a", "text": "Three studies considered the consequences of writing, talking, and thinking about significant events. In Studies 1 and 2, students wrote, talked into a tape recorder, or thought privately about their worst (N = 96) or happiest experience (N = 111) for 15 min each during 3 consecutive days. In Study 3 (N = 112), students wrote or thought about their happiest day; half systematically analyzed, and half repetitively replayed this day. Well-being and health measures were administered before each study's manipulation and 4 weeks after. As predicted, in Study 1, participants who processed a negative experience through writing or talking reported improved life satisfaction and enhanced mental and physical health relative to those who thought about it. The reverse effect for life satisfaction was observed in Study 2, which focused on positive experiences. Study 3 examined possible mechanisms underlying these effects. Students who wrote about their happiest moments--especially when analyzing them--experienced reduced well-being and physical health relative to those who replayed these moments. Results are discussed in light of current understanding of the effects of processing life events.", "title": "" }, { "docid": "c9fc426722df72b247093779ad6e2c0e", "text": "Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion, and maintain its stability with a torso motion. When the ground conditions and stability constraint are satisfied, it is desirable to select a walking pattern that requires small torque and velocity of the joint actuators. In this paper, we first formulate the constraints of the foot motion parameters. By varying the values of the constraint parameters, we can produce different types of foot motion to adapt to ground conditions. We then propose a method for formulating the problem of the smooth hip motion with the largest stability margin using only two parameters, and derive the hip trajectory by iterative computation. Finally, the correlation between the actuator specifications and the walking patterns is described through simulation studies, and the effectiveness of the proposed methods is confirmed by simulation examples and experimental results.", "title": "" }, { "docid": "e841b5790d69c58982cb2ff5725f96eb", "text": "Copyright and moral rights to this thesis/research project are retained by the author and/or other copyright owners. The work is supplied on the understanding that any use for commercial gain is strictly forbidden. 
A copy may be downloaded for personal, non-commercial, research or study without prior permission and without charge. Any use of the thesis/research project for private study or research must be properly acknowledged with reference to the work’s full bibliographic details.", "title": "" }, { "docid": "3c3c30050b32b46c28abef3ecff06376", "text": "The analysis of social, communication and information networks for identifying patterns, evolutionary characteristics and anomalies is a key problem for the military, for instance in the Intelligence community. Current techniques do not have the ability to discern unusual features or patterns that are not a priori known. We investigate the use of deep learning for network analysis. Over the last few years, deep learning has had unprecedented success in areas such as image classification, speech recognition, etc. However, research on the use of deep learning to network or graph analysis is limited. We present three preliminary techniques that we have developed as part of the ARL Network Science CTA program: (a) unsupervised classification using a very highly trained image recognizer, namely Caffe; (b) supervised classification using a variant of convolutional neural networks on node features such as degree and assortativity; and (c) a framework called node2vec for learning representations of nodes in a network using a mapping to natural language processing.", "title": "" }, { "docid": "d9224bda0061d4a266aa961f61ef957e", "text": "Exploratory search activities tend to span multiple sessions and involve finding, analyzing and evaluating information found through many queries. Typical search systems, on the other hand, are designed to support single query, precision-oriented search tasks. We describe a search interface and system design of a multi-session exploratory search system, discuss design challenges encountered, and chronicle the evolution of our design. Our design describes novel displays for visualizing retrieval history information, and introduces ambient displays and persuasive elements to interactive information retrieval.", "title": "" }, { "docid": "ad9fd6e57616a0abc5377dcf6e80d6ec", "text": "Recent research has provided evidence that software developers experience a wide range of emotions. We argue that among those emotions anger deserves special attention as it can serve as an onset for tools supporting collaborative softwaredevelopment. This, however, requires a fine-grained model of the anger emotion, able to distinguish between anger directed towards self, others, and objects. Detecting anger towards self could be useful to support developers experiencing difficulties, detection of anger towards others might be helpful for community management, detecting anger towards objects might be helpful to recommend and prioritize improvements. As a first step towards automatic identification of anger direction, we built a classifier for anger direction, based on a manually annotated gold standard of 723 sentences that were obtained by mining comments in Apache issue reports.", "title": "" }, { "docid": "0f8183f5781e26208da631978d0f610b", "text": "Historically, games have been played between human opponents. However, with the advent of the computer came the notion that one might play with or against a computational surrogate. Dating back to the 1950s with early efforts in computer chess, approaches to game artificial intelligence (AI) have been designed around adversarial, or zero-sum, games. 
The goal of intelligent game-playing agents in these cases is to maximize their payoff. Simply put, they are designed to win the game. Central to the vast majority of techniques in AI is the notion of optimality, implying that the best performing techniques seek to find the solution to a problem that will result in the highest (or lowest) possible evaluation of some mathematical function. In adversarial games, this function typically evaluates to symmetric values such as +1 when the game is won and -1 when the game is lost. That is, winning or losing the game is an outcome or an end. While there may be a long sequence of actions that actually determine who wins or loses the game, for all intents and purposes, it is a single, terminal event that is evaluated and “maximized.” In recent years, similar approaches have been applied to newer game genres: real-time strategy, first person shooters, role-playing games, and other games in which the player is immersed in a virtual world. Despite the relative complexities of these environments compared to chess, the fundamental goals of the AI agents remain the same: to win the game. There is another perspective on game AI often advocated by developers of modern games: AI is a tool for increasing engagement and enjoyability. With this perspective in mind, game developers often take steps to “dumb down” the AI game playing agents by limiting their computational resources (Liden, 2003) or making suboptimal moves (West, 2008) such as holding back an attack until the player is ready or “rubber banding” to force strategic drawbacks if the AI ever gets the upper hand. The gameplaying agent is adversarial but is intentionally designed in an ad hoc manner to be non-competitive to make the player feel powerful.", "title": "" }, { "docid": "56180fc767d249d7e62ab7832fd05a73", "text": "Transcranial Doppler (TCD) is a noninvasive ultrasound (US) study used to measure cerebral blood flow velocity (CBF-V) in the major intracranial arteries. It involves use of low-frequency (≤2 MHz) US waves to insonate the basal cerebral arteries through relatively thin bone windows. TCD allows dynamic monitoring of CBF-V and vessel pulsatility, with a high temporal resolution. It is relatively inexpensive, repeatable, and portable. However, the performance of TCD is highly operator dependent and can be difficult, with approximately 10-20% of patients having inadequate transtemporal acoustic windows. Current applications of TCD include vasospasm in sickle cell disease, subarachnoid haemorrhage (SAH), and intra- and extracranial arterial stenosis and occlusion. TCD is also used in brain stem death, head injury, raised intracranial pressure (ICP), intraoperative monitoring, cerebral microembolism, and autoregulatory testing.", "title": "" }, { "docid": "65863f11815d4bfa083f354328581f80", "text": "The categorical compositional approach to meaning has been successfully applied in natural language processing, outperforming other models in mainstream empirical language processing tasks. We show how this approach can be generalized to conceptual space models of cognition. In order to do this, first we introduce the category of convex relations as a new setting for categorical compositional semantics, emphasizing the convex structure important to conceptual space applications. We then show how to construct conceptual spaces for various types such as nouns, adjectives and verbs. 
Finally we show by means of examples how concepts can be systematically combined to establish the meanings of composite phrases from the meanings of their constituent parts. This provides the mathematical underpinnings of a new compositional approach to cognition.", "title": "" }, { "docid": "36efef11d536fa3b586af2eb5e0847fe", "text": "Coming with the emerging of depth sensors like Microsoft Kinect, human hand gesture recognition has received ever increasing research interests recently. A successful gesture recognition system has usually heavily relied on having a good feature representation of data, which is expected to be task-dependent as well as coping with the challenges and opportunities induced by depth sensor. In this paper, a feature learning approach based on sparse auto-encoder (SAE) and principle component analysis is proposed for recognizing human actions, i.e. finger-spelling or sign language, for RGB-D inputs. The proposed model of feature learning is consisted of two components: First, features are learned respectively from the RGB and depth channels, using sparse auto-encoder with convolutional neural networks. Second, the learned features from both channels is concatenated and fed into a multiple layer PCA to get the final feature. Experimental results on American sign language (ASL) dataset demonstrate that the proposed feature learning model is significantly effective, which improves the recognition rate from 75% to 99.05% and outperforms the state-of-the-art. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "849b5b02998839fbea7e1fc5e07d31fe", "text": "As the Internet became popular, the volume of digital multimedia data is exponentially increased in all aspects of our life. This drastic increment in multimedia data causes unwelcome deliveries of adult image contents to the Internet. Consequently, a large number of children are wide-open to these harmful contents. In this paper, we propose an efficient classification system that can categorize the images into multiple classes such as swimming suit, topless, nude, sexual act, and normal. The experiment shows that this system achieved more than 80% of the success rate. Thus, the proposed system can be used as a framework for web contents rating systems.", "title": "" }, { "docid": "f10ce9ef67abec42deeabbf98f7f7cd8", "text": "In this paper we first deal with the design and operational control of Automated Guided Vehicle (AGV) systems, starting from the literature on these topics. Three main issues emerge: track layout, the number of AGVs required and operational transportation control. An hierarchical queueing network approach to determine the number of AGVs is described. Also basic concepts are presented for the transportation control of both a job-shop and a flow-shop. Next we report on the results of a case study, in which track layout and transportation control are the main issues. Finally we suggest some topics for further research.", "title": "" }, { "docid": "c0a67a4d169590fa40dfa9d80768ef09", "text": "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form is scanned by an IBM 704 data-processing machine and analyzed in accordance with a standard program. 
Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \"auto-abstract.\"", "title": "" } ]
scidocsrr
dfbcc0ab7826e667b7dda8210cfa4161
Towards Adversarial Retinal Image Synthesis
[ { "docid": "d622cf283f27a32b2846a304c0359c5f", "text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.", "title": "" }, { "docid": "2679d4bdb1aff322a7ec85d9712abfc7", "text": "The multiscale second order local structure of an image (Hessian) is examined with the purpose of developing a vessel enhancement filter. A vesselness measure is obtained on the basis of all eigenvalues of the Hessian. This measure is tested on two dimensional DSA and three dimensional aortoiliac and cerebral MRA data. Its clinical utility is shown by the simultaneous noise and background suppression and vessel enhancement in maximum intensity projections and volumetric displays.", "title": "" } ]
[ { "docid": "155c535c78e75b016d13ffa892f54926", "text": "Modern servers have become heterogeneous, often combining multi-core CPUs with many-core GPGPUs. Such heterogeneous architectures have the potential to improve the performance of data-intensive stream processing applications, but they are not supported by current relational stream processing engines. For an engine to exploit a heterogeneous architecture, it must execute streaming SQL queries with sufficient data-parallelism to fully utilise all available heterogeneous processors, and decide how to use each in the most effective way. It must do this while respecting the semantics of streaming SQL queries, in particular with regard to window handling.\n We describe Saber, a hybrid high-performance relational stream processing engine for CPUs and GPGPUs. Saber executes window-based streaming SQL queries in a data-parallel fashion using all available CPU and GPGPU cores. Instead of statically assigning query operators to heterogeneous processors, Saber employs a new adaptive heterogeneous lookahead scheduling strategy, which increases the share of queries executing on the processor that yields the highest performance. To hide data movement costs, Saber pipelines the transfer of stream data between CPU and GPGPU memory. Our experimental comparison against state-of-the-art engines shows that Saber increases processing throughput while maintaining low latency for a wide range of streaming SQL queries with both small and large window sizes.", "title": "" }, { "docid": "89f1cec7c2999693805945c3c898c484", "text": "Studies investigating the relationship between job satisfaction and turnover intention are abundant. Yet, this relationship has not been fully addressed in the IT field particularly in the developing countries. Moving from this point, this study aims at further probe this area by evaluating the levels of job satisfaction and turnover intention among a sample of IT employees in the Palestinian IT firms. Then, it attempts to examine the sources of job satisfaction and the causes of turnover intention among those employees. The findings show job security, work conditions, pay and benefits, work nature, coworkers, career advancement, supervision and management were all significantly correlated with overall job satisfaction. Only job security, pay, and coworkers were able to significantly influence turnover intention. Implications of the findings and future research directions are discussed", "title": "" }, { "docid": "9bec22bcbf1ab3071d65dd8b41d3cf51", "text": "Omni-directional mobile platforms have the ability to move instantaneously in any direction from any configuration. As such, it is important to have a mathematical model of the platform, especially if the platform is to be used as an autonomous vehicle. Autonomous behaviour requires that the mobile robot choose the optimum vehicle motion in different situations for object/collision avoidance and task achievement. This paper develops and verifies a mathematical model of a mobile robot platform that implements mecanum wheels to achieve omni-directionality. The mathematical model will be used to achieve optimum autonomous control of the developed mobile robot as an office service robot. 
Omni-directional mobile platforms have improved performance in congested environments and narrow aisles, such as those found in factory workshops, offices, warehouses, hospitals, etc.", "title": "" }, { "docid": "ed8f1c0544a6a33d1fdcaf2fd9fc74c6", "text": "Stress and negative mood during pregnancy increase risk for poor childbirth outcomes and postnatal mood problems and may interfere with mother–infant attachment and child development. However, relatively little research has focused on the efficacy of psychosocial interventions to reduce stress and negative mood during pregnancy. In this study, we developed and pilot tested an eight-week mindfulness-based intervention directed toward reducing stress and improving mood in pregnancy and early postpartum. We then conducted a small randomized trial (n = 31) comparing women who received the intervention during the last half of their pregnancy to a wait-list control group. Measures of perceived stress, positive and negative affect, depressed and anxious mood, and affect regulation were collected prior to, immediately following, and three months after the intervention (postpartum). Mothers who received the intervention showed significantly reduced anxiety (effect size, 0.89; p < 0.05) and negative affect (effect size, 0.83; p < 0.05) during the third trimester in comparison to those who did not receive the intervention. The brief and nonpharmaceutical nature of this intervention makes it a promising candidate for use during pregnancy.", "title": "" }, { "docid": "e375901afdd6d99b422342dd486c5330", "text": "Face synthesis has been a fascinating yet challenging problem in computer vision and machine learning. Its main research effort is to design algorithms to generate photo-realistic face images via given semantic domain. It has been a crucial prepossessing step of main-stream face recognition approaches and an excellent test of AI ability to use complicated probability distributions. In this paper, we provide a comprehensive review of typical face synthesis works that involve traditional methods as well as advanced deep learning approaches. Particularly, Generative Adversarial Net (GAN) is highlighted to generate photo-realistic and identity preserving results. Furthermore, the public available databases and evaluation metrics are introduced in details. We end the review with discussing unsolved difficulties and promising directions for future research.", "title": "" }, { "docid": "70bce8834a23bc84bea7804c58bcdefe", "text": "This study presents novel coplanar waveguide (CPW) power splitters comprising a CPW T-junction with outputs attached to phase-adjusting circuits, i.e., the composite right/left-handed (CRLH) CPW and the conventional CPW, to achieve a constant phase difference with arbitrary value over a wide bandwidth. To demonstrate the proposed technique, a 180/spl deg/ CRLH CPW power splitter with a phase error of less than 10/spl deg/ and a magnitude difference of below 1.5 dB within 2.4 to 5.22 GHz is experimentally demonstrated. Compared with the conventional 180/spl deg/ delay-line power splitter, the proposed structure possesses not only superior phase and magnitude performances but also a 37% size reduction. The equivalent circuit of the CRLH CPW, which represents the left-handed (LH), right-handed (RH), and lossy characteristics, is constructed and the results obtained are in good agreement with the full-wave simulation and measurement. 
Applications involving the wideband coplanar waveguide-to-coplanar stripline (CPW-to-CPS) transition and the tapered loop antenna are presented to stress the practicality of the 180/spl deg/ CRLH CPW power splitter. The 3-dB insertion loss bandwidth is measured as 98% for the case of a back-to-back CPW-to-CPS transition. The tapered loop antenna fed by the proposed transition achieves a measured 10-dB return loss bandwidth of 114%, and shows similar radiation patterns and 6-9 dBi antenna gain in its operating band.", "title": "" }, { "docid": "c42aaf64a6da2792575793a034820dcb", "text": "Psychologists and psychiatrists commonly rely on self-reports or interviews to diagnose or treat behavioral addictions. The present study introduces a novel source of data: recordings of the actual problem behavior under investigation. A total of N = 58 participants were asked to fill in a questionnaire measuring problematic mobile phone behavior featuring several questions on weekly phone usage. After filling in the questionnaire, all participants received an application to be installed on their smartphones, which recorded their phone usage for five weeks. The analyses revealed that weekly phone usage in hours was overestimated; in contrast, numbers of call and text message related variables were underestimated. Importantly, several associations between actual usage and being addicted to mobile phones could be derived exclusively from the recorded behavior, but not from self-report variables. The study demonstrates the potential benefit to include methods of psychoinformatics in the diagnosis and treatment of problematic mobile phone use.", "title": "" }, { "docid": "e0ff61d4b5361c3e2b39265310d02b85", "text": "This paper presents an adaptive technique for obtaining centers of the hidden layer neurons of radial basis function neural network (RBFNN) for face recognition. The proposed technique uses firefly algorithm to obtain natural sub-clusters of training face images formed due to variations in pose, illumination, expression and occlusion, etc. Movement of fireflies in a hyper-dimensional input space is controlled by tuning the parameter gamma (γ) of firefly algorithm which plays an important role in maintaining the trade-off between effective search space exploration, firefly convergence, overall computational time and the recognition accuracy. The proposed technique is novel as it combines the advantages of evolutionary firefly algorithm and RBFNN in adaptive evolution of number and centers of hidden neurons. The strength of the proposed technique lies in its fast convergence, improved face recognition performance, reduced feature selection overhead and algorithm stability. The proposed technique is validated using benchmark face databases, namely ORL, Yale, AR and LFW. The average face recognition accuracies achieved using proposed algorithm for the above face databases outperform some of the existing techniques in face recognition.", "title": "" }, { "docid": "8dc366f9bdcb8ade26c1dc5557c9e3e0", "text": "While the idea that querying mechanisms for complex relationships (otherwise known as Semantic Associations) should be integral to Semantic Web search technologies has recently gained some ground, the issue of how search results will be ranked remains largely unaddressed. 
Since it is expected that the number of relationships between entities in a knowledge base will be much larger than the number of entities themselves, the likelihood that Semantic Association searches would result in an overwhelming number of results for users is increased, therefore elevating the need for appropriate ranking schemes. Furthermore, it is unlikely that ranking schemes for ranking entities (documents, resources, etc.) may be applied to complex structures such as Semantic Associations.In this paper, we present an approach that ranks results based on how predictable a result might be for users. It is based on a relevance model SemRank, which is a rich blend of semantic and information-theoretic techniques with heuristics that supports the novel idea of modulative searches, where users may vary their search modes to effect changes in the ordering of results depending on their need. We also present the infrastructure used in the SSARK system to support the computation of SemRank values for resulting Semantic Associations and their ordering.", "title": "" }, { "docid": "73edaa7319dcf225c081f29146bbb385", "text": "Sign language is a specific area of human gesture communication and a full-edged complex language that is used by various deaf communities. In Bangladesh, there are many deaf and dumb people. It becomes very difficult to communicate with them for the people who are unable to understand the Sign Language. In this case, an interpreter can help a lot. So it is desirable to make computer to understand the Bangladeshi sign language that can serve as an interpreter. In this paper, a Computer Vision-based Bangladeshi Sign Language Recognition System (BdSL) has been proposed. In this system, separate PCA (Principal Component Analysis) is used for Bengali Vowels and Bengali Numbers recognition. The system is tested for 6 Bengali Vowels and 10 Bengali Numbers.", "title": "" }, { "docid": "c043e7a5d5120f5a06ef6decc06c184a", "text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. 
Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures", "title": "" }, { "docid": "bb2c1b4b08a25df54fbd46eaca138337", "text": "The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.", "title": "" }, { "docid": "43055975f5ae456f560e3d2fcaa6d65c", "text": "In many real-world scenarios, the ability to automatically classify documents into a fixed set of categories is highly desirable. Common scenarios include classifying a large amount of unclassified archival documents such as newspaper articles, legal records and academic papers. For example, newspaper articles can be classified as ’features’, ’sports’ or ’news’. Other scenarios involve classifying of documents as they are created. Examples include classifying movie review articles into ’positive’ or ’negative’ reviews or classifying only blog entries using a fixed set of labels. Natural language processing o↵ers powerful techniques for automatically classifying documents. These techniques are predicated on the hypothesis that documents in di↵erent categories distinguish themselves by features of the natural language contained in each document. Salient features for document classification may include word structure, word frequency, and natural language structure in each document. Our project looks specifically at the task of automatically classifying newspaper articles from the MIT newspaper The Tech. The Tech has archives of a large number of articles which require classification into specific sections (News, Opinion, Sports, etc). Our project is aimed at investigating and implementing techniques which can be used to perform automatic article classification for this purpose. At our disposal is a large archive of already classified documents so we are able to make use of supervised classification techniques. 
We randomly split this archive of classified documents into training and testing groups for our classification systems (hereafter referred to simply as classifiers). This project experiments with different natural language feature sets as well as different statistical techniques using these feature sets and compares the performance in each case. Specifically, our project involves experimenting with feature sets for Naive Bayes Classification, Maximum Entropy Classification, and examining sentence structure differences in different categories using probabilistic grammar parsers. The paper proceeds as follows: Section 2 discusses related work in the areas of document classification and gives an overview of each classification technique. Section 3 details our approach and implementation. Section 4 shows the results of testing our classifiers. In Section 5, we discuss possible future extensions and suggestions for improvement. Finally, in Section 6, we discuss retrospective thoughts on our approach and high-level conclusions about our results.", "title": "" }, { "docid": "8439dbba880179895ab98a521b4c254f", "text": "In the field of computational game theory, games are often compared in terms of their size. This can be measured in several ways, including the number of unique game states, the number of decision points, and the total number of legal actions over all decision points. These numbers are either known or estimated for a wide range of classic games such as chess and checkers. In the stochastic and imperfect information game of poker, these sizes are easily computed in “limit” games which restrict the players’ available actions, but until now had only been estimated for the more complicated “no-limit” variants. In this paper, we describe a simple algorithm for quickly computing the size of two-player no-limit poker games, provide an implementation of this algorithm, and present for the first time precise counts of the number of game states, information sets, actions and terminal nodes in the no-limit poker games played in the Annual Computer Poker Competition.", "title": "" }, { "docid": "a6c772380f45f9905e31c42b0680d36d", "text": "Given the increase in demand for sustainable livelihoods for coastal villagers in developing countries and for the commercial eucheumoid Kappaphycus alvarezii (Doty) Doty, for the carrageenan industry, there is a trend towards introducing K. alvarezii to more countries in the tropical world for the purpose of cultivation. However, there is also increasing concern over the impact exotic species have on endemic ecosystems and biodiversity. Quarantine and introduction procedures were tested in northern Madagascar and are proposed for all future introductions of commercial eucheumoids (K. alvarezii, K. striatum and Eucheuma denticulatum). In addition, the impact and extent of introduction of K. alvarezii was measured on an isolated lagoon in the southern Lau group of Fiji. It is suggested that, in areas with high human population density, the overwhelming benefits to coastal ecosystems by commercial eucheumoid cultivation far outweigh potential negative impacts. However, quarantine and introduction procedures should be followed. In addition, introduction should only take place if a thorough survey has been conducted and indicates the site is appropriate. 
Subsequently, the project requires that a well designed and funded cultivation development programme, with a management plan and an assured market, is in place in order to make certain cultivation, and subsequently the introduced algae, will not be abandoned at a later date. KAPPAPHYCUS ALVAREZI", "title": "" }, { "docid": "a6c772380f45f9905e31c42b0680d36d", "text": "Current neofunctionalist views of emotion underscore the biologically adaptive and psychologically constructive contributions of emotion to organized behavior, but little is known of the development of the emotional regulatory processes by which this is fostered. Emotional regulation refers to the extrinsic and intrinsic processes responsible for monitoring, evaluating, and modifying emotional reactions. This review provides a developmental outline of emotional regulation and its relation to emotional development throughout the life-span. The biological foundations of emotional self-regulation and individual differences in regulatory tendencies are summarized. Extrinsic influences on the early regulation of a child's emotion and their long-term significance are then discussed, including a parent's direct intervention strategies, selective reinforcement and modeling processes, affective induction, and the caregiver's ecological control of opportunity for heightened emotion and its management. Intrinsic contributors to the growth of emotional self-regulatory capacities include the emergence of language and cognitive skills, the child's growing emotional and self-understanding (and cognized strategies of emotional self-control), and the emergence of a \"theory of personal emotion\" in adolescence.", "title": "" }, { "docid": "1e9d3432a6bb0bdba071420e6383e9b1", "text": "The fixed Cover Polynomial of a graph G of order n has been already introduced in [3]. It is defined as the polynomial C (G, x) = V C   c | | (G) i i= (G) (G, i)x , where C (G, i) is the number of fixed vertex covering sets of G of size i and (G) is the fixed covering number of G. In this paper, we found the fixed covering sets and fixed covering polynomial of the Friendship graphs Fn. Also we exhibited the fixed covering polynomial of the graph Kn K1 with an illustration. An introduction to obtain algorithm for the fixed covering polynomial is initiated.", "title": "" }, { "docid": "0757280353e6e1bd73b3d1cd11f6b031", "text": "OBJECTIVE\nTo investigate seasonal patterns in mood and behavior and estimate the prevalence of seasonal affective disorder (SAD) and subsyndromal seasonal affective disorder (S-SAD) in the Icelandic population.\n\n\nPARTICIPANTS AND SETTING\nA random sample generated from the Icelandic National Register, consisting of 1000 men and women aged 17 to 67 years from all parts of Iceland. It represents 6.4 per million of the Icelandic population in this age group.\n\n\nDESIGN\nThe Seasonal Pattern Assessment Questionnaire, an instrument for investigating mood and behavioral changes with the seasons, was mailed to a random sample of the Icelandic population. The data were compared with results obtained with similar methods in populations in the United States.\n\n\nMAIN OUTCOME MEASURES\nSeasonality score and prevalence rates of seasonal affective disorder and subsyndromal seasonal affective disorder.\n\n\nRESULTS\nThe prevalence of SAD and S-SAD were estimated at 3.8% and 7.5%, respectively, which is significantly lower than prevalence rates obtained with the same method on the east coast of the United States (chi 2 = 9.29 and 7.3; P < .01). 
The standardized rate ratios for Iceland compared with the United States were 0.49 and 0.63 for SAD and S-SAD, respectively. No case of summer SAD was found.\n\n\nCONCLUSIONS\nSeasonal affective disorder and S-SAD are more common in younger individuals and among women. The weight gained by patients during the winter does not seem to result in chronic obesity. The prevalence of SAD and S-SAD was lower in Iceland than on the East Coast of the United States, in spite of Iceland's more northern latitude. These results are unexpected since the prevalence of these disorders has been found to increase in more northern latitudes. The Icelandic population has remained remarkably isolated during the past 1000 years. It is conceivable that persons with a predisposition to SAD have been at a disadvantage and that there may have been a population selection toward increased tolerance of winter darkness.", "title": "" }, { "docid": "516eb5f2160659cb1ef57a5a826efc64", "text": "To describe physical activity (PA) and sedentary behavior (SB) patterns before and during pregnancy among Chinese, Malay and Indian women. In addition, to investigate determinants of change in PA and SB during pregnancy. The Growing Up in Singapore Towards healthy Outcomes cohort recruited first trimester pregnant women. PA and SB (sitting time and television time) before and during pregnancy were assessed as a part of an interview questionnaire at weeks 26–28 gestational clinic visit. Total energy expenditure (TEE) on PA and time in SB were calculated. Determinants of change in PA and SB were investigated using multiple logistic regression analysis. PA and SB questions were answered by 94 % (n = 1171) of total recruited subjects. A significant reduction in TEE was observed from before to during pregnancy [median 1746.0–1039.5 metabolic equivalent task (MET) min/week, p < 0.001]. The proportion of women insufficiently active (<600 MET-min/week) increased from 19.0 to 34.1 % (p < 0.001). Similarly, sitting time (median 56.0–63.0 h/week, p < 0.001) and television time (mean 16.1–16.7 h/week, p = 0.01) increased. Women with higher household income, lower level of perceived health, nausea/vomiting during pregnancy and higher level of pre-pregnancy PA were more likely to reduce PA. Women with children were less likely to reduce PA. Women reporting nausea/vomiting and lower level of pre-pregnancy sitting time were more likely to increase sitting time. Participants substantially reduced PA and increased SB by 26–28 weeks of pregnancy. Further research is needed to better understand determinants of change in PA and SB and develop effective health promotion strategies.", "title": "" }, { "docid": "429f27ab8039a9e720e9122f5b1e3bea", "text": "We give a new method for direct reconstruction of three-dimensional objects from a few electron micrographs taken at angles which need not exceed a range of 60 degrees. The method works for totally asymmetric objects, and requires little computer time or storage. It is also applicable to X-ray photography, and may greatly reduce the exposure compared to current methods of body-section radiography.", "title": "" } ]
scidocsrr
eeda13d1c1c249473f66d60c5b3f032d
PhishNet: Predictive Blacklisting to Detect Phishing Attacks
[ { "docid": "22554a4716f348a6f43299f193d5534f", "text": "Unsolicited bulk e-mail, or SPAM, is a means to an end. For virtually all such messages, the intent is to attract the recipient into entering a commercial transaction — typically via a linked Web site. While the prodigious infrastructure used to pump out billions of such solicitations is essential, the engine driving this process is ultimately th e “point-of-sale” — the various money-making “scams” that extract value from Internet users. In the hopes of better understanding the business pressures exerted on spammers, this paper focuses squarely on the Internet infrastructure used to host and support such scams. We describe an opportunistic measurement technique called spamscatterthat mines emails in real-time, follows the embedded link structure, and automatically clusters the destination Web sites using image shinglingto capture graphical similarity between rendered sites. We have implemented this approach on a large real-time spam feed (over 1M messages per week) and have identified and analyzed over 2,000 distinct scams on 7,000 distinct servers.", "title": "" }, { "docid": "6cacb8cdc5a1cc17c701d4ffd71bdab1", "text": "Phishing costs Internet users billions of dollars a year. Using various data sets collected in real-time, this paper analyzes various aspects of phisher modi operandi. We examine the anatomy of phishing URLs and domains, registration of phishing domains and time to activation, and the machines used to host the phishing sites. Our findings can be used as heuristics in filtering phishing-related emails and in identifying suspicious domain registrations.", "title": "" } ]
[ { "docid": "20f379e3b4f62c4d319433bb76f3a490", "text": "We propose probabilistic generative models, called parametric mixture models (PMMs), for multiclass, multi-labeled text categorization problem. Conventionally, the binary classification approach has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide Web pages.", "title": "" }, { "docid": "f636dece7889f998fa10c19736d90a9a", "text": "Our use of language depends upon two capacities: a mental lexicon of memorized words and a mental grammar of rules that underlie the sequential and hierarchical composition of lexical forms into predictably structured larger words, phrases, and sentences. The declarative/procedural model posits that the lexicon/grammar distinction in language is tied to the distinction between two well-studied brain memory systems. On this view, the memorization and use of at least simple words (those with noncompositional, that is, arbitrary form-meaning pairings) depends upon an associative memory of distributed representations that is subserved by temporal-lobe circuits previously implicated in the learning and use of fact and event knowledge. This \"declarative memory\" system appears to be specialized for learning arbitrarily related information (i.e., for associative binding). In contrast, the acquisition and use of grammatical rules that underlie symbol manipulation is subserved by frontal/basal-ganglia circuits previously implicated in the implicit (nonconscious) learning and expression of motor and cognitive \"skills\" and \"habits\" (e.g., from simple motor acts to skilled game playing). This \"procedural\" system may be specialized for computing sequences. This novel view of lexicon and grammar offers an alternative to the two main competing theoretical frameworks. It shares the perspective of traditional dual-mechanism theories in positing that the mental lexicon and a symbol-manipulating mental grammar are subserved by distinct computational components that may be linked to distinct brain structures. However, it diverges from these theories where they assume components dedicated to each of the two language capacities (that is, domain-specific) and in their common assumption that lexical memory is a rote list of items. Conversely, while it shares with single-mechanism theories the perspective that the two capacities are subserved by domain-independent computational mechanisms, it diverges from them where they link both capacities to a single associative memory system with broad anatomic distribution. The declarative/procedural model, but neither traditional dual- nor single-mechanism models, predicts double dissociations between lexicon and grammar, with associations among associative memory properties, memorized words and facts, and temporal-lobe structures, and among symbol-manipulation properties, grammatical rule products, motor skills, and frontal/basal-ganglia structures. In order to contrast lexicon and grammar while holding other factors constant, we have focused our investigations of the declarative/procedural model on morphologically complex word forms. 
Morphological transformations that are (largely) unproductive (e.g., in go-went, solemn-solemnity) are hypothesized to depend upon declarative memory. These have been contrasted with morphological transformations that are fully productive (e.g., in walk-walked, happy-happiness), whose computation is posited to be solely dependent upon grammatical rules subserved by the procedural system. Here evidence is presented from studies that use a range of psycholinguistic and neurolinguistic approaches with children and adults. It is argued that converging evidence from these studies supports the declarative/procedural model of lexicon and grammar.", "title": "" }, { "docid": "d1513bdee495f972bc3ec97542809e25", "text": "Assessing software security involves steps such as code review, risk analysis, penetration testing and fuzzing. During the fuzzing phase, the tester \" s goal is to find flaws in software by sending unexpected input to the target application and monitoring its behavior. In this paper we introduce the AutoFuzz [1]-extendable, open source framework used for testing network protocol implementations. AutoFuzz is a \" smart \" , man-in-the-middle, semi-deterministic network protocol fuzzing framework. AutoFuzz learns a protocol implementation by constructing a Finite State Automaton (FSA) which captures the observed communications between a client and a server [5]. In addition, AutoFuzz learns individual message syntax, including fields and probable types, by applying the bioinformatics techniques of [2]. Finally, AutoFuzz can fuzz client or server protocol implementations by intelligently modifying the communication sessions between them using the FSA as a guide. AutoFuzz was applied to a variety of File Transfer Protocol (FTP) server implementations, confirming old and discovering new vulnerabilities.", "title": "" }, { "docid": "727990a4880db648c8596efe37993d77", "text": "As CMOS technology scales down to the nanoscale, high leakage power consumption becomes the main problem and challenge of electronic circuits. To overcome this challenge, nano-emerging technologies and logic-in-memory structure are being studied. Magnetic tunnel junction (MTJ) is an emerging technology which has many advantages when used in logic in memory structures in conjunction with CMOS. In this paper, we present novel designs of hybrid MTJ/CMOS circuits; AND, XOR and 1-bit full adder. The proposed MTJ/CMOS full adder design has 71% lower Power-delay-product (PDP) compared to the previous MTJ/CMOS full adder. To further improve the energy efficiency we investigated the use of nanoelectronic devices (CNFET, FinFET) in the proposed circuits and compared them with the CMOS based designs. The hybrid MTJ/CNFET and MJT/FinFET full adders have about 18 and 11 times lower PDP, respectively, when compared to the MJT/CMOS design. Also, the MTJ/CNFET based full adder has 66% lower PDP than the MTJ/FinFET based design.", "title": "" }, { "docid": "cde1419d6b4912b414a3c83139dc3f06", "text": "This book results from a decade of presenting the user-centered design (UCD) methodology for hundreds of companies (p. xxiii) and appears to be the book complement to the professional development short course. Its purpose is to encourage software developers to focus on the total user experience of software products during the whole of the development cycle. 
The notion of the “total user experience” is valuable because it focuses attention on the whole product-use cycle, from initial awareness through productive use.", "title": "" }, { "docid": "512fe22c9d2bdcba4668b6752fe32791", "text": "There has been much effort recently to probe the long-recognized relationship between the pathological processes of infection, inflammation and cancer. For example, epidemiological studies have shown that ∼15% of human deaths from cancer are associated with chronic viral or bacterial infections. This Review focuses on the molecular mechanisms that connect infection, inflammation and cancer, and it puts forward the hypothesis that activation of nuclear factor-κB (NF-κB) by the classical, IKK-β (inhibitor-of-NF-κB kinase-β)-dependent pathway is a crucial mediator of inflammation-induced tumour growth and progression, as well as an important modulator of tumour surveillance and rejection.", "title": "" }, { "docid": "4c4376a25aa61e891294708b753dcfec", "text": "Ransomware, a class of self-propagating malware that uses encryption to hold the victims’ data ransom, has emerged in recent years as one of the most dangerous cyber threats, with widespread damage; e.g., zero-day ransomware WannaCry has caused world-wide catastrophe, from knocking U.K. National Health Service hospitals offline to shutting down a Honda Motor Company in Japan [1]. Our close collaboration with security operations of large enterprises reveals that defense against ransomware relies on tedious analysis from high-volume systems logs of the first few infections. Sandbox analysis of freshly captured malware is also commonplace in operation. We introduce a method to identify and rank the most discriminating ransomware features from a set of ambient (non-attack) system logs and at least one log stream containing both ambient and ransomware behavior. These ranked features reveal a set of malware actions that are produced automatically from system logs, and can help automate tedious manual analysis. We test our approach using WannaCry and two polymorphic samples by producing logs with Cuckoo Sandbox during both ambient, and ambient plus ransomware executions. Our goal is to extract the features of the malware from the logs with only knowledge that malware was present. We compare outputs with a detailed analysis of WannaCry allowing validation of the algorithm’s feature extraction and provide analysis of the method’s robustness to variations of input data—changing quality/quantity of ambient data and testing polymorphic ransomware. Most notably, our patterns are accurate and unwavering when generated from polymorphic WannaCry copies, on which 63 (of 63 tested) antivirus (AV) products fail.", "title": "" }, { "docid": "a8ddaed8209d09998159014307233874", "text": "Traditional image-based 3D reconstruction methods use multiple images to extract 3D geometry. However, it is not always possible to obtain such images, for example when reconstructing destroyed structures using existing photographs or paintings with proper perspective (figure 1), and reconstructing objects without actually visiting the site using images from the web or postcards (figure 2). Even when multiple images are possible, parts of the scene appear in only one image due to occlusions and/or lack of features to match between images. Methods for 3D reconstruction from a single image do exist (e.g. [1] and [2]). 
We present a new method that is more accurate and more flexible so that it can model a wider variety of sites and structures than existing methods. Using this approach, we reconstructed in 3D many destroyed structures using old photographs and paintings. Sites all over the world have been reconstructed from tourist pictures, web pages, and postcards.", "title": "" }, { "docid": "a094547d8ec7653b6f2754f0add1cfa3", "text": "We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent’s explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. This significantly reduces variance in the gradient updates and removes the need for a variance reduction baseline. We show empirical results on two control domains where MAC performs as well as or better than other policy gradient approaches, and on five Atari games, where MAC is competitive with state-of-the-art policy search algorithms.", "title": "" }, { "docid": "fb3d88a2cfbd6d337d751af94cc2e336", "text": "This paper proposes four low power adder cells using different XOR and XNOR gate architectures. Two sets of circuit designs are presented. One implements full adders with 3 transistors (3-T) XOR and XNOR gates. The other applies Gate-Diffusion-Input (GDI) technique to full adders. Simulations are performed by using Hspice based on 180nm CMOS technology. In comparison with Static Energy Recovery Full (SERF) adder cell module, the proposed four full adder cells demonstrate their advantages, including lower power consumption, smaller area, and higher speed.", "title": "" }, { "docid": "852578afdb63985d93b1d2d0ee8fc3e8", "text": "This paper builds on the recent ASPIC formalism, to develop a general framework for argumentation with preferences. We motivate a revised definition of conflict free sets of arguments, adapt ASPIC to accommodate a broader range of instantiating logics, and show that under some assumptions, the resulting framework satisfies key properties and rationality postulates. We then show that the generalised framework accommodates Tarskian logic instantiations extended with preferences, and then study instantiations of the framework by classical logic approaches to argumentation. We conclude by arguing that ASPIC’s modelling of defeasible inference rules further testifies to the generality of the framework, and then examine and counter recent critiques of Dung’s framework and its extensions to accommodate preferences.", "title": "" }, { "docid": "27b9350b8ea1032e727867d34c87f1c3", "text": "A field study and an experimental study examined relationships among organizational variables and various responses of victims to perceived wrongdoing. Both studies showed that procedural justice climate moderates the effect of organizational variables on the victim's revenge, forgiveness, reconciliation, or avoidance behaviors. In Study 1, a field study, absolute hierarchical status enhanced forgiveness and reconciliation, but only when perceptions of procedural justice climate were high; relative hierarchical status increased revenge, but only when perceptions of procedural justice climate were low. 
In Study 2, a laboratory experiment, victims were less likely to endorse vengeance or avoidance depending on the type of wrongdoing, but only when perceptions of procedural justice climate were high.", "title": "" }, { "docid": "d18a2e1811f2d11e88c9ae780a8ede23", "text": "In this paper, we present the design of error-resilient machine learning architectures by employing a distributed machine learning framework referred to as classifier ensemble (CE). CE combines several simple classifiers to obtain a strong one. In contrast, centralized machine learning employs a single complex block. We compare the random forest (RF) and the support vector machine (SVM), which are representative techniques from the CE and centralized frameworks, respectively. Employing the dataset from UCI machine learning repository and architecturallevel error models in a commercial 45 nm CMOS process, it is demonstrated that RF-based architectures are significantly more robust than SVM architectures in presence of timing errors due to process variations in near-threshold voltage (NTV) regions (0.3 V 0.7 V). In particular, the RF architecture exhibits a detection accuracy (Pdet) that varies by 3.2% while maintaining a median Pdet ≥ 0.9 at a gate level delay variation of 28.9% . In comparison, SVM exhibits a Pdet that varies by 16.8%. Additionally, we propose an error weighted voting technique that incorporates the timing error statistics of the NTV circuit fabric to further enhance robustness. Simulation results confirm that the error weighted voting achieves a Pdet that varies by only 1.4%, which is 12× lower compared to SVM.", "title": "" }, { "docid": "1e9e3fce7ae4e980658997c2984f05cb", "text": "BACKGROUND\nMotivation in learning behaviour and education is well-researched in general education, but less in medical education.\n\n\nAIM\nTo answer two research questions, 'How has the literature studied motivation as either an independent or dependent variable? How is motivation useful in predicting and understanding processes and outcomes in medical education?' in the light of the Self-determination Theory (SDT) of motivation.\n\n\nMETHODS\nA literature search performed using the PubMed, PsycINFO and ERIC databases resulted in 460 articles. The inclusion criteria were empirical research, specific measurement of motivation and qualitative research studies which had well-designed methodology. Only studies related to medical students/school were included.\n\n\nRESULTS\nFindings of 56 articles were included in the review. Motivation as an independent variable appears to affect learning and study behaviour, academic performance, choice of medicine and specialty within medicine and intention to continue medical study. Motivation as a dependent variable appears to be affected by age, gender, ethnicity, socioeconomic status, personality, year of medical curriculum and teacher and peer support, all of which cannot be manipulated by medical educators. Motivation is also affected by factors that can be influenced, among which are, autonomy, competence and relatedness, which have been described as the basic psychological needs important for intrinsic motivation according to SDT.\n\n\nCONCLUSION\nMotivation is an independent variable in medical education influencing important outcomes and is also a dependent variable influenced by autonomy, competence and relatedness. 
This review finds some evidence in support of the validity of SDT in medical education.", "title": "" }, { "docid": "7eec1e737523dc3b78de135fc71b058f", "text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches", "title": "" }, { "docid": "8f916f7be3048ae2a367096f4f82207d", "text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.", "title": "" }, { "docid": "912c92dd4755cfb280f948bd4264ded7", "text": "A decision is a commitment to a proposition or plan of action based on information and values associated with the possible outcomes. The process operates in a flexible timeframe that is free from the immediacy of evidence acquisition and the real time demands of action itself. Thus, it involves deliberation, planning, and strategizing. This Perspective focuses on perceptual decision making in nonhuman primates and the discovery of neural mechanisms that support accuracy, speed, and confidence in a decision. We suggest that these mechanisms expose principles of cognitive function in general, and we speculate about the challenges and directions before the field.", "title": "" }, { "docid": "2a26a3886309cd65a5e080ca12f438ef", "text": "Studies were conducted on the anoxic phenol removal using granular denitrifying sludge in sequencing batch reactor at different cycle lengths and influent phenol concentrations. Results showed that removal exceeded 80% up to an influent phenol concentration of 1050 mg/l at 6 h cycle length, which corresponded to 6.4 kg COD/m3/d. Beyond this, there was a steep decrease in phenol and COD removal efficiencies. This was accompanied by an increase in nitrite concentration in the effluent. On an average, 1 g nitrate-N was consumed per 3.4 g phenol COD removal. 
Fraction of COD available for sludge growth was calculated to be 11%.", "title": "" }, { "docid": "644e99282f31a935981778cbe89be323", "text": "Functional Distributional Semantics: Learning Linguistically Informed Representations from a Precisely Annotated Corpus The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of words from a body of text. The twin challenges are: how do we represent meaning, and how do we learn these representations? The current state of the art is to represent meanings as vectors – but vectors do not correspond to any traditional notion of meaning. In particular, there is no way to talk about truth, a crucial concept in logic and formal semantics. In this thesis, I develop a framework for distributional semantics which answers this challenge. The meaning of a word is not represented as a vector, but as a function, mapping entities (objects in the world) to probabilities of truth (the probability that the word is true of the entity). Such a function can be interpreted both in the machine learning sense of a classifier, and in the formal semantic sense of a truth-conditional function. This simultaneously allows both the use of machine learning techniques to exploit large datasets, and also the use of formal semantic techniques to manipulate the learnt representations. I define a probabilistic graphical model, which incorporates a probabilistic generalisation of model theory (allowing a strong connection with formal semantics), and which generates semantic dependency graphs (allowing it to be trained on a corpus). This graphical model provides a natural way to model logical inference, semantic composition, and context-dependent meanings, where Bayesian inference plays a crucial role. I demonstrate the feasibility of this approach by training a model on WikiWoods, a parsed version of the English Wikipedia, and evaluating it on three tasks. The results indicate that the model can learn information not captured by vector space models. Guy Edward Toh Emerson", "title": "" } ]
scidocsrr
5b0da2385b233c043918293ce5a4a6d0
A survey on Adversarial Attacks and Defenses in Text
[ { "docid": "565dcf584448f6724a6529c3d2147a68", "text": "People are fond of taking and sharing photos in their social life, and a large part of it is face images, especially selfies. A lot of researchers are interested in analyzing attractiveness of face images. Benefited from deep neural networks (DNNs) and training data, researchers have been developing deep learning models that can evaluate facial attractiveness of photos. However, recent development on DNNs showed that they could be easily fooled even when they are trained on a large dataset. In this paper, we used two approaches to generate adversarial examples that have high attractiveness scores but low subjective scores for face attractiveness evaluation on DNNs. In the first approach, experimental results using the SCUT-FBP dataset showed that we could increase attractiveness score of 20 test images from 2.67 to 4.99 on average (score range: [1, 5]) without noticeably changing the images. In the second approach, we could generate similar images from noise image with any target attractiveness score. Results show by using this approach, a part of attractiveness information could be manipulated artificially.", "title": "" }, { "docid": "f1925c66ed41aa50838d115b235349f0", "text": "Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 68.36% of the natural images in CIFAR10 test dataset and 41.22% of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel with 73.22% and 5.52% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks.", "title": "" }, { "docid": "1d8e2c9bd9cfa2ce283e01cbbcd6ca83", "text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify. In the image domain, these perturbations are often virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but fail to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.", "title": "" } ]
[ { "docid": "e18d85d20fb633ae0c3d641fdddfc6d6", "text": "We propose a model for the neuronal implementation of selective visual attention based on temporal correlation among groups of neurons. Neurons in primary visual cortex respond to visual stimuli with a Poisson distributed spike train with an appropriate, stimulus-dependent mean firing rate. The spike trains of neurons whose receptive fields donot overlap with the “focus of attention” are distributed according to homogeneous (time-independent) Poisson process with no correlation between action potentials of different neurons. In contrast, spike trains of neurons with receptive fields within the focus of attention are distributed according to non-homogeneous (time-dependent) Poisson processes. Since the short-term average spike rates of all neurons with receptive fields in the focus of attention covary, correlations between these spike trains are introduced which are detected by inhibitory interneurons in V4. These cells, modeled as modified integrate-and-fire neurons, function as coincidence detectors and suppress the response of V4 cells associated with non-attended visual stimuli. The model reproduces quantitatively experimental data obtained in cortical area V4 of monkey by Moran and Desimone (1985).", "title": "" }, { "docid": "1f8af42bee4a15d76900d3b69628213f", "text": "This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.", "title": "" }, { "docid": "66e7979aff5860f713dffd10e98eed3d", "text": "The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. 
Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation.1", "title": "" }, { "docid": "2e40cdb0416198c1ec986e0d3da47fd1", "text": "The slotted-page structure is a database page format commonly used for managing variable-length records. In this work, we develop a novel \"failure-atomic slotted page structure\" for persistent memory that leverages byte addressability and durability of persistent memory to minimize redundant write operations used to maintain consistency in traditional database systems. Failure-atomic slotted paging consists of two key elements: (i) in-place commit per page using hardware transactional memory and (ii) slot header logging that logs the commit mark of each page. The proposed scheme is implemented in SQLite and compared against NVWAL, the current state-of-the-art scheme. Our performance study shows that our failure-atomic slotted paging shows optimal performance for database transactions that insert a single record. For transactions that touch more than one database page, our proposed slot-header logging scheme minimizes the logging overhead by avoiding duplicating pages and logging only the metadata of the dirty pages. Overall, we find that our failure-atomic slotted-page management scheme reduces database logging overhead to 1/6 and improves query response time by up to 33% compared to NVWAL.", "title": "" }, { "docid": "153d23d5f736b9a9e0f3cb88e61dc400", "text": "Context\nTrichostasis spinulosa (TS) is a common but underdiagnosed follicular disorder involving retention of successive telogen hair in the hair follicle. Laser hair removal is a newer treatment modality for TS with promising results.\n\n\nAims\nThis study aims to evaluate the efficacy of 800 nm diode laser to treat TS in Asian patients.\n\n\nSubjects and Methods\nWe treated 50 Indian subjects (Fitzpatrick skin phototype IV-V) with untreated trichostasis spinulosa on the nose with 800 nm diode laser at fluence ranging from 22 to 30 J/cm2 and pulse width of 30 ms. The patients were given two sittings at 8 week intervals. The evaluation was done by blinded assessment of photographs by independent dermatologists.\n\n\nResults\nTotally 45 (90%) patients had complete clearance of the lesions at the end of treatment. Five (10%) subjects needed one-third sitting for complete clearance. 45 patients had complete resolution and no recurrence even at 2 years follow-up visit. 5 patients had partial recurrence after 8-9 months and needed an extra laser session.\n\n\nConclusions\nLaser hair reduction in patients with TS targets and removes the hair follicles which are responsible for the plugged appearance. Due to permanent ablation of the hair bulb and bulge, the recurrence which is often seen with other modalities of treatment for TS is not observed here.", "title": "" }, { "docid": "999c7d8d16817d4b991e5b794be3b074", "text": "Smile detection from facial images is a specialized task in facial expression analysis with many potential applications such as smiling payment, patient monitoring and photo selection. The current methods on this study are to represent face with low-level features, followed by a strong classifier. However, these manual features cannot well discover information implied in facial images for smile detection. In this paper, we propose to extract high-level features by a well-designed deep convolutional networks (CNN). 
A key contribution of this work is that we use both recognition and verification signals as supervision to learn expression features, which is helpful to reduce same-expression variations and enlarge different-expression differences. Our method is end-to-end, without complex pre-processing often used in traditional methods. High-level features are taken from the last hidden layer neuron activations of deep CNN, and fed into a soft-max classifier to estimate. Experimental results show that our proposed method is very effective, which outperforms the state-of-the-art methods. On the GENKI smile detection dataset, our method reduces the error rate by 21% compared with the previous best method.", "title": "" }, { "docid": "83029487b006b1509f65c11fa27c23a4", "text": "OBJECTIVE\nDevelopment of a general natural-language processor that identifies clinical information in narrative reports and maps that information into a structured representation containing clinical terms.\n\n\nDESIGN\nThe natural-language processor provides three phases of processing, all of which are driven by different knowledge sources. The first phase performs the parsing. It identifies the structure of the text through use of a grammar that defines semantic patterns and a target form. The second phase, regularization, standardizes the terms in the initial target structure via a compositional mapping of multi-word phrases. The third phase, encoding, maps the terms to a controlled vocabulary. Radiology is the test domain for the processor and the target structure is a formal model for representing clinical information in that domain.\n\n\nMEASUREMENTS\nThe impression sections of 230 radiology reports were encoded by the processor. Results of an automated query of the resultant database for the occurrences of four diseases were compared with the analysis of a panel of three physicians to determine recall and precision.\n\n\nRESULTS\nWithout training specific to the four diseases, recall and precision of the system (combined effect of the processor and query generator) were 70% and 87%. Training of the query component increased recall to 85% without changing precision.", "title": "" }, { "docid": "8876765a7479e179ef0ac74107fc44e3", "text": "In order to remain competitive, firms need to keep the quantity and composition of jobs close to optimal for their given output. Since the beginning of the transition period, Russian industrial firms have been widely reporting that the quantity and composition of hired labour is far from being close to optimal. This paper discusses what kinds of firms in the Russian manufacturing sector are unable to optimize their employment and why. The main conclusion is that the key issue is an excess of non-viable firms and a shortage of highly efficient firms because of weak selection mechanisms. The major solution is seen in creating institutional conditions that stimulate a more efficient reallocation of labour. The analysis presented in this paper is based on data from a large-scale survey of Russian manufacturing firms.", "title": "" }, { "docid": "9ae29655fc75ad277fa541d0930d58bc", "text": "Rapid and ongoing change creates novelty in ecosystems everywhere, both when comparing contemporary systems to their historical baselines, and predicted future systems to the present. However, the level of novelty varies greatly among places. 
Here we propose a formal and quantifiable definition of abiotic and biotic novelty in ecosystems, map abiotic novelty globally, and discuss the implications of novelty for the science of ecology and for biodiversity conservation. We define novelty as the degree of dissimilarity of a system, measured in one or more dimensions relative to a reference baseline, usually defined as either the present or a time window in the past. In this conceptualization, novelty varies in degree, it is multidimensional, can be measured, and requires a temporal and spatial reference. This definition moves beyond prior categorical definitions of novel ecosystems, and does not include human agency, self-perpetuation, or irreversibility as criteria. Our global assessment of novelty was based on abiotic factors (temperature, precipitation, and nitrogen deposition) plus human population, and shows that there are already large areas with high novelty today relative to the early 20th century, and that there will even be more such areas by 2050. Interestingly, the places that are most novel are often not the places where absolute changes are largest; highlighting that novelty is inherently different from change. For the ecological sciences, highly novel ecosystems present new opportunities to test ecological theories, but also challenge the predictive ability of ecological models and their validation. For biodiversity conservation, increasing novelty presents some opportunities, but largely challenges. Conservation action is necessary along the entire continuum of novelty, by redoubling efforts to protect areas where novelty is low, identifying conservation opportunities where novelty is high, developing flexible yet strong regulations and policies, and establishing long-term experiments to test management approaches. Meeting the challenge of novelty will require advances in the science of ecology, and new and creative. conservation approaches.", "title": "" }, { "docid": "d9e38096571100e3d29b403cc197b4ef", "text": "Sediment microbial fuel cells (SMFCs) are considered to be an alternative renewable power source for remote monitoring. There are two main challenges to using SMFCs as power sources: 1) a SMFC produces a low potential at which most sensor electronics do not operate, and 2) a SMFC cannot provide continuous power, so energy from the SMFC must be stored and then used to repower sensor electronics intermittently. In this study, we developed a SMFC and a power management system (PMS) to power a batteryless, wireless sensor. A SMFC operating with a microbial anode and cathode, located in the Palouse River, Pullman, Washington, U.S.A., was used to demonstrate the utility of the developed system. The designed PMS stored microbial energy and then started powering the wireless sensor when the SMFC potential reached 320 mV. It continued powering until the SMFC potential dropped below 52 mV. The system was repowered when the SMFC potential increased to 320 mV, and this repowering continued as long as microbial reactions continued. We demonstrated that a microbial fuel cell with a microbial anode and cathode can be used as an effective renewable power source for remote monitoring using custom-designed electronics.", "title": "" }, { "docid": "f24f686a705a1546d211ac37d5cc2fdb", "text": "In commercial-off-the-shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. 
Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of predictable real-time systems. In this paper, we present techniques to provide a tight upper bound on the worst-case memory interference in a COTS-based multi-core system. We explicitly model the major resources in the DRAM system, including banks, buses and the memory controller. By considering their timing characteristics, we analyze the worst-case memory interference delay imposed on a task by other tasks running in parallel. To the best of our knowledge, this is the first work bounding the request re-ordering effect of COTS memory controllers. Our work also enables the quantification of the extent by which memory interference can be reduced by partitioning DRAM banks. We evaluate our approach on a commodity multi-core platform running Linux/RK. Experimental results show that our approach provides an upper bound very close to our measured worst-case interference.", "title": "" }, { "docid": "56c82fc0310d2b599d423c689f14d14e", "text": "Tumor segmentation from MRI data is an important but time consuming task performed manually by medical experts. Automating this process is challenging due to the high diversity in appearance of tumor tissue among different patients and, in many cases, similarity between tumor and normal tissue. We propose a semi-automatic interactive brain tumor segmentation system that incorporates 2D interactive and 3D automatic tools with the ability to adjust operator control. The provided methods are based on an energy that incorporates region statistics computed on available MRI modalities and the usual regularization term. The energy is efficiently minimized on-line using graph cut. Experiments with radiation oncologists testing the semi-automatic tool vs. a manual tool show that the proposed system improves both segmentation time and repeatability.", "title": "" }, { "docid": "50471274efcc7fd7547dc6c0a1b3d052", "text": "Recently, the UAS has been extensively exploited for data collection from remote and dangerous or inaccessible areas. While most of its existing applications have been directed toward surveillance and monitoring tasks, the UAS can play a significant role as a communication network facilitator. For example, the UAS may effectively extend communication capability to disaster-affected people (who have lost cellular and Internet communication infrastructures on the ground) by quickly constructing a communication relay system among a number of UAVs. However, the distance between the centers of trajectories of two neighboring UAVs, referred to as IUD, plays an important role in the communication delay and throughput. For instance, the communication delay increases rapidly while the throughput is degraded when the IUD increases. In order to address this issue, in this article, we propose a simple but effective dynamic trajectory control algorithm for UAVs. Our proposed algorithm considers that UAVs with queue occupancy above a threshold are experiencing congestion resulting in communication delay. To alleviate the congestion at UAVs, our proposal adjusts their center coordinates and also, if needed, the radius of their trajectory. The performance of our proposal is evaluated through computer-based simulations. 
In addition, we conduct several field experiments in order to verify the effectiveness of UAV-aided networks.", "title": "" }, { "docid": "7d8884a7f6137068f8ede464cf63da5b", "text": "Object detection and localization is a crucial step for inspection and manipulation tasks in robotic and industrial applications. We present an object detection and localization scheme for 3D objects that combines intensity and depth data. A novel multimodal, scale- and rotation-invariant feature is used to simultaneously describe the object's silhouette and surface appearance. The object's position is determined by matching scene and model features via a Hough-like local voting scheme. The proposed method is quantitatively and qualitatively evaluated on a large number of real sequences, proving that it is generic and highly robust to occlusions and clutter. Comparisons with state of the art methods demonstrate comparable results and higher robustness with respect to occlusions.", "title": "" }, { "docid": "1fcbc7d6c408d00d3bd1e225e28a32cc", "text": "Active learning aims to train an accurate prediction model with minimum cost by labeling most informative instances. In this paper, we survey existing works on active learning from an instance-selection perspective and classify them into two categories with a progressive relationship: (1) active learning merely based on uncertainty of independent and identically distributed (IID) instances, and (2) active learning by further taking into account instance correlations. Using the above categorization, we summarize major approaches in the field, along with their technical strengths/weaknesses, followed by a simple runtime performance comparison, and discussion about emerging active learning applications and instance-selection challenges therein. This survey intends to provide a high-level summarization for active learning and motivates interested readers to consider instance-selection approaches for designing effective active learning solutions.", "title": "" }, { "docid": "c45bec7edcd1e8337926db90d3663797", "text": "The dramatically growing demand of Cyber Physical and Social Computing (CPSC) has enabled a variety of novel channels to reach services in the financial industry. Combining cloud systems with multimedia big data is a novel approach for Financial Service Institutions (FSIs) to diversify service offerings in an efficient manner. However, the security issue is still a great issue in which the service availability often conflicts with the security constraints when the service media channels are varied. This paper focuses on this problem and proposes a novel approach using the Semantic-Based Access Control (SBAC) techniques for acquiring secure financial services on multimedia big data in cloud computing. The proposed approach is entitled IntercroSsed Secure Big Multimedia Model (2SBM), which is designed to secure accesses between various media through the multiple cloud platforms. The main algorithms supporting the proposed model include the Ontology-Based Access Recognition (OBAR) Algorithm and the Semantic Information Matching (SIM) Algorithm. We implement an experimental evaluation to prove the correctness and adoptability of our proposed scheme.", "title": "" }, { "docid": "a551b1034e5378a2d6437a8e298490aa", "text": "The role of increasingly powerful computers in the modeling and simulation domain has resulted in great advancements in the fields of wireless communications, medicine, and space technology to name a few. 
In The authors of this book start from the fundamental equations that govern low-frequency electromagnetic phenomenon and go through each stage of solving such problems by striking a balance between mathematical rigor and actual implementation in code. The use of MATLAB makes the advanced concepts discussed in the book immediately testable through experiments. The book pays close attention to various applications in an electrical and biological system that are of immediate relevance in today’s world. The use of state-of-the-art human phantom meshes, especially from the Visible Human Project (VHP) of the U.S. National Library of Medicine, makes this text singular in its field. The text is systematic and very well-organized in presenting the various topics on low-frequency electromagnetic. It also should be known that the first part of this text presents the mathematical theory behind low-frequency electromagnetic modeling and follows it with the topic of meshing. The text starts with the basics of meshing and builds it up in an easy-to-read manner with plenty of illustrations.", "title": "" }, { "docid": "ecb2cb8de437648c7895fc3f93809bfb", "text": "Context: Static analysis approaches have been proposed to assess the security of Android apps, by searching for known vulnerabilities or actual malicious code. The literature thus has proposed a large body of works, each of which attempts to tackle one or more of the several challenges that program analyzers face when dealing with Android apps. Objective: We aim to provide a clear view of the state-of-the-art works that statically analyze Android apps, from which we highlight the trends of static analysis approaches, pinpoint where the focus has been put and enumerate the key aspects where future researches are still needed. Method: We have performed a systematic literature review which involves studying around 90 research papers published in software engineering, programming languages and security venues. This review is performed mainly in five dimensions: problems targeted by the approach, fundamental techniques used by authors, static analysis sensitivities considered, android characteristics taken into account and the scale of evaluation performed. Results: Our in-depth examination have led to several key findings: 1) Static analysis is largely performed to uncover security and privacy issues; 2) The Soot framework and the Jimple intermediate representation are the most adopted basic support tool and format, respectively; 3) Taint analysis remains the most applied technique in research approaches; 4) Most approaches support several analysis sensitivities, but very few approaches consider path-sensitivity; 5) There is no single work that has been proposed to tackle all challenges of static analysis that are related to Android programming; and 6) Only a small portion of state-of-the-art works have made their artifacts publicly available. Conclusion: The research community is still facing a number of challenges for building approaches that are aware altogether of implicit-Flows, dynamic code loading features, reflective calls, native code and multi-threading, in order to implement sound and highly precise static analyzers.", "title": "" }, { "docid": "fb1724b8baf76ceec32647fc6e5f2039", "text": "The formation of informal settlements in and around urban complexes has largely been ignored in the context of procedural city modeling. However, many cities in South Africa and globally can attest to the presence of such settlements. 
This paper analyses the phenomenon of informal settlements from a procedural modeling perspective. Aerial photography from two South African urban complexes, namely Johannesburg and Cape Town, is used as a basis for the extraction of various features that distinguish different types of settlements. In particular, the road patterns which have formed within such settlements are analysed, and various procedural techniques are proposed (including Voronoi diagrams, subdivision and L-systems) to replicate the identified features. A qualitative assessment of the procedural techniques is provided, and the most suitable combination of techniques is identified for unstructured and structured settlements. In particular, it is found that a combination of Voronoi diagrams and subdivision provides the closest match to unstructured informal settlements. A combination of L-systems, Voronoi diagrams and subdivision is found to produce the closest pattern to a structured informal settlement.", "title": "" } ]
scidocsrr
b86c537825020d677beac96b3cc703ac
EmoTweet-28: A Fine-Grained Emotion Corpus for Sentiment Analysis
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on SourceForge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" } ]
[ { "docid": "6c14243c49a2d119d768685b59f9548b", "text": "Over the past decade, researchers have shown significant advances in the area of radio frequency identification (RFID) and metamaterials. RFID is being applied to a wide spectrum of industries and metamaterial-based antennas are beginning to perform just as well as existing larger printed antennas. This paper presents two novel metamaterial-based antennas for passive ultra-high frequency (UHF) RFID tags. It is shown that by implementing omega-like elements and split-ring resonators into the design of an antenna for an UHF RFID tag, the overall size of the antenna can be significantly reduced to dimensions of less than 0.15λ0, while preserving the performance of the antenna.", "title": "" }, { "docid": "72944a6ad81c2802d0401f9e0c2d8bb5", "text": "Available online 10 August 2016 Big Data (BD), with their potential to ascertain valued insights for enhanced decision-making process, have recently attracted substantial interest from both academics and practitioners. Big Data Analytics (BDA) is increasingly becoming a trending practice that many organizations are adopting with the purpose of constructing valuable information from BD. The analytics process, including the deployment and use of BDA tools, is seen by organizations as a tool to improve operational efficiency though it has strategic potential, drive new revenue streams and gain competitive advantages over business rivals. However, there are different types of analytic applications to consider. Therefore, prior to hasty use and buying costly BD tools, there is a need for organizations to first understand the BDA landscape. Given the significant nature of theBDandBDA, this paper presents a state-ofthe-art review that presents a holistic view of the BD challenges and BDA methods theorized/proposed/ employed by organizations to help others understand this landscape with the objective of making robust investment decisions. In doing so, systematically analysing and synthesizing the extant research published on BD and BDA area. More specifically, the authors seek to answer the following two principal questions: Q1 –What are the different types of BD challenges theorized/proposed/confronted by organizations? and Q2 – What are the different types of BDA methods theorized/proposed/employed to overcome BD challenges?. This systematic literature review (SLR) is carried out through observing and understanding the past trends and extant patterns/themes in the BDA research area, evaluating contributions, summarizing knowledge, thereby identifying limitations, implications and potential further research avenues to support the academic community in exploring research themes/patterns. Thus, to trace the implementation of BD strategies, a profiling method is employed to analyze articles (published in English-speaking peer-reviewed journals between 1996 and 2015) extracted from the Scopus database. The analysis presented in this paper has identified relevant BD research studies that have contributed both conceptually and empirically to the expansion and accrual of intellectual wealth to the BDA in technology and organizational resource management discipline. © 2016 The Authors. Published by Elsevier Inc. 
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "b0952378fb2214e9cbc7a1846711e0a9", "text": "As technology is exponentially regenerated and developed; new educational methods are needed to assure better educational outcomes. Recently schools have started integrating recent technology into the educational process in order to attract students and increase their engagement in the process. This paper will demonstrate how an augmented immersive reality mobile application can work to enrich the learning environment and attract the students to learn and explore subjects such as sciences, more specifically Chemistry. Augmented immersive reality allows students to learn in their own way and pace while encouraging them to explore and be creative as their `trial-and-error' mistakes will not impose any major consequences in the virtual world. A field study was conducted on grade 11 students to verify the benefits of using the developed application in education.", "title": "" }, { "docid": "982af44d0c5fc3d0bddd2804cee77a04", "text": "Coprime array offers a larger array aperture than uniform linear array with the same number of physical sensors, and has a better spatial resolution with increased degrees of freedom. However, when it comes to the problem of adaptive beamforming, the existing adaptive beamforming algorithms designed for the general array cannot take full advantage of coprime feature offered by the coprime array. In this paper, we propose a novel coprime array adaptive beamforming algorithm, where both robustness and efficiency are well balanced. Specifically, we first decompose the coprime array into a pair of sparse uniform linear subarrays and process their received signals separately. According to the property of coprime integers, the direction-of-arrival (DOA) can be uniquely estimated for each source by matching the super-resolution spatial spectra of the pair of sparse uniform linear subarrays. Further, a joint covariance matrix optimization problem is formulated to estimate the power of each source. The estimated DOAs and their corresponding power are utilized to reconstruct the interference-plus-noise covariance matrix and estimate the signal steering vector. Theoretical analyses are presented in terms of robustness and efficiency, and simulation results demonstrate the effectiveness of the proposed coprime array adaptive beamforming algorithm.", "title": "" }, { "docid": "800dc3e6a3f58d2af1ed7cd526074d54", "text": "The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce the network to be sparse, and at the same time remove any redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on (1, 2)-norm, which promotes competition for features between different weights, thus enforcing them to fit to disjoint sets of features. We further combine the exclusive sparsity with the group sparsity based on (2, 1)-norm, to promote both sharing and competition for features in training of a deep neural network. 
We validate our method on multiple public datasets, and the results show that our method can obtain more compact and efficient networks while also improving the performance over the base networks with full weights, as opposed to existing sparsity regularizations that often obtain efficiency at the expense of prediction accuracy.", "title": "" }, { "docid": "8cb84abf9a87b2691536ba58bd556a3a", "text": "The purpose of this tutorial paper is to make general type-2 fuzzy logic systems (GT2 FLSs) more accessible to fuzzy logic researchers and practitioners, and to expedite their research, designs, and use. To accomplish this, the paper 1) explains four different mathematical representations for general type-2 fuzzy sets (GT2 FSs); 2) demonstrates that for the optimal design of a GT2 FLS, one should use the vertical-slice representation of its GT2 FSs because it is the only one of the four mathematical representations that is parsimonious; 3) shows how to obtain set theoretic and other operations for GT2 FSs using type-1 (T1) FS mathematics (α- cuts play a central role); 4) reviews Mamdani and TSK interval type-2 (IT2) FLSs so that their mathematical operations can be easily used in a GT2 FLS; 5) provides all of the formulas that describe both Mamdani and TSK GT2 FLSs; 6) explains why center-of sets type-reduction should be favored for a GT2 FLS over centroid type-reduction; 7) provides three simplified GT2 FLSs (two are for Mamdani GT2 FLSs and one is for a TSK GT2 FLS), all of which bypass type reduction and are generalizations from their IT2 FLS counterparts to GT2 FLSs; 8) explains why gradient-based optimization should not be used to optimally design a GT2 FLS; 9) explains how derivative-free optimization algorithms can be used to optimally design a GT2 FLS; and 10) provides a three-step approach for optimally designing FLSs in a progressive manner, from T1 to IT2 to GT2, each of which uses a quantum particle swarm optimization algorithm, by virtue of which the performance for the IT2 FLS cannot be worse than that of the T1 FLS, and the performance for the GT2 FLS cannot be worse than that of the IT2 FLS.", "title": "" }, { "docid": "4a5ced961de32d383427e8825bb5c41b", "text": "1. Top-down control can be an important determinant of ecosystem structure and function, but in oceanic ecosystems, where cascading effects of predator depletions, recoveries, and invasions could be significant, such effects had rarely been demonstrated until recently. 2. Here we synthesize the evidence for oceanic top-down control that has emerged over the last decade, focusing on large, high trophic-level predators inhabiting continental shelves, seas, and the open ocean. 3. In these ecosystems, where controlled manipulations are largely infeasible, 'pseudo-experimental' analyses of predator-prey interactions that treat independent predator populations as 'replicates', and temporal or spatial contrasts in predator populations and climate as 'treatments', are increasingly employed to help disentangle predator effects from environmental variation and noise. 4. Substantial reductions in marine mammals, sharks, and piscivorous fishes have led to mesopredator and invertebrate predator increases. Conversely, abundant oceanic predators have suppressed prey abundances. Predation has also inhibited recovery of depleted species, sometimes through predator-prey role reversals. 
Trophic cascades have been initiated by oceanic predators linking to neritic food webs, but seem inconsistent in the pelagic realm with effects often attenuating at plankton. 5. Top-down control is not uniformly strong in the ocean, and appears contingent on the intensity and nature of perturbations to predator abundances. Predator diversity may dampen cascading effects except where nonselective fisheries deplete entire predator functional groups. In other cases, simultaneous exploitation of predator and prey can inhibit prey responses. Explicit consideration of anthropogenic modifications to oceanic foodwebs should help inform predictions about trophic control. 6. Synthesis and applications. Oceanic top-down control can have important socio-economic, conservation, and management implications as mesopredators and invertebrates assume dominance, and recovery of overexploited predators is impaired. Continued research aimed at integrating across trophic levels is needed to understand and forecast the ecosystem effects of changing oceanic predator abundances, the relative strength of top-down and bottom-up control, and interactions with intensifying anthropogenic stressors such as climate change.", "title": "" }, { "docid": "ff94a36f6a1420cd0d732976a9a7d10f", "text": "A basic idea of Dirichlet is to study a collection of interesting quantities {an}n≥1 by means of its Dirichlet series in a complex variable w: ∑ n≥1 ann −w. In this paper we examine this construction when the quantities an are themselves infinite series in a second complex variable s, arising from number theory or representation theory. We survey a body of recent work on such series and present a new conjecture concerning them.", "title": "" }, { "docid": "75cf6e81de38f370d629d0041783243d", "text": "CONTEXT\nThe Association of American Medical Colleges' Institute for Improving Medical Education's report entitled 'Effective Use of Educational Technology' called on researchers to study the effectiveness of multimedia design principles. These principles were empirically shown to result in superior learning when used with college students in laboratory studies, but have not been studied with undergraduate medical students as participants.\n\n\nMETHODS\nA pre-test/post-test control group design was used, in which the traditional-learning group received a lecture on shock using traditionally designed slides and the modified-design group received the same lecture using slides modified in accord with Mayer's principles of multimedia design. Participants included Year 3 medical students at a private, midwestern medical school progressing through their surgery clerkship during the academic year 2009-2010. The medical school divides students into four groups; each group attends the surgery clerkship during one of the four quarters of the academic year. Students in the second and third quarters served as the modified-design group (n=91) and students in the fourth-quarter clerkship served as the traditional-design group (n=39).\n\n\nRESULTS\nBoth student cohorts had similar levels of pre-lecture knowledge. Both groups showed significant improvements in retention (p<0.0001), transfer (p<0.05) and total scores (p<0.0001) between the pre- and post-tests. 
Repeated-measures anova analysis showed statistically significant greater improvements in retention (F=10.2, p=0.0016) and total scores (F=7.13, p=0.0081) for those students instructed using principles of multimedia design compared with those instructed using the traditional design.\n\n\nCONCLUSIONS\nMultimedia design principles are easy to implement and result in improved short-term retention among medical students, but empirical research is still needed to determine how these principles affect transfer of learning. Further research on applying the principles of multimedia design to medical education is needed to verify the impact it has on the long-term learning of medical students, as well as its impact on other forms of multimedia instructional programmes used in the education of medical students.", "title": "" }, { "docid": "200fd3c94e8b064833cfcbe7dfe0d39e", "text": "This article reviews the current opinion of the histopathological findings of common elbow, wrist, and hand tendinopathies. Implications for client management including examination, diagnosis, prognosis, intervention, and outcomes are addressed. Concepts for further research regarding common therapeutic interventions are discussed.", "title": "" }, { "docid": "7d8826c228fa8a3bb8837754d26b8979", "text": "This paper summarizes the latest, final version of ISO standard 24617-2 “Semantic annotation framework, Part 2: Dialogue acts”. Compared to the preliminary version ISO DIS 24617-2:2010, described in Bunt et al. (2010), the final version additionally includes concepts for annotating rhetorical relations between dialogue units, defines a full-blown compositional semantics for the Dialogue Act Markup Language DiAML (resulting, as a side-effect, in a different treatment of functional dependence relations among dialogue acts and feedback dependence relations); and specifies an optimally transparent XML-based reference format for the representation of DiAML annotations, based on the systematic application of the notion of ‘ideal concrete syntax’. We describe these differences and briefly discuss the design and implementation of an incremental method for dialogue act recognition, which proves the usability of the ISO standard for automatic dialogue annotation.", "title": "" }, { "docid": "c69bc25454ba459cac60b59a7f293012", "text": "The morphology of the retinal blood vessels can be an important indicator for diseases like diabetes, hypertension and retinopathy of prematurity (ROP). Thus, the measurement of changes in morphology of arterioles and venules can be of diagnostic value. Here we present a method to automatically segment retinal blood vessels based upon multiscale feature extraction. This method overcomes the problem of variations in contrast inherent in these images by using the first and second spatial derivatives of the intensity image that gives information about vessel topology. This approach also enables the detection of blood vessels of different widths, lengths and orientations. The local maxima over scales of the magnitude of the gradient and the maximum principal curvature of the Hessian tensor are used in a multiple pass region growing procedure. The growth progressively segments the blood vessels using feature information together with spatial information. The algorithm is tested on red-free and fluorescein retinal images, taken from two local and two public databases. Comparison with first public database yields values of 75.05% true positive rate (TPR) and 4.38% false positive rate (FPR). 
Second database values are of 72.46% TPR and 3.45% FPR. Our results on both public databases were comparable in performance with other authors. However, we conclude that these values are not sensitive enough so as to evaluate the performance of vessel geometry detection. Therefore we propose a new approach that uses measurements of vessel diameters and branching angles as a validation criterion to compare our segmented images with those hand segmented from public databases. Comparisons made between both hand segmented images from public databases showed a large inter-subject variability on geometric values. A last evaluation was made comparing vessel geometric values obtained from our segmented images between red-free and fluorescein paired images with the latter as the \"ground truth\". Our results demonstrated that borders found by our method are less biased and follow more consistently the border of the vessel and therefore they yield more confident geometric values.", "title": "" }, { "docid": "2d0121e8509d09571d8973da784440a5", "text": "In this paper we examine the suitability of BPMN for business process modelling, using the Workflow Patterns as an evaluation framework. The Workflow Patterns are a collection of patterns developed for assessing control-flow, data and resource capabilities in the area of Process Aware Information Systems (PAIS). In doing so, we provide a comprehensive evaluation of the capabilities of BPMN, and its strengths and weaknesses when utilised for business process modelling. The analysis provided for BPMN is part of a larger effort aiming at an unbiased and vendor-independent survey of the suitability and the expressive power of some mainstream process modelling languages. It is a sequel to an analysis series where languages like BPEL and UML 2.0 A.D are evaluated.", "title": "" }, { "docid": "c43514e4db01be2eb07d1251d3f13bc5", "text": "In this paper we present our research on detection of cyberbullying (CB), which stands for humiliating other people through the Internet. CB has become recognized as a social problem, and its mostly juvenile victims usually fall into depression, selfmutilate, or even commit suicide. To deal with the problem, school personnel performs Internet Patrol (IP) by reading through the available Web contents to spot harmful entries. It is crucial to help IP members detect malicious contents more efficiently. A number of research has tackled the problem during recent years. However, due to complexity of language used in cyberbullying, the results has remained only mildly satisfying. We propose a novel method to automatic cyberbullying detection based on Convolutional Neural Networks and increased Feature Density. The experiments performed on actual cyberbullying data showed a major advantage of our approach to all previous methods, including the best performing method so far based on BruteForce Search algorithm.", "title": "" }, { "docid": "5abd28fd61a784941fd5ab1974d81e30", "text": "Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. 
We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach. Our method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data. The learned transformation model can then be used to perform data augmentation for any end discriminative model. In our experiments, we show the efficacy of our approach on both image and text datasets, achieving improvements of 4.0 accuracy points on CIFAR-10, 1.4 F1 points on the ACE relation extraction task, and 3.4 accuracy points when using domain-specific transformation operations on a medical imaging dataset as compared to standard heuristic augmentation approaches.", "title": "" }, { "docid": "b3b96e6c1bbc2da8d548fb4b2d1072bc", "text": "This paper reports on insider threat detection research, during which a prototype system (PRODIGAL) was developed and operated as a testbed for exploring a range of detection and analysis methods. The data and test environment, system components, and the core method of unsupervised detection of insider threat leads are presented to document this work and benefit others working in the insider threat domain. We also discuss a core set of experiments evaluating the prototype’s ability to detect both known and unknown malicious insider behaviors. The experimental results show the ability to detect a large variety of insider threat scenario instances imbedded in real data with no prior knowledge of what scenarios are present or when they occur. We report on an ensemble-based, unsupervised technique for detecting potential insider threat instances. When run over 16 months of real monitored computer usage activity augmented with independently developed and unknown but realistic, insider threat scenarios, this technique robustly achieves results within five percent of the best individual detectors identified after the fact. We discuss factors that contribute to the success of the ensemble method, such as the number and variety of unsupervised detectors and the use of prior knowledge encoded in detectors designed for specific activity patterns. Finally, the paper describes the architecture of the prototype system, the environment in which we conducted these experiments and that is in the process of being transitioned to operational users.", "title": "" }, { "docid": "4fc4008c6762a18fef474ad251359bfa", "text": "Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving software, survey different types of self-improving software, and provide a review of the relevant literature. Finally, we address security implications from self-improving intelligent software.", "title": "" }, { "docid": "6fa41378af62791731e17db2ea1115b6", "text": "The amount of graph-structured data has recently experienced an enormous growth in many applications. To transform such data into useful information, fast analytics algorithms and software tools are necessary. One common graph analytics kernel is disjoint community detection (or graph clustering). Despite extensive research on heuristic solvers for this task, only few parallel codes exist, although parallelism will be necessary to scale to the data volume of real-world applications. 
We address the deficit in computing capability by a flexible and extensible community detection framework with shared-memory parallelism. Within this framework we design and implement efficient parallel community detection heuristics: A parallel label propagation scheme; the first large-scale parallelization of the well-known Louvain method, as well as an extension of the method adding refinement; and an ensemble scheme combining the above. In extensive experiments driven by the algorithm engineering paradigm, we identify the most successful parameters and combinations of these algorithms. We also compare our implementations with state-of-the-art competitors. The processing rate of our fastest algorithm often reaches 50 M edges/second. We recommend the parallel Louvain method and our variant with refinement as both qualitatively strong and fast. Our methods are suitable for massive data sets with billions of edges. (A preliminary version of this paper appeared in Proceedings of the 42nd International Conference on Parallel Processing (ICPP 2013) [35].)", "title": "" }, { "docid": "8055b2c65d5774000fe4fa81ff83efb7", "text": "Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes, such as edge detection and shape from shading. Using physical models for charge-coupled device (CCD) video cameras and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and scene variation. This analysis forms the basis of algorithms for camera characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras. Index Terms—CCD cameras, computer vision, camera calibration, noise estimation, reflectance variation, sensor modeling.", "title": "" }, { "docid": "f1d4323cbabd294723a2fd68321ad640", "text": "Mycosis fungoides (MF), a low-grade lymphoproliferative disorder, is the most common type of cutaneous T-cell lymphoma. Typically, neoplastic T cells localize to the skin and produce patches, plaques, tumours or erythroderma. Diagnosis of MF can be difficult due to highly variable presentations and the sometimes nonspecific nature of histological findings. Molecular biology has improved the diagnostic accuracy. Nevertheless, clinical experience is of substantial importance as MF can resemble a wide variety of skin diseases. We performed a literature review and found that MF can mimic >50 different clinical entities. We present a structured framework of clinical variations of classical, unusual and distinct forms of MF. Distinct subforms such as ichthyotic MF, adnexotropic (including syringotropic and folliculotropic) MF, MF with follicular mucinosis, granulomatous MF with granulomatous slack skin and papuloerythroderma of Ofuji are delineated in more detail.", "title": "" } ]
scidocsrr
4be1ded5d84d4c68eeabbb0ef7410d48
Between a Block and a Typeface: Designing and Evaluating Hybrid Programming Environments
[ { "docid": "8cb0bc13e10ee37ad2e71f7036bf1f6a", "text": "Blocks-based programming tools are becoming increasingly common in high-school introductory computer science classes. Such contexts are quite different than the younger audience and informal settings where these tools are more often used. This paper reports findings from a study looking at how high school students view blocks-based programming tools, what they identify as contributing to the perceived ease-of-use of such tools, and what they see as the most salient differences between blocks-based and text-based programming. Students report that numerous factors contribute to making blocks-based programming easy, including the natural language description of blocks, the drag-and-drop composition interaction, and the ease of browsing the language. Students also identify drawbacks to blocks-based programming compared to the conventional text-based approach, including a perceived lack of authenticity and being less powerful. These findings, along with the identified differences between blocks-based and text-based programming, contribute to our understanding of the suitability of using such tools in formal high school settings and can be used to inform the design of new, and revision of existing, introductory programming tools.", "title": "" } ]
[ { "docid": "ea1e84dfb1889826b0356dcd85182ec4", "text": "With the support of the wearable devices, healthcare services started a new phase in serving patients need. The new technology adds more facilities and luxury to the healthcare services, Also changes patients' lifestyles from the traditional way of monitoring to the remote home monitoring. Such new approach faces many challenges related to security as sensitive data get transferred through different type of channels. They are four main dimensions in terms of security scope such as trusted sensing, computation, communication, privacy and digital forensics. In this paper we will try to focus on the security challenges of the wearable devices and IoT and their advantages in healthcare sectors.", "title": "" }, { "docid": "2476c8b7f6fe148ab20c29e7f59f5b23", "text": "A high temperature, wire-bondless power electronics module with a double-sided cooling capability is proposed and successfully fabricated. In this module, a low-temperature co-fired ceramic (LTCC) substrate was used as the dielectric and chip carrier. Conducting vias were created on the LTCC carrier to realize the interconnection. The absent of a base plate reduced the overall thermal resistance and also improved the fatigue life by eliminating a large-area solder layer. Nano silver paste was used to attach power devices to the DBC substrate as well as to pattern the gate connection. Finite element simulations were used to compare the thermal performance to several reported double-sided power modules. Electrical measurements of a SiC MOSFET and SiC diode switching position demonstrated the functionality of the module.", "title": "" }, { "docid": "d62dcea792acd1710b7a9f45dacd9336", "text": "Hashing methods have been widely used for applications of large-scale image retrieval and classification. Non-deep hashing methods using handcrafted features have been significantly outperformed by deep hashing methods due to their better feature representation and end-to-end learning framework. However, the most striking successes in deep hashing have mostly involved discriminative models, which require labels. In this paper, we propose a novel unsupervised deep hashing method, named Deep Discrete Hashing (DDH), for large-scale image retrieval and classification. In the proposed framework, we address two main problems: 1) how to directly learn discrete binary codes? 2) how to equip the binary representation with the ability of accurate image retrieval and classification in an unsupervised way? We resolve these problems by introducing an intermediate variable and a loss function steering the learning process, which is based on the neighborhood structure in the original space. Experimental results on standard datasets (CIFAR-10, NUS-WIDE, and Oxford-17) demonstrate that our DDH significantly outperforms existing hashing methods by large margin in terms of mAP for image retrieval and object recognition. Code is available at https://github.com/htconquer/ddh.", "title": "" }, { "docid": "70becc434885af8f59ad39a3cedc8b6d", "text": "The trajectory of the heel and toe during the swing phase of human gait were analyzed on young adults. The magnitude and variability of minimum toe clearance and heel-contact velocity were documented on 10 repeat walking trials on 11 subjects. The energetics that controlled step length resulted from a separate study of 55 walking trials conducted on subjects walking at slow, natural, and fast cadences. 
A sensitivity analysis of the toe clearance and heel-contact velocity measures revealed the individual changes at each joint in the link-segment chain that could be responsible for changes in those measures. Toe clearance was very small (1.29 cm) and had low variability (about 4 mm). Heel-contact velocity was negligible vertically and small (0.87 m/s) horizontally. Six joints in the link-segment chain could, with very small changes (+/- 0.86 degrees to +/- 3.3 degrees), independently account for toe clearance variability. Only one muscle group in the chain (swing-phase hamstring muscles) could be responsible for altering the heel-contact velocity prior to heel contact. Four mechanical power phases in gait (ankle push-off, hip pull-off, knee extensor eccentric power at push-off, and knee flexor eccentric power prior to heel contact) could alter step length and cadence. These analyses demonstrate that the safe trajectory of the foot during swing is a precise endpoint control task that is under the multisegment motor control of both the stance and swing limbs.", "title": "" }, { "docid": "8a6a26094a9752010bb7297ecc80cd15", "text": "This paper provides standard instructions on how to protect short text messages with one-time pad encryption. The encryption is performed with nothing more than a pencil and paper, but provides absolute message security. If properly applied, it is mathematically impossible for any eavesdropper to decrypt or break the message without the proper key.", "title": "" }, { "docid": "47929b2ff4aa29bf115a6728173feed7", "text": "This paper presents a metaobject protocol (MOP) for C++. This MOP was designed to bring the power of meta-programming to C++ programmers. It avoids penalties on runtime performance by adopting a new meta-architecture in which the metaobjects control the compilation of programs instead of being active during program execution. This allows the MOP to be used to implement libraries of efficient, transparent language extensions.", "title": "" }, { "docid": "544cdcd97568a61e4a02a3ea37d6a0b5", "text": "In this paper, we describe a data-driven approach to leverage repositories of 3D models for scene understanding. Our ability to relate what we see in an image to a large collection of 3D models allows us to transfer information from these models, creating a rich understanding of the scene. We develop a framework for auto-calibrating a camera, rendering 3D models from the viewpoint an image was taken, and computing a similarity measure between each 3D model and an input image. We demonstrate this data-driven approach in the context of geometry estimation and show the ability to find the identities, poses and styles of objects in a scene. The true benefit of 3DNN compared to a traditional 2D nearest-neighbor approach is that by generalizing across viewpoints, we free ourselves from the need to have training examples captured from all possible viewpoints. Thus, we are able to achieve comparable results using orders of magnitude less data, and recognize objects from never-before-seen viewpoints. In this work, we describe the 3DNN algorithm and rigorously evaluate its performance for the tasks of geometry estimation and object detection/segmentation, as well as two novel applications: affordance estimation and photorealistic object insertion.", "title": "" }, { "docid": "eaddba3b27a3a1faf9e957917d102d3f", "text": "Some recent modifications of the protein assay by the method of Lowry, Rosebrough, Farr, and Randall (1951, J. Biol. Chem.
193, 265-275) have been reexamined and altered to provide a consolidated method which is simple, rapid, objective, and more generally applicable. A DOC-TCA protein precipitation technique provides for rapid quantitative recovery of soluble and membrane proteins from interfering substances even in very dilute solutions (< 1 μg/ml of protein). SDS is added to alleviate possible nonionic and cationic detergent and lipid interferences, and to provide mild conditions for rapid denaturation of membrane and proteolipid proteins. A simple method based on a linear log-log protein standard curve is presented to permit rapid and totally objective protein analysis using small programmable calculators. The new modification compared favorably with the original method of Lowry et al.", "title": "" }, { "docid": "f4720df58360b726bf2a128547f6d9d1", "text": "Iris texture is commonly thought to be highly discriminative between eyes and stable over individual lifetime, which makes iris particularly suitable for personal identification. However, iris texture also contains more information related to genes, which has been demonstrated by successful use of ethnic and gender classification based on iris. In this paper, we propose a novel ethnic classification method based on supervised codebook optimizing and Locality-constrained Linear Coding (LLC). The optimized codebook is composed of codes which are distinctive or mutual. Iris images from Asian and non-Asian are classified into two classes in experiments. Extensive experimental results show that the proposed method achieves encouraging classification rate and largely improves the ethnic classification performance comparing to existing algorithms.", "title": "" }, { "docid": "3587732b8d855eb8a941edeb58c68fe3", "text": "In this paper, we present a feature based approach for monocular scene reconstruction based on extended Kalman filters (EKF). Our method processes a sequence of images taken by a single camera mounted frontal on a mobile robot. Using different techniques, we are able to produce a precise reconstruction that is free from outliers and therefore can be used for reliable obstacle detection. In real-world field-tests we show that the presented approach is able to detect obstacles that are not seen by other sensors, such as laser-range-finders. Furthermore, we show that visual obstacle detection combined with a laser-range-finder can increase the detection rate of obstacles considerably allowing the autonomous use of mobile robots in complex public environments.", "title": "" }, { "docid": "35404fbbf92e7a995cdd6de044f2ec0d", "text": "The ball on plate system is the extension of traditional ball on beam balancing problem in control theory. In this paper the implementation of a proportional-integral-derivative controller (PID controller) to balance a ball on a plate has been demonstrated. To increase the system response time and accuracy multiple controllers are piped through a simple custom serial protocol to boost the processing power, and overall performance. A single HD camera module is used as a sensor to detect the ball's position and two RC servo motors are used to tilt the plate to balance the ball.
The result shows that by implementing multiple PUs (Processing Units) redundancy and high resolution can be achieved in real-time control systems.", "title": "" }, { "docid": "11625f32434ba977aa513cf4bc66cf01", "text": "PRIMARY OBJECTIVE\nNavigational skills are fundamental to community travel and, hence, personal independence and are often disrupted in people with cognitive impairments. Navigation devices are being developed that can support community navigation by delivering directional information. Selecting an effective mode to provide route-prompts is a critical design issue. This study evaluated the differential effects on pedestrian route finding using different modes of prompting delivered via a handheld electronic device for travellers with severe cognitive impairments.\n\n\nRESEARCH DESIGN\nA within-subject comparison study was used to evaluate potential differences in route navigation performance when travellers received directions using four different prompt modes: (1) aerial map image, (2) point of view map image, (3) text based instructions/no image and (4) audio direction/no image.\n\n\nMETHODS AND PROCEDURES\nTwenty travellers with severe cognitive impairments due to acquired brain injury walked four equivalent routes using four different prompting modes delivered via a wrist-worn navigation device. Navigation scores were computed that captured accuracy and confidence during navigation.\n\n\nMAIN OUTCOME\nResults of the repeated measures Analysis of Variance suggested that participants performed best when given prompts via speech-based audio directions. The majority of the participants also preferred this prompting mode. Findings are interpreted in the context of cognitive resource allocation theory.", "title": "" }, { "docid": "83dec7aa3435effc3040dfb08cb5754a", "text": "This paper examines the relationship between annual report readability and firm performance and earnings persistence. This is motivated by the Securities and Exchange Commission’s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using both the Fog Index from computational linguistics and the length of the document. I find that the annual reports of firms with lower earnings are harder to read (i.e., they have higher Fog and are longer). Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent. This suggests that managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors.", "title": "" }, { "docid": "13cb137f6eda91a92cd5509fb2266323", "text": "In this introductory essay, we explore definitions of the ‘sharing economy’, a concept indicating both social (relational, communitarian) and economic (allocative, profit-seeking) aspects which appear to be in tension. We suggest combining the social and economic logics of the sharing economy to focus on the central features of network enabled, aggregated membership in a pool of offers and demands (for goods, services, creative expressions). This definition of the sharing economy distinguishes it from other related peer-to-peer and collaborative forms of production. Understanding the social and economic motivations for and implications of participating in the sharing economy is important to its regulation. 
Each of the papers in this special issue contributes to knowledge by linking the social and economic aspects of sharing economy practices to regulatory norms and mechanisms. We conclude this essay by suggesting future research to further clarify and render intelligible the sharing economy, not as a contradiction in terms but as an empirically observable realm of socio-economic activity.", "title": "" }, { "docid": "90fc857db7207f0a94dd91fbaa48be4f", "text": "We present a computational origami construction of Morley’s triangles and automated proof of correctness of the generalized Morley’s theorem in a streamlined process of solving-computing-proving. The whole process is realized by a computational origami system being developed by us. During the computational origami construction, geometric constraints in symbolic and numeric representation are generated and accumulated. Those constraints are then transformed into algebraic relations, which in turn are used to prove the correctness of the construction. The automated proof required non-trivial amount of computer resources, and shows the necessity of networked services of mathematical software. This example is considered to be a case study for innovative mathematical knowledge management.", "title": "" }, { "docid": "9904ac77b96bdd634322701a53149b4e", "text": "Brain-computer interface can have a profound impact on the life of paralyzed or elderly citizens as they offer control over various devices without any necessity of movement of the body parts. This technology has come a long way and opened new dimensions in improving our life. Use of electroencephalogram (EEG wave) based control schemes can change the shape of the lives of the disabled citizens if incorporated with an electric wheelchair through a wearable device. Electric wheelchairs are nowadays commercially available which provides mobility to the disabled persons with relative ease. But most of the commercially available products are much expensive and controlled through the joystick, hand gesture, voice command, etc. which may not be viable control scheme for severely disabled or paralyzed persons. In our research work, we have developed a low-cost electric wheelchair using locally available cheap parts and incorporated brain-computer interface considering the affordability of people from developing countries. So, people who have lost their control over their limbs or have the inability to drive a wheelchair by any means can control the proposed wheelchair only by their attention and willingness to blink. To acquire the signal of attention and blink, single channel electroencephalogram (EEG wave) was captured by a wearable Neurosky MindWave Mobile. One of the salient features of the proposed scheme is ‘Destination Mapping’ by which the wheelchair develops a virtual map as the user moves around and autonomously reaches desired positions afterward by taking command from a smart interface based on EEG signal. From the experiments that were carried out at different stages of the development, it was exposed that, such a wheelchair is easy to train and calibrate for different users and offers a low cost and smart alternative especially for the elderly people in developing countries.", "title": "" }, { "docid": "a9fba1188b97a2097702ff900f35d4d9", "text": "One of the beauties of use cases is their accessible, informal format. Use cases are easy to write, and the graphical notation is trivial. 
Because of their simplicity, use cases are not intimidating, even for teams that have little experience with formal requirements specification and management. However, the simplicity can be deceptive; writing good use cases takes some skill and practice. Many groups writing use cases for the first time run into similar kinds of problems. This paper presents the author's \"Top Ten\" list of use case pitfalls and problems, based on observations from a number of real projects. The paper outlines the symptoms of the problems, and recommends pragmatic cures for each. Examples are provided to illustrate the problems and their solutions.", "title": "" }, { "docid": "8bd5263d6f1bd0ee3bf988b5afbcdbeb", "text": "We present the fundamentals for a toolkit for scalable and dependable service platforms and architectures that enable flexible and dynamic provisioning of cloud services. The innovations behind the toolkit are aimed at optimizing the whole service life cycle, including service construction, deployment, and operation, on a basis of aspects such as trust, risk, ecoefficiency and cost. Notably, adaptive self-preservation is key to meet predicted and unforeseen changes in resource requirements. By addressing the whole service life cycle, taking into account the multitude of future cloud architectures, and a by taking a holistic approach to sustainable service provisioning, the toolkit is aimed to provide a foundation for a reliable, sustainable, and trustful cloud computing industry.", "title": "" }, { "docid": "4396d53b9cfeb4997b4e7c7293d67586", "text": "Title Type cities and complexity understanding cities with cellular automata agent-based models and fractals PDF the complexity of cooperation agent-based models of competition and collaboration PDF party competition an agent-based model princeton studies in complexity PDF sharing cities a case for truly smart and sustainable cities urban and industrial environments PDF global metropolitan globalizing cities in a capitalist world questioning cities PDF state of the worlds cities 201011 cities for all bridging the urban divide PDF new testament cities in western asia minor light from archaeology on cities of paul and the seven churches of revelation PDF", "title": "" }, { "docid": "6e7a43826490fe80692da334ef38f5a4", "text": "We present a modular system for detection and correction of errors made by nonnative (English as a Second Language = ESL) writers. We focus on two error types: the incorrect use of determiners and the choice of prepositions. We use a decisiontree approach inspired by contextual spelling systems for detection and correction suggestions, and a large language model trained on the Gigaword corpus to provide additional information to filter out spurious suggestions. We show how this system performs on a corpus of non-native English text and discuss strategies for future enhancements.", "title": "" } ]
scidocsrr
bfdd289467ecfdf1128255334f9dc7b2
Application identification via network traffic classification
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on SourceForge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "ba7f70d5e360b967a3a2ad65fb39aa30", "text": "The research community has begun looking for IP traffic classification techniques that do not rely on 'well known' TCP or UDP port numbers, or interpreting the contents of packet payloads. New work is emerging on the use of statistical traffic characteristics to assist in the identification and classification process. This survey paper looks at emerging research into the application of Machine Learning (ML) techniques to IP traffic classification - an inter-disciplinary blend of IP networking and data mining techniques. We provide context and motivation for the application of ML techniques to IP traffic classification, and review 18 significant works that cover the dominant period from 2004 to early 2007. These works are categorized and reviewed according to their choice of ML strategies and primary contributions to the literature. We also discuss a number of key requirements for the employment of ML-based traffic classifiers in operational IP networks, and qualitatively critique the extent to which the reviewed works meet these requirements. Open issues and challenges in the field are also discussed.", "title": "" } ]
[ { "docid": "a3774a953758e650077ac2a33613ff58", "text": "We propose a deep convolutional neural network (CNN) method for natural image matting. Our method takes multiple initial alpha mattes of the previous methods and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs and reconstructed alpha mattes. Among the various existing methods, we focus on using two simple methods as initial alpha mattes: the closed-form matting and KNN matting. They are complementary to each other in terms of local and nonlocal principles. A major benefit of our method is that it can “recognize” different local image structures and then combine the results of local (closed-form matting) and nonlocal (KNN matting) mattings effectively to achieve higher quality alpha mattes than both of the inputs. Furthermore, we verify extendability of the proposed network to different combinations of initial alpha mattes from more advanced techniques such as KL divergence matting and information-flow matting. On the top of deep CNN matting, we build an RGB guided JPEG artifacts removal network to handle JPEG block artifacts in alpha matting. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. We perform deeper experiments including studies to evaluate the importance of balancing training data and to measure the effects of initial alpha mattes and also consider results from variant versions of the proposed network to analyze our proposed DCNN matting. In addition, our method achieved high ranking in the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared errors, and gradient errors. Also, our RGB guided JPEG artifacts removal network restores the damaged alpha mattes from compressed images in JPEG format.", "title": "" }, { "docid": "cc3d0d9676ad19f71b4a630148c4211f", "text": "OBJECTIVES\nPrevious studies have revealed that memory performance is diminished in chronic pain patients. Few studies, however, have assessed multiple components of memory in a single sample. It is currently also unknown whether attentional problems, which are commonly observed in chronic pain, mediate the decline in memory. Finally, previous studies have focused on middle-aged adults, and a possible detrimental effect of aging on memory performance in chronic pain patients has been commonly disregarded. This study, therefore, aimed at describing the pattern of semantic, working, and visual and verbal episodic memory performance in participants with chronic pain, while testing for possible contributions of attention and age to task performance.\n\n\nMETHODS\nThirty-four participants with chronic pain and 32 pain-free participants completed tests of episodic, semantic, and working memory to assess memory performance and a test of attention.\n\n\nRESULTS\nParticipants with chronic pain performed worse on tests of working memory and verbal episodic memory. A decline in attention explained some, but not all, group differences in memory performance. Finally, no additional effect of age on the diminished task performance in participants with chronic pain was observed.\n\n\nDISCUSSION\nTaken together, the results indicate that chronic pain significantly affects memory performance. Part of this effect may be caused by underlying attentional dysfunction, although this could not fully explain the observed memory decline. 
An increase in age in combination with the presence of chronic pain did not additionally affect memory performance.", "title": "" }, { "docid": "4ba941ee9e7840dc18cf873062076456", "text": "We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error versus reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We proceed this with a definition of sample quality, a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores", "title": "" }, { "docid": "3827ee6ebfc7813b566ef1d8f94c0f42", "text": "We provide a simple but novel supervised weighting scheme for adjusting term frequency in tf-idf for sentiment analysis and text classification. We compare our method to baseline weighting schemes and find that it outperforms them on multiple benchmarks. The method is robust and works well on both snippets and longer documents.", "title": "" }, { "docid": "1eef5f6e1b6903c46b8ae625ac1beec4", "text": "Distant supervision has become the leading method for training large-scale relation extractors, with nearly universal adoption in recent TAC knowledge-base population competitions. However, there are still many questions about the best way to learn such extractors. In this paper we investigate four orthogonal improvements: integrating named entity linking (NEL) and coreference resolution into argument identification for training and extraction, enforcing type constraints of linked arguments, and partitioning the model by relation type signature. We evaluate sentential extraction performance on two datasets: the popular set of NY Times articles partially annotated by Hoffmann et al. (2011) and a new dataset, called GORECO, that is comprehensively annotated for 48 common relations. We find that using NEL for argument identification boosts performance over the traditional approach (named entity recognition with string match), and there is further improvement from using argument types. Our best system boosts precision by 44% and recall by 70%.", "title": "" }, { "docid": "7525b24d3e0c6332cdc3eb58c7677b63", "text": "OBJECTIVE\nTo compare the efficacy of 2 intensified insulin regimens, continuous subcutaneous insulin infusion (CSII) and multiple daily injections (MDI), by using the short-acting insulin analog lispro in type 1 diabetic patients.\n\n\nRESEARCH DESIGN AND METHODS\nA total of 41 C-peptide-negative type 1 diabetic patients (age 43.5+/-10.3 years; 21 men and 20 women, BMI 24.0+/-2.4 kg/m2, diabetes duration 20.0+/-11.3 years) on intensified insulin therapy (MDI with regular insulin or lispro, n = 9, CSII with regular insulin, n = 32) were included in an open-label randomized crossover study comparing two 4-month periods of intensified insulin therapy with lispro: one period by MDI and the other by CSII. 
Blood glucose (BG) was monitored before and after each of the 3 meals each day.\n\n\nRESULTS\nThe basal insulin regimen had to be optimized in 75% of the patients during the MDI period (mean number of NPH injections per day = 2.65). HbA1c values were lower when lispro was used in CSII than in MDI (7.89+/-0.77 vs. 8.24+/-0.77%, P<0.001). BG levels were lower with CSII (165+/-27 vs. 175+/-33 mg/dl, P<0.05). The SD of all the BG values (73+/-15 vs. 82+/-18 mg/dl, P<0.01) was lower with CSII. The frequency of hypoglycemic events, defined as BG levels <60 mg/dl, did not differ significantly between the 2 modalities (CSII 3.9+/-4.2 per 14 days vs. MDI 4.3+/-3.9 per 14 days). Mean insulin doses were significantly lower with CSII than with MDI (38.5+/-9.8 vs. 47.3+/-14.9 U/day. respectively, P< 0.0001).\n\n\nCONCLUSIONS\nWhen used with external pumps versus MDI, lispro provides better glycemic control and stability with much lower doses of insulin and does not increase the frequency of hypoglycemic episodes.", "title": "" }, { "docid": "112f10eb825a484850561afa7c23e71f", "text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.", "title": "" }, { "docid": "8988aaa4013ef155cbb09644ca491bab", "text": "Uses and gratification theory aids in the assessment of how audiences use a particular medium and the gratifications they derive from that use. In this paper this theory has been applied to derive Internet uses and gratifications for Indian Internet users. This study proceeds in four stages. First, six first-order gratifications namely self development, wide exposure, user friendliness, relaxation, career opportunities, and global exchange were identified using an exploratory factor analysis. Then the first order gratifications were subjected to firstorder confirmatory factor analysis. Third, using second-order confirmatory factor analysis three types of secondorder gratifications were obtained, namely process gratifications, content gratifications and social gratifications. Finally, with the use of t-tests the study has shown that males and females differ significantly on the gratification factors “self development”, “user friendliness”, “wide exposure” and “relaxation.” The intended audience consists of masters’ level students and doctoral students who want to learn exploratory factor analysis and confirmatory factor analysis. This case study can also be used to teach the basics of structural equation modeling using the software AMOS.", "title": "" }, { "docid": "b52da336c6d70923a1c4606f5076a3ba", "text": "Given the recent explosion of interest in streaming data and online algorithms, clustering of time-series subsequences, extracted via a sliding window, has received much attention. In this work, we make a surprising claim. Clustering of time-series subsequences is meaningless. 
More concretely, clusters extracted from these time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising because it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method that, based on the concept of time-series motifs, is able to meaningfully cluster subsequences on some time-series datasets.", "title": "" }, { "docid": "a1bb09726327d73cf73c1aa9b0a2c39d", "text": "Advances in neural network language models have demonstrated that these models can effectively learn representations of words meaning. In this paper, we explore a variation of neural language models that can learn on concepts taken from structured ontologies and extracted from free-text, rather than directly from terms in free-text.\n This model is employed for the task of measuring semantic similarity between medical concepts, a task that is central to a number of techniques in medical informatics and information retrieval. The model is built with two medical corpora (journal abstracts and patient records) and empirically validated on two ground-truth datasets of human-judged concept pairs assessed by medical professionals. Empirically, our approach correlates closely with expert human assessors (≈0.9) and outperforms a number of state-of-the-art benchmarks for medical semantic similarity.\n The demonstrated superiority of this model for providing an effective semantic similarity measure is promising in that this may translate into effectiveness gains for techniques in medical information retrieval and medical informatics (e.g., query expansion and literature-based discovery).", "title": "" }, { "docid": "b6286076ec2585f24dc33e775ab0fe70", "text": "Trajectory tracking control for quadrotors is important for applications ranging from surveying and inspection, to film making. However, designing and tuning classical controllers, such as proportional-integral-derivative (PID) controllers, to achieve high tracking precision can be time-consuming and difficult, due to hidden dynamics and other non-idealities. The Deep Neural Network (DNN), with its superior capability of approximating abstract, nonlinear functions, proposes a novel approach for enhancing trajectory tracking control. This paper presents a DNN-based algorithm as an add-on module that improves the tracking performance of a classical feedback controller. Given a desired trajectory, the DNNs provide a tailored reference input to the controller based on their gained experience. The input aims to achieve a unity map between the desired and the output trajectory. The motivation for this work is an interactive “fly-as-you-draw” application, in which a user draws a trajectory on a mobile device, and a quadrotor instantly flies that trajectory with the DNN-enhanced control system. 
Experimental results demonstrate that the proposed approach improves the tracking precision for user-drawn trajectories after the DNNs are trained on selected periodic trajectories, suggesting the method's potential in real-world applications. Tracking errors are reduced by around 40–50% for both training and testing trajectories from users, highlighting the DNNs' capability of generalizing knowledge.", "title": "" }, { "docid": "4e14e9cb95ed8bc3b352e3e1119b53e1", "text": "We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet [1], while its categorywise accuracy is only 8% less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet [16], ShuffleNet [17], and ENet [20] on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively.", "title": "" }, { "docid": "65579d9b79107b708d90d150f3c7ff9b", "text": "The design and simulation of a scalable neural chip with synaptic electronics using nanoscale memristors fully integrated with complementary metal-oxide-semiconductor (CMOS) is presented. The circuit consists of integrate-and-fire neurons and synapses with spike-timing dependent plasticity (STDP). The synaptic conductance values can be stored in memristors with eight levels, and the topology of connections between neurons is reconfigurable. The circuit has been designed using a 90 nm CMOS process with via connections to on-chip post-processed memristor arrays. The design has about 16 million CMOS transistors and 73 728 integrated memristors. We provide circuit level simulations of the entire chip performing neuronal and synaptic computations that result in biologically realistic functional behavior.", "title": "" }, { "docid": "86e5c9defae0135db8466df0bdbe5aef", "text": "Autonomous Underwater Vehicles (AUVs) are robots able to perform tasks without human intervention (remote operators). Research and development of this class of vehicles has growing, due to the excellent characteristics of the AUVs to operate in different situations. Therefore, this study aims to analyze turbulent single fluid flow over different geometric configurations of an AUV hull, in order to obtain test geometry that generates lower drag force, which reduces the energy consumption of the vehicle, thereby increasing their autonomy during operation. In the numerical analysis was used ANSYS-CFX® 11.0 software, which is a powerful tool for solving problems involving fluid mechanics. Results of the velocity (vectors and streamlines), pressure distribution and drag coefficient are showed and analyzed. Optimum hull geometry was found. 
Lastly, a relationship between the geometric parameters analyzed and the drag coefficient was obtained.", "title": "" }, { "docid": "d00765c898151dd5977fab8e39c4d7e9", "text": "Knowledge graphs (KG) play a crucial role in many modern applications. However, constructing a KG from natural language text is challenging due to the complex structure of the text. Recently, many approaches have been proposed to transform natural language text to triples to obtain KGs. Such approaches have not yet provided efficient results for mapping extracted elements of triples, especially the predicate, to their equivalent elements in a KG. Predicate mapping is essential because it can reduce the heterogeneity of the data and increase the searchability over a KG. In this article, we propose T2KG, an automatic KG creation framework for natural language text, to more effectively map natural language text to predicates. In our framework, a hybrid combination of a rule-based approach and a similarity-based approach is presented for mapping a predicate to its corresponding predicate in a KG. Based on experimental results, the hybrid approach can identify more similar predicate pairs than a baseline method in the predicate mapping task. An experiment on KG creation is also conducted to investigate the performance of the T2KG. The experimental results show that the T2KG also outperforms the baseline in KG creation. Although KG creation is conducted in open domains, in which prior knowledge is not provided, the T2KG still achieves an F1 score of approximately 50% when generating triples in the KG creation task. In addition, an empirical study on knowledge population using various text sources is conducted, and the results indicate the T2KG could be used to obtain knowledge that is not currently available from DBpedia. key words: knowledge graph, knowledge discovery, knowledge extraction, linked data", "title": "" }, { "docid": "49f68a9534a602074066948a13164ad4", "text": "Recent developments in Web technologies and using AI techniques to support efforts in making the Web more intelligent and provide higher-level services to its users have opened the door to building the Semantic Web. That fact has a number of important implications for Web-based education, since Web-based education has become a very important branch of educational technology. Classroom independence and platform independence of Web-based education, availability of authoring tools for developing Web-based courseware, cheap and efficient storage and distribution of course materials, hyperlinks to suggested readings, digital libraries, and other sources of references relevant for the course are but a few of a number of clear advantages of Web-based education. However, there are several challenges in improving Web-based education, such as providing for more adaptivity and intelligence. Developments in the Semantic Web, while contributing to the solution to these problems, also raise new issues that must be considered if we are to progress. This paper surveys the basics of the Semantic Web and discusses its importance in future Web-based educational applications. Instead of trying to rebuild some aspects of a human brain, we are going to build a brain of and for humankind. D. Fensel and M.A. 
Musen (Fensel & Musen, 2001)", "title": "" }, { "docid": "51448cd6bb6e92d249c6d32ba22971de", "text": "This paper describes the design of an optimal-control-based active safety framework that performs trajectory planning, threat assessment, and semiautonomous control of passenger vehicles in hazard avoidance scenarios. This framework allows for multiple actuation modes, diverse trajectory-planning objectives, and varying levels of autonomy. A model predictive controller iteratively plans a best-case vehicle trajectory through a navigable corridor as a constrained optimal control problem. The framework then uses this trajectory to assess the threat posed to the vehicle and intervenes in proportion to this threat. This approach minimizes controller intervention while ensuring that the vehicle does not depart from a navigable corridor of travel. Simulation and experimental results are presented here to demonstrate the framework’s ability to incorporate configurable intervention laws while sharing control with a human driver.", "title": "" }, { "docid": "502cae1daa2459ed0f826ed3e20c44e4", "text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.", "title": "" }, { "docid": "0ee2dff9fb026b5c117d39fa537ab1b3", "text": "Motor Imagery (MI) is a highly supervised method nowadays for the disabled patients to give them hope. This paper proposes a differentiation method between imagery left and right hands movement using Daubechies wavelet of Discrete Wavelet Transform (DWT) and Levenberg-Marquardt back propagation training algorithm of Neural Network (NN). DWT decomposes the raw EEG data to extract significant features that provide feature vectors precisely. Levenberg-Marquardt Algorithm (LMA) based neural network uses feature vectors as input for classification of the two class data and outcomes overall classification accuracy of 92%. Previously various features and methods used but this recommended method exemplifies that statistical features provide better accuracy for EEG classification. Variation among features indicates differences between neural activities of two brain hemispheres due to two imagery hands movement. 
Results from the classifier are used to interface the human brain with a machine for better performance in schemes that require high precision and accuracy.", "title": "" }, { "docid": "ee472d575bb598dcb4d5d8e4218d25e7", "text": "This paper proposes a new target impact point estimation system using acoustic sensors. The proposed system estimates where a projectile hits the target plane by detecting the shock wave created by the passage of a supersonic projectile near the target. The method first measures the TDOA (Time Delay Of Arrival) of the shock wave at two sets of acoustic sensors arranged horizontally under the target in an equilateral triangular configuration. The acoustic hit coordinate on the target is then calculated using triangulation. The performance of the proposed algorithm was confirmed by comparing the actual impact points with the coordinates estimated by the proposed algorithm in actual shooting experiments.", "title": "" } ]
scidocsrr
3514f3a21e783da662263fd601a17835
BlowFish: Dynamic Storage-Performance Tradeoff in Data Stores
[ { "docid": "322f452b95b257c2b95001bfbf5b5063", "text": "We present Schism, a novel workload-aware approach for database partitioning and replication designed to improve scalability of sharednothing distributed databases. Because distributed transactions are expensive in OLTP settings (a fact we demonstrate through a series of experiments), our partitioner attempts to minimize the number of distributed transactions, while producing balanced partitions. Schism consists of two phases: i) a workload-driven, graph-based replication/partitioning phase and ii) an explanation and validation phase. The first phase creates a graph with a node per tuple (or group of tuples) and edges between nodes accessed by the same transaction, and then uses a graph partitioner to split the graph into k balanced partitions that minimize the number of cross-partition transactions. The second phase exploits machine learning techniques to find a predicate-based explanation of the partitioning strategy (i.e., a set of range predicates that represent the same replication/partitioning scheme produced by the partitioner). The strengths of Schism are: i) independence from the schema layout, ii) effectiveness on n-to-n relations, typical in social network databases, iii) a unified and fine-grained approach to replication and partitioning. We implemented and tested a prototype of Schism on a wide spectrum of test cases, ranging from classical OLTP workloads (e.g., TPC-C and TPC-E), to more complex scenarios derived from social network websites (e.g., Epinions.com), whose schema contains multiple n-to-n relationships, which are known to be hard to partition. Schism consistently outperforms simple partitioning schemes, and in some cases proves superior to the best known manual partitioning, reducing the cost of distributed transactions up to 30%.", "title": "" } ]
[ { "docid": "2b7d91c38a140628199cbdbee65c008a", "text": "Edges in man-made environments, grouped according to vanishing point directions, provide single-view constraints that have been exploited before as a precursor to both scene understanding and camera calibration. A Bayesian approach to edge grouping was proposed in the \"Manhattan World\" paper by Coughlan and Yuille, where they assume the existence of three mutually orthogonal vanishing directions in the scene. We extend the thread of work spawned by Coughlan and Yuille in several significant ways. We propose to use the expectation maximization (EM) algorithm to perform the search over all continuous parameters that influence the location of the vanishing points in a scene. Because EM behaves well in high-dimensional spaces, our method can optimize over many more parameters than the exhaustive and stochastic algorithms used previously for this task. Among other things, this lets us optimize over multiple groups of orthogonal vanishing directions, each of which induces one additional degree of freedom. EM is also well suited to recursive estimation of the kind needed for image sequences and/or in mobile robotics. We present experimental results on images of \"Atlanta worlds\", complex urban scenes with multiple orthogonal edge-groups, that validate our approach. We also show results for continuous relative orientation estimation on a mobile robot.", "title": "" }, { "docid": "cfb4a1da7928eaa42fe35df9768dc23b", "text": "Recognizing fine-grained categories (e.g., bird species) is difficult due to the challenges of discriminative region localization and fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that region detection and fine-grained feature learning are mutually correlated and thus can reinforce each other. In this paper, we propose a novel recurrent attention convolutional neural network (RA-CNN) which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutual reinforced way. The learning at each scale consists of a classification sub-network and an attention proposal sub-network (APN). The APN starts from full images, and iteratively generates region attention from coarse to fine by taking previous prediction as a reference, while the finer scale network takes as input an amplified attended region from previous scale in a recurrent way. The proposed RA-CNN is optimized by an intra-scale classification loss and an inter-scale ranking loss, to mutually learn accurate region attention and fine-grained representation. RA-CNN does not need bounding box/part annotations and can be trained end-to-end. We conduct comprehensive experiments and show that RA-CNN achieves the best performance in three fine-grained tasks, with relative accuracy gains of 3.3%, 3.7%, 3.8%, on CUB Birds, Stanford Dogs and Stanford Cars, respectively.", "title": "" }, { "docid": "e24f60bc524a69976f727cb847ed92fa", "text": "In large scale and complex IT service environments, a problematic incident is logged as a ticket and contains the ticket summary (system status and problem description). The system administrators log the step-wise resolution description when such tickets are resolved. The repeating service events are most likely resolved by inferring similar historical tickets. 
With the availability of reasonably large ticket datasets, we can have an automated system to recommend the best matching resolution for a given ticket summary. In this paper, we first identify the challenges in real-world ticket analysis and develop an integrated framework to efficiently handle those challenges. The framework first quantifies the quality of ticket resolutions using a regression model built on carefully designed features. The tickets, along with their quality scores obtained from the resolution quality quantification, are then used to train a deep neural network ranking model that outputs the matching scores of ticket summary and resolution pairs. This ranking model allows us to leverage the resolution quality in historical tickets when recommending resolutions for an incoming incident ticket. In addition, the feature vectors derived from the deep neural ranking model can be effectively used in other ticket analysis tasks, such as ticket classification and clustering. The proposed framework is extensively evaluated with a large real-world dataset.", "title": "" }, { "docid": "65b933f72f74a17777baa966658f4c42", "text": "We describe the epidemic of obesity in the United States: escalating rates of obesity in both adults and children, and why these qualify as an epidemic; disparities in overweight and obesity by race/ethnicity and sex, and the staggering health and economic consequences of obesity. Physical activity contributes to the epidemic as explained by new patterns of physical activity in adults and children. Changing patterns of food consumption, such as rising carbohydrate intake--particularly in the form of soda and other foods containing high fructose corn syrup--also contribute to obesity. We present as a central concept, the food environment--the contexts within which food choices are made--and its contribution to food consumption: the abundance and ubiquity of certain types of foods over others; limited food choices available in certain settings, such as schools; the market economy of the United States that exposes individuals to many marketing/advertising strategies. Advertising tailored to children plays an important role.", "title": "" }, { "docid": "95fb51b0b6d8a3a88edfc96157233b10", "text": "Various types of video can be captured with fisheye lenses; their wide field of view is particularly suited to surveillance video. However, fisheye lenses introduce distortion, and this changes as objects in the scene move, making fisheye video difficult to interpret. Current still fisheye image correction methods are either limited to small angles of view, or are strongly content dependent, and therefore unsuitable for processing video streams. We present an efficient and robust scheme for fisheye video correction, which minimizes time-varying distortion and preserves salient content in a coherent manner. Our optimization process is controlled by user annotation, and takes into account a wide set of measures addressing different aspects of natural scene appearance. Each is represented as a quadratic term in an energy minimization problem, leading to a closed-form solution via a sparse linear system. We illustrate our method with a range of examples, demonstrating coherent natural-looking video output. 
The visual quality of individual frames is comparable to those produced by state-of-the-art methods for fisheye still photograph correction.", "title": "" }, { "docid": "a7623185df940b128af6187d7d1e0b9c", "text": "Inflammasomes are high-molecular-weight protein complexes that are formed in the cytosolic compartment in response to danger- or pathogen-associated molecular patterns. These complexes enable activation of an inflammatory protease caspase-1, leading to a cell death process called pyroptosis and to proteolytic cleavage and release of pro-inflammatory cytokines interleukin (IL)-1β and IL-18. Along with caspase-1, inflammasome components include an adaptor protein, ASC, and a sensor protein, which triggers the inflammasome assembly in response to a danger signal. The inflammasome sensor proteins are pattern recognition receptors belonging either to the NOD-like receptor (NLR) or to the AIM2-like receptor family. While the molecular agonists that induce inflammasome formation by AIM2 and by several other NLRs have been identified, it is not well understood how the NLR family member NLRP3 is activated. Given that NLRP3 activation is relevant to a range of human pathological conditions, significant attempts are being made to elucidate the molecular mechanism of this process. In this review, we summarize the current knowledge on the molecular events that lead to activation of the NLRP3 inflammasome in response to a range of K (+) efflux-inducing danger signals. We also comment on the reported involvement of cytosolic Ca (2+) fluxes on NLRP3 activation. We outline the recent advances in research on the physiological and pharmacological mechanisms of regulation of NLRP3 responses, and we point to several open questions regarding the current model of NLRP3 activation.", "title": "" }, { "docid": "9e91954fbe01ef11fdefa36d357d5eaa", "text": "Despite the heterogeneity of SSc, almost all patients have skin involvement. As such, skin manifestations are critical in the initial diagnosis of SSc and in the subsequent sub-classification into the different subsets of disease. The two principal subsets are lcSSc and dcSSc. The main difference between these two subsets is the speed of disease progression and the extent and severity of skin and visceral involvement; lcSSc has an insidious onset with skin involvement confined largely to the face and extremities. Whilst vascular manifestations of SSc such as pulmonary arterial hypertension are typically more common in lcSSc, patients in both subsets can develop ischaemic digital ulcers. In dcSSc, disease progression is very rapid, with skin thickening extending beyond the extremities and earlier, more widespread internal organ involvement. DcSSc is generally considered to be the more severe subset of the disease. Skin scores in SSc correlate inversely with survival and are considered a valuable marker of disease severity. Skin involvement is easily detectable and, using the modified Rodnan skin score, the degree of skin fibrosis can be quantified. As well as general management measures, a number of targeted therapies are commonly used for treatment of cutaneous manifestations of SSc. These include the intravenous prostanoid iloprost and the dual endothelin receptor antagonist bosentan, which is approved in Europe for the prevention of new digital ulcers.", "title": "" }, { "docid": "27a0c382d827f920c25f7730ddbacdc0", "text": "Some new parameters in Vivaldi Notch antennas are debated over in this paper. 
They can be availed for the bandwidth application amelioration. The aforementioned limiting factors comprise two parameters for the radial stub dislocation, one parameter for the stub opening angle, and one parameter for the stub’s offset angle. The aforementioned parameters are rectified by means of the optimization algorithm to accomplish a better frequency application. The results obtained in this article will eventually be collated with those of the other similar antennas. The best achieved bandwidth in this article is 17.1 GHz.", "title": "" }, { "docid": "31045b2c3709102abe66906a0e8ae706", "text": "Tandem mass spectrometry fragments a large number of molecules of the same peptide sequence into charged molecules of prefix and suffix peptide subsequences and then measures mass/charge ratios of these ions. The de novo peptide sequencing problem is to reconstruct the peptide sequence from a given tandem mass spectral data of k ions. By implicitly transforming the spectral data into an NC-spectrum graph G (V, E) where /V/ = 2k + 2, we can solve this problem in O(/V//E/) time and O(/V/2) space using dynamic programming. For an ideal noise-free spectrum with only b- and y-ions, we improve the algorithm to O(/V/ + /E/) time and O(/V/) space. Our approach can be further used to discover a modified amino acid in O(/V//E/) time. The algorithms have been implemented and tested on experimental data.", "title": "" }, { "docid": "8bb5acdafefc35f6c1adf00cfa47ac2c", "text": "A general method is introduced for separating points in multidimensional spaces through the use of stochastic processes. This technique is called stochastic discrimination.", "title": "" }, { "docid": "edd39b11eaed2dc89ab74542ce9660bb", "text": "The volume of data is growing at an increasing rate. This growth is both in size and in connectivity, where connectivity refers to the increasing presence of relationships between data. Social networks such as Facebook and Twitter store and process petabytes of data each day. Graph databases have gained renewed interest in the last years, due to their applications in areas such as the Semantic Web and Social Network Analysis. Graph databases provide an effective and efficient solution to data storage and querying data in these scenarios, where data is rich in relationships. In this paper, it is analyzed the fundamental points of graph databases, showing their main characteristics and advantages. We study Neo4j, the top graph database software in the market and evaluate its performance using the Social Network Benchmark (SNB).", "title": "" }, { "docid": "d95cd76008dd65d5d7f00c82bad013d3", "text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. 
Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.", "title": "" }, { "docid": "9573bb5596dcec8668e9ba1b38d0b310", "text": "Gesture is becoming an increasingly popular means of interacting with computers. However, it is still relatively costly to deploy robust gesture recognition sensors in existing mobile platforms. We present SoundWave, a technique that leverages the speaker and microphone already embedded in most commodity devices to sense in-air gestures around the device. To do this, we generate an inaudible tone, which gets frequency-shifted when it reflects off moving objects like the hand. We measure this shift with the microphone to infer various gestures. In this note, we describe the phenomena and detection algorithm, demonstrate a variety of gestures, and present an informal evaluation on the robustness of this approach across different devices and people.", "title": "" }, { "docid": "284587aa1992afe3c90fddc2cf2a8906", "text": "Plant genomes contribute to the structure and function of the plant microbiome, a key determinant of plant health and productivity. High-throughput technologies are revealing interactions between these complex communities and their hosts in unprecedented detail.", "title": "" }, { "docid": "0ec11928cd68cf2711278a4a34bcf90b", "text": "BACKGROUND\nLow back pain (LBP) is a common cause of lost playing time and can be a challenging clinical condition in competitive athletes. LBP in athletes may be associated with joint and ligamentous hypermobility and impairments in activation and coordination of the trunk musculature, however there is limited research in this area.\n\n\nOBJECTIVES\nTo determine if there is an association between altered lumbar motor control, joint mobility and low back pain (LBP) in a sample of athletes.\n\n\nMATERIALS AND METHODS\nFifteen athletes with LBP were matched by age, gender and body mass index (BMI) with controls without LBP. Athletes completed a questionnaire with questions pertaining to demographics, activity level, medical history, need to self-manipulate their spine, pain intensity and location. Flexibility and lumbar motor control were assessed using: active and passive straight leg raise, lumbar range of motion (ROM), hip internal rotation ROM (HIR), Beighton ligamentous laxity scale, prone instability test (PIT), observation of lumbar aberrant movements, double leg lowering and Trendelenburg tests. Descriptive statistics were compiled and the chi square test was used to analyze results.\n\n\nRESULTS\nDescriptive statistics showed that 40% of athletes with LBP exhibited aberrant movements (AM), compared to 6% without LBP. 66% of athletes with LBP reported frequently self-manipulating their spine compared to 40% without LBP. No significant differences in motor control tests were found between groups. Athletes with LBP tended to have less lumbar flexion (63 ± 11°) compared to those without LBP (66 ± 13°). 
Chi-Square tests revealed that the AM were more likely to be present in athletes with LBP than those without (X2 = 4.66, P = 0.03).\n\n\nCONCLUSIONS\nThe presence of aberrant movement patterns is a significant clinical finding and associated with LBP in athletes.", "title": "" }, { "docid": "c10dd691e79d211ab02f2239198af45c", "text": "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-ofthe-art.", "title": "" }, { "docid": "c8fcb46372eb774a0d3c9242ee02fd68", "text": "We prove that all single-vertex origami shapes are reachable from the open flat state via simple, non-crossing motions. We also consider conical paper, where the total sum of the cone angles centered at the origami vertex is not 2π. For an angle sum less than 2π, the configuration space of origami shapes compatible with the given metric has two components, and within each component, a shape can always be reconfigured via simple (non-crossing) motions. Such a reconfiguration may not always be possible for an angle sum larger than 2π. The proofs rely on natural extensions to the sphere of planar Euclidean rigidity results regarding the existence and combinatorial characterization of expansive motions. In particular, we extend the concept of a pseudo-triangulation from the Euclidean to the spherical case. As a consequence, we formulate a set of necessary conditions that must be satisfied by three-dimensional generalizations of pointed pseudo-triangulations.", "title": "" }, { "docid": "02d5abb55d737fe47da98b55fccfbc8e", "text": "Existing biometric fingerprint devices show numerous reliability problems such as wet or fake fingers. In this letter, a secured method using the internal structures of the finger (papillary layer) for fingerprint identification is presented. With a frequency-domain optical coherence tomography (FD-OCT) system, a 3-D image of a finger is acquired and the information of the internal fingerprint extracted. The right index fingers of 51 individuals were recorded three times. Using a commercial fingerprint identification program, 95% of internal fingerprint images were successfully recognized. These results demonstrate that OCT imaging of internal fingerprints can be used for accurate and reliable fingerprint recognition.", "title": "" }, { "docid": "36248b57ff386a6e316b7c8273e351d0", "text": "Mental stress has become a social issue and could become a cause of functional disability during routine work. In addition, chronic stress could implicate several psychophysiological disorders. 
For example, stress increases the likelihood of depression, stroke, heart attack, and cardiac arrest. The latest neuroscience reveals that the human brain is the primary target of mental stress, because it is the brain's perception that determines whether a situation is threatening and stressful. In this context, an objective measure for identifying the levels of stress while considering the human brain could considerably reduce the associated harmful effects. Therefore, in this paper, a machine learning (ML) framework involving electroencephalogram (EEG) signal analysis of stressed participants is proposed. In the experimental setting, stress was induced by adopting a well-known experimental paradigm based on the Montreal Imaging Stress Task. The induction of stress was validated by the task performance and subjective feedback. The proposed ML framework involved EEG feature extraction, feature selection (receiver operating characteristic curve, t-test and the Bhattacharya distance), classification (logistic regression, support vector machine and naïve Bayes classifiers) and tenfold cross validation. The results showed that the proposed framework produced 94.6% accuracy for two-level identification of stress and 83.4% accuracy for multiple-level identification. In conclusion, the proposed EEG-based ML framework has the potential to quantify stress objectively into multiple levels. The proposed method could help in developing a computer-aided diagnostic tool for stress detection.", "title": "" } ]
scidocsrr
88047fc1a606bbc94659042be7644d18
Are We Having Fun Yet? Misapplying Motivation to Gamification
[ { "docid": "ddc73328c18db1e4ef585671fb3a838d", "text": "Gamification has drawn the attention of academics, practitioners and business professionals in domains as diverse as education, information studies, human–computer interaction, and health. As yet, the term remains mired in diverse meanings and contradictory uses, while the concept faces division on its academic worth, underdeveloped theoretical foundations, and a dearth of standardized guidelines for application. Despite widespread commentary on its merits and shortcomings, little empirical work has sought to validate gamification as a meaningful concept and provide evidence of its effectiveness as a tool for motivating and engaging users in non-entertainment contexts. Moreover, no work to date has surveyed gamification as a field of study from a human–computer studies perspective. In this paper, we present a systematic survey on the use of gamification in published theoretical reviews and research papers involving interactive systems and human participants. We outline current theoretical understandings of gamification and draw comparisons to related approaches, including alternate reality games (ARGs), games with a purpose (GWAPs), and gameful design. We present a multidisciplinary review of gamification in action, focusing on empirical findings related to purpose and context, design of systems, approaches and techniques, and user impact. Findings from the survey show that a standard conceptualization of gamification is emerging against a growing backdrop of empirical participantsbased research. However, definitional subjectivity, diverse or unstated theoretical foundations, incongruities among empirical findings, and inadequate experimental design remain matters of concern. We discuss how gamification may to be more usefully presented as a subset of a larger effort to improve the user experience of interactive systems through gameful design. We end by suggesting points of departure for continued empirical investigations of gamified practice and its effects. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "372ab07026a861acd50e7dd7c605881d", "text": "This paper reviews peer-reviewed empirical studies on gamification. We create a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances. The literature review covers results, independent variables (examined motivational affordances), dependent variables (examined psychological/behavioral outcomes from gamification), the contexts of gamification, and types of studies performed on the gamified systems. The paper examines the state of current research on the topic and points out gaps in existing literature. The review indicates that gamification provides positive effects, however, the effects are greatly dependent on the context in which the gamification is being implemented, as well as on the users using it. The findings of the review provide insight for further studies as well as for the design of gamified systems.", "title": "" } ]
[ { "docid": "c2de180b52c3b49bf32ed89ab55e94aa", "text": "A Wilkinson power divider with a differential output implemented in parallel-strip-line (PSL) is proposed. Taking full advantages of the PSL technology and a three-stage cascaded design, more than 170% impedance and isolation bandwidths are obtained. Inherent to the PSL structure, the 180 differential output is frequency-independent. A class-B push–pull power amplifier employing the devised concept is designed, showing a peak efficiency of 44% over a 4-GHz bandwidth. Without exploiting any extra and external low-pass filters, the proposed design can produce startling second-harmonic suppressions (more than 50 dB) over the whole working dynamics and operated bandwidth.", "title": "" }, { "docid": "38297fe227780c10979988c648dc7574", "text": "Homomorphic signal processing techniques are used to place information imperceivably into audio data streams by the introduction of synthetic resonances in the form of closely spaced echoes These echoes can be used to place digital identi cation tags directly into an audio signal with minimal objectionable degradation of the original signal", "title": "" }, { "docid": "f90efcef80233888fb8c218d1e5365a6", "text": "BACKGROUND\nMany low- and middle-income countries are undergoing a nutrition transition associated with rapid social and economic transitions. We explore the coexistence of over and under- nutrition at the neighborhood and household level, in an urban poor setting in Nairobi, Kenya.\n\n\nMETHODS\nData were collected in 2010 on a cohort of children aged under five years born between 2006 and 2010. Anthropometric measurements of the children and their mothers were taken. Additionally, dietary intake, physical activity, and anthropometric measurements were collected from a stratified random sample of adults aged 18 years and older through a separate cross-sectional study conducted between 2008 and 2009 in the same setting. Proportions of stunting, underweight, wasting and overweight/obesity were dettermined in children, while proportions of underweight and overweight/obesity were determined in adults.\n\n\nRESULTS\nOf the 3335 children included in the analyses with a total of 6750 visits, 46% (51% boys, 40% girls) were stunted, 11% (13% boys, 9% girls) were underweight, 2.5% (3% boys, 2% girls) were wasted, while 9% of boys and girls were overweight/obese respectively. Among their mothers, 7.5% were underweight while 32% were overweight/obese. A large proportion (43% and 37%%) of overweight and obese mothers respectively had stunted children. Among the 5190 adults included in the analyses, 9% (6% female, 11% male) were underweight, and 22% (35% female, 13% male) were overweight/obese.\n\n\nCONCLUSION\nThe findings confirm an existing double burden of malnutrition in this setting, characterized by a high prevalence of undernutrition particularly stunting early in life, with high levels of overweight/obesity in adulthood, particularly among women. In the context of a rapid increase in urban population, particularly in urban poor settings, this calls for urgent action. Multisectoral action may work best given the complex nature of prevailing circumstances in urban poor settings. 
Further research is needed to understand the pathways to this coexistence, and to test feasibility and effectiveness of context-specific interventions to curb associated health risks.", "title": "" }, { "docid": "ae2f92c2e3254185a0a459d485c5f266", "text": "Automatic age estimation from facial images is challenging not only for computers, but also for humans in some cases. Therefore, coarse age groups such as children, teen age, adult and senior adult are considered in age classification, instead of evaluating specific age. In this paper, we propose an approach that provides a significant improvement in performance on benchmark databases and standard protocols for age classification. Our approach is based on deep learning techniques. We optimize the network architecture using the Deep IDentification-verification features, which are proved very efficient for face representation. After reducing the redundancy among the large number of output features, we apply different classifiers to classify the facial images to different age group with the final features. The experimental analysis shows that the proposed approach outperforms the reported state-of-the-arts on both constrained and unconstrained databases.", "title": "" }, { "docid": "e460df7864f1373c87baafc19c1a10f0", "text": "Given a visual history, multiple future outcomes for a video scene are equally probable, in other words, the distribution of future outcomes has multiple modes. Multimodality is notoriously hard to handle by standard regressors or classifiers: the former regress to the mean and the latter discretize a continuous high dimensional output space. In this work, we present stochastic neural network architectures that handle such multimodality through stochasticity: future trajectories of objects, body joints or frames are represented as deep, non-linear transformations of random (as opposed to deterministic) variables. Such random variables are sampled from simple Gaussian distributions whose means and variances are parametrized by the output of convolutional encoders over the visual history. We introduce novel convolutional architectures for predicting future body joint trajectories that outperform fully connected alternatives [29]. We introduce stochastic spatial transformers through optical flow warping for predicting future frames, which outperform their deterministic equivalents [17]. Training stochastic networks involves an intractable marginalization over stochastic variables. We compare various training schemes that handle such marginalization through a) straightforward sampling from the prior, b) conditional variational autoencoders [23, 29], and, c) a proposed K-best-sample loss that penalizes the best prediction under a fixed “ prediction budget”. We show experimental results on object trajectory prediction, human body joint trajectory prediction and video prediction under varying future uncertainty, validating quantitatively and qualitatively our architectural choices and training schemes.", "title": "" }, { "docid": "df5604b3569ab8135623a5c9afea8cd3", "text": "Spin-transfer torque (STT) mechanisms in vertical and lateral spin valves and magnetization reversal/domain wall motion with spin-orbit torque (SOT) have opened up new possibilities of efficiently mimicking “neural” and “synaptic” functionalities with much lower area and energy consumption compared to CMOS implementations. 
In this paper, we review various STT/SOT devices that can provide a compact and area-efficient implementation of artificial neurons and synapses. We provide a device-circuit-system perspective and envision design of an All-Spin neuromorphic processor (with different degrees of bio-fidelity) that can be potentially appealing for ultra-low power cognitive applications.", "title": "" }, { "docid": "fa9d304e6f3ff83818f87d3e69401e5c", "text": "Neurotransmitter receptor trafficking during synaptic plasticity requires the concerted action of multiple signaling pathways and the protein transport machinery. However, little is known about the contribution of lipid metabolism during these processes. In this paper, we addressed the question of the role of cholesterol in synaptic changes during long-term potentiation (LTP). We found that N-methyl-d-aspartate-type glutamate receptor (NMDAR) activation during LTP induction leads to a rapid and sustained loss or redistribution of intracellular cholesterol in the neuron. A reduction in cholesterol, in turn, leads to the activation of Cdc42 and the mobilization of GluA1-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid-type glutamate receptors (AMPARs) from Rab11-recycling endosomes into the synaptic membrane, leading to synaptic potentiation. This process is accompanied by an increase of NMDAR function and an enhancement of LTP. These results imply that cholesterol acts as a sensor of NMDAR activation and as a trigger of downstream signaling to engage small GTPase (guanosine triphosphatase) activation and AMPAR synaptic delivery during LTP.", "title": "" }, { "docid": "12489fe406fa53c6c815ed99a4805f72", "text": "This paper presents the systems submitted by the Abu-MaTran project to the Englishto-Finnish language pair at the WMT 2016 news translation task. We applied morphological segmentation and deep learning in order to address (i) the data scarcity problem caused by the lack of in-domain parallel data in the constrained task and (ii) the complex morphology of Finnish. We submitted a neural machine translation system, a statistical machine translation system reranked with a neural language model and the combination of their outputs tuned on character sequences. The combination and the neural system were ranked first and second respectively according to automatic evaluation metrics and tied for the first place in the human evaluation.", "title": "" }, { "docid": "6fd71fe20e959bfdde866ff54b2b474b", "text": "The IETF developed the RPL routing protocol for Low power and Lossy Networks (LLNs). RPL allows for automated setup and maintenance of the routing tree for a meshed network using a common objective, such as energy preservation or most stable routes. To handle failing nodes and other communication disturbances, RPL includes a number of error correction functions for such situations. These error handling mechanisms, while maintaining a functioning routing tree, introduce an additional complexity to the routing process. Being a relatively new protocol, the effect of the error handling mechanisms within RPL needs to be analyzed. This paper presents an experimental analysis of RPL’s error correction mechanisms by using the Contiki RPL implementation along with an SNMP agent to monitor the performance of RPL.", "title": "" }, { "docid": "fe3afe69ec27189400e65e8bdfc5bf0b", "text": "speech learning changes over the life span and to explain why \"earlier is better\" as far as learning to pronounce a second language (L2) is concerned. 
An assumption we make is that the phonetic systems used in the production and perception of vowels and consonants remain adaptive over the life span, and that phonetic systems reorganize in response to sounds encountered in an L2 through the addition of new phonetic categories, or through the modification of old ones. The chapter is organized in the following way. Several general hypotheses concerning the cause of foreign accent in L2 speech production are summarized in the introductory section. In the next section, a model of L2 speech learning that aims to account for age-related changes in L2 pronunciation is presented. The next three sections present summaries of empirical research dealing with the production and perception of L2 vowels, word-initial consonants, and word-final consonants. The final section discusses questions of general theoretical interest, with special attention to a featural (as opposed to a segmental) level of analysis. Although nonsegmental (i.e., prosodic) dimensions are an important source of foreign accent, the present chapter focuses on phoneme-sized units of speech. Although many different languages are learned as an L2, the focus is on the acquisition of English.", "title": "" }, { "docid": "79a20b9a059a2b4cc73120812c010495", "text": "The present article summarizes the state of the art algorithms to compute the discrete Moreau envelope, and presents a new linear-time algorithm, named NEP for NonExpansive Proximal mapping. Numerical comparisons between the NEP and two existing algorithms: The Linear-time Legendre Transform (LLT) and the Parabolic Envelope (PE) algorithms are performed. Worst-case time complexity, convergence results, and examples are included. The fast Moreau envelope algorithms first factor the Moreau envelope as several one-dimensional transforms and then reduce the brute force quadratic worst-case time complexity to linear time by using either the equivalence with Fast Legendre Transform algorithms, the computation of a lower envelope of parabolas, or, in the convex case, the non expansiveness of the proximal mapping.", "title": "" }, { "docid": "226d8e68f0519ddfc9e288c9151b65f0", "text": "Vector space embeddings can be used as a tool for learning semantic relationships from unstructured text documents. Among others, earlier work has shown how in a vector space of entities (e.g. different movies) fine-grained semantic relationships can be identified with directions (e.g. more violent than). In this paper, we use stacked denoising auto-encoders to obtain a sequence of entity embeddings that model increasingly abstract relationships. After identifying directions that model salient properties of entities in each of these vector spaces, we induce symbolic rules that relate specific properties to more general ones. We provide illustrative examples to demonstrate the potential of this ap-", "title": "" }, { "docid": "74ccb28a31d5a861bea1adfaab2e9bf1", "text": "For many decades CMOS devices have been successfully scaled down to achieve higher speed and increased performance of integrated circuits at lower cost. Today’s charge-based CMOS electronics encounters two major challenges: power dissipation and variability. Spintronics is a rapidly evolving research and development field, which offers a potential solution to these issues by introducing novel ‘more than Moore’ devices. Spin-based magnetoresistive random-access memory (MRAM) is already recognized as one of the most promising candidates for future universal memory.
Magnetic tunnel junctions, the main elements of MRAM cells, can also be used to build logic-in-memory circuits with non-volatile storage elements on top of CMOS logic circuits, as well as versatile compact on-chip oscillators with low power consumption. We give an overview of CMOS-compatible spintronics applications. First, we present a brief introduction to the physical background considering such effects as magnetoresistance, spin-transfer torque (STT), spin Hall effect, and magnetoelectric effects. We continue with a comprehensive review of the state-of-the-art spintronic devices for memory applications (STT-MRAM, domain wall-motion MRAM, and spin–orbit torque MRAM), oscillators (spin torque oscillators and spin Hall nano-oscillators), logic (logic-in-memory, all-spin logic, and buffered magnetic logic gate grid), sensors, and random number generators. Devices with different types of resistivity switching are analyzed and compared, with their advantages highlighted and challenges revealed. CMOS-compatible spintronic devices are demonstrated beginning with predictive simulations, proceeding to their experimental confirmation and realization, and finalized by the current status of application in modern integrated systems and circuits. We conclude the review with an outlook, where we share our vision on the future applications of the prospective devices in the area.", "title": "" }, { "docid": "f3b1e1c9effb7828a62187e9eec5fba7", "text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.", "title": "" }, { "docid": "40db41aa0289dbf45bef067f7d3e3748", "text": "Maximum reach envelopes for the 5th, 50th and 95th percentile reach lengths of males and females in seated and standing work positions were determined. The use of a computerized potentiometric measurement system permitted functional reach measurement in 15 min for each subject. The measurement system captured reach endpoints in a dynamic mode while the subjects were describing their maximum reach envelopes. An unbiased estimate of the true reach distances was made through a systematic computerized data averaging process. The maximum reach envelope for the standing position was significantly (p<0.05) larger than the corresponding measure in the seated position for both the males and females. The average reach length of the female was 13.5% smaller than that for the corresponding male.
Potential applications of this research include designs of industrial workstations, equipment, tools and products.", "title": "" }, { "docid": "d96eecc4b27d8717c07562686f702066", "text": "The paper’s research purpose is to discuss the key firm-specific IT capability and its impact on the business value of IT. In the context of IT application in China, the paper builds research model based on Resource-Based View, this model describes how the partnership between business and IT management partially mediates the effects of IT infrastructure capability and managerial IT skills on the organization-level of IT assimilation(as proxy for business value of IT ). This research releases 105 questionnaires to part-time MBA in the Renmin University of China and gets 70 valid questionnaires, then analyzed the measurement and structural research model by PLS method. The result of the structural model shows the investment in infrastructure capability and managerial IT skills should be transformed into the partnership between IT and business, and then influence the IT assimilation. The paper can give suggestions to the firms about how to identify and improve IT capability, which will help organization to get superior business value from IT investment.", "title": "" }, { "docid": "84a22b5539293887781db072a10d4a64", "text": "Multimodal sentiment analysis is the analysis of emotions, attitude, and opinion from audiovisual format. A company can improve the quality of its product and services by analyzing the reviews about the product [5]. Sentiment analysis is widely used in managing customer relations. There are many textual reviews from which we cannot extract emotions by traditional sentiment analysis techniques. Some sentences in the textual reviews may derive deep emotions but do not contain any keyword to detect those emotions, so we used audiovisual reviews in order to detect emotions from the facial expressions of the customer. In this paper we take audiovisual input and extract emotions from video and audio in parallel from audiovisual input, finally classify the overall review as positive, negative or neutral based on the emotions detected.", "title": "" }, { "docid": "525cd643153305af852f2df7b3f48ffb", "text": "3D modeling of building architecture from mobile scanning is a rapidly advancing field. These models are used in virtual reality, gaming, navigation, and simulation applications. State-of-the-art scanning produces accurate point-clouds of building interiors containing hundreds of millions of points. This paper presents several scalable surface reconstruction techniques to generate watertight meshes that preserve sharp features in the geometry common to buildings. Our techniques can automatically produce high-resolution meshes that preserve the fine detail of the environment by performing a ray-carving volumetric approach to surface reconstruction. We present methods to automatically generate 2D floor plans of scanned building environments by detecting walls and room separations. These floor plans can be used to generate simplified 3D meshes that remove furniture and other temporary objects. We propose a method to texture-map these models from captured camera imagery to produce photo-realistic models. 
We apply these techniques to several data sets of building interiors, including multi-story datasets.", "title": "" }, { "docid": "670d389bf2250bc8d0f10235495e755e", "text": "This study suggests narcissism as an important psychological factor that predicts one’s behavioral intention to control information privacy on SNS. Particularly, we approach narcissism as a two-dimensional construct—vulnerable and grandiose narcissism—to provide a better understanding of the role of narcissism in SNS usage. As one of the first studies to apply a two-dimensional approach to narcissism in computer-mediated communication, our results show that vulnerable narcissism has a significant positive effect on behavioral intention to control privacy on SNS, while grandiose narcissism has no effect. This effect was found when considering other personality traits, including self-esteem, computer anxiety, and concern for information privacy. The results indicate that unidimensional approaches to narcissism cannot solely predict SNS behaviors, and the construct of narcissism should be broken down into two orthogonal constructs. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9b575699e010919b334ac3c6bc429264", "text": "Over the last decade, keyword search over relational data has attracted considerable attention. A possible approach to face this issue is to transform keyword queries into one or more SQL queries to be executed by the relational DBMS. Finding these queries is a challenging task since the information they represent may be modeled across different elements where the data of interest is stored, but also to find out how these elements are interconnected. All the approaches that have been proposed so far provide a monolithic solution. In this work, we, instead, divide the problem into three steps: the first one, driven by the user's point of view, takes into account what the user has in mind when formulating keyword queries, the second one, driven by the database perspective, considers how the data is represented in the database schema. Finally, the third step combines these two processes. We present the theory behind our approach, and its implementation into a system called QUEST (QUEry generator for STructured sources), which has been deeply tested to show the efficiency and effectiveness of our approach. Furthermore, we report on the outcomes of a number of experimental results that we", "title": "" } ]
scidocsrr
231cc4a4f203927e787345717c06edbf
Quasi-Newton methods for real-time simulation of hyperelastic materials
[ { "docid": "8cfa2086e1c73bae6945d1a19d52be26", "text": "We present a unified dynamics framework for real-time visual effects. Using particles connected by constraints as our fundamental building block allows us to treat contact and collisions in a unified manner, and we show how this representation is flexible enough to model gases, liquids, deformable solids, rigid bodies and cloth with two-way interactions. We address some common problems with traditional particle-based methods and describe a parallel constraint solver based on position-based dynamics that is efficient enough for real-time applications.", "title": "" } ]
[ { "docid": "cff3b4f6db26e66893a9db95fb068ef1", "text": "In this paper, we consider the task of text categorization as a graph classification problem. By representing textual documents as graph-of-words instead of historical n-gram bag-of-words, we extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Moreover, by capitalizing on the concept of k-core, we reduce the graph representation to its densest part – its main core – speeding up the feature extraction step for little to no cost in prediction performances. Experiments on four standard text classification datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baseline approaches.", "title": "" }, { "docid": "befc5dbf4da526963f8aa180e1fda522", "text": "Charities publicize the donations they receive, generally according to dollar categories rather than the exact amount. Donors in turn tend to give the minimum amount necessary to get into a category. These facts suggest that donors have a taste for having their donations made public. This paper models the effects of such a taste for ‘‘prestige’’ on the behavior of donors and charities. I show how a taste for prestige means that charities can increase donations by using categories. The paper also discusses the effect of a taste for prestige on competition between charities.  1998 Elsevier Science S.A.", "title": "" }, { "docid": "4313c87376e6ea9fac7dc32f359c2ae9", "text": "Game engines are specialized middleware which facilitate rapid game development. Until now they have been highly optimized to extract maximum performance from single processor hardware. In the last couple of years improvements in single processor hardware have approached physical limits and performance gains have slowed to become incremental. As a consequence, improvements in game engine performance have also become incremental. Currently, hardware manufacturers are shifting to dual and multi-core processor architectures, and the latest game consoles also feature multiple processors. This presents a challenge to game engine developers because of the unfamiliarity and complexity of concurrent programming. The next generation of game engines must address the issues of concurrency if they are to take advantage of the new hardware. This paper discusses the issues, approaches, and tradeoffs that need to be considered in the design of a multi-threaded game engine.", "title": "" }, { "docid": "ee9d84f08326cf48116337595dbe07f7", "text": "Facial fractures were described as early as the seventeenth century BC in the Edwin Smith surgical papyrus. In the eighteenth century, the French surgeon Desault described the unique propensity of the mandible to fracture in the narrow subcondylar region, which is commonly observed to this day. In a recent 5-year review of the National Trauma Data Base with more than 13,000 mandible fractures, condylar and subcondylar fractures made up 14.8% and 12.6% of all fractures respectively; taken together, more than any other site alone. This study, along with others, have confirmed that most modern-age condylar fractures occur in men, and are most often caused by motor vehicle accidents, and assaults. Historically, condylar fractures were managed in a closed fashion with various forms of immobilization or maxillomandibular fixation, with largely favorable results. 
Although the goals of treatment are the restoration of form and function, closed treatment relies on patient adaptation to an altered anatomy, because anatomic repositioning of the proximal segment is not achieved. However, the human body has a remarkable ability to adapt, and it remains an appropriate treatment of a large number of condylar fractures, including intracapsular fractures, fractures with minimal or no displacement, almost all pediatric condylar fractures, and fractures in patients whose medical or social situations preclude other forms of treatment. With advances in the understanding of osteosynthesis and an appreciation of surgical anatomy, open", "title": "" }, { "docid": "49c87552a43f75200fb869aa13de0cf5", "text": "We combine data from a field survey with transaction log data from a mobile phone operator to provide new insight into daily patterns of mobile phone use in Rwanda. The analysis is divided into three parts. First, we present a statistical comparison of the general Rwandan population to the population of mobile phone owners in Rwanda. We find that phone owners are considerably wealthier, better educated, and more predominantly male than the general population. Second, we analyze patterns of phone use and access, based on self-reported survey data. We note statistically significant differences by gender; for instance, women are more likely to use shared phones than men. Third, we perform a quantitative analysis of calling patterns and social network structure using mobile operator billing logs. By these measures, the differences between men and women are more modest, but we observe vast differences in utilization between the relatively rich and the relatively poor. Taken together, the evidence in this paper suggests that phones are disproportionately owned and used by the privileged strata of Rwandan society.", "title": "" }, { "docid": "85aa1fb0b2e902ca2f52e597590c5736", "text": "Identities are known as the most sensitive information. With the increasing number of connected objects and identities (a connected object may have one or many identities), the computing and communication capabilities improved to manage these connected devices and meet the needs of this progress. Therefore, new IoT Identity Management System (IDMS) requirements have been introduced. In this work, we suggest an IDMS approach to protect private information and ensures domain change in IoT for mobile clients using a personal authentication device. Firstly, we present basic concepts, existing requirements and limits of related works. We also propose new requirements and show our motivations. Next, we describe our proposal. Finally, we give our security approach validation, perspectives, and some concluding remarks.", "title": "" }, { "docid": "88eaf07c8ef59bad1ea9f29f83050149", "text": "A monocular 3D object tracking system generally has only up-to-scale pose estimation results without any prior knowledge of the tracked object. In this paper, we propose a novel idea to recover the metric scale of an arbitrary dynamic object by optimizing the trajectory of the objects in the world frame, without motion assumptions. By introducing an additional constraint in the time domain, our monocular visual-inertial tracking system can obtain continuous six degree of freedom (6-DoF) pose estimation without scale ambiguity. 
Our method requires neither fixed multi-camera nor depth sensor settings for scale observability, instead, the IMU inside the monocular sensing suite provides scale information for both the camera itself and the tracked object. We build the proposed system on top of our monocular visual-inertial system (VINS) to obtain accurate state estimation of the monocular camera in the world frame. The whole system consists of a 2D object tracker, an object region-based visual bundle adjustment (BA), VINS and a correlation analysis-based metric scale estimator. Experimental comparisons with ground truth demonstrate the tracking accuracy of our 3D tracking performance while a mobile augmented reality (AR) demo shows the feasibility of potential applications.", "title": "" }, { "docid": "3d72ed32a523f4c51b9c57b0d7d0f9ab", "text": "A theoretical study on the design of broadbeam leaky-wave antennas (LWAs) of uniform type and rectilinear geometry is presented. A new broadbeam LWA structure based on the hybrid printed-circuit waveguide is proposed, which allows for the necessary flexible and independent control of the leaky-wave phase and leakage constants. The study shows that both the real and virtual focus LWAs can be synthesized in a simple manner by tapering the printed-slot along the LWA properly, but the real focus LWA is preferred in practice. Practical issues concerning the tapering of these LWA are investigated, including the tuning of the radiation pattern asymmetry level and beamwidth, the control of the ripple level inside the broad radiated main beam, and the frequency response of the broadbeam LWA. The paper provides new insight and guidance for the design of this type of LWAs.", "title": "" }, { "docid": "3a7a97154f2754acb8e4d4362fe490ed", "text": "Data replication is an increasingly important topic as databases are more and more deployed over clusters of workstations. One of the challenges in database replication is to introduce replication without severely affecting performance. Because of this difficulty, current database products use lazy replication, which is very efficient but can compromise consistency. As an alternative, eager replication guarantees consistency but most existing protocols have a prohibitive cost. In order to clarify the current state of the art and open up new avenues for research, this paper analyses existing eager techniques using three key parameters. In our analysis, we distinguish eight classes of eager replication protocols and, for each category, discuss its requirements, capabilities, and cost. The contribution lies in showing when eager replication is feasible and in spelling out the different aspects a database replication protocol must account for.", "title": "" }, { "docid": "4254ad134a2359d42dea2bcf64d6bdce", "text": "Radio Frequency Identification (RFID) systems aim to identify objects in open environments with neither physical nor visual contact. They consist of transponders inserted into objects, of readers, and usually of a database which contains information about the objects. The key point is that authorised readers must be able to identify tags without an adversary being able to trace them. Traceability is often underestimated by advocates of the technology and sometimes exaggerated by its detractors. Whatever the true picture, this problem is a reality when it blocks the deployment of this technology and some companies, faced with being boycotted, have already abandoned its use.
Using cryptographic primitives to thwart the traceability issues is an approach which has been explored for several years. However, the research carried out up to now has not provided satisfactory results as no universal formalism has been defined. In this paper, we propose an adversarial model suitable for RFID environments. We define the notions of existential and universal untraceability and we model the access to the communication channels from a set of oracles. We show that our formalisation fits the problem being considered and allows a formal analysis of the protocols in terms of traceability. We use our model on several well-known RFID protocols and we show that most of them have weaknesses and are vulnerable to traceability.", "title": "" }, { "docid": "11c6c2a539b08fb13f1e7ffad7726e50", "text": "Virtual and augmented reality are becoming the new medium that transcend the way we interact with virtual content, paving the way for many immersive and interactive forms of applications. The main purpose of my research is to create a seamless combination of physiological sensing with virtual reality to provide users with a new layer of input modality or as a form of implicit feedback. To achieve this, my research focuses in novel augmented reality (AR) and virtual reality (VR) based application for a multi-user, multi-view, multi-modal system augmented by physiological sensing methods towards an increased public and social acceptance.", "title": "" }, { "docid": "da4b86329c12b0747c2df55f5a6f6cdb", "text": "As modern societies become more dependent on IT services, the potential impact both of adversarial cyberattacks and non-adversarial service management mistakes grows. This calls for better cyber situational awareness-decision-makers need to know what is going on. The main focus of this paper is to examine the information elements that need to be collected and included in a common operational picture in order for stakeholders to acquire cyber situational awareness. This problem is addressed through a survey conducted among the participants of a national information assurance exercise conducted in Sweden. Most participants were government officials and employees of commercial companies that operate critical infrastructure. The results give insight into information elements that are perceived as useful, that can be contributed to and required from other organizations, which roles and stakeholders would benefit from certain information, and how the organizations work with creating cyber common operational pictures today. Among findings, it is noteworthy that adversarial behavior is not perceived as interesting, and that the respondents in general focus solely on their own organization.", "title": "" }, { "docid": "e5a1f6546de9683e7dc90af147d73d40", "text": "Progress in both speech and language processing has spurred efforts to support applications that rely on spoken rather than written language input. A key challenge in moving from text-based documents to such spoken documents is that spoken language lacks explicit punctuation and formatting, which can be crucial for good performance. This article describes different levels of speech segmentation, approaches to automatically recovering segment boundary locations, and experimental results demonstrating impact on several language processing tasks. 
The results also show a need for optimizing segmentation for the end task rather than independently.", "title": "" }, { "docid": "fa3c52e9b3c4a361fd869977ba61c7bf", "text": "The combination of the Internet and emerging technologies such as nearfield communications, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex application.", "title": "" }, { "docid": "b42f4d645e2a7e24df676a933f414a6c", "text": "Epilepsy is a common neurological condition which affects the central nervous system that causes people to have a seizure and can be assessed by electroencephalogram (EEG). Electroencephalography (EEG) signals reflect two types of paroxysmal activity: ictal activity and interictal paroxystic events (IPE). The relationship between IPE and ictal activity is an essential and recurrent question in epileptology. The spike detection in EEG is a difficult problem. Many methods have been developed to detect the IPE in the literature. In this paper we propose three methods to detect the spike in real EEG signal: Page Hinkley test, smoothed nonlinear energy operator (SNEO) and fractal dimension. Before using these methods, we filter the signal. The Singular Spectrum Analysis (SSA) filter is used to remove the noise in an EEG signal.", "title": "" }, { "docid": "0fb9b4577da65280e664eee48a76fd3a", "text": "We describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display. The display consists of a high-speed video projector, a spinning mirror covered by a holographic diffuser, and FPGA circuitry to decode specially rendered DVI video signals. The display uses a standard programmable graphics card to render over 5,000 images per second of interactive 3D graphics, projecting 360-degree views with 1.25 degree separation up to 20 updates per second. We describe the system's projection geometry and its calibration process, and we present a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display. Our projection technique allows correct vertical perspective and parallax to be rendered for any height and distance when these parameters are known, and we demonstrate this effect with interactive raster graphics using a tracking system to measure the viewer's height and distance. We further apply our projection technique to the display of photographed light fields with accurate horizontal and vertical parallax. We conclude with a discussion of the display's visual accommodation performance and discuss techniques for displaying color imagery.", "title": "" }, { "docid": "548499e5588f95e45993049dfa03723b", "text": "We present the architecture of a deep learning pipeline for natural language processing. Based on this architecture we built a set of tools both for creating distributional vector representations and for performing specific NLP tasks. 
Three methods are available for creating embeddings: feedforward neural network, sentiment specific embeddings and embeddings based on counts and Hellinger PCA. Two methods are provided for training a network to perform sequence tagging, a window approach and a convolutional approach. The window approach is used for implementing a POS tagger and a NER tagger, the convolutional network is used for Semantic Role Labeling. The library is implemented in Python with core numerical processing written in C++ using parallel linear algebra library for efficiency and scalability.", "title": "" }, { "docid": "c2571afd6f2b9e9856c8f8c4eeb60b81", "text": "In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages – not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths.", "title": "" }, { "docid": "576e590fe50a5c7be9acc4d413187b79", "text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading data matching, you can take more advantages with limited budget.", "title": "" }, { "docid": "6bc31257bfbcc9531a3acf1ec738c790", "text": "BACKGROUND\nThe interaction of depression and anesthesia and surgery may result in significant increases in morbidity and mortality of patients. Major depressive disorder is a frequent complication of surgery, which may lead to further morbidity and mortality.\n\n\nLITERATURE SEARCH\nSeveral electronic data bases, including PubMed, were searched pairing \"depression\" with surgery, postoperative complications, postoperative cognitive impairment, cognition disorder, intensive care unit, mild cognitive impairment and Alzheimer's disease.\n\n\nREVIEW OF THE LITERATURE\nThe suppression of the immune system in depressive disorders may expose the patients to increased rates of postoperative infections and increased mortality from cancer. Depression is commonly associated with cognitive impairment, which may be exacerbated postoperatively. There is evidence that acute postoperative pain causes depression and depression lowers the threshold for pain. Depression is also a strong predictor and correlate of chronic post-surgical pain. Many studies have identified depression as an independent risk factor for development of postoperative delirium, which may be a cause for a long and incomplete recovery after surgery. Depression is also frequent in intensive care unit patients and is associated with a lower health-related quality of life and increased mortality. Depression and anxiety have been widely reported soon after coronary artery bypass surgery and remain evident one year after surgery. 
They may increase the likelihood for new coronary artery events, further hospitalizations and increased mortality. Morbidly obese patients who undergo bariatric surgery have an increased risk of depression. Postoperative depression may also be associated with less weight loss at one year and longer. The extent of preoperative depression in patients scheduled for lumbar discectomy is a predictor of functional outcome and patient's dissatisfaction, especially after revision surgery. General postoperative mortality is increased.\n\n\nCONCLUSIONS\nDepression is a frequent cause of morbidity in surgery patients suffering from a wide range of conditions. Depression may be identified through the use of Patient Health Questionnaire-9 or similar instruments. Counseling interventions may be useful in ameliorating depression, but should be subject to clinical trials.", "title": "" } ]
scidocsrr
5a992b99709ed3247ffcfab7aae6fe1f
LogView: Visualizing Event Log Clusters
[ { "docid": "4dc9360837b5793a7c322f5b549fdeb1", "text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering", "title": "" } ]
[ { "docid": "114d6c97f19bc29152ecda8fa2447f63", "text": "The game of Bridge provides a number of research areas to AI researchers due to the many components that constitute the game. Bidding provides the subtle challenge of potential outcome maximization while learning through information gathering, but constrained to a limited rule set. Declarer play can be accomplished through planning and inference. Both the bidding and the play can also be accomplished through Monte Carlo analysis using a perfect information solver. Double-dummy play is a perfect information search, but over an enormous state-space, and thus requires α-β pruning, transposition tables and other tree-minimization techniques. As such, researchers have made much progress in each of these sub-fields over the years, particularly double-dummy play, but are yet to produce a consistent expert level player.", "title": "" }, { "docid": "bc74c28794d9d6ae36ee6cfdc5fd04ac", "text": "This paper describes development of joint materials using only base metals (Cu and Sn) for power semiconductor assembly. The optimum composition at this moment is Cu8wt%Sn92wt% (8Cu92Sn hereafter) particles: pure Cu (100Cu hereafter) particles = 20:80 (wt% ratio), which indicates good stability under Thermal Cycling Test (TCT, −55°C∼+200°C, 20cycles). The composition indicated to be effective to eliminate voids and chip cracks. As an initial choice of joint material using TLPS (Transient Liquid Phase Sintering), we considered SAC305 might have good role as TLPS trigger. But, actual TCT results indicated that existence of Ag must have negative effect to eliminate voids from the joint region. Tentative behavior model using 8Cu92Sn and 100Cu joint material is proposed. Optimized composition indicated shear force 40MPa at 300°C. Re-melting point of the composition is 409°C after TLPS when there is additional Cu supply from substrate and terminal of mounted die.", "title": "" }, { "docid": "de99a984795645bc2e9fb4b3e3173807", "text": "Neural networks are a family of powerful machine learning models. is book focuses on the application of neural network models to natural language data. e first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which allows to easily define and train arbitrary neural networks, and is the basis behind the design of contemporary neural network software libraries. e second part of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. ese architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.", "title": "" }, { "docid": "0206cbec556e66fd19aa42c610cdccfa", "text": "The adoption of the General Data Protection Regulation (GDPR) is a major concern for data controllers of the public and private sector, as they are obliged to conform to the new principles and requirements managing personal data. In this paper, we propose that the data controllers adopt the concept of the Privacy Level Agreement. 
We present a metamodel for PLAs to support privacy management, based on analysis of privacy threats, vulnerabilities and trust relationships in their Information Systems, whilst complying with laws and regulations, and we illustrate the relevance of the metamodel with the GDPR.", "title": "" }, { "docid": "bebd034597144d4656f6383d9bd22038", "text": "The Turing test aimed to recognize the behavior of a human from that of a computer algorithm. Such challenge is more relevant than ever in today’s social media context, where limited attention and technology constrain the expressive power of humans, while incentives abound to develop software agents mimicking humans. These social bots interact, often unnoticed, with real people in social media ecosystems, but their abundance is uncertain. While many bots are benign, one can design harmful bots with the goals of persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then review current efforts to detect social bots on Twitter. Features related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.", "title": "" }, { "docid": "923745305f28130dc1e709360de4b97c", "text": "Segmenting brain MR scans could be highly beneficial for diagnosing, treating and evaluating the progress of specific diseases. Up to this point, manual segmentation, performed by experts, is the conventional method in hospitals and clinical environments. Although manual segmentation is accurate, it is time consuming, expensive and might not be reliable. Many non-automatic and semi automatic methods have been proposed in the literature in order to segment MR brain images, but the level of accuracy is not sufficiently comparable with the one of manual. The aim of this project is to implement and make a preliminary evaluation of a method based on machine learning technique for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) of brain MR scans using images available within the open MICCAI grand challenge (MRBrainS13). The proposed method employs supervised artificial neural network based auto-context algorithm, exploiting intensity-based, spatial-based and shape model-based level set segmentation results as features of the network. The obtained average results based on Dice similarity index were 96.98%, 95.35%, 80.95%, 88.36% and 84.71% for intracranial volume, brain (WM + GM), CSF, WM and GM respectively. This method achieved competitive results with considerably shorter required training time on MRBrainsS13 challenge.", "title": "" }, { "docid": "865cfae2da5ad3d1d10d21b1defdc448", "text": "During the last decade, novel immunotherapeutic strategies, in particular antibodies directed against immune checkpoint inhibitors, have revolutionized the treatment of different malignancies leading to an improved survival of patients. Identification of immune-related biomarkers for diagnosis, prognosis, monitoring of immune responses and selection of patients for specific cancer immunotherapies is urgently required and therefore areas of intensive research. 
Easily accessible samples, in particular liquid biopsies (body fluids), such as blood, saliva or urine, are preferred for serial tumor biopsies. Although monitoring of immune and tumor responses prior, during and post immunotherapy has led to significant advances of patients' outcome, valid and stable prognostic biomarkers are still missing. This might be due to the limited capacity of the technologies employed, reproducibility of results as well as assay stability and validation of results. Therefore solid approaches to assess immune regulation and modulation as well as to follow up the nature of the tumor in liquid biopsies are urgently required to discover valuable and relevant biomarkers including sample preparation, timing of the collection and the type of liquid samples. This article summarizes our knowledge of the well-known liquid material in a new context as liquid biopsy and focuses on collection and assay requirements for the analysis and the technical developments that allow the implementation of different high-throughput assays to detect alterations at the genetic and immunologic level, which could be used for monitoring treatment efficiency, acquired therapy resistance mechanisms and the prognostic value of the liquid biopsies.", "title": "" }, { "docid": "41a3a4174a0fade6fb96ade0294c3eda", "text": "Recent development in fully convolutional neural network enables efficient end-to-end learning of semantic segmentation. Traditionally, the convolutional classifiers are taught to learn the representative semantic features of labeled semantic objects. In this work, we propose a reverse attention network (RAN) architecture that trains the network to capture the opposite concept (i.e., what are not associated with a target class) as well. The RAN is a three-branch network that performs the direct, reverse and reverse-attention learning processes simultaneously. Extensive experiments are conducted to show the effectiveness of the RAN in semantic segmentation. Being built upon the DeepLabv2-LargeFOV, the RAN achieves the state-of-the-art mean IoU score (48.1%) for the challenging PASCAL-Context dataset. Significant performance improvements are also observed for the PASCAL-VOC, Person-Part, NYUDv2 and ADE20K datasets.", "title": "" }, { "docid": "d5d621b131fa1f09e161a0f59c0e1313", "text": "This paper describes the modeling of distance relay using Matlab/Simulink package. SimPowerSystem toolbox was used for detailed modeling of distance relay, transmission line and fault simulation. Inside the modeling, single line to ground (SLG) fault was chosen to be the fault type and Mho type distance characteristic was chosen to be the protection scheme. A graphical user interface (GUI) was created using GUI package inside Matlab for the developed model. With the interactive environment of graphical user interface, the difficulties in teaching of distance relay for undergraduate students can be eliminated.", "title": "" }, { "docid": "15906c9bd84e55aec215843ef9e542a0", "text": "Recent growing interest in predicting and influencing consumer behavior has generated a parallel increase in research efforts on Recommender Systems. Many of the state-of-the-art Recommender Systems algorithms rely on obtaining user ratings in order to later predict unknown ratings.
An underlying assumption in this approach is that the user ratings can be treated as ground truth of the user's taste. However, users are inconsistent in giving their feedback, thus introducing an unknown amount of noise that challenges the validity of this assumption. In this paper, we tackle the problem of analyzing and characterizing the noise in user feedback through ratings of movies. We present a user study aimed at quantifying the noise in user ratings that is due to inconsistencies. We measure RMSE values that range from 0.557 to 0.8156. We also analyze how factors such as item sorting and time of rating affect this noise.", "title": "" }, { "docid": "1dc41e5c43fc048bc1f1451eaa1ff764", "text": "ABSTRACT Individual railroad track maintenance standards and the Federal Railroad Administration (FRA) Track Safety Standards require periodic inspection of railway infrastructure to ensure safe and efficient operation. This inspection is a critical, but labor-intensive task that results in large annual operating expenditures and has limitations in speed, quality, objectivity, and scope. To improve the cost-effectiveness of the current inspection process, machine vision technology can be developed and used as a robust supplement to manual inspections. This paper focuses on the development and performance of machine vision algorithms designed to recognize turnout components, as well as the performance of algorithms designed to recognize and detect defects in other track components. In order to prioritize which components are the most critical for the safe operation of trains, a risk-based analysis of the FRA Accident Database was performed. Additionally, an overview of current technologies for track and turnout component condition assessment is presented. The machine vision system consists of a video acquisition system for recording digital images of track and customized algorithms to identify defects and symptomatic conditions within the images. A prototype machine vision system has been developed for automated inspection of rail anchors and cut spikes, as well as tie recognition. Experimental test results from the system have shown good reliability for recognizing ties, anchors, and cut spikes. This machine vision system, in conjunction with defect analysis and trending of historical data, will enhance the ability for longer-term predictive assessment of the health of the track system and its components. INTRODUCTION Railroads conduct regular inspections of their track in order to maintain safe and efficient operation. In addition to internal railroad inspection procedures, periodic track inspections are required under the Federal Railroad Administration (FRA) Track Safety Standards. The objective of this research is to investigate the feasibility of developing a machine vision system to make track inspection more efficient, effective, and objective.
In addition, interim approaches to automated track inspection are possible, which will potentially lead to greater inspection effectiveness and efficiency prior to full machine vision system development and implementation. Interim solutions include video capture using vehicle-mounted cameras, image enhancement using image-processing software, and assisted automation using machine vision algorithms (1). The primary focus of this research is inspection of North American Class I railroad mainline and siding tracks, as these generally experience the highest traffic densities. High traffic densities necessitate frequent inspection and more stringent maintenance requirements, and leave railroads less time to accomplish it. This makes them the most likely locations for cost-effective investment in new, more efficient, but potentially more capital-intensive inspection technology. The algorithms currently under development will also be adaptable to many types of infrastructure and usage, including transit and some components of high-speed rail (HSR) infrastructure. The machine vision system described in this paper was developed through an interdisciplinary research collaboration at the University of Illinois at Urbana-Champaign (UIUC) between the Computer Vision and Robotics Laboratory (CVRL) at the Beckman Institute for Advanced Science and Technology and the Railroad Engineering Program in the Department of Civil and Environmental Engineering. CURRENT TRACK INSPECTION TECHNOLOGIES USING MACHINE VISION The international railroad community has undertaken significant research to develop innovative applications for advanced technologies with the objective of improving the process of visual track inspection. The development of machine vision, one such inspection technology which uses video cameras, optical sensors, and custom designed algorithms, began in the early 1990’s with work analyzing rail surface defects (2). Machine vision systems are currently in use or under development for a variety of railroad inspection tasks, both wayside and mobile, including inspection of joint bars, surface defects in the rail, rail profile, ballast profile, track gauge, intermodal loading efficiency, railcar structural components, and railcar safety appliances (1, 3-21, 23). The University of Illinois at Urbana-Champaign (UIUC) has been involved in multiple railroad machine-vision research projects sponsored by the Association of American Railroads (AAR), BNSF Railway, NEXTRANS Region V Transportation Center, and the Transportation Research Board (TRB) High-Speed Rail IDEA Program (6-11). In this section, we provide a brief overview of machine vision condition monitoring applications currently in use or under development for inspection of railway infrastructure. Railway applications of machine vision technology have three main elements: the image acquisition system, the image analysis system, and the data analysis system (1). The attributes and performance of each of these individual components determines the overall performance of a machine vision system. Therefore, the following review includes a discussion of the overall machine vision system, as well as approaches to image acquisition, algorithm development techniques, lighting methodologies, and experimental results. Rail Surface Defects The Institute of Digital Image Processing (IDIP) in Austria has developed a machine vision system for rail surface inspection during the rail manufacturing process (12).
Currently, rail inspection is carried out by humans and complemented with eddy current systems. The objective of this machine vision system is to replace visual inspections on rail production lines. The machine vision system uses spectral image differencing procedure (SIDP) to generate three-dimensional (3D) images and detect surface defects in the rails. Additionally, the cameras can capture images at speeds up to 37 miles per hour (mph) (60 kilometers per hour (kph)). Although the system is currently being used only in rail production lines, it can also be attached to an inspection vehicle for field inspection of rail. Additionally, the Institute of Intelligent Systems for Automation (ISSIA) in Italy has been researching and developing a system for detecting rail corrugation (13). The system uses images of 512x2048 pixels in resolution, artificial light, and classification of texture to identify surface defects. The system is capable of acquiring images at speeds of up to 125 mph (200 kph). Three image-processing methods have been proposed and evaluated by IISA: Gabor, wavelet, and Gabor wavelet. Gabor was selected as the preferred processing technique. Currently, the technology has been implemented through the patented system known as Visual Inspection System for Railways (VISyR). Rail Wear The Moscow Metro and the State of Common Means of Moscow developed a photonic system to measure railhead wear (14). The system consists of 4 CCD cameras and 4 laser lights mounted on an inspection vehicle. The cameras are connected to a central computer that receives images every 20 nanoseconds (ns). The system extracts the profile of the rail using two methods (cut-off and tangent) and the results are ultimately compared with pre-established rail wear templates. Tie Condition The Georgetown Rail Equipment Company (GREX) has developed and commercialized a crosstie inspection system called AURORA (15). The objective of the system is to inspect and classify the condition of timber and concrete crossties. Additionally, the system can be adapted to measure rail seat abrasion (RSA) and detect defects in fastening systems. AURORA uses high-definition cameras and high-voltage lasers as part of the lighting arrangement and is capable of inspecting 70,000 ties per hour at a speed of 30-45 mph (48-72 kph). The system has been shown to replicate results obtained by track inspectors with an accuracy of 88%. Since 2008, Napier University in Sweden has been researching the use of machine vision technology for inspection of timber crossties (16). Their system evaluates the condition of the ends of the ties and classifies them into one of two categories: good or bad. This classification is performed by evaluating quantitative parameters such as the number, length, and depth of cracks, as well as the condition of the tie plate. Experimental results showed that the system has an accuracy of 90% with respect to the correct classification of ties. Future research work includes evaluation of the center portion of the ties and integration with other non-destructive testing (NDT) applications. In 2003, the University of Zaragoza in Spain began research on the development of machine vision techniques to inspect concrete crossties using a stereo-metric system to measure different surface shapes (17). The system is used to estimate the deviation from the required dimensional tolerances of the concrete ties in production lines.
Two CCD cameras with a resolution of 768x512 pixels are used for image capture and lasers are used for artificial lighting. The system has been shown to produce reliable results, but quantifiable results were not found in the available literature. Ballast The ISS", "title": "" }, { "docid": "953851cb9cf9e755ec156fab79e3a818", "text": "We study minimization of the difference of l1 and l2 norms as a non-convex and Lipschitz continuous metric for solving constrained and unconstrained compressed sensing problems. We establish exact (stable) sparse recovery results under a restricted isometry property (RIP) condition for the constrained problem, and a full-rank theorem of the sensing matrix restricted to the support of the sparse solution. We present an iterative method for l1−2 minimization based on the difference of convex functions algorithm (DCA), and prove that it converges to a stationary point satisfying first order optimality condition. We propose a sparsity oriented simulated annealing (SA) procedure with non-Gaussian random perturbation and prove the almost sure convergence of the combined algorithm (DCASA) to a global minimum. Computation examples on success rates of sparse solution recovery show that if the sensing matrix is ill-conditioned (non RIP satisfying), then our method is better than existing non-convex compressed sensing solvers in the literature. Likewise in the magnetic resonance imaging (MRI) phantom image recovery problem, l1−2 succeeds with 8 projections. Irrespective of the conditioning of the sensing matrix, l1−2 is better than l1 in both the sparse signal and the MRI phantom image recovery problems.", "title": "" }, { "docid": "3e4fd502a999dcafb030a6898bd11f9b", "text": "We present several Hermite-type interpolation methods for rational cubics. In case the input data come from a circular arc, the rational cubic will reproduce it. keywords: Hermite interpolation, rational cubics, circular precision.", "title": "" }, { "docid": "f15508a8cd342cb6ea0ec2d0328503d7", "text": "An order book consists of a list of all buy and sell offers, represented by price and quantity, available to a market agent. The order book changes rapidly, within fractions of a second, due to new orders being entered into the book. The volume at a certain price level may increase due to limit orders, i.e. orders to buy or sell placed at the end of the queue, or decrease because of market orders or cancellations. In this paper a high-dimensional Markov chain is used to represent the state and evolution of the entire order book. The design and evaluation of optimal algorithmic strategies for buying and selling is studied within the theory of Markov decision processes. General conditions are provided that guarantee the existence of optimal strategies. Moreover, a value-iteration algorithm is presented that enables finding optimal strategies numerically. As an illustration a simple version of the Markov chain model is calibrated to high-frequency observations of the order book in a foreign exchange market. In this model, using an optimally designed strategy for buying one unit provides a significant improvement, in terms of the expected buy price, over a naive buy-one-unit strategy.", "title": "" }, { "docid": "9d90b8e88790e43d95d99bcfb8b3240a", "text": "With advances in knowledge disease, boundaries may change. Occasionally, these changes are of such a magnitude that they require redefinition of the disease. 
In recognition of the profound changes in our understanding of Parkinson's disease (PD), the International Parkinson and Movement Disorders Society (MDS) commissioned a task force to consider a redefinition of PD. This review is a discussion article, intended as the introductory statement of the task force. Several critical issues were identified that challenge current PD definitions. First, new findings challenge the central role of the classical pathologic criteria as the arbiter of diagnosis, notably genetic cases without synuclein deposition, the high prevalence of incidental Lewy body (LB) deposition, and the nonmotor prodrome of PD. It remains unclear, however, whether these challenges merit a change in the pathologic gold standard, especially considering the limitations of alternate gold standards. Second, the increasing recognition of dementia in PD challenges the distinction between diffuse LB disease and PD. Consideration might be given to removing dementia as an exclusion criterion for PD diagnosis. Third, there is increasing recognition of disease heterogeneity, suggesting that PD subtypes should be formally identified; however, current subtype classifications may not be sufficiently robust to warrant formal delineation. Fourth, the recognition of a nonmotor prodrome of PD requires that new diagnostic criteria for early-stage and prodromal PD should be created; here, essential features of these criteria are proposed. Finally, there is a need to create new MDS diagnostic criteria that take these changes in disease definition into consideration.", "title": "" }, { "docid": "a2f65eb4a81bc44ea810d834ab33d891", "text": "This survey provides the basis for developing research in the area of mobile manipulator performance measurement, an area that has relatively few research articles when compared to other mobile manipulator research areas. The survey provides a literature review of mobile manipulator research with examples of experimental applications. The survey also provides an extensive list of planning and control references as this has been the major research focus for mobile manipulators which factors into performance measurement of the system. The survey then reviews performance metrics considered for mobile robots, robot arms, and mobile manipulators and the systems that measure their performance, including machine tool measurement systems through dynamic motion tracking systems. Lastly, the survey includes a section on research that has occurred for performance measurement of robots, mobile robots, and mobile manipulators beginning with calibration, standards, and mobile manipulator artifacts that are being considered for evaluation of mobile manipulator performance.", "title": "" }, { "docid": "10b6750b3f7a589463122b55b5776a7a", "text": "This article reviews research and interventions that have grown up around a model of psychological well-being generated more than two decades ago to address neglected aspects of positive functioning such as purposeful engagement in life, realization of personal talents and capacities, and enlightened self-knowledge. 
The conceptual origins of this formulation are revisited and scientific products emerging from 6 thematic areas are examined: (1) how well-being changes across adult development and later life; (2) what are the personality correlates of well-being; (3) how well-being is linked with experiences in family life; (4) how well-being relates to work and other community activities; (5) what are the connections between well-being and health, including biological risk factors, and (6) via clinical and intervention studies, how psychological well-being can be promoted for ever-greater segments of society. Together, these topics illustrate flourishing interest across diverse scientific disciplines in understanding adults as striving, meaning-making, proactive organisms who are actively negotiating the challenges of life. A take-home message is that increasing evidence supports the health protective features of psychological well-being in reducing risk for disease and promoting length of life. A recurrent and increasingly important theme is resilience - the capacity to maintain or regain well-being in the face of adversity. Implications for future research and practice are considered.", "title": "" }, { "docid": "a32c635c1f4f4118da20cee6ffb5c1ea", "text": "We analyzed the influence of education and of culture on the neuropsychological profile of an indigenous and a nonindigenous population. The sample included 27 individuals divided into four groups: (a) seven illiterate Maya indigenous participants, (b) six illiterate Pame indigenous participants, (c) seven nonindigenous participants with no education, and (d) seven Maya indigenous participants with 1 to 4 years of education . A brief neuropsychological test battery developed and standardized in Mexico was individually administered. Results demonstrated differential effects for both variables. Both groups of indigenous participants (Maya and Pame) obtained higher scores in visuospatial tasks, and the level of education had significant effects on working and verbal memory. Our data suggested that culture dictates what it is important for survival and that education could be considered as a type of subculture that facilitates the development of certain skills.", "title": "" }, { "docid": "ef3cb4e591f52498584495caacc74069", "text": "The Hill-Sachs lesion is an osseous defect of the humeral head that is typically associated with anterior shoulder instability. The incidence of these lesions in the setting of glenohumeral instability is relatively high and approaches 100% in persons with recurrent anterior shoulder instability. Reverse Hill-Sachs lesion has been described in patients with posterior shoulder instability. Glenoid bone loss is typically associated with the Hill-Sachs lesion in patients with recurrent anterior shoulder instability. The lesion is a bipolar injury, and identification of concomitant glenoid bone loss is essential to optimize clinical outcome. Other pathology (eg, Bankart tear, labral or capsular injuries) must be identified, as well. Treatment is dictated by subjective and objective findings of shoulder instability and radiographic findings. Nonsurgical management, including focused rehabilitation, is acceptable in cases of small bony defects and nonengaging lesions in which the glenohumeral joint remains stable during desired activities. Surgical options include arthroscopic and open techniques.", "title": "" } ]
scidocsrr
0392d26b9b71f5fdff01dab0ae2ccc9b
Learning to rank recommendations with the k-order statistic loss
[ { "docid": "4c596974ba7dde7525e028bd7f168e61", "text": "In ranking with the pairwise classification approach, the loss associated to a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval where we prefer ranked lists with high precision on the top of the list. We propose to optimize a larger class of loss functions for ranking, based on an ordered weighted average (OWA) (Yager, 1988) of the classification losses. Convex OWA aggregation operators range from the max to the mean depending on their weights, and can be used to focus on the top ranked elements as they give more weight to the largest losses. When aggregating hinge losses, the optimization problem is similar to the SVM for interdependent output spaces. Moreover, we show that OWA aggregates of margin-based classification losses have good generalization properties. Experiments on the Letor 3.0 benchmark dataset for information retrieval validate our approach.", "title": "" }, { "docid": "4984f9e1995cd69aac609374778d45c0", "text": "We discuss the video recommendation system in use at YouTube, the world's most popular online video community. The system recommends personalized sets of videos to users based on their activity on the site. We discuss some of the unique challenges that the system faces and how we address them. In addition, we provide details on the experimentation and evaluation framework used to test and tune new algorithms. We also present some of the findings from these experiments.", "title": "" } ]
[ { "docid": "0f979712b19f19f84f36c838a036ed99", "text": "In this paper we describe the development and deployment of a wireless sensor network (WSN) to monitor a train tunnel duri ng adjacent construction activity. The tunnel in question is a pa rt of the London Underground system. Construction of tunnels beneat h the existing tunnel is expected to cause deformations. The expe cted deformation values were determined by a detailed geotechnica l analysis. A real-time monitoring system, comprising of 18 sensin g u its and a base-station, was installed along the critical zone of the tunnel to measure the deformations. The sensing units report th ei data to the base-station at periodic intervals. The system was us ed for making continuous measurements for a period of 72 days. This window of time covered the period during which the tunnel bor ing machine (TBM) was active near the critical zone. The deploye d WSN provided accurate data for measuring the displacements and this is corroborated from the tunnel contractor’s data.", "title": "" }, { "docid": "218ef054603cf5955015946f0606a614", "text": "The purpose of this work was to obtain a componentwise breakdown of the power consumption a modern laptop. We measured the power usage of the key components in an IBM ThinkPad R40 laptop using an Agilent Oscilloscope and current probes. We obtained the power consumption for the CPU, optical drive, hard disk, display, graphics card, memory, and wireless card subsystems--either through direct measurement or subtractive measurement and calculation. Moreover, we measured the power consumption of each component for a variety of workloads. We found that total system power consumption varies a lot (8 W to 30 W) depending on the workload, and moreover that the distribution of power consumption among the components varies even more widely. We also found that though power saving techniques such as DVS can reduce CPU power considerably, the total system power is still dominated by CPU power in the case of CPU intensive workloads. The display is the other main source of power consumption in a laptop; it dominates when the CPU is idle. We also found that reducing the backlight brightness can reduce the system power significantly, more than any other display power saving techniques. Finally, we observed OS differences in the power consumption.", "title": "" }, { "docid": "507cddc2df8ab2775395efb8387dad93", "text": "A novel band-reject element for the design of inline waveguide pseudoelliptic band-reject filters is introduced. The element consists of an offset partial-height post in a rectangular waveguide in which the dominant TE10 mode is propagating. The location of the attenuation pole is primarily determined by the height of the post that generates it. The element allows the implementation of weak, as well as strong coupling coefficients that are encountered in asymmetric band-reject responses with broad stopbands. The coupling strength is controlled by the offset of the post with respect to the center of the main waveguide. The posts are separated by uniform sections of the main waveguide. An equivalent low-pass circuit based on the extracted pole technique is first used in a preliminary design. An improved equivalent low-pass circuit that includes a more accurate equivalent circuit of the band-reject element is then introduced. A synthesis method of the enhanced network is also presented. Filters based on the introduced element are designed, fabricated, and tested. 
Good agreement between measured and simulated results is achieved", "title": "" }, { "docid": "ed97b6815085d2664c6548abcf68a767", "text": "Good mental health literacy in young people and their key helpers may lead to better outcomes for those with mental disorders, either by facilitating early help-seeking by young people themselves, or by helping adults to identify early signs of mental disorders and seek help on their behalf. Few interventions to improve mental health literacy of young people and their helpers have been evaluated, and even fewer have been well evaluated. There are four categories of interventions to improve mental health literacy: whole-of-community campaigns; community campaigns aimed at a youth audience; school-based interventions teaching help-seeking skills, mental health literacy, or resilience; and programs training individuals to better intervene in a mental health crisis. The effectiveness of future interventions could be enhanced by using specific health promotion models to guide their development.", "title": "" }, { "docid": "dcfc6f3c1eba7238bd6c6aa18dcff6df", "text": "With the evaluation and simulation of long-term evolution/4G cellular network and hot discussion about new technologies or network architecture for 5G, the appearance of simulation and evaluation guidelines for 5G is in urgent need. This paper analyzes the challenges of building a simulation platform for 5G considering the emerging new technologies and network architectures. Based on the overview of evaluation methodologies issued for 4G candidates, challenges in 5G evaluation are formulated. Additionally, a cloud-based two-level framework of system-level simulator is proposed to validate the candidate technologies and fulfill the promising technology performance identified for 5G.", "title": "" }, { "docid": "f1e293b4b896547b17b5becb1e06cb47", "text": "Occupational therapy has been an invisible profession, largely because the public has had difficulty grasping the concept of occupation. The emergence of occupational science has the potential of improving this situation. Occupational science is firmly rooted in the founding ideas of occupational therapy. In the future, the nature of human occupation will be illuminated by the development of a basic theory of occupational science. Occupational science, through research and theory development, will guide the practice of occupational therapy. Applications of occupational science to the practice of pediatric occupational therapy are presented. Ultimately, occupational science will prepare pediatric occupational therapists to better meet the needs of parents and their children.", "title": "" }, { "docid": "354bbe38d4571bf7f1f95453f9958eb6", "text": "This paper focuses and talks about the wide and varied areas of applications wireless sensor networks have taken over today, right from military surveillance and smart home automation to medical and environmental monitoring. It also gives a gist why security is a primary issue of concern even today for the same, discussing the existing solutions along with outlining the security issues and suggesting possible directions of research over the same. This paper is about the security of wireless sensor networks. These networks create new security threats in comparison to the traditional methods due to some unique characteristics of these networks. A detailed study of the threats, risks and attacks need to be done in order to come up with proper security solutions. 
Here the paper presents the unique characteristics of these networks and how they pose new security threats. There are several security goals of these networks. These goals and requirements must be kept in mind while designing of security solutions for these networks. It also describes the various attacks that are possible at important layers such as data-link, network, physical and transport layer.", "title": "" }, { "docid": "3d007291b5ca2220c15e6eee72b94a76", "text": "While the number of knowledge bases in the Semantic Web increases, the maintenance and creation of ontology schemata still remain a challenge. In particular creating class expressions constitutes one of the more demanding aspects of ontology engineering. In this article we describe how to adapt a semi-automatic method for learning OWL class expressions to the ontology engineering use case. Specifically, we describe how to extend an existing learning algorithm for the class learning problem. We perform rigorous performance optimization of the underlying algorithms for providing instant suggestions to the user. We also present two plugins, which use the algorithm, for the popular Protégé and OntoWiki ontology editors and provide a preliminary evaluation on real ontologies.", "title": "" }, { "docid": "a880da76ccdf9f77d334025a2798c14e", "text": "After three decades of developments, single particle tracking (SPT) has become a powerful tool to interrogate dynamics in a range of materials including live cells and novel catalytic supports because of its ability to reveal dynamics in the structure-function relationships underlying the heterogeneous nature of such systems. In this review, we summarize the algorithms behind, and practical applications of, SPT. We first cover the theoretical background including particle identification, localization, and trajectory reconstruction. General instrumentation and recent developments to achieve two- and three-dimensional subdiffraction localization and SPT are discussed. We then highlight some applications of SPT to study various biological and synthetic materials systems. Finally, we provide our perspective regarding several directions for future advancements in the theory and application of SPT.", "title": "" }, { "docid": "5589dfc1ff9246b85e326e8f394cd514", "text": "justice. Women, by contrast, were believed to be at a lower stage because they were found to have a sense of agency still tied primarily to their social relationships and to make political and moral decisions based on context-specific principles based on these relationships rather than on the grounds of their own autonomous judgments. Students of gender studies know well just how busy social scientists have been kept by their efforts to come up with ever more sociological \"alibis\" for the question of why women did not act like men. Gilligan's response was to refuse the terms of the debate altogether. 
She thus did not develop yet another explanation for why women are \"deviant.\" Instead, she turned the question on its head by asking what was wrong with the theory a theory whose central premises defines 50% of social beings as \"abnormal.\" Gilligan translated this question into research by subjecting the abstraction of universal and discrete agency to comparative research into female behavior evaluated on its own terms The new research revealed women to be more \"concrete\" in their thinking and more attuned to \"fairness\" while men acted on \"abstract reasoning\" and \"rules of justice.\" These research findings transformed female otherness into variation and difference but difference now freed from the normative de-", "title": "" }, { "docid": "8b7cc94a7284d4380537418ed9ee0f01", "text": "The subject matter of this research; employee motivation and performance seeks to look at how best employees can be motivated in order to achieve high performance within a company or organization. Managers and entrepreneurs must ensure that companies or organizations have a competent personnel that is capable to handle this task. This takes us to the problem question of this research “why is not a sufficient motivation for high performance?” This therefore establishes the fact that money is for high performance but there is need to look at other aspects of motivation which is not necessarily money. Four theories were taken into consideration to give an explanation to the question raised in the problem formulation. These theories include: Maslow’s hierarchy of needs, Herzberg two factor theory, John Adair fifty-fifty theory and Vroom’s expectancy theory. Furthermore, the performance management process as a tool to measure employee performance and company performance. This research equally looked at the various reward systems which could be used by a company. In addition to the above, culture and organizational culture and it influence on employee behaviour within a company was also examined. An empirical study was done at Ultimate Companion Limited which represents the case study of this research work. Interviews and questionnaires were conducted to sample employee and management view on motivation and how it can increase performance at the company. Finally, a comparison of findings with theories, a discussion which raises critical issues on motivation/performance and conclusion constitute the last part of the research. Subject headings, (keywords) Motivation, Performance, Intrinsic, Extrinsic, Incentive, Tangible and Intangible, Reward", "title": "" }, { "docid": "5ee21318b1601a1d42162273a7c9026c", "text": "We used a knock-in strategy to generate two lines of mice expressing Cre recombinase under the transcriptional control of the dopamine transporter promoter (DAT-cre mice) or the serotonin transporter promoter (SERT-cre mice). In DAT-cre mice, immunocytochemical staining of adult brains for the dopamine-synthetic enzyme tyrosine hydroxylase and for Cre recombinase revealed that virtually all dopaminergic neurons in the ventral midbrain expressed Cre. Crossing DAT-cre mice with ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice revealed a near perfect correlation between staining for tyrosine hydroxylase and beta-galactosidase or YFP. YFP-labeled fluorescent dopaminergic neurons could be readily identified in live slices. 
Crossing SERT-cre mice with the ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice similarly revealed a near perfect correlation between staining for serotonin-synthetic enzyme tryptophan hydroxylase and beta-galactosidase or YFP. Additional Cre expression in the thalamus and cortex was observed, reflecting the known pattern of transient SERT expression during early postnatal development. These findings suggest a general strategy of using neurotransmitter transporter promoters to drive selective Cre expression and thus control mutations in specific neurotransmitter systems. Crossed with fluorescent-gene reporters, this strategy tags neurons by neurotransmitter status, providing new tools for electrophysiology and imaging.", "title": "" }, { "docid": "c0584e11a64c6679ad43a0a91d92740d", "text": "A challenge in teaching usability engineering is providing appropriate hands-on project experience. Students need projects that are realistic enough to address meaningful issues, but manageable within one semester. We describe our use of online case studies to motivate and model course projects in usability engineering. The cases illustrate scenario-based usability methods, and are accessed via a custom browser. We summarize the content and organization of the case studies, several case-based learning activities, and students' reactions to the activities. We conclude with a discussion of future directions for case studies in HCI education.", "title": "" }, { "docid": "24ade252fcc6bd5404484cb9ad5987a3", "text": "The cornerstone of the IBM System/360 philosophy is that the architecture of a computer is basically independent of its physical implementation. Therefore, in System/360, different physical implementations have been made of the single architectural definition which is illustrated in Figure 1.", "title": "" }, { "docid": "72345bf404d21d0f7aa1e54a5710674c", "text": "Many real-world data sets exhibit skewed class distributions in which almost all cases are allotted to a class and far fewer cases to a smaller, usually more interesting class. A classifier induced from an imbalanced data set has, typically, a low error rate for the majority class and an unacceptable error rate for the minority class. This paper firstly provides a systematic study on the various methodologies that have tried to handle this problem. Finally, it presents an experimental study of these methodologies with a proposed mixture of expert agents and it concludes that such a framework can be a more effective solution to the problem. Our method seems to allow improved identification of difficult small classes in predictive analysis, while keeping the classification ability of the other classes in an acceptable level.", "title": "" }, { "docid": "2b03868a73808a0135547427112dcaf8", "text": "In this article we focus attention on ethnography’s place in CSCW by reflecting on how ethnography in the context of CSCW has contributed to our understanding of the sociality and materiality of work and by exploring how the notion of the ‘field site’ as a construct in ethnography provides new ways of conceptualizing ‘work’ that extends beyond the workplace. We argue that the well known challenges of drawing design implications from ethnographic research have led to useful strategies for tightly coupling ethnography and design. We also offer some thoughts on recent controversies over what constitutes useful and proper ethnographic research in the context of CSCW. 
Finally, we argue that as the temporal and spatial horizons of inquiry have expanded, along with new domains of collaborative activity, ethnography continues to provide invaluable perspectives.", "title": "" }, { "docid": "481e8bf359e6e4e9ce94a4ad0973412b", "text": "In recent years the need for indoor localisation has increased. Earlier systems have been deployed in order to demonstrate that indoor localisation can be done. Many researchers are referring to location estimation as a crucial component in numerous applications. There is no standard in indoor localisation thus the selection of an existing system needs to be done based on the environment being tracked, the accuracy and the precision required. Modern localisation systems use various techniques such as Received Signal Strength Indicator (RSSI), Time of Arrival (TOA), Time Difference of Arrival (TDOA) and Angle of Arrival (AOA). This paper is a survey of various active and passive localisation techniques developed over the years. The majority of the localisation techniques are part of the active systems class due to the necessity of tags/electronic devices carried by the person being tracked or mounted on objects in order to estimate their position. The second class called passive localisation represents the estimation of a person's position without the need for a physical device i.e. tags or sensors. The assessment of the localisation systems is based on the wireless technology used, positioning algorithm, accuracy and precision, complexity, scalability and costs. In this paper we are comparing various systems presenting their advantages and disadvantages.", "title": "" }, { "docid": "4348f2af97c7a02f988df350a0729040", "text": "Societies are complex systems, which tend to polarize into subgroups of individuals with dramatically opposite perspectives. This phenomenon is reflected-and often amplified-in online social networks, where, however, humans are no longer the only players and coexist alongside with social bots-that is, software-controlled accounts. Analyzing large-scale social data collected during the Catalan referendum for independence on October 1, 2017, consisting of nearly 4 millions Twitter posts generated by almost 1 million users, we identify the two polarized groups of Independentists and Constitutionalists and quantify the structural and emotional roles played by social bots. We show that bots act from peripheral areas of the social system to target influential humans of both groups, bombarding Independentists with violent contents, increasing their exposure to negative and inflammatory narratives, and exacerbating social conflict online. Our findings stress the importance of developing countermeasures to unmask these forms of automated social manipulation.", "title": "" }, { "docid": "8fc87a5f89792b3ea69833dcae90cd6e", "text": "The Joint Conference on Lexical and Computational Semantics (*SEM) each year hosts a shared task on semantic related topics. In its first edition held in 2012, the shared task was dedicated to resolving the scope and focus of negation. This paper presents the specifications, datasets and evaluation criteria of the task. 
An overview of participating systems is provided and their results are summarized.", "title": "" }, { "docid": "90bce307651bd6441b216e1aded9cdf3", "text": "This work addresses the problem of segmenting an object of interest out of a video. We show that video object segmentation can be naturally cast as a semi-supervised learning problem and be efficiently solved using harmonic functions. We propose an incremental self-training approach by iteratively labeling the least uncertain frame and updating similarity metrics. Our self-training video segmentation produces superior results both qualitatively and quantitatively. Moreover, usage of harmonic functions naturally supports interactive segmentation. We suggest active learning methods for providing guidance to the user on what to annotate in order to improve labeling efficiency. We present experimental results using a ground truth data set and a quantitative comparison to a representative object segmentation system.", "title": "" } ]
scidocsrr
0beab3e99259c697748456cbf8ea89ec
Depth Estimation from Image Structure
[ { "docid": "9bf157e016f4fc124128a3008dc1c47c", "text": "The appearance of an object is composed of local structure. This local structure can be described and characterized by a vector of local features measured by local operators such as Gaussian derivatives or Gabor filters. This article presents a technique where appearances of objects are represented by the joint statistics of such local neighborhood operators. As such, this represents a new class of appearance based techniques for computer vision. Based on joint statistics, the paper develops techniques for the identification of multiple objects at arbitrary positions and orientations in a cluttered scene. Experiments show that these techniques can identify over 100 objects in the presence of major occlusions. Most remarkably, the techniques have low complexity and therefore run in real-time.", "title": "" } ]
[ { "docid": "e6f506c3c90a15b5e4079ccb75eb3ff0", "text": "Stories of people's everyday experiences have long been the focus of psychology and sociology research, and are increasingly being used in innovative knowledge-based technologies. However, continued research in this area is hindered by the lack of standard corpora of sufficient size and by the costs of creating one from scratch. In this paper, we describe our efforts to develop a standard corpus for researchers in this area by identifying personal stories in the tens of millions of blog posts in the ICWSM 2009 Spinn3r Dataset. Our approach was to employ statistical text classification technology on the content of blog entries, which required the creation of a sufficiently large set of annotated training examples. We describe the development and evaluation of this classification technology and how it was applied to the dataset in order to identify nearly a million", "title": "" }, { "docid": "59b26acc158c728cf485eae27de665f7", "text": "The ability of the parasite Plasmodium falciparum to evade the immune system and be sequestered within human small blood vessels is responsible for severe forms of malaria. The sequestration depends on the interaction between human endothelial receptors and P. falciparum erythrocyte membrane protein 1 (PfEMP1) exposed on the surface of the infected erythrocytes (IEs). In this study, the transcriptomes of parasite populations enriched for parasites that bind to human P-selectin, E-selectin, CD9 and CD151 receptors were analysed. IT4_var02 and IT4_var07 were specifically expressed in IT4 parasite populations enriched for P-selectin-binding parasites; eight var genes (IT4_var02/07/09/13/17/41/44/64) were specifically expressed in isolate populations enriched for CD9-binding parasites. Interestingly, IT4 parasite populations enriched for E-selectin- and CD151-binding parasites showed identical expression profiles to those of a parasite population exposed to wild-type CHO-745 cells. The same phenomenon was observed for the 3D7 isolate population enriched for binding to P-selectin, E-selectin, CD9 and CD151. This implies that the corresponding ligands for these receptors have either weak binding capacity or do not exist on the IE surface. Conclusively, this work expanded our understanding of P. falciparum adhesive interactions, through the identification of var transcripts that are enriched within the selected parasite populations.", "title": "" }, { "docid": "392f7b126431b202d57d6c25c07f7f7c", "text": "Serine racemase (SRace) is an enzyme that catalyzes the conversion of L-serine to pyruvate or D-serine, an endogenous agonist for NMDA receptors. Our previous studies showed that inflammatory stimuli such as Abeta could elevate steady-state mRNA levels for SRace, perhaps leading to inappropriate glutamatergic stimulation under conditions of inflammation. We report here that a proinflammatory stimulus (lipopolysaccharide) elevated the activity of the human SRace promoter, as indicated by expression of a luciferase reporter system transfected into a microglial cell line. This effect corresponded to an elevation of SRace protein levels in microglia, as well. By contrast, dexamethasone inhibited the SRace promoter activity and led to an apparent suppression of SRace steady-state mRNA levels. A potential binding site for NFkappaB was explored, but this sequence played no significant role in SRace promoter activation. 
Instead, large deletions and site-directed mutagenesis indicated that a DNA element between -1382 and -1373 (relative to the start of translation) was responsible for the activation of the promoter by lipopolysaccharide. This region fits the consensus for an activator protein-1 binding site. Lipopolysaccharide induced an activity capable of binding this DNA element in electrophoretic mobility shift assays. Supershifts with antibodies against c-Fos and JunB identified these as the responsible proteins. An inhibitor of Jun N-terminal kinase blocked SRace promoter activation, further implicating activator protein-1. These data indicate that proinflammatory stimuli utilize a signal transduction pathway culminating in activator protein-1 activation to induce expression of serine racemase.", "title": "" }, { "docid": "333b21433d17a9d271868e203c8a9481", "text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).", "title": "" }, { "docid": "4cd36ace8473aeaa61ced34b548c6585", "text": "OBJECTIVE\nSmaller hippocampal volume has been reported only in some but not all studies of unipolar major depressive disorder. Severe stress early in life has also been associated with smaller hippocampal volume and with persistent changes in the hypothalamic-pituitary-adrenal axis. However, prior hippocampal morphometric studies in depressed patients have neither reported nor controlled for a history of early childhood trauma. 
In this study, the volumes of the hippocampus and of control brain regions were measured in depressed women with and without childhood abuse and in healthy nonabused comparison subjects.\n\n\nMETHOD\nStudy participants were 32 women with current unipolar major depressive disorder-21 with a history of prepubertal physical and/or sexual abuse and 11 without a history of prepubertal abuse-and 14 healthy nonabused female volunteers. The volumes of the whole hippocampus, temporal lobe, and whole brain were measured on coronal MRI scans by a single rater who was blind to the subjects' diagnoses.\n\n\nRESULTS\nThe depressed subjects with childhood abuse had an 18% smaller mean left hippocampal volume than the nonabused depressed subjects and a 15% smaller mean left hippocampal volume than the healthy subjects. Right hippocampal volume was similar across the three groups. The right and left hippocampal volumes in the depressed women without abuse were similar to those in the healthy subjects.\n\n\nCONCLUSIONS\nA smaller hippocampal volume in adult women with major depressive disorder was observed exclusively in those who had a history of severe and prolonged physical and/or sexual abuse in childhood. An unreported history of childhood abuse in depressed subjects could in part explain the inconsistencies in hippocampal volume findings in prior studies in major depressive disorder.", "title": "" }, { "docid": "e7646a79b25b2968c3c5b668d0216aa6", "text": "In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediatelevel descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image of the collection, and query-by-example approaches, which assume that the user queries for images similar to one that already is at his disposal.", "title": "" }, { "docid": "8999e010ddbc0aa7ef579d8a9e055769", "text": "Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute-/memory-intensive, it is difficult to perform real-time classification with low power consumption on today?s computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around-time, especially with high-level synthesis methodologies. 
Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering the FPGA resource constraints such as on-chip memory, registers, computational resources and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for convolution operation, and 117.8 GOPS for the entire VGG network that performs ImageNet classification on P395-D8 board.", "title": "" }, { "docid": "19c3c2ac5e35e8e523d796cef3717d90", "text": "The printing press long ago and the computer today have made widespread access to information possible. Learning theorists have suggested, however, that mere information is a poor way to learn. Instead, more effective learning comes through doing. While the most popularized element of today's MOOCs are the video lectures, many MOOCs also include interactive activities that can afford learning by doing. This paper explores the learning benefits of the use of informational assets (e.g., videos and text) in MOOCs, versus the learning by doing opportunities that interactive activities provide. We find that students doing more activities learn more than students watching more videos or reading more pages. We estimate the learning benefit from extra doing (1 SD increase) to be more than six times that of extra watching or reading. Our data, from a psychology MOOC, is correlational in character, however we employ causal inference mechanisms to lend support for the claim that the associations we find are causal.", "title": "" }, { "docid": "3eccedb5a9afc0f7bc8b64c3b5ff5434", "text": "The design of a high impedance, high Q tunable load is presented with operating frequency between 400MHz and close to 6GHz. The bandwidth is made independently tunable of the carrier frequency by using an active inductor resonator with multiple tunable capacitances. The Q factor can be tuned from a value 40 up to 300. The circuit is targeted at 5G wideband applications requiring narrow band filtering where both centre frequency and bandwidth needs to be tunable. The circuit impedance is applied to the output stage of a standard CMOS cascode and results show that high Q factors can be achieved close to 6GHz with 11dB rejection at 20MHz offset from the centre frequency. The circuit architecture takes advantage of currently available low cost, low area tunable capacitors based on micro-electromechanical systems (MEMS) and Barium Strontium Titanate (BST).", "title": "" }, { "docid": "144480a9154226cf4a72f149ff6c9c56", "text": "The availability of medical imaging data from clinical archives, research literature, and clinical manuals, coupled with recent advances in computer vision offer the opportunity for image-based diagnosis, teaching, and biomedical research. However, the content and semantics of an image can vary depending on its modality and as such the identification of image modality is an important preliminary step. 
The key challenge for automatically classifying the modality of a medical image is due to the visual characteristics of different modalities: some are visually distinct while others may have only subtle differences. This challenge is compounded by variations in the appearance of images based on the diseases depicted and a lack of sufficient training data for some modalities. In this paper, we introduce a new method for classifying medical images that uses an ensemble of different convolutional neural network (CNN) architectures. CNNs are a state-of-the-art image classification technique that learns the optimal image features for a given classification task. We hypothesise that different CNN architectures learn different levels of semantic image representation and thus an ensemble of CNNs will enable higher quality features to be extracted. Our method develops a new feature extractor by fine-tuning CNNs that have been initialized on a large dataset of natural images. The fine-tuning process leverages the generic image features from natural images that are fundamental for all images and optimizes them for the variety of medical imaging modalities. These features are used to train numerous multiclass classifiers whose posterior probabilities are fused to predict the modalities of unseen images. Our experiments on the ImageCLEF 2016 medical image public dataset (30 modalities; 6776 training images, and 4166 test images) show that our ensemble of fine-tuned CNNs achieves a higher accuracy than established CNNs. Our ensemble also achieves a higher accuracy than methods in the literature evaluated on the same benchmark dataset and is only overtaken by those methods that source additional training data.", "title": "" }, { "docid": "d17622889db09b8484d94392cadf1d78", "text": "Software development has always inherently required multitasking: developers switch between coding, reviewing, testing, designing, and meeting with colleagues. The advent of software ecosystems like GitHub has enabled something new: the ability to easily switch between projects. Developers also have social incentives to contribute to many projects; prolific contributors gain social recognition and (eventually) economic rewards. Multitasking, however, comes at a cognitive cost: frequent context-switches can lead to distraction, sub-standard work, and even greater stress. In this paper, we gather ecosystem-level data on a group of programmers working on a large collection of projects. We develop models and methods for measuring the rate and breadth of a developers' context-switching behavior, and we study how context-switching affects their productivity. We also survey developers to understand the reasons for and perceptions of multitasking. We find that the most common reason for multitasking is interrelationships and dependencies between projects. Notably, we find that the rate of switching and breadth (number of projects) of a developer's work matter. Developers who work on many projects have higher productivity if they focus on few projects per day. Developers that switch projects too much during the course of a day have lower productivity as they work on more projects overall. Despite these findings, developers perceptions of the benefits of multitasking are varied.", "title": "" }, { "docid": "46004ee1f126c8a5b76166c5dc081bc8", "text": "In this study, an energy harvesting chip was developed to scavenge energy from artificial light to charge a wireless sensor node. 
The chip core is a miniature transformer with a nano-ferrofluid magnetic core. The chip embedded transformer can convert harvested energy from its solar cell to variable voltage output for driving multiple loads. This chip system yields a simple, small, and more importantly, a battery-less power supply solution. The sensor node is equipped with multiple sensors that can be enabled by the energy harvesting power supply to collect information about the human body comfort degree. Compared with lab instruments, the nodes with temperature, humidity and photosensors driven by harvested energy had variation coefficient measurement precision of less than 6% deviation under low environmental light of 240 lux. The thermal comfort was affected by the air speed. A flow sensor equipped on the sensor node was used to detect airflow speed. Due to its high power consumption, this sensor node provided 15% less accuracy than the instruments, but it still can meet the requirement of analysis for predicted mean votes (PMV) measurement. The energy harvesting wireless sensor network (WSN) was deployed in a 24-hour convenience store to detect thermal comfort degree from the air conditioning control. During one year operation, the sensor network powered by the energy harvesting chip retained normal functions to collect the PMV index of the store. According to the one month statistics of communication status, the packet loss rate (PLR) is 2.3%, which is as good as the presented results of those WSNs powered by battery. Referring to the electric power records, almost 54% energy can be saved by the feedback control of an energy harvesting sensor network. These results illustrate that, scavenging energy not only creates a reliable power source for electronic devices, such as wireless sensor nodes, but can also be an energy source by building an energy efficient program.", "title": "" }, { "docid": "d8badd23313c7ea4baa0231ff1b44e32", "text": "Current state-of-the-art solutions for motion capture from a single camera are optimization driven: they optimize the parameters of a 3D human model so that its re-projection matches measurements in the video (e.g. person segmentation, optical flow, keypoint detections etc.). Optimization models are susceptible to local minima. This has been the bottleneck that forced using clean green-screen like backgrounds at capture time, manual initialization, or switching to multiple cameras as input resource. In this work, we propose a learning based motion capture model for single camera input. Instead of optimizing mesh and skeleton parameters directly, our model optimizes neural network weights that predict 3D shape and skeleton configurations given a monocular RGB video. Our model is trained using a combination of strong supervision from synthetic data, and self-supervision from differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c) human-background segmentation, in an end-to-end framework. Empirically we show our model combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose and surface initialization at test time, without manual effort. Self-supervision by back-propagating through differentiable rendering allows (unsupervised) adaptation of the model to the test data, and offers much tighter fit than a pretrained fixed model. 
We show that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail.", "title": "" }, { "docid": "53575c45a60f93c850206f2a467bc8e8", "text": "We present BPEmb, a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization. BPEmb is available at https://github.com/bheinzerling/bpemb.", "title": "" }, { "docid": "c3e371b0c13f431cbf9b9278a6d40ace", "text": "Until today, most lecturers in universities are found still using the conventional methods of taking students' attendance either by calling out the student names or by passing around an attendance sheet for students to sign confirming their presence. In addition to the time-consuming issue, such method is also at higher risk of having students cheating about their attendance, especially in a large classroom. Therefore a method of taking attendance by employing an application running on the Android platform is proposed in this paper. This application, once installed can be used to download the students list from a designated web server. Based on the downloaded list of students, the device will then act like a scanner to scan each of the student cards one by one to confirm and verify the student's presence. The device's camera will be used as a sensor that will read the barcode printed on the students' cards. The updated attendance list is then uploaded to an online database and can also be saved as a file to be transferred to a PC later on. This system will help to eliminate the current problems, while also promoting a paperless environment at the same time. Since this application can be deployed on lecturers' own existing Android devices, no additional hardware cost is required.", "title": "" }, { "docid": "2c3e6373feb4352a68ec6fd109df66e0", "text": "A broadband transition design between broadside coupled stripline (BCS) and conductor-backed coplanar waveguide (CBCPW) is proposed and studied. The E-field of CBCPW is designed to be gradually changed to that of BCS via a simple linear tapered structure. Two back-to-back transitions are simulated, fabricated and measured. It is reported that maximum insertion loss of 2.3 dB, return loss of higher than 10 dB and group delay flatness of about 0.14 ns are obtained from 50 MHz to 20 GHz.", "title": "" }, { "docid": "7c783834f6ad0151f944766a91f0a67d", "text": "Estradiol is the most potent and ubiquitous member of a class of steroid hormones called estrogens. Fetuses and newborns are exposed to estradiol derived from their mother, their own gonads, and synthesized locally in their brains. Receptors for estradiol are nuclear transcription factors that regulate gene expression but also have actions at the membrane, including activation of signal transduction pathways. The developing brain expresses high levels of receptors for estradiol. The actions of estradiol on developing brain are generally permanent and range from establishment of sex differences to pervasive trophic and neuroprotective effects. Cellular end points mediated by estradiol include the following: 1) apoptosis, with estradiol preventing it in some regions but promoting it in others; 2) synaptogenesis, again estradiol promotes in some regions and inhibits in others; and 3) morphometry of neurons and astrocytes. 
Estradiol also impacts cellular physiology by modulating calcium handling, immediate-early-gene expression, and kinase activity. The specific mechanisms of estradiol action permanently impacting the brain are regionally specific and often involve neuronal/glial cross-talk. The introduction of endocrine disrupting compounds into the environment that mimic or alter the actions of estradiol has generated considerable concern, and the developing brain is a particularly sensitive target. Prostaglandins, glutamate, GABA, granulin, and focal adhesion kinase are among the signaling molecules co-opted by estradiol to differentiate male from female brains, but much remains to be learned. Only by understanding completely the mechanisms and impact of estradiol action on the developing brain can we also understand when these processes go awry.", "title": "" }, { "docid": "2ae96a524ba3b6c43ea6bfa112f71a30", "text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-13C1]glycerol, [U-13C6]glucose, [1-2H1]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg-1 ⋅ min-1vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg-1 ⋅ min-1, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg-1 ⋅ min-1to 1.35 ± 0.17 mg ⋅ kg-1 ⋅ min-1(alcohol) and 1.26 ± 0.20 mg ⋅ kg-1 ⋅ min-1, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg-1 ⋅ min-1( P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption ( P < 0.05 vs. baseline) but was unchanged after the placebo ( P < 0.05 between treatments). We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.", "title": "" }, { "docid": "fd786ae1792e559352c75940d84600af", "text": "In this paper, we obtain an (1 − e−1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n) function value computations. c © 2003 Published by Elsevier B.V.", "title": "" }, { "docid": "fad4ff82e9b11f28a70749d04dfbf8ca", "text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. 
Enterprise architecture (EA) is the definition and representation of a high-level view of an enterprise's business processes and IT systems, their interrelationships, and the extent to which these processes and systems are shared by different parts of the enterprise. EA aims to define a suitable operating platform to support an organisation's future goals and the roadmap for moving towards this vision. Despite significant practitioner interest in the domain, understanding the value of EA remains a challenge. Although many studies make EA benefit claims, the explanations of why and how EA leads to these benefits are fragmented, incomplete, and not grounded in theory. This article aims to address this knowledge gap by focusing on the question: How does EA lead to organisational benefits? Through a careful review of EA literature, the paper consolidates the fragmented knowledge on EA benefits and presents the EA Benefits Model (EABM). The EABM proposes that EA leads to organisational benefits through its impact on four benefit enablers: Organisational Alignment, Information Availability, Resource Portfolio Optimisation, and Resource Complementarity. The article concludes with a discussion of a number of potential avenues for future research, which could build on the findings of this study.", "title": "" } ]
scidocsrr
0e2f0d74ed84ab64d81332bddd6bf9a1
Secure Estimation and Control for Cyber-Physical Systems Under Adversarial Attacks
[ { "docid": "48c28572e5eafda1598a422fa1256569", "text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.", "title": "" } ]
[ { "docid": "66b337e0b6b2d28f7414cf5f88a724a0", "text": "Sensor networks are currently an active research area mainly due to the potential of their applications. In this paper we investigate the use of Wireless Sensor Networks (WSN) for air pollution monitoring in Mauritius. With the fast growing industrial activities on the island, the problem of air pollution is becoming a major concern for the health of the population. We proposed an innovative system named Wireless Sensor Network Air Pollution Monitoring System (WAPMS) to monitor air pollution in Mauritius through the use of wireless sensors deployed in huge numbers around the island. The proposed system makes use of an Air Quality Index (AQI) which is presently not available in Mauritius. In order to improve the efficiency of WAPMS, we have designed and implemented a new data aggregation algorithm named Recursive Converging Quartiles (RCQ). The algorithm is used to merge data to eliminate duplicates, filter out invalid readings and summarise them into a simpler form which significantly reduce the amount of data to be transmitted to the sink and thus saving energy. For better power management we used a hierarchical routing protocol in WAPMS and caused the motes to sleep during idle time.", "title": "" }, { "docid": "fc935bf600e49db18c0a89f0945bac59", "text": "Psychological positive health and health complaints have long been ignored scientifically. Sleep plays a critical role in children and adolescents development. We aimed at studying the association of sleep duration and quality with psychological positive health and health complaints in children and adolescents from southern Spain. A randomly selected two-phase sample of 380 healthy Caucasian children (6–11.9 years) and 304 adolescents (12–17.9 years) participated in the study. Sleep duration (total sleep time), perceived sleep quality (morning tiredness and sleep latency), psychological positive health and health complaints were assessed using the Health Behaviour in School-aged Children questionnaire. The mean (standard deviation [SD]) reported sleep time for children and adolescents was 9.6 (0.6) and 8.8 (0.6) h/day, respectively. Sleep time ≥10 h was significantly associated with an increased likelihood of reporting no health complaints (OR 2.3; P = 0.005) in children, whereas sleep time ≥9 h was significantly associated with an increased likelihood of overall psychological positive health and no health complaints indicators (OR ~ 2; all P < 0.05) in adolescents. Reporting better sleep quality was associated with an increased likelihood of reporting excellent psychological positive health (ORs between 1.5 and 2.6; all P < 0.05). Furthermore, children and adolescents with no difficulty falling asleep were more likely to report no health complaints (OR ~ 3.5; all P < 0.001). Insufficient sleep duration and poor perceived quality of sleep might directly impact quality of life in children, decreasing general levels of psychological positive health and increasing the frequency of having health complaints.", "title": "" }, { "docid": "effd64aec4e246f8c83ef67a21fb86d6", "text": "Estimation of grapevine vigour using mobile proximal sensors can provide an indirect method for determining grape yield and quality. Of the various indexes related to the characteristics of grapevine foliage, the leaf area index (LAI) is probably the most widely used in viticulture. 
To assess the feasibility of using light detection and ranging (LiDAR) sensors for predicting the LAI, several field trials were performed using a tractor-mounted LiDAR system. This system measured the crop in a transverse direction along the rows of vines and geometric and structural parameters were computed. The parameters evaluated were the height of the vines (H), the cross-sectional area (A), the canopy volume (V) and the tree area index (TAI). This last parameter was formulated as the ratio of the crop estimated area per unit ground area, using a local Poisson distribution to approximate the laser beam transmission probability within vines. In order to compare the calculated indexes with the actual values of LAI, the scanned vines were defoliated to obtain LAI values for different row sections. Linear regression analysis showed a good correlation (R 2 = 0.81) between canopy volume and the measured values of LAI for 1 m long sections. Nevertheless, the best estimation of the LAI was given by the TAI (R 2 = 0.92) for the same length, confirming LiDAR sensors as an interesting option for foliage characterization of grapevines. However, current limitations exist related to the complexity of data process and to the need to accumulate a sufficient number of scans to adequately estimate the LAI.", "title": "" }, { "docid": "a1d300bd5ac779e1b21a7ed20b3b01ad", "text": "a r t i c l e i n f o Keywords: Luxury brands Perceived social media marketing (SMM) activities Value equity Relationship equity Brand equity Customer equity Purchase intention In light of a growing interest in the use of social media marketing (SMM) among luxury fashion brands, this study set out to identify attributes of SMM activities and examine the relationships among those perceived activities, value equity, relationship equity, brand equity, customer equity, and purchase intention through a structural equation model. Five constructs of perceived SSM activities of luxury fashion brands are entertainment , interaction, trendiness, customization, and word of mouth. Their effects on value equity, relationship equity, and brand equity are significantly positive. For the relationship between customer equity drivers and customer equity, brand equity has significant negative effect on customer equity while value equity and relationship equity show no significant effect. As for purchase intention, value equity and relationship equity had significant positive effects, while relationship equity had no significant influence. Finally, the relationship between purchase intention and customer equity has significance. The findings of this study can enable luxury brands to forecast the future purchasing behavior of their customers more accurately and provide a guide to managing their assets and marketing activities as well. The luxury market has attained maturity, along with the gradual expansion of the scope of its market and a rapid growth in the number of customers. Luxury market is a high value-added industry basing on high brand assets. Due to the increased demand for luxury in emerging markets such as China, India, and the Middle East, opportunities abound to expand the business more than ever. In the past, luxury fashion brands could rely on strong brand assets and secure regular customers. However, the recent entrance of numerous fashion brands into the luxury market, followed by heated competition, signals unforeseen changes in the market. 
A decrease in sales related to a global economic downturn drives luxury businesses to change. Now they can no longer depend solely on their brand symbol but must focus on brand legacy, quality, esthetic value, and trustworthy customer relationships in order to succeed. A key element to luxury industry becomes providing values to customers in every way possible. As a means to constitute customer assets through effective communication with consumers, luxury brands have tilted their eyes toward social media. Marketing communication using social media such as Twitter, Facebook, and …", "title": "" }, { "docid": "ac740402c3e733af4d690e34e567fabe", "text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.", "title": "" }, { "docid": "eea39002b723aaa9617c63c1249ef9a6", "text": "Generative Adversarial Networks (GAN) [1] are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.", "title": "" }, { "docid": "51b6b50fb9ea3b578a476a4c12cfa83f", "text": "Deficient cognitive top-down executive control has long been hypothesized to underlie inattention and impulsivity in attention-deficit/hyperactivity disorder (ADHD). However, top-down cognitive dysfunction explains a modest proportion of the ADHD phenotype whereas the salience of emotional dysregulation is being noted increasingly. Together, these two types of dysfunction have the potential to account for more of the phenotypic variance in patients diagnosed with ADHD. 
We develop this idea and suggest that top-down dysregulation constitutes a gradient extending from mostly non-emotional top-down control processes (i.e., \"cool\" executive functions) to mainly emotional regulatory processes (including \"hot\" executive functions). While ADHD has been classically linked primarily to the former, conditions involving emotional instability such as borderline and antisocial personality disorder are closer to the other. In this model, emotional subtypes of ADHD are located at intermediate levels of this gradient. Neuroanatomically, gradations in \"cool\" processing appear to be related to prefrontal dysfunction involving dorsolateral prefrontal cortex (dlPFC) and caudal anterior cingulate cortex (cACC), while \"hot\" processing entails orbitofrontal cortex and rostral anterior cingulate cortex (rACC). A similar distinction between systems related to non-emotional and emotional processing appears to hold for the basal ganglia (BG) and the neuromodulatory effects of the dopamine system. Overall we suggest that these two systems could be divided according to whether they process non-emotional information related to the exteroceptive environment (associated with \"cool\" regulatory circuits) or emotional information related to the interoceptive environment (associated with \"hot\" regulatory circuits). We propose that this framework can integrate ADHD, emotional traits in ADHD, borderline and antisocial personality disorder into a related cluster of mental conditions.", "title": "" }, { "docid": "defda73fe0145db8bcc9a80946c15de3", "text": "AIM\nThis study was conducted to evaluate the presence of Aeromonas spp. in raw and ready-to-eat (RTE) fish commonly consumed in Assiut city, Egypt, and to determine virulence factors due to they play a key role in their pathogenicity.\n\n\nMATERIALS AND METHODS\nA total of 125 samples of raw and RTE fish samples were taken from different fish markets and fish restaurants in Assiut Governorate and screened for the presence of Aeromonas spp. by enrichment on tryptic soy broth then incubated at 30°C for 24 h. Plating unto the sterile Petri dishes containing Aeromonas agar base to which Aeromonas selective supplement was added. The plates were incubated at 37°C for 24 h. Presumptive Aeromonas colonies were biochemically confirmed and analyzed for pathogenicity by hemolysin production, protease, and lipase detection.\n\n\nRESULTS\nThe results indicated that raw fish were contaminated with Aeromonas spp. (40% in wild and 36% in cultured Nile tilapia). Regarding RTE, Aeromonas spp. could be isolated with the percentage of 16%, 28% and 20% in fried Bolti, grilled Bolti and fried Bayad, respectively. Out of 35 isolates obtained, 22 were categorized as Aeromonas hydrophila, 12 were classified as Aeromonas sobria and Aeromonas caviae were found in only one isolate. The virulence factors of Aeromonas spp. were detected and the results showed that all isolates produced of hemolysin (91.4%), protease (77.1%), and lipase enzyme (17.1%).\n\n\nCONCLUSION\nThis study indicates that the presence of A. hydrophila with virulence potential in fresh and RTE fish may be a major threat to public health.", "title": "" }, { "docid": "c4387f3c791acc54d0a0655221947c8b", "text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. 
Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have the similar viewing behaviors as regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxy and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.", "title": "" }, { "docid": "85bdac91c8c7d456a7e76ce5927cc994", "text": "Current CNN-based solutions to salient object detection (SOD) mainly rely on the optimization of cross-entropy loss (CELoss). Then the quality of detected saliency maps is often evaluated in terms of F-measure. In this paper, we investigate an interesting issue: can we consistently use the F-measure formulation in both training and evaluation for SOD? By reformulating the standard F-measure we propose the relaxed F-measure which is differentiable w.r.t the posterior and can be easily appended to the back of CNNs as the loss function. Compared to the conventional cross-entropy loss of which the gradients decrease dramatically in the saturated area, our loss function, named FLoss, holds considerable gradients even when the activation approaches the target. Consequently, the FLoss can continuously force the network to produce polarized activations. Comprehensive benchmarks on several popular datasets show that FLoss outperforms the stateof-the-arts with a considerable margin. More specifically, due to the polarized predictions, our method is able to obtain high quality saliency maps without carefully tuning the optimal threshold, showing significant advantages in real world applications.", "title": "" }, { "docid": "852f745d3d5b63d8739439020674a509", "text": "Most of the countries evaluate their energy networks in terms of national security and define as critical infrastructure. Monitoring and controlling of these systems are generally provided by Industrial Control Systems (ICSs) and/or Supervisory Control and Data Acquisition (SCADA) systems. Therefore, this study focuses on the cyber-attack vectors on SCADA systems to research the threats and risks targeting them. For this purpose, TCP/IP based protocols used in SCADA systems have been determined and analyzed at first. Then, the most common cyber-attacks are handled systematically considering hardware-side threats, software-side ones and the threats for communication infrastructures. 
Finally, some suggestions are given.", "title": "" }, { "docid": "95689f439fababe920921ee419965b90", "text": "In traditional text clustering methods, documents are represented as \"bags of words\" without considering the semantic information of each document. For instance, if two documents use different collections of core words to represent the same topic, they may be falsely assigned to different clusters due to the lack of shared core words, although the core words they use are probably synonyms or semantically associated in other forms. The most common way to solve this problem is to enrich document representation with the background knowledge in an ontology. There are two major issues for this approach: (1) the coverage of the ontology is limited, even for WordNet or Mesh, (2) using ontology terms as replacement or additional features may cause information loss, or introduce noise. In this paper, we present a novel text clustering method to address these two issues by enriching document representation with Wikipedia concept and category information. We develop two approaches, exact match and relatedness-match, to map text documents to Wikipedia concepts, and further to Wikipedia categories. Then the text documents are clustered based on a similarity metric which combines document content information, concept information as well as category information. The experimental results using the proposed clustering framework on three datasets (20-newsgroup, TDT2, and LA Times) show that clustering performance improves significantly by enriching document representation with Wikipedia concepts and categories.", "title": "" }, { "docid": "5ba72505e19ded19685f43559868bfdf", "text": "In this paper, we present an optimally-modi#ed log-spectral amplitude (OM-LSA) speech estimator and a minima controlled recursive averaging (MCRA) noise estimation approach for robust speech enhancement. The spectral gain function, which minimizes the mean-square error of the log-spectra, is obtained as a weighted geometric mean of the hypothetical gains associated with the speech presence uncertainty. The noise estimate is given by averaging past spectral power values, using a smoothing parameter that is adjusted by the speech presence probability in subbands. We introduce two distinct speech presence probability functions, one for estimating the speech and one for controlling the adaptation of the noise spectrum. The former is based on the time–frequency distribution of the a priori signal-to-noise ratio. The latter is determined by the ratio between the local energy of the noisy signal and its minimum within a speci6ed time window. Objective and subjective evaluation under various environmental conditions con6rm the superiority of the OM-LSA and MCRA estimators. Excellent noise suppression is achieved, while retaining weak speech components and avoiding the musical residual noise phenomena. ? 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "9beeee852ce0d077720c212cf17be036", "text": "Spoofing speech detection aims to differentiate spoofing speech from natural speech. Frame-based features are usually used in most of previous works. Although multiple frames or dynamic features are used to form a super-vector to represent the temporal information, the time span covered by these features are not sufficient. Most of the systems failed to detect the non-vocoder or unit selection based spoofing attacks. 
In this work, we propose to use a temporal convolutional neural network (CNN) based classifier for spoofing speech detection. The temporal CNN first convolves the feature trajectories with a set of filters, then extract the maximum responses of these filters within a time window using a max-pooling layer. Due to the use of max-pooling, we can extract useful information from a long temporal span without concatenating a large number of neighbouring frames, as in feedforward deep neural network (DNN). Five types of feature are employed to access the performance of proposed classifier. Experimental results on ASVspoof 2015 corpus show that the temporal CNN based classifier is effective for synthetic speech detection. Specifically, the proposed method brings a significant performance boost for the unit selection based spoofing speech detection.", "title": "" }, { "docid": "4c2a936cf236009993e32faee549c268", "text": "In this paper, we proposed Discrete Radon Transform (DRT) technique for feature extraction of static signature recognition to identify forgeries. Median filter has been introduced for noise cancellation of handwritten signature. This paper describes static signature verification techniques where signature samples of each person was collected and cropped by automatic cropping system. Projection based global features are extracted like Horizontal, Vertical and combination of both the projections, these all are one dimensional feature vectors to recognize the handwritten static signature. The distance between two corresponding vectors can be measured with Dynamic Time Warping algorithm (DTW) and using only six genuine signatures samples of each person has been employed here in order to train our system. In the proposed system process time required for training our system for each person is between 1.5 to 4.2 seconds and requires less memory for storage. The optimal performance of the system was found using proposed technique for Combined projection features and it gives FAR of 5.60%, FRR of 8.49% and EER 7.60%, which illustrates such new approach to be quite effective and reliable.", "title": "" }, { "docid": "15dd0e2e238f46901ce1dca6e38dd973", "text": "Leukocyte adhesion to vascular endothelium contributes to vaso-occlusion and widespread organ damage in sickle cell disease (SCD). Previously, we found high expression of the adhesion molecules αMβ2 integrin and L-selectin in HbSS individuals with severe disease. Since membrane n-6 and n-3 polyunsaturated fatty acids modulate cell adhesion, inflammation, aggregation and vascular tone, we investigated the fatty acid composition of mononuclear cells (MNC) and platelets of HbSS patients in steady state (n=28) and racially matched, healthy HbAA controls with similar age and sex distribution living in the same environment (n=13). MNC phospholipids of the patients had lower levels of docosahexaenoic acid (DHA, p<0.01) and increased arachidonic acid (AA, p<0.005) relative to HbAA controls. Similarly, platelets from HbSS patients had less eicosapentaenoic acid (EPA, p<0.05) and more AA (p<0.05) in choline phosphoglycerides (CPG), with reduced DHA (p<0.05) in ethanolamine phosphoglycerides. Platelet CPG had lower DHA levels in SCD patients with complications compared to those without (p<0.05). Reduced cell content of EPA and DHA relative to AA favours the production of aggregatory and proinflammatory eicosanoids that activate leukocytes and platelets. 
This facilitates inflammation, leukocyte adhesion, platelet aggregation and vaso-occlusion in SCD.", "title": "" }, { "docid": "415e704d747a00447cc1ec8fa0ff2a3d", "text": "We propose a new method to detect when users express the intent to leave a service, also known as churn. While previous work focuses solely on social media, we show that this intent can be detected in chatbot conversations. As companies increasingly rely on chatbots, they need an overview of potentially churny users. To this end, we crowdsource and publish a dataset of churn intent expressions in chatbot interactions in German and English. We show that classifiers trained on social media data can detect the same intent in the context of chatbots. We introduce a classification architecture that outperforms existing work on churn intent detection in social media. Moreover, we show that, using bilingual word embeddings, a system trained on combined English and German data outperforms monolingual approaches. As the only existing dataset is in English, we crowdsource and publish a novel dataset of German tweets. We thus underline the universal aspect of the problem, as examples of churn intent in English help us identify churn in German tweets and chatbot conversations.", "title": "" }, { "docid": "ef6acb93b78d92c76f4f1429b2113ec8", "text": "Priority queues are essential for implementing various types of service disciplines on network devices. However, state-of-the-art priority queues can barely catch up with the OC192 line speed and the size of active priorities is limited to a few hundred KB which is far from the worst-case requirement. Our hybrid design stores most priorities sorted in the FIFO queue and they are used for dequeue only. Since the dequeue operation always takes the highest priorities from the head of the FIFO queue, it can be efficiently handled using caching in the SRAM and wide word accesses to the DRAM. Meanwhile, incoming priorities are stored temporarily in a small input heap. Once the input heap is full, it becomes creation heap and the priorities are then quickly dequeued from the creation heap and transformed into a sorted array to be stored in the FIFO queues. When these operations are in progress, previously emptied creation heap is connected to the input and continues to get incoming priorities. Swapping between input and creation heap sustains continuous operation of the system. Our results show that this hybrid design can meet the requirements of OC-3072 line speed and provide enough capacity queuing a large number of priorities in the worst case. Also, the required DRAM bandwidth and SRAM size (which is precious) are reasonable. Keywords-Network Processor; Priority Queue; FIFO Queue.", "title": "" }, { "docid": "795bede0ff85ce04e956cdc23f8ecb0a", "text": "Neuromorphic computing using post-CMOS technologies is gaining immense popularity due to its promising abilities to address the memory and power bottlenecks in von-Neumann computing systems. In this paper, we propose RESPARC - a reconfigurable and energy efficient architecture built-on Memristive Crossbar Arrays (MCA) for deep Spiking Neural Networks (SNNs). Prior works were primarily focused on device and circuit implementations of SNNs on crossbars. RESPARC advances this by proposing a complete system for SNN acceleration and its subsequent analysis. 
RESPARC utilizes the energy-efficiency of MCAs for inner-product computation and realizes a hierarchical reconfigurable design to incorporate the data-flow patterns in an SNN in a scalable fashion. We evaluate the proposed architecture on different SNNs ranging in complexity from 2k-230k neurons and 1.2M-5.5M synapses. Simulation results on these networks show that compared to the baseline digital CMOS architecture, RESPARC achieves 500x (15x) efficiency in energy benefits at 300x (60x) higher throughput for multi-layer perceptrons (deep convolutional networks). Furthermore, RESPARC is a technology-aware architecture that maps a given SNN topology to the most optimized MCA size for the given crossbar technology.", "title": "" } ]
scidocsrr
a9a67851f9645921c3323aafcd5942e1
Enhanced Security for Cloud Storage using File Encryption
[ { "docid": "88bf67ec7ff0cfa3f1dc6af12140d33b", "text": "Cloud computing is set of resources and services offered through the Internet. Cloud services are delivered from data centers located throughout the world. Cloud computing facilitates its consumers by providing virtual resources via internet. General example of cloud services is Google apps, provided by Google and Microsoft SharePoint. The rapid growth in field of “cloud computing” also increases severe security concerns. Security has remained a constant issue for Open Systems and internet, when we are talking about security cloud really suffers. Lack of security is the only hurdle in wide adoption of cloud computing. Cloud computing is surrounded by many security issues like securing data, and examining the utilization of cloud by the cloud computing vendors. The wide acceptance www has raised security risks along with the uncountable benefits, so is the case with cloud computing. The boom in cloud computing has brought lots of security challenges for the consumers and service providers. How the end users of cloud computing know that their information is not having any availability and security issues? Every one poses, Is their information secure? This study aims to identify the most vulnerable security threats in cloud computing, which will enable both end users and vendors to know about the key security threats associated with cloud computing. Our work will enable researchers and security professionals to know about users and vendors concerns and critical analysis about the different security models and tools proposed.", "title": "" } ]
[ { "docid": "d81b67d0a4129ac2e118c9babb59299e", "text": "Motivation\nA large number of newly sequenced proteins are generated by the next-generation sequencing technologies and the biochemical function assignment of the proteins is an important task. However, biological experiments are too expensive to characterize such a large number of protein sequences, thus protein function prediction is primarily done by computational modeling methods, such as profile Hidden Markov Model (pHMM) and k-mer based methods. Nevertheless, existing methods have some limitations; k-mer based methods are not accurate enough to assign protein functions and pHMM is not fast enough to handle large number of protein sequences from numerous genome projects. Therefore, a more accurate and faster protein function prediction method is needed.\n\n\nResults\nIn this paper, we introduce DeepFam, an alignment-free method that can extract functional information directly from sequences without the need of multiple sequence alignments. In extensive experiments using the Clusters of Orthologous Groups (COGs) and G protein-coupled receptor (GPCR) dataset, DeepFam achieved better performance in terms of accuracy and runtime for predicting functions of proteins compared to the state-of-the-art methods, both alignment-free and alignment-based methods. Additionally, we showed that DeepFam has a power of capturing conserved regions to model protein families. In fact, DeepFam was able to detect conserved regions documented in the Prosite database while predicting functions of proteins. Our deep learning method will be useful in characterizing functions of the ever increasing protein sequences.\n\n\nAvailability and implementation\nCodes are available at https://bhi-kimlab.github.io/DeepFam.", "title": "" }, { "docid": "6ebf60b36d9a13c5ae6ded91ee7d95fe", "text": "In this paper, a novel approach for Kannada, Telugu and Devanagari handwritten numerals recognition based on global and local structural features is proposed. Probabilistic Neural Network (PNN) Classifier is used to classify the Kannada, Telugu and Devanagari numerals separately. Algorithm is validated with Kannada, Telugu and Devanagari numerals dataset by setting various radial values of PNN classifier under different experimental setup. The experimental results obtained are encouraging and comparable with other methods found in literature survey. The novelty of the proposed method is free from thinning and size", "title": "" }, { "docid": "6f2162f883fce56eaa6bd8d0fbcedc0b", "text": "While data from Massive Open Online Courses (MOOCs) offers the potential to gain new insights into the ways in which online communities can contribute to student learning, much of the richness of the data trace is still yet to be mined. In particular, very little work has attempted fine-grained content analyses of the student interactions in MOOCs. Survey research indicates the importance of student goals and intentions in keeping them involved in a MOOC over time. Automated fine-grained content analyses offer the potential to detect and monitor evidence of student engagement and how it relates to other aspects of their behavior. Ultimately these indicators reflect their commitment to remaining in the course. As a methodological contribution, in this paper we investigate using computational linguistic models to measure learner motivation and cognitive engagement from the text of forum posts. 
We validate our techniques using survival models that evaluate the predictive validity of these variables in connection with attrition over time. We conduct this evaluation in three MOOCs focusing on very different types of learning materials. Prior work demonstrates that participation in the discussion forums at all is a strong indicator of student commitment. Our methodology allows us to differentiate better among these students, and to identify danger signs that a struggling student is in need of support within a population whose interaction with the course offers the opportunity for effective support to be administered. Theoretical and practical implications will be discussed.", "title": "" }, { "docid": "33c497748082b3c62fc1b5e8d5ab9d05", "text": "The prevention and treatment of malaria is heavily dependent on antimalarial drugs. However, beginning with the emergence of chloroquine (CQ)-resistant Plasmodium falciparum parasites 50 years ago, efforts to control the disease have been thwarted by failed or failing drugs. Mutations in the parasite’s ‘chloroquine resistance transporter’ (PfCRT) are the primary cause of CQ resistance. Furthermore, changes in PfCRT (and in several other transport proteins) are associated with decreases or increases in the parasite’s susceptibility to a number of other antimalarial drugs. Here, we review recent advances in our understanding of CQ resistance and discuss these in the broader context of the parasite’s susceptibilities to other quinolines and related drugs. We suggest that PfCRT can be viewed both as a ‘multidrug-resistance carrier’ and as a drug target, and that the quinoline-resistance mechanism is a potential ‘Achilles’ heel’ of the parasite. We examine a number of the antimalarial strategies currently undergoing development that are designed to exploit the resistance mechanism, including relatively simple measures, such as alternative CQ dosages, as well as new drugs that either circumvent the resistance mechanism or target it directly.", "title": "" }, { "docid": "58efd234d4ca9b10ccfc363db4c501d3", "text": "In order to understand the role of the medium osmolality on the metabolism of glumate-producing Corynebacterium glutamicum, effects of saline osmotic upshocks from 0.4 osnol. kg−1 to 2 osmol. kg−1 have been investigated on the growth kinetics and the intracellular content of the bacteria. Addition of a high concentration of NaCl after a few hours of batch culture results in a temporary interruption of the cellular growth. Cell growth resumes after about 1 h but at a specific rate that decreases with increasing medium osmolality. Investigation of the intracellular content showed, during the first 30 min following the shock, a rapid but transient influx of sodium ions. This was followed by a strong accumulation of proline, which rose from 5 to 110 mg/g dry weight at the end of the growth phase. A slight accumulation of intracellular glutamate from 60 to 75 mg/g dry weight was also observed. Accordingly, for Corynebacterium glutamicum an increased osmolality in the glutamate and proline synthesis during the growth phase.", "title": "" }, { "docid": "c9fc426722df72b247093779ad6e2c0e", "text": "Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. 
To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion, and maintain its stability with a torso motion. When the ground conditions and stability constraint are satisfied, it is desirable to select a walking pattern that requires small torque and velocity of the joint actuators. In this paper, we first formulate the constraints of the foot motion parameters. By varying the values of the constraint parameters, we can produce different types of foot motion to adapt to ground conditions. We then propose a method for formulating the problem of the smooth hip motion with the largest stability margin using only two parameters, and derive the hip trajectory by iterative computation. Finally, the correlation between the actuator specifications and the walking patterns is described through simulation studies, and the effectiveness of the proposed methods is confirmed by simulation examples and experimental results.", "title": "" }, { "docid": "641811eac0e8a078cf54130c35fd6511", "text": "Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations. However, traditional methods tend to ignore the correlations between labels. In order to capture the correlations between labels, the sequence-tosequence (Seq2Seq) model views the MLTC task as a sequence generation problem, which achieves excellent performance on this task. However, the Seq2Seq model is not suitable for the MLTC task in essence. The reason is that it requires humans to predefine the order of the output labels, while some of the output labels in the MLTC task are essentially an unordered set rather than an ordered sequence. This conflicts with the strict requirement of the Seq2Seq model for the label order. In this paper, we propose a novel sequence-toset framework utilizing deep reinforcement learning, which not only captures the correlations between labels, but also reduces the dependence on the label order. Extensive experimental results show that our proposed method outperforms the competitive baselines by a large margin.", "title": "" }, { "docid": "893942f986718d639aa46930124af679", "text": "In this work we consider the problem of controlling a team of microaerial vehicles moving quickly through a three-dimensional environment while maintaining a tight formation. The formation is specified by a shape matrix that prescribes the relative separations and bearings between the robots. Each robot plans its trajectory independently based on its local information of other robot plans and estimates of states of other robots in the team to maintain the desired shape. We explore the interaction between nonlinear decentralized controllers, the fourth-order dynamics of the individual robots, the time delays in the network, and the effects of communication failures on system performance. An experimental evaluation of our approach on a team of quadrotors suggests that suitable performance is maintained as the formation motions become increasingly aggressive and as communication degrades.", "title": "" }, { "docid": "ccddd7df2b5246c44d349bfb0aae499a", "text": "We consider stochastic multi-armed bandit problems with complex actions over a set of basic arms, where the decision maker plays a complex action rather than a basic arm in each round. 
The reward of the complex action is some function of the basic arms’ rewards, and the feedback observed may not necessarily be the reward per-arm. For instance, when the complex actions are subsets of the arms, we may only observe the maximum reward over the chosen subset. Thus, feedback across complex actions may be coupled due to the nature of the reward function. We prove a frequentist regret bound for Thompson sampling in a very general setting involving parameter, action and observation spaces and a likelihood function over them. The bound holds for discretely-supported priors over the parameter space without additional structural properties such as closed-form posteriors, conjugate prior structure or independence across arms. The regret bound scales logarithmically with time but, more importantly, with an improved constant that non-trivially captures the coupling across complex actions due to the structure of the rewards. As applications, we derive improved regret bounds for classes of complex bandit problems involving selecting subsets of arms, including the first nontrivial regret bounds for nonlinear MAX reward feedback from subsets.", "title": "" }, { "docid": "67fc5fffc5f032007ac89dda8d0f877c", "text": "Phishing scam is a well-known fraudulent activity in which victims are tricked to reveal their confidential information especially those related to financial information. There are various phishing schemes such as deceptive phishing, malware based phishing, DNS-based phishing and many more. Therefore in this paper, a systematic review analysis on existing works related with the phishing detection and response techniques together with apoptosis have been further investigated and evaluated. Furthermore, one case study to show the proof of concept how the phishing works is also discussed in this paper. This paper also discusses the challenges and the potential research for future work related with the integration of phishing detection model and response with apoptosis. This research paper also can be used as a reference and guidance for further study on phishing detection and response. Keywords—Phishing; apoptosis; phishing detection; phishing", "title": "" }, { "docid": "4f1070b988605290c1588918a716cef2", "text": "The aim of this paper was to predict the static bending modulus of elasticity (MOES) and modulus of rupture (MOR) of Scots pine (Pinus sylvestris L.) wood using three nondestructive techniques. The mean values of the dynamic modulus of elasticity based on flexural vibration (MOEF), longitudinal vibration (MOELV), and indirect ultrasonic (MOEUS) were 13.8, 22.3, and 30.9 % higher than the static modulus of elasticity (MOES), respectively. The reduction of this difference, taking into account the shear deflection effect in the output values for static bending modulus of elasticity, was also discussed in this study. The three dynamic moduli of elasticity correlated well with the static MOES and MOR; correlation coefficients ranged between 0.68 and 0.96. The correlation coefficients between the dynamic moduli and MOES were higher than those between the dynamic moduli and MOR. The highest correlation between the dynamic moduli and static bending properties was obtained by the flexural vibration technique in comparison with longitudinal vibration and indirect ultrasonic techniques. 
Results showed that there was no obvious relationship between the density and the acoustic wave velocity that was obtained from the longitudinal vibration and ultrasonic techniques.", "title": "" }, { "docid": "2c8c8511e1391d300bfd4b0abd5ecea4", "text": "In 2009, we reported on a new Intelligent Tutoring Systems (ITS) technology, example-tracing tutors, that can be built without programming using the Cognitive Tutor Authoring Tools (CTAT). Creating example-tracing tutors was shown to be 4–8 times as cost-effective as estimates for ITS development from the literature. Since 2009, CTAT and its associated learning management system, the Tutorshop, have been extended and have been used for both research and real-world instruction. As evidence that example-tracing tutors are an effective and mature ITS paradigm, CTAT-built tutors have been used by approximately 44,000 students and account for 40 % of the data sets in DataShop, a large open repository for educational technology data sets. We review 18 example-tracing tutors built since 2009, which have been shown to be effective in helping students learn in real educational settings, often with large pre/post effect sizes. These tutors support a variety of pedagogical approaches, beyond step-based problem solving, including collaborative learning, educational games, and guided invention activities. CTAT and other ITS authoring tools illustrate that non-programmer approaches to building ITS are viable and useful and will likely play a key role in making ITS widespread.", "title": "" }, { "docid": "212f128450a141b5b4c83c8c57d14677", "text": "Local Authority road networks commonly include roads with different functional characteristics and a variety of construction types, which require maintenance solutions tailored to their needs. Given this background, on local road network, pavement management is founded on the experience of the agency engineers and is often constrained by low budgets and a variety of environmental and external requirements. This paper forms part of a research work that investigates the use of digital techniques for obtaining field data in order to increase safety and reduce labour cost requirements using a semi-automated distress collection and measurement system. More specifically, a definition of a distress detection procedure is presented which aims at producing a result complying more closely to the distress identification manuals and protocols. The process comprises the following two steps: Automated pavement image collection. Images are collected using the high speed digital acquisition system of the Mobile Laboratory designed and implemented by the Department of Civil and Environmental Engineering of the University of Catania; Distress Detection. By way of the Pavement Distress Analyser (PDA), a specialised software, images are adjusted to eliminate their optical distortion. Cracks, potholes and patching are automatically detected and subsequently classified by means of an operator assisted approach. An intense, experimental field survey has made it possible to establish that the procedure obtains more consistent distress measurements than a manual survey thus increasing its repeatability, reducing costs and increasing safety during the survey. 
Moreover, the pilot study made it possible to validate results coming from a survey carried out under normal traffic conditions, concluding that it is feasible to integrate the procedure into a roadway pavement management system.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "cc2cd5868ca8b2e9713e5659c61747c5", "text": "Phylogenetic analysis is sometimes regarded as being an intimidating, complex process that requires expertise and years of experience. In fact, it is a fairly straightforward process that can be learned quickly and applied effectively. This Protocol describes the several steps required to produce a phylogenetic tree from molecular data for novices. In the example illustrated here, the program MEGA is used to implement all those steps, thereby eliminating the need to learn several programs, and to deal with multiple file formats from one step to another (Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S. 2011. MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol. 28:2731-2739). The first step, identification of a set of homologous sequences and downloading those sequences, is implemented by MEGA's own browser built on top of the Google Chrome toolkit. For the second step, alignment of those sequences, MEGA offers two different algorithms: ClustalW and MUSCLE. For the third step, construction of a phylogenetic tree from the aligned sequences, MEGA offers many different methods. Here we illustrate the maximum likelihood method, beginning with MEGA's Models feature, which permits selecting the most suitable substitution model. Finally, MEGA provides a powerful and flexible interface for the final step, actually drawing the tree for publication. Here a step-by-step protocol is presented in sufficient detail to allow a novice to start with a sequence of interest and to build a publication-quality tree illustrating the evolution of an appropriate set of homologs of that sequence. MEGA is available for use on PCs and Macs from www.megasoftware.net.", "title": "" }, { "docid": "bf7305ceee06b3672825032b78c5e22f", "text": "Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. 
Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.", "title": "" }, { "docid": "dea235c392f876cae8004166209ace3d", "text": "Vehicular ad hoc networking is an emerging technology for future on-the-road communications. Due to the virtue of vehicle-to-vehicle and vehicle-to-infrastructure communications, vehicular ad hoc networks (VANETs) are expected to enable a plethora of communication-based automotive applications including diverse in-vehicle infotainment applications and road safety services. Even though vehicles are organized mostly in an ad hoc manner in the network topology, directly applying the existing communication approaches designed for traditional mobile ad hoc networks to large-scale VANETs with fast-moving vehicles can be ineffective and inefficient. To achieve success in a vehicular environment, VANET-specific communication solutions are imperative. In this paper, we provide a comprehensive overview of various radio channel access protocols and resource management approaches, and discuss their suitability for infotainment and safety service support in VANETs. Further, we present recent research activities and related projects on vehicular communications. Potential challenges and open research issues are also", "title": "" }, { "docid": "014f1369be6a57fb9f6e2f642b3a4926", "text": "VNC is platform-independent – a VNC viewer on one operating system may connect to a VNC server on the same or any other operating system. There are clients and servers for many GUIbased operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.", "title": "" }, { "docid": "a76be3ebe7b169f3669243271d2474a6", "text": "Sophisticated video processing effects require both image and geometry information. We explore the possibility to augment a video camera with a recent infrared time-of-flight depth camera, to capture high-resolution RGB and low-resolution, noisy depth at video frame rates. To turn such a setup into a practical RGBZ video camera, we develop efficient data filtering techniques that are tailored to the noise characteristics of IR depth cameras. We first remove typical artefacts in the RGBZ data and then apply an efficient spatiotemporal denoising and upsampling scheme. This allows us to record temporally coherent RGBZ videos at interactive frame rates and to use them to render a variety of effects in unprecedented quality. We show effects such as video relighting, geometry-based abstraction and stylisation, background segmentation and rendering in stereoscopic 3D.", "title": "" }, { "docid": "488b0adfe43fc4dbd9412d57284fc856", "text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. 
∗This work was supported by the Advanced Research Project Agency and the Office of Naval Research under Arpa Order 8888, Contract N00014-92-C-0153.", "title": "" } ]
scidocsrr
18c69cedec2954bdb985f91d05bddac6
DCT-Based Iris Recognition
[ { "docid": "b125649628d46871b2212c61e355ec43", "text": "AbstructA method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte “iris code.” Statistical decision theory generates identification decisions from ExclusiveOR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about lo”’.", "title": "" } ]
[ { "docid": "599fa9f883c12a57a1dfa9cdb71e94c7", "text": "We propose a new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication. In the proposed scheme, the cloud verifies the authenticity of the series without knowing the user's identity before storing data. Our scheme also has the added feature of access control in which only valid users are able to decrypt the stored information. The scheme prevents replay attacks and supports creation, modification, and reading data stored in the cloud. We also address user revocation. Moreover, our authentication and access control scheme is decentralized and robust, unlike other access control schemes designed for clouds which are centralized. The communication, computation, and storage overheads are comparable to centralized approaches.", "title": "" }, { "docid": "8ceb8a3f659b18e5d95da60c10ca7ae3", "text": "In recent years the power systems research community has seen an explosion of work applying operations research techniques to challenging power network optimization problems. Regardless of the application under consideration, all of these works rely on power system test cases for evaluation and validation. However, many of the well established power system test cases were developed as far back as the 1960s with the aim of testing AC power flow algorithms. It is unclear if these power flow test cases are suitable for power system optimization studies. This report surveys all of the publicly available AC transmission system test cases, to the best of our knowledge, and assess their suitability for optimization tasks. It finds that many of the traditional test cases are missing key network operation constraints, such as line thermal limits and generator capability curves. To incorporate these missing constraints, data driven models are developed from a variety of publicly available data sources. The resulting extended test cases form a compressive archive, NESTA, for the evaluation and validation of power system optimization algorithms.", "title": "" }, { "docid": "28fd803428e8f40a4627e05a9464e97b", "text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. 
As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "title": "" }, { "docid": "617ec3be557749e0646ad7092a1afcb6", "text": "The difficulty of directly measuring gene flow has lead to the common use of indirect measures extrapolated from genetic frequency data. These measures are variants of FST, a standardized measure of the genetic variance among populations, and are used to solve for Nm, the number of migrants successfully entering a population per generation. Unfortunately, the mathematical model underlying this translation makes many biologically unrealistic assumptions; real populations are very likely to violate these assumptions, such that there is often limited quantitative information to be gained about dispersal from using gene frequency data. While studies of genetic structure per se are often worthwhile, and FST is an excellent measure of the extent of this population structure, it is rare that FST can be translated into an accurate estimate of Nm.", "title": "" }, { "docid": "6720ae7a531d24018bdd1d3d1c7eb28b", "text": "This study investigated the effects of mobile phone text-messaging method (predictive and multi-press) and experience (in texters and non-texters) on children’s textism use and understanding. It also examined popular claims that the use of text-message abbreviations, or textese spelling, is associated with poor literacy skills. A sample of 86 children aged 10 to 12 years read and wrote text messages in conventional English and in textese, and completed tests of spelling, reading, and non-word reading. Children took significantly longer, and made more errors, when reading messages written in textese than in conventional English. Further, they were no faster at writing messages in textese than in conventional English, regardless of texting method or experience. Predictive texters were faster at reading and writing messages than multi-press texters, and texting experience increased writing, but not reading, speed. General spelling and reading scores did not differ significantly with usual texting method. However, better literacy skills were associated with greater textese reading speed and accuracy. These findings add to the growing evidence for a positive relationship between texting proficiency and traditional literacy skills. Children’s text-messaging and literacy skills 3 The advent of mobile phones, and of text-messaging in particular, has changed the way that people communicate, and adolescents and children seem especially drawn to such technology. Australian surveys have revealed that 19% of 8to 11-year-olds and 76% of 12to 14-year-olds have their own mobile phone (Cupitt, 2008), and that 69% of mobile phone users aged 14 years and over use text-messaging (Australian Government, 2008), with 90% of children in Grades 7-12 sending a reported average of 11 texts per week (ABS, 2008). Text-messaging has also been the catalyst for a new writing style: textese. Described as a hybrid of spoken and written English (Plester & Wood, 2009), textese is a largely soundbased, or phonological, form of spelling that can reduce the time and cost of texting (Leung, 2007). 
Common abbreviations, or textisms, include letter and number homophones (c for see, 2 for to), contractions (txt for text), and non-conventional spellings (skool for school) (Plester, Wood, & Joshi, 2009; Thurlow, 2003). Estimates of the proportion of textisms that children use in their messages range from 21-47% (increasing with age) in naturalistic messages (Wood, Plester, & Bowyer, 2009), to 34% for messages elicited by a given scenario (Plester et al., 2009), to 50-58% for written messages that children ‘translated’ to and from textese (Plester, Wood, & Bell, 2008). One aim of the current study was to examine the efficiency of using textese for both the message writer and the reader, in order to understand the reasons behind (Australian) children’s use of textisms. The spread of textese has been attributed to texters’ desire to overcome the confines of the alphanumeric mobile phone keypad (Crystal, 2008). Since several letters are assigned to each number, the multi-press style of texting requires the somewhat laborious pressing of the same button one to four times to type each letter (Taylor & Vincent, 2005). The use of textese thus has obvious savings for multi-press texters, of both time and screen-space (as message character count cannot exceed 160). However, there is evidence, discussed below, that reading textese can be relatively slow and difficult for the message recipient, compared to Children’s text-messaging and literacy skills 4 reading conventional English. Since the use of textese is now widespread, it is important to examine the potential advantages and disadvantages that this form of writing may have for message senders and recipients, especially children, whose knowledge of conventional English spelling is still developing. To test the potential advantages of using textese for multi-press texters, Neville (2003) examined the speed and accuracy of textese versus conventional English in writing and reading text messages. British girls aged 11-16 years were dictated two short passages to type into a mobile phone: one using conventional English spelling, and the other “as if writing to a friend”. They also read two messages aloud from the mobile phone, one in conventional English, and the other in textese. The proportion of textisms produced is not reported, but no differences in textese use were observed between texters and non-texters. Writing time was significantly faster for textese than conventional English messages, with greater use of textisms significantly correlated with faster message typing times. However, participants were significantly faster at reading messages written in conventional English than in textese, regardless of their usual texting frequency. Kemp (2010) largely followed Neville’s (2003) design, but with 61 Australian undergraduates (mean age 22 years), all regular texters. These adults, too, were significantly faster at writing, but slower at reading, messages written in textese than in conventional English, regardless of their usual messaging frequency. Further, adults also made significantly more reading errors for messages written in textese than conventional English. These findings converge on the important conclusion that while the use of textisms makes writing more efficient for the message sender, it costs the receiver more time to read it. However, both Neville (2003) and Kemp (2010) examined only multi-press method texting, and not the predictive texting method now also available. 
Predictive texting requires only a single key-press per letter, and a dictionary-based system suggests one or more likely words Children’s text-messaging and literacy skills 5 based on the combinations entered (Taylor & Vincent, 2005). Textese may be used less by predictive texters than multi-press texters for two reasons. Firstly, predictive texting requires fewer key-presses than multi-press texting, which reduces the need to save time by taking linguistic short-cuts. Secondly, the dictionary-based predictive system makes it more difficult to type textisms that are not pre-programmed into the dictionary. Predictive texting is becoming increasingly popular, with recent studies reporting that 88% of Australian adults (Kemp, in press), 79% of Australian 13to 15-year-olds (De Jonge & Kemp, in press) and 55% of British 10to 12-year-olds (Plester et al., 2009) now use this method. Another aim of this study was thus to compare the reading and writing of textese and conventional English messages in children using their typical input method: predictive or multi-press texting, as well as in children who do not normally text. Finally, this study sought to investigate the popular assumption that exposure to unconventional word spellings might compromise children’s conventional literacy skills (e.g., Huang, 2008; Sutherland, 2002), with media articles revealing widespread disapproval of this communication style (Thurlow, 2006). In contrast, some authors have suggested that the use of textisms might actually improve children’s literacy skills (e.g., Crystal, 2008). Many textisms commonly used by children rely on the ability to distinguish, blend, and/or delete letter sounds (Plester et al., 2008, 2009). Practice at reading and creating textisms may therefore lead to improved phonological awareness (Crystal, 2008), which consistently predicts both reading and spelling prowess (e.g., Bradley & Bryant, 1983; Lundberg, Frost, & Petersen, 1988). Alternatively, children who use more textisms may do so because they have better phonological awareness, or poorer spellers may be drawn to using textisms to mask weak spelling ability (e.g., Sutherland, 2002). Thus, studying children’s textism use can provide further information on the links between the component skills that constitute both conventional and alternative, including textism-based, literacy. Children’s text-messaging and literacy skills 6 There is evidence for a positive link between the use of textisms and literacy skills in preteen children. Plester et al. (2008) asked 10to 12-year-old British children to translate messages from standard English to textese, and vice versa, with pen and paper. They found a significant positive correlation between textese use and verbal reasoning scores (Study 1) and spelling scores (Study 2). Plester et al. (2009) elicited text messages from a similar group of children by asking them to write messages in response to a given scenario. Again, textism use was significantly positively associated with word reading ability and phonological awareness scores (although not with spelling scores). Neville (2003) found that the number of textisms written, and the number read accurately, as well as the speed with which both conventional and textese messages were read and written, all correlated significantly with general spelling skill in 11to 16-year-old girls. The cross-sectional nature of these studies, and of the current study, means that causal relationships cannot be firmly established. However, Wood et al. 
(2009) report on a longitudinal study in which 8to 12-year-old children’s use of textese at the beginning of the school year predicted their skills in reading ability and phonological awareness at the end of the year, even after controlling for verbal IQ. These results provide the first support for the idea that textism use is driving the development of literacy skills, and thus that this use of technology can improve learning in the area of language and literacy. Taken together, these findings also provide important evidence against popular media claims that the use of textese is harming children’s traditional literacy skills. No similar research has yet been published with children outside the UK. The aim of the current study was thus to examine the speed and proficiency of textese use in Australian 10to 12-year-olds and, for the first time, to compare the r", "title": "" }, { "docid": "280e83986138daf0237e7502747b8a50", "text": "E-government adoption is the focus of many research studies. However, few studies have compared the adoption factors to identify the most salient predictors of e-government use. This study compares popular adoption constructs to identify the most influential. A survey was administered to elicit citizen perceptions of e-government services. The results of stepwise regression indicate perceived usefulness, trust of the internet, previous use of an e-government service and perceived ease of use all have a significant impact on one’s intention to use an e-government service. The implications for research and practice are discussed below.", "title": "" }, { "docid": "03fe04ade32c6c3112fa0f1a74dceaac", "text": "In this paper, we demonstrate and evaluate a method to perform real-time object detection on-board a UAV using the state of the art YOLOv2 object detection algorithm running on an NVIDIA Jetson TX2, an GPU platform targeted at power constrained mobile applications that use neural networks under the hood. This, as a result of comparing several cutting edge object detection algorithms. Multiple evaluations we present provide insights that help choose the optimal object detection configuration given certain frame rate and detection accuracy requirements. We propose how this setup running on-board a UAV can be used to process a video feed during emergencies in real-time, and feed a decision support warning system using the generated detections.", "title": "" }, { "docid": "f49364d463c3225e52e22c8c043e9590", "text": "Palpation is a physical examination technique where objects, e.g., organs or body parts, are touched with fingers to determine their size, shape, consistency and location. Many medical procedures utilize palpation as a supplementary interaction technique and it can be therefore considered as an essential basic method. However, palpation is mostly neglected in medical training simulators, with the exception of very specialized simulators that solely focus on palpation, e.g., for manual cancer detection. In this article we propose a novel approach to enable haptic palpation interaction for virtual reality-based medical simulators. The main contribution is an extensive user study conducted with a large group of medical experts. To provide a plausible simulation framework for this user study, we contribute a novel and detailed interaction algorithm for palpation with tissue dragging, which utilizes a multi-object force algorithm to support multiple layers of anatomy and a pulse force algorithm for simulation of an arterial pulse. 
Furthermore, we propose a modification for an off-the-shelf haptic device by adding a lightweight palpation pad to support a more realistic finger grip configuration for palpation tasks. The user study itself has been conducted on a medical training simulator prototype with a specific procedure from regional anesthesia, which strongly depends on palpation. The prototype utilizes a co-rotational finite-element approach for soft tissue simulation and provides bimanual interaction by combining the aforementioned techniques with needle insertion for the other hand. The results of the user study suggest reasonable face validity of the simulator prototype and in particular validate medical plausibility of the proposed palpation interaction algorithm.", "title": "" }, { "docid": "5d4a82d152a05f78ddaeced84b3899b5", "text": "This paper presents a technique for smoothing polygonal surface meshes that avoids the well-known problem of deformation and shrinkage caused by many smoothing methods, like e.g. the Laplacian algorithm. The basic idea is to push the vertices of the smoothed mesh back towards their previous locations. This technique can be also used in order to smooth unstructured point sets, by reconstructing a surface mesh to which the smoothing technique is applied. The key observation is that a surface mesh which is not necessarily topologically correct, but which can efficiently be reconstructed, is sufficient for that purpose.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8f73a521d7703fa00bbaf7b68e470c55", "text": "Purpose – The purpose of this paper is to introduce the concept of strategic integration of knowledge management (KM ) and customer relationship management (CRM). The integration is a strategic issue that has strong ramifications in the long-term competitiveness of organizations. It is not limited to CRM; the concept can also be applied to supply chain management (SCM), product development management (PDM), eterprise resource planning (ERP) and retail network management (RNM) that offer different perspectives into knowledge management adoption. Design/methodology/approach – Through literature review and establishing new perspectives with examples, the components of knowledge management, customer relationship management, and strategic planning are amalgamated. Findings – Findings include crucial details in the various components of knowledge management, customer relationship management, and strategic planning, i.e. strategic planning process, value formula, intellectual capital measure, different levels of CRM and their core competencies. Practical implications – Although the strategic integration of knowledge management and customer relationship management is highly conceptual, a case example has been provided where the concept is applied. The same concept could also be applied to other industries that focus on customer service. Originality/value – The concept of strategic integration of knowledge management and customer relationship management is new. 
There are other areas, yet to be explored in terms of additional integration such as SCM, PDM, ERP, and RNM. The concept of integration would be useful for future research as well as for KM and CRM practitioners.", "title": "" }, { "docid": "9e4bb8ced136a3e09f93d319a87b1db7", "text": "Requirements are the basis upon which software architecture lies. As a consequence they should be expressed as precisely as possible in order to propose the best compromise between stakeholder needs and engineering constraints.\n While some measurements such as frame rate or latency are a widely known mean of expressing requirements in the 3D community, they often are loosely defined. This leads to software engineering decisions which exclude some of the most promising options.\n This paper proposes to adapt a non-functional requirements expression template used in general software architecture to the specific case of 3D based systems engineering. It shows that in the process some interesting proposals appear as a straightforward consequence of the better definition of the system to be built.", "title": "" }, { "docid": "c28dc261ddc770a6655eb1dbc528dd3b", "text": "Software applications are no longer stand-alone systems. They are increasingly the result of integrating heterogeneous collections of components, both executable and data, possibly dispersed over a computer network. Different components can be provided by different producers and they can be part of different systems at the same time. Moreover, components can change rapidly and independently, making it difficult to manage the whole system in a consistent way. Under these circumstances, a crucial step of the software life cycle is deployment—that is, the activities related to the release, installation, activation, deactivation, update, and removal of components, as well as whole systems. This paper presents a framework for characterizing technologies that are intended to support software deployment. The framework highlights four primary factors concerning the technologies: process coverage; process changeability; interprocess coordination; and site, product, and deployment policy abstraction. A variety of existing technologies are surveyed and assessed against the framework. Finally, we discuss promising research directions in software deployment. This work was supported in part by the Air Force Material Command, Rome Laboratory, and the Defense Advanced Research Projects Agency under Contract Number F30602-94-C-0253. The content of the information does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.", "title": "" }, { "docid": "cfd01fa97733c0df6e07b3b7ddebb4e2", "text": "Radio frequency identification (RFID) is an emerging technology in the building industry. Many researchers have demonstrated how to enhance material control or production management with RFID. However, there is a lack of integrated understanding of lifecycle management. This paper develops and demonstrates a framework to Information Lifecycle Management (ILM) with RFID for material control. The ILM framework includes key RFID checkpoints and material types to facilitate material control on construction sites. In addition, this paper presents a context-aware scenario to examine multiple on-site context and RFID parameters. 
From tagging nodes at the factory to reading nodes at each lifecycle stage, this paper demonstrates how to manage complex construction materials with RFID and how to construct integrated information flows at different lifecycle stages. To validate key material types and the scenario, the study reports on two on-site trials: read distance test and on-site simulation. Finally, the research provides discussion and recommended approaches to implementing ILM. The results show that the ILM framework has the potential for a variety of stakeholders to adopt RFID in the building industry. This paper provides the understanding about the effectiveness of ILM with RFID for material control, which can serve as a base for adopting other IT technologies in the building industry. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "358e4c55233f3837cf95b8c269447cd2", "text": "In this correspondence, the construction of low-density parity-check (LDPC) codes from circulant permutation matrices is investigated. It is shown that such codes cannot have a Tanner graph representation with girth larger than 12, and a relatively mild necessary and sufficient condition for the code to have a girth of 6, 8,10, or 12 is derived. These results suggest that families of LDPC codes with such girth values are relatively easy to obtain and, consequently, additional parameters such as the minimum distance or the number of redundant check sums should be considered. To this end, a necessary condition for the codes investigated to reach their maximum possible minimum Hamming distance is proposed.", "title": "" }, { "docid": "b31286a1ed91cffcfab7cb5e17392fb9", "text": "This paper presents the use of frequency modulation as a spread spectrum technique to reduce conducted electromagnetic interference (EMI) in the A frequency band (9-150 kHz) caused by resonant inverters used in induction heating home appliances. For sinusoidal, triangular, and sawtooth modulation profiles, the influence of peak period deviation in EMI reduction and in the power delivered to the load is analyzed. A digital circuit that generates the best of the analyzed modulation profiles is implemented in a field programmable gate array. The design is modeled in a very-high-speed integrated circuits hardware description language (VHDL). The digital circuit, the power converter, and the spectrum analyzer are simulated all together using a mixed-signal simulation tool to verify the functionality of the VHDL description. The spectrum analyzer is modeled in VHDL-analog and mixed-signal extension language (VHDL-AMS) and takes into account the resolution bandwidth stipulated by the EMI measurement standard. Finally, the simulations are experimentally verified on a 3.5 kW resonant inverter operating at 35 kHz.", "title": "" }, { "docid": "198b084248ea03fb1398df036db800bf", "text": "Assistive technology (AT) is defined in this paper as ‘any device or system that allows an individual to perform a task that they would otherwise be unable to do, or increases the ease and safety with which the task can be performed’ (Cowan and Turner-Smith 1999). Its importance in contributing to older people’s independence and autonomy is increasingly recognised, but there has been little research into the viability of extensive installations of AT. This paper focuses on the acceptability of AT to older people, and reports one component of a multidisciplinary research project that examined the feasibility, acceptability, costs and outcomes of introducing AT into their homes. 
Sixty-seven people aged 70 or more years were interviewed in-depth during 2001 to find out about their use and experience of a wide range of assistive technologies. The findings suggest a complex model of acceptability, in which a ‘felt need’ for assistance combines with ‘product quality’. The paper concludes by considering the tensions that may arise in the delivery of acceptable assistive technology.", "title": "" }, { "docid": "eb31d3d6264e3a6aba0753b5ba14f572", "text": "Using aggregate product search data from Amazon.com, we jointly estimate consumer information search and online demand for consumer durable goods. To estimate the demand and search primitives, we introduce an optimal sequential search process into a model of choice and treat the observed market-level product search data as aggregations of individual-level optimal search sequences. The model builds on the dynamic programming framework by Weitzman (1979) and combines it with a choice model. It can accommodate highly complex demand patterns at the market level. At the individual level, the model has a number of attractive properties in estimation, including closed-form expressions for the probability distribution of alternative sets of searched goods and breaking the curse of dimensionality. Using numerical experiments, we verify the model's ability to identify the heterogeneous consumer tastes and search costs from product search data. Empirically, the model is applied to the online market for camcorders and is used to answer manufacturer questions about market structure and competition, and to address policy maker issues about the effect of selectively lowered search costs on consumer surplus outcomes. We find that consumer search for camcorders at Amazon.com is typically limited to little over 10 choice options, and that this affects the estimates of own and cross elasticities. In a policy simulation, we also find that the vast majority of the households benefit from Amazon.com's product recommendations via lower search costs.", "title": "" }, { "docid": "cd8bd76ecebbd939400b4724499f7592", "text": "Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so often leverages RGB large datasets, by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching bottom layers, which is key to learn modality-specific features. In contrast, we focus on the bottom layers, and propose an alternative strategy to learn depth features combining local weakly supervised training from patches followed by global fine tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them in a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth only and combined RGB-D data.", "title": "" } ]
scidocsrr
04cace65c6db196e88bc58c5de52dce4
Dissociating valence of outcome from behavioral control in human orbital and ventral prefrontal cortices.
[ { "docid": "e45ad997b5a4c7f1ed52c30f4156cd81", "text": "The somatic marker hypothesis provides a systems-level neuroanatomical and cognitive framework for decision making and the influence on it by emotion. The key idea of this hypothesis is that decision making is a process that is influenced by marker signals that arise in bioregulatory processes, including those that express themselves in emotions and feelings. This influence can occur at multiple levels of operation, some of which occur consciously and some of which occur non-consciously. Here we review studies that confirm various predictions from the hypothesis. The orbitofrontal cortex represents one critical structure in a neural system subserving decision making. Decision making is not mediated by the orbitofrontal cortex alone, but arises from large-scale systems that include other cortical and subcortical components. Such structures include the amygdala, the somatosensory/insular cortices and the peripheral nervous system. Here we focus only on the role of the orbitofrontal cortex in decision making and emotional processing, and the relationship between emotion, decision making and other cognitive functions of the frontal lobe, namely working memory.", "title": "" } ]
[ { "docid": "ce24b783f2157fdb4365b60aa2e6163a", "text": "Geosciences is a field of great societal relevance that requires solutions to several urgent problems facing our humanity and the planet. As geosciences enters the era of big data, machine learning (ML)— that has been widely successful in commercial domains—offers immense potential to contribute to problems in geosciences. However, problems in geosciences have several unique challenges that are seldom found in traditional applications, requiring novel problem formulations and methodologies in machine learning. This article introduces researchers in the machine learning (ML) community to these challenges offered by geoscience problems and the opportunities that exist for advancing both machine learning and geosciences. We first highlight typical sources of geoscience data and describe their properties that make it challenging to use traditional machine learning techniques. We then describe some of the common categories of geoscience problems where machine learning can play a role, and discuss some of the existing efforts and promising directions for methodological development in machine learning. We conclude by discussing some of the emerging research themes in machine learning that are applicable across all problems in the geosciences, and the importance of a deep collaboration between machine learning and geosciences for synergistic advancements in both disciplines.", "title": "" }, { "docid": "409fb707fb6e2c54038eb0b50af1160f", "text": "This paper describes an approach to assessing semantic annotation activities based on formal concept analysis (FCA). In this approach, annotators use taxonomical ontologies created by domain experts to annotate digital resources. Then, using FCA, domain experts are provided with concept lattices that graphically display how their ontologies were used during the semantic annotation process. In consequence, they can advise annotators on how to better use the ontologies, as well as how to refine these ontologies to better suit the needs of the semantic annotators. To illustrate the approach, we describe its implementation in @note, a Rich Internet Application (RIA) for the collaborative annotation of digitized literary texts, we exemplify its use with a case study, and we provide some evaluation results using the method. The enormous efforts to digitize physical resources (documents, books, museum exhibits, etc.), along with recent advances in information and communication technologies, have democratized access to a cultural, scientific and academic heritage previously available to only a few. Likewise, the current trend is to produce new resources in a digital format (e.g., in the context of social networks), which entails an in-depth paradigm shift in almost all the humanistic, social, scientific and technological fields. In particular, the field of the humanities is one which is going through a significant transformation as a result of these digitalization efforts and the paradigm shift associated with the digital age. Indeed, we are witnessing the emergence of a whole host of disciplines, those of Digital Humanities (Berry, 2012), which are closely dependent on the production and proper organization of digital collections. As a result of the undoubted importance of digital collections in modern society, the search for effective and efficient methods to carry out the production, preservation and enhancement of such digital collections has become a key challenge in modern society (Calhoun, 2013). 
In particular, the annotation of resources with metadata that enables their proper cataloging, search, retrieval and use in different application scenarios is one of the key elements to ensuring the profitability of these collections of digital objects. While the cataloging and retrieval of resources (whether digital or non-digital) have been the object of study in library sciences for decades (Calhoun, 2013), modern applications require annotating resources in semantically richer and more flexible ways, in many cases allowing multiple alternative annotations in the same collection. In consequence, the tendency is to introduce the use of ontology-based semantic technologies, in …", "title": "" }, { "docid": "2f50d412c0ee47d66718cb734bc25e1b", "text": "Nowadays, a big part of people rely on available content in social media in their decisions (e.g., reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for different interests. Identifying these spammers and the spam content is a hot topic of research, and although a considerable number of studies have been done recently toward this end, but so far the methodologies put forth still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this paper, we propose a novel framework, named NetSpam, which utilizes spam features for modeling review data sets as heterogeneous information networks to map spam detection procedure into a classification problem in such networks. Using the importance of spam features helps us to obtain better results in terms of different metrics experimented on real-world review data sets from Yelp and Amazon Web sites. The results show that NetSpam outperforms the existing methods and among four categories of features, including review-behavioral, user-behavioral, review-linguistic, and user-linguistic, the first type of features performs better than the other categories.", "title": "" }, { "docid": "4408d485de63034cb2225ee7aa9e3afe", "text": "We present the characterization of dry spiked biopotential electrodes and test their suitability to be used in anesthesia monitoring systems based on the measurement of electroencephalographic signals. The spiked electrode consists of an array of microneedles penetrating the outer skin layers. We found a significant dependency of the electrode-skin-electrode impedance (ESEI) on the electrode size (i.e., the number of spikes) and the coating material of the spikes. Electrodes larger than 3×3 mm² coated with Ag-AgCl have sufficiently low ESEI to be well suited for electroencephalograph (EEG) recordings. The maximum measured ESEI was 4.24 kΩ and 87 kΩ, at 1 kHz and 0.6 Hz, respectively. The minimum ESEI was 0.65 kΩ and 16 kΩ, at the same frequencies. The ESEI of spiked electrodes is stable over an extended period of time. The arithmetic mean of the generated DC offset voltage is 11.8 mV immediately after application on the skin and 9.8 mV after 20-30 min. A spectral study of the generated potential difference revealed that the AC part was unstable at frequencies below approximately 0.8 Hz. Thus, the signal does not interfere with a number of clinical applications using real-time EEG. Comparing raw EEG recordings of the spiked electrode with commercial Zipprep electrodes showed that both signals were similar. 
Due to the mechanical strength of the silicon microneedles and the fact that neither skin preparation nor electrolytic gel is required, use of the spiked electrode is convenient. The spiked electrode is very comfortable for the patient.", "title": "" }, { "docid": "d3a18f5ad29f2eddd7eb32c561389212", "text": "Interpretation of magnetic resonance angiography (MRA) is problematic due to complexities of vascular shape and to artifacts such as the partial volume effect. The authors present new methods to assist in the interpretation of MRA. These include methods for detection of vessel paths and for determination of branching patterns of vascular trees. They are based on the ordered region growing (ORG) algorithm that represents the image as an acyclic graph, which can be reduced to a skeleton by specifying vessel endpoints or by a pruning process. Ambiguities in the vessel branching due to vessel overlap are effectively resolved by heuristic methods that incorporate a priori knowledge of bifurcation spacing. Vessel paths are detected at interactive speeds on a 500-MHz processor using vessel endpoints. These methods apply best to smaller vessels where the image intensity peaks at the center of the lumen which, for the abdominal MRA, includes vessels whose diameter is less than 1 cm.", "title": "" }, { "docid": "68ebf8ec7a1e2908a75d0291435d8fea", "text": "Architectures of Recurrent Neural Networks (RNN) recently become a very popular choice for Spoken Language Understanding (SLU) problems; however, they represent a big family of different architectures that can furthermore be combined to form more complex neural networks. In this work, we compare different recurrent networks, such as simple Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks, Gated Memory Units (GRU) and their bidirectional versions, on the popular ATIS dataset and on MEDIA, a more complex French dataset. Additionally, we propose a novel method where information about the presence of relevant word classes in the dialog history is combined with a bidirectional GRU, and we show that combining relevant word classes from the dialog history improves the performance over recurrent networks that work by solely analyzing the current sentence.", "title": "" }, { "docid": "4497a102224acacf664f916dbd44b164", "text": "This article examines the relationship between electronic participation (e-participation) and trust in local government by focusing on five dimensions of the e-participation process: (1) satisfaction with e-participation applications, (2) satisfaction with government responsiveness to e-participants, (3) e-participants’ development through the participation, (4) perceived influence on decision making, and (5) assessment of government transparency. Using data from the 2009 E-Participation Survey in Seoul Metropolitan Government, this article finds that e-participants’ satisfaction with e-participation applications is directly associated with their development and their assessment of government transparency. The findings reveal that e-participants’ satisfaction with government responsiveness is positively associated with their perceptions of influencing government decision making. Furthermore, there is a positive association between e-participants’ perception of influencing government decision making and their assessment of government transparency. 
Finally, the article finds that there is a positive association between e-participants’ assessment of government transparency and their trust in the local government providing the e-participation program.", "title": "" }, { "docid": "60fa128f4673046b971fe6a86f271306", "text": "Intel has introduced a hardware-based trusted execution environment, Intel Software Guard Extensions (SGX), that provides a secure, isolated execution environment, or enclave, for a user program without trusting any underlying software (e.g., an operating system) or firmware. Researchers have demonstrated that SGX is vulnerable to a page-fault-based attack. However, the attack only reveals page-level memory accesses within an enclave. In this paper, we explore a new, yet critical, side-channel attack, branch shadowing, that reveals fine-grained control flows (branch granularity) in an enclave. The root cause of this attack is that SGX does not clear branch history when switching from enclave to non-enclave mode, leaving fine-grained traces for the outside world to observe, which gives rise to a branch-prediction side channel. However, exploiting this channel in practice is challenging because 1) measuring branch execution time is too noisy for distinguishing fine-grained control-flow changes and 2) pausing an enclave right after it has executed the code block we target requires sophisticated control. To overcome these challenges, we develop two novel exploitation techniques: 1) a last branch record (LBR)-based history-inferring technique and 2) an advanced programmable interrupt controller (APIC)-based technique to control the execution of an enclave in a fine-grained manner. An evaluation against RSA shows that our attack infers each private key bit with 99.8% accuracy. Finally, we thoroughly study the feasibility of hardware-based solutions (i.e., branch history flushing) and propose a software-based approach that mitigates the attack.", "title": "" }, { "docid": "5dcf33299ebbf8b1de1a8e162a7859c1", "text": "Firstly, olfactory association learning was used to determine the modulating effect of 5-HT4 receptor involvement in learning and long-term memory. Secondly, the effects of systemic injections of a 5-HT4 partial agonist and an antagonist on long-term potentiation (LTP) and depotentiation in the dentate gyrus (DG) were tested in freely moving rats. The modulating role of the 5-HT4 receptors was studied by using a potent, 5-HT4 partial agonist RS 67333 [1-(4-amino-5-chloro-2-methoxyphenyl)-3-(1-n-butyl-4-piperidinyl)-1-propanone] and a selective 5-HT4 receptor antagonist RS 67532 [1-(4-amino-5-chloro-2-(3,5-dimethoxybenzyloxyphenyl)-5-(1-piperidinyl)-1-propanone]. Agonist or antagonist systemic chronic injections prior to five training sessions yielded a facilitatory effect on procedural memory during the first session only with the antagonist. Systemic injection of the antagonist only before the first training session improved procedural memory during the first session and associative memory during the second session. Similar injection with the 5-HT4 partial agonist had an opposite effect. The systemic injection of the 5-HT4 partial agonist prior to the induction of LTP in the dentate gyrus by high-frequency stimulation was followed by a population spike increase, while the systemic injection of the antagonist accelerated the depotentiation 48 h later. 
The behavioural and physiological results pointed out the involvement of 5-HT4 receptors in processing related to the long-term hippocampal-dependent memory system, and suggest that specific 5-HT4 agonists could be used to treat amnesic patients with a dysfunction in this particular system.", "title": "" }, { "docid": "074ae5038c6b7eb9445ef808c980df3c", "text": "INTRODUCTION\nObstructing uterine septum is a rare uterine malformation. Patients with obstructing uterine septum are usually treated with laparouterotomy, causing obvious injury to both the uterus and body of the patients. Therefore, using the natural channel of the vagina is undoubtedly the best way to carry out the surgery. However, obstructing uterine septum usually occurs in puberty in girls without a history of sexual intercourse, thus iatrogenic damage to the hymen during the diagnosis and treatment cannot probably be avoided. However, Chinese people traditionally tend to use hymen intactness as a standard to judge whether an unmarried woman is chaste. Therefore, in China, to protect the hymen from damage during hysteroscopic diagnosis and treatment is of special significance for girls and women with unbroken hymens. None of the previously reported cases were treated with electrosurgical obstructing uterine septum excision based on B-ultrasound-guided hymen-protecting hysteroscopy and laparoscopy.\n\n\nCASE PRESENTATION\nCase 1 patient was a virgo intacta 13-year-old Chinese girl. She was admitted due to an 8-day post-menstruation lower abdominal pain. With the guidance of B-ultrasound, we observed a 30mm×20mm mixed echogenicity mass in her uterine cavity. Case 2 patient was a virgo intacta 14-year-old Chinese girl. She was admitted to our hospital more than 6 months after secondary dysmenorrhea and 6 days after B-ultrasound-diagnosed uterine malformations. We observed a 30mm×25mm mixed echoic area in her uterine cavity with the guidance of B-ultrasound.Both patients were surgically treated without hymen damage with B-ultrasound-guided combined therapy of hysteroscopy and laparoscopy. A needle electrode with an 8mm diameter was placed into their uterine cavities under hysteroscopy. After obstructing uterine septum removal, their uterine cavities showed normal morphology. To protect their hymens, misoprostol was placed into their rectums to soften their cervices, so that the hysteroscope could be inserted into their cavities without damaging their hymens.\n\n\nCONCLUSION\nVirgo intacta women with obstructing uterine septum could be treated with electrosurgical obstructing uterine septum excision based on B-ultrasound-guided hymen-protecting hysteroscopy and laparoscopy.", "title": "" }, { "docid": "32bd80dbcb0a949fa88374abb42841f0", "text": "A new radiating element design of the slot stripline leaky-wave antennas with right-hand circular polarization is proposed for high-accuracy positioning using Global satellite navigation signals (GNSS) GPS (L1/L2) and GLONASS (L1/L2/L3). Technical characteristics of the antenna with the new radiator have been studied. It is shown that the application of the new radiator decreases the axial ratio and improves the crosspolarization suppression and stability of the phase center, allowing one to increase the accuracy of GLONASS/GPS positioning.", "title": "" }, { "docid": "36ae895829fda8c8b58bf49eaa607695", "text": "In this paper, we describe SymDiff, a language-agnostic tool for equivalence checking and displaying semantic (behavioral) differences over imperative programs. 
The tool operates on an intermediate verification language Boogie, for which translations exist from various source languages such as C, C# and x86. We discuss the tool and the front-end interface to target various source languages. Finally, we provide a brief description of the front-end for C programs.", "title": "" }, { "docid": "6d9393c95ca9c6534c98c0d0a4451fbc", "text": "The recent work of Clark et al. (2018) introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset that partitions open domain, complex science questions into an Easy Set and a Challenge Set. That paper includes an analysis of 100 questions with respect to the types of knowledge and reasoning required to answer them; however, it does not include clear definitions of these types, nor does it offer information about the quality of the labels. We propose a comprehensive set of definitions of knowledge and reasoning types necessary for answering the questions in the ARC dataset. Using ten annotators and a sophisticated annotation interface, we analyze the distribution of labels across the Challenge Set and statistics related to them. Additionally, we demonstrate that although naive information retrieval methods return sentences that are irrelevant to answering the query, sufficient supporting text is often present in the (ARC) corpus. Evaluating with human-selected relevant sentences improves the performance of a neural machine comprehension model by 42 points.", "title": "" }, { "docid": "7b4140cb95fbaae6e272326ab59fb884", "text": "Network intrusion detection systems (NIDSs) play a crucial role in defending computer networks. However, there are concerns regarding the feasibility and sustainability of current approaches when faced with the demands of modern networks. More specifically, these concerns relate to the increasing levels of required human interaction and the decreasing levels of detection accuracy. This paper presents a novel deep learning technique for intrusion detection, which addresses these concerns. We detail our proposed nonsymmetric deep autoencoder (NDAE) for unsupervised feature learning. Furthermore, we also propose our novel deep learning classification model constructed using stacked NDAEs. Our proposed classifier has been implemented in graphics processing unit (GPU)-enabled TensorFlow and evaluated using the benchmark KDD Cup ’99 and NSL-KDD datasets. Promising results have been obtained from our model thus far, demonstrating improvements over existing approaches and the strong potential for use in modern NIDSs.", "title": "" }, { "docid": "72845c1eebbe683bfb91db2ddd5b0fee", "text": "Sketch-based modeling strives to bring the ease and immediacy of drawing to the 3D world. However, while drawings are easy for humans to create, they are very challenging for computers to interpret due to their sparsity and ambiguity. We propose a data-driven approach that tackles this challenge by learning to reconstruct 3D shapes from one or more drawings. At the core of our approach is a deep convolutional neural network (CNN) that predicts occupancy of a voxel grid from a line drawing. This CNN provides an initial 3D reconstruction as soon as the user completes a single drawing of the desired shape. We complement this single-view network with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint. 
A key advantage of our approach is that we can apply the updater iteratively to fuse information from an arbitrary number of viewpoints, without requiring explicit stroke correspondences between the drawings. We train both CNNs by rendering synthetic contour drawings from hand-modeled shape collections as well as from procedurally-generated abstract shapes. Finally, we integrate our CNNs in an interactive modeling system that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance.", "title": "" }, { "docid": "aecd7a910b52b6e34e10f10a12d0f966", "text": "Language processing is an example of implicit learning of multiple statistical cues that provide probabilistic information regarding word structure and use. Much of the current debate about language embodiment is devoted to how action words are represented in the brain, with motor cortex activity evoked by these words assumed to selectively reflect conceptual content and/or its simulation. We investigated whether motor cortex activity evoked by manual action words (e.g., caress) might reflect sensitivity to probabilistic orthographic–phonological cues to grammatical category embedded within individual words. We first review neuroimaging data demonstrating that nonwords evoke activity much more reliably than action words along the entire motor strip, encompassing regions proposed to be action category specific. Using fMRI, we found that disyllabic words denoting manual actions evoked increased motor cortex activity compared with non-body-part-related words (e.g., canyon), activity which overlaps that evoked by observing and executing hand movements. This result is typically interpreted in support of language embodiment. Crucially, we also found that disyllabic nonwords containing endings with probabilistic cues predictive of verb status (e.g., -eve) evoked increased activity compared with nonwords with endings predictive of noun status (e.g., -age) in the identical motor area. Thus, motor cortex responses to action words cannot be assumed to selectively reflect conceptual content and/or its simulation. Our results clearly demonstrate motor cortex activity reflects implicit processing of ortho-phonological statistical regularities that help to distinguish a word's grammatical class.", "title": "" }, { "docid": "27ddea786e06ffe20b4f526875cdd76b", "text": "It is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. 
During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized-as myth, folklore, and common sense had long understood-that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see ACTIVATION-SYNTHESIS HYPOTHESIS.) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles, dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic (waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and although of great interest to the study of the mind-body problem, these findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment.", "title": "" }, { "docid": "5ffd60a7004ff53fa2df68a4ad11a314", "text": "OBJECTIVE\nIt is widely recognized that social networks and loneliness have effects on health. The present study assesses the differential association that the components of the social network and the subjective perception of loneliness have with health, and analyzes whether this association is different across different countries.\n\n\nMETHODS\nA total of 10 800 adults were interviewed in Finland, Poland and Spain. Loneliness was assessed by means of the 3-item UCLA Loneliness Scale. Individuals' social networks were measured by asking about the number of members in the network, how often they had contacts with these members, and whether they had a close relationship. The differential association of loneliness and the components of the social network with health was assessed by means of hierarchical linear regression models, controlling for relevant covariates.\n\n\nRESULTS\nIn all three countries, loneliness was the variable most strongly correlated with health after controlling for depression, age, and other covariates. Loneliness contributed more strongly to health than any component of the social network. 
The relationship between loneliness and health was stronger in Finland (|β| = 0.25) than in Poland (|β| = 0.16) and Spain (|β| = 0.18). Frequency of contact was the only component of the social network that was moderately correlated with health.\n\n\nCONCLUSIONS\nLoneliness has a stronger association with health than the components of the social network. This association is similar in three different European countries with different socio-economic and health characteristics and welfare systems. The importance of evaluating and screening feelings of loneliness in individuals with health problems should be taken into account. Further studies are needed in order to be able to confirm the associations found in the present study and infer causality.", "title": "" }, { "docid": "572348e4389acd63ea7c0667e87bbe04", "text": "Through the analysis of collective upvotes and downvotes in multiple social media, we discover the bimodal regime of collective evaluations. When online content surpasses the local social context by reaching a threshold of collective attention, negativity grows faster with positivity, which serves as a trace of the burst of a filter bubble. To attain a global audience, we show that emotions expressed in online content has a significant effect and also play a key role in creating polarized opinions.", "title": "" } ]
scidocsrr
fd082106b8dc49c0ad0c5da0a7c5bbbf
ESPnet: End-to-End Speech Processing Toolkit
[ { "docid": "aac17c2c975afaa3f55e42e698d398b3", "text": "Many state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) Systems are hybrids of neural networks and Hidden Markov Models (HMMs). Recently, more direct end-to-end methods have been investigated, in which neural architectures were trained to model sequences of characters [1,2]. To our knowledge, all these approaches relied on Connectionist Temporal Classification [3] modules. We investigate an alternative method for sequence modelling based on an attention mechanism that allows a Recurrent Neural Network (RNN) to learn alignments between sequences of input frames and output labels. We show how this setup can be applied to LVCSR by integrating the decoding RNN with an n-gram language model and by speeding up its operation by constraining selections made by the attention mechanism and by reducing the source sequence lengths by pooling information over time. Recognition accuracies similar to other HMM-free RNN-based approaches are reported for the Wall Street Journal corpus.", "title": "" }, { "docid": "82bfc1bc10247a23f45e30481db82245", "text": "The performance of automatic speech recognition (ASR) has improved tremendously due to the application of deep neural networks (DNNs). Despite this progress, building a new ASR system remains a challenging task, requiring various resources, multiple training stages and significant expertise. This paper presents our Eesen framework which drastically simplifies the existing pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen involves learning a single recurrent neural network (RNN) predicting context-independent targets (phonemes or characters). To remove the need for pre-generated frame labels, we adopt the connectionist temporal classification (CTC) objective function to infer the alignments between speech and label sequences. A distinctive feature of Eesen is a generalized decoding approach based on weighted finite-state transducers (WFSTs), which enables the efficient incorporation of lexicons and language models into CTC decoding. Experiments show that compared with the standard hybrid DNN systems, Eesen achieves comparable word error rates (WERs), while at the same time speeding up decoding significantly.", "title": "" }, { "docid": "630715aa44ba84b2c04eb90f9465c481", "text": "The field of speech recognition is in the midst of a paradigm shift: end-to-end neural networks are challenging the dominance of hidden Markov models as a core technology. Using an attention mechanism in a recurrent encoder-decoder architecture solves the dynamic time alignment problem, allowing joint end-to-end training of the acoustic and language modeling components. In this paper we extend the end-to-end framework to encompass microphone array signal processing for noise suppression and speech enhancement within the acoustic encoding network. This allows the beamforming components to be optimized jointly within the recognition architecture to improve the end-to-end speech recognition objective. Experiments on the noisy speech benchmarks (CHiME-4 and AMI) show that our multichannel end-to-end system outperformed the attention-based baseline with input from a conventional adaptive beamformer.", "title": "" }, { "docid": "de478fc24877f9e144615d6f3bb46799", "text": "Design issues of a spontaneous speech corpus is described. The corpus under compilation will contain 800-1000 hour spontaneously uttered Common Japanese speech and the morphologically annotated transcriptions. 
Also, segmental and intonation labeling will be provided for a subset of the corpus. The primary application domain of the corpus is speech recognition of spontaneous speech, but we plan to make it useful for natural language processing and phonetic/linguistic studies also.", "title": "" } ]
[ { "docid": "c7a902faf84eabe5c7d298c2c83c4617", "text": "Fangwei Li Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing, China lifw@cqupt.edu.cn Xinyue Zhang Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing, China zhangxinyue159@163.com Jiang Zhu Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing, China zhujiang@cqupt.edu.cn Yan Wang Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing, China wangyan2250@sina.com ABSTRACT In order to reflect the situation of network security assessment performance fully and accurately, a new network security situation awareness model based on information fusion was proposed. Network security situation is the result of fusion three aspects evaluation. In terms of attack, to improve the accuracy of evaluation, a situation assessment method of DDoS attack based on the information of data packet was proposed. In terms of vulnerability, a improved Common Vulnerability Scoring System (CVSS) was raised and maked the assessment more comprehensive. In terms of node weights, the method of calculating the combined weights and optimizing the result by Sequence Quadratic Program (SQP) algorithm which reduced the uncertainty of fusion was raised. To verify the validity and necessity of the method, a testing platform was built and used to test through evaluating 2000 DAPRA data sets. Experiments show that the method can improve the accuracy of evaluation results.", "title": "" }, { "docid": "6cdf0dd024147cb5c8ecd3e8ae9d20b8", "text": "Traditionally, marketers have focused on functional and meaningful product differentiation and have shown that such differentiation is important because consumers engage in a deliberate reasoning process (Chernev, 2001; Shafir et al., 1993; Simonson, 1989). However, nowadays products in many categories are functionally highly similar, and it is difficult for consumers to differentiate products based on functional attributes. An alternative way of differentiating is to emphasize non-functional product characteristics or certain aspects of the judgment context. For example, the VW New Beetle brand has used unique colors and shapes very prominently. Apple Computers has used a smiley face that appeared on the screen of computers when they were powered up as well as translucent colors to differentiate, for example, its iMac and iPod lines from competitive products. In addition, Apple Computers has integrated the colors and shapes of the product design with the design of its websites and the so-called AppleStores. Similar approaches focusing on colors, shapes or affective stimuli have been used for other global brands as well and for local brands in all sorts of product categories, including commodities like water and salt. Here we refer to such attributes, which have emerged in marketing as key differentiators, as ‘experiential attributes’ (Schmitt, 1999). 
Specifically, experiential attributes consist of non-verbal stimuli that include sensory cues such as colors (Bellizzi et al., 1983; Bellizzi and Hite, 1992; Degeratu et al., 2000; Gorn et al., 1997; Meyers-Levy and Peracchio, 1995) and shapes (Veryzer and Hutchinson, 1998) as well as affective cues such as mascots that may appear on products, packaging or contextually as part of ads (Holbrook and Hirschman, 1982; Keller, 1987). Experiential attributes are also used in logos (Henderson et al., 2003), and as part of the judgment context, for example, as backgrounds on websites (Mandel and Johnson, 2002) and in shopping environments (Spies et al., 1997). Unlike functional attributes, experiential attributes are not utilitarian (Zeithaml, 1988). Instead, experiential attributes may result in positive ‘feelings and experiences’ (Schwarz and Clore, 1996; Winkielman et al., 2003). Yet, how exactly do consumers process experiential attributes? How can consumers use them to reach a decision among alternatives? Moreover, are there different ways of processing experiential attributes? In this chapter, we examine how experiential attributes are processed and how they are of value in consumer decision-making. We distinguish two ways of processing experiential features: deliberate processing, which is similar to the way functional attributes are processed, and fluent processing, which occurs without much deliberation. We identify judgment contexts in which consumers process experiential", "title": "" }, { "docid": "ead09145f5d45f50ed1f36b3b4fc7b17", "text": "Gesture is a natural interface in human–computer interaction, especially interacting with wearable devices, such as VR/AR helmet and glasses. However, in the gesture recognition community, it lacks of suitable datasets for developing egocentric (first-person view) gesture recognition methods, in particular in the deep learning era. In this paper, we introduce a new benchmark dataset named EgoGesture with sufficient size, variation, and reality to be able to train deep neural networks. This dataset contains more than 24 000 gesture samples and 3 000 000 frames for both color and depth modalities from 50 distinct subjects. We design 83 different static and dynamic gestures focused on interaction with wearable devices and collect them from six diverse indoor and outdoor scenes, respectively, with variation in background and illumination. We also consider the scenario when people perform gestures while they are walking. The performances of several representative approaches are systematically evaluated on two tasks: gesture classification in segmented data and gesture spotting and recognition in continuous data. Our empirical study also provides an in-depth analysis on input modality selection and domain adaptation between different scenes.", "title": "" }, { "docid": "ff81d8b7bdc5abbd9ada376881722c02", "text": "Along with the progress of miniaturization and energy saving technologies of sensors, biological information in our daily life can be monitored by installing the sensors to a lavatory bowl. Lavatory is usually shared among several people, therefore biological information need to be identified. Using camera, microphone, or scales is not appropriate considering privacy in a lavatory. In this paper, we focus on the difference in the way of pulling a toilet paper roll and propose a system that identifies individuals based on features of rotation of a toilet paper roll with a gyroscope. 
The evaluation results confirmed that 85.8% accuracy was achieved for a five-people group in a laboratory environment.", "title": "" }, { "docid": "b69f6ed1ba20025801ce090ef5f2e4a3", "text": "At the heart of a well-disciplined, systematic methodology that explicitly supports the use of COTS components is a clearly defined process for effectively using components that meet the needs of the system under development. In this paper, we present the CARE/SA approach which supports the iterative matching, ranking, and selection of COTS components, using a representation of COTS components as an aggregate of their functional and non-functional requirements and architecture. The approach is illustrated using a Digital Library System example. 1 This is an extended and improved version of [8]; this extension considers both functional and non-functional requirements as candidates for the matching, ranking, and selection process.", "title": "" }, { "docid": "3daa9fc7d434f8a7da84dd92f0665564", "text": "In this article we analyze the response of Time of Flight cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. Time of Flight sensors are sensitive to ambient light and have low resolution but deliver high frame rate accurate depth data under suitable conditions. We introduce some metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying exposures of the sensors. Performance of three different time of flight cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancellation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. stereo vision is more robust to ambient illumination and provides high resolution depth data but it is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive as compared to local correlation. Finally, we propose a method to increase the dynamic range of the ToF cameras for a scene involving both shadow and sunlight exposures at the same time using camera flags (PMD) or confidence matrix (SwissRanger).", "title": "" }, { "docid": "09c5da2fbf8a160ba27221ff0c5417ac", "text": " The burst fracture of the spine was first described by Holdsworth in 1963 and redefined by Denis in 1983 as being a fracture of the anterior and middle columns of the spine with or without an associated posterior column fracture. This injury has received much attention in the literature as regards its radiological diagnosis and also its clinical managment. The purpose of this article is to review the way that imaging has been used both to diagnose the injury and to guide management. Current concepts of the stability of this fracture are presented and our experience in the use of magnetic resonance imaging in deciding treatment options is discussed.", "title": "" }, { "docid": "803c518b8786f98aea31e5b5830b9fdc", "text": "We propose an adaption of the explanation-generating system LIME. While LIME relies on sparse linear models, we explore how richer explanations can be generated. 
As application domain we use images which consist of a coarse representation of ancient graves. The graves are divided into two classes and can be characterised by meaningful features and relations. This domain was generated in analogy to a classic concept acquisition domain researched in psychology. Like LIME, our approach draws samples around a simplified representation of the instance to be explained. The samples are labelled by a generator – simulating a black-box classifier trained on the images. In contrast to LIME, we feed this information to the ILP system Aleph. We customised Aleph’s evaluation function to take into account the similarity of instances. We applied our approach to generate explanations for different variants of the ancient graves domain. We show that our explanations can involve richer knowledge thereby going beyond the expressiveness of sparse linear models.", "title": "" }, { "docid": "a6f98dae1afbd7449c038157392aff45", "text": "Following the globalization of business and systems, there is a pressing need to understand the main factors affecting mobile banking user acceptance. The increasing number of mobile banking studies and articles published in the last years, as well as conferences and workshops, has made the research process on this important subject more complex and time-consuming. Therefore, it is necessary to synthesize findings from existing research, seeking an update of the current state-of-the-art knowledge. A combination of weight and meta-analysis was chosen, in order to identify the frequency and relevance of the most used constructs and their most important relationships. A total of 57 articles were found in the literature, having the necessary quantitative statistical data to be considered. The best predictors of the intention to use the mobile banking services identified, simultaneously significant in the weight and in the meta-analysis, are: (i) attitude, (ii) initial trust, (iii) perceived risk, and (iv) performance expectancy. In terms of use of mobile banking, considering the same assumptions, the best predictors are: (i) intention, and (ii) performance expectancy. Facilitating conditions on attitude, task technology fit on performance expectancy, and performance expectancy on initial trust have the potential to be added to the list of the most important predictors, but they still need additional research. A theoretical model based on our results is presented, providing a means to support future mobile banking acceptance studies. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "afc6f1531a5b9ff3f7d0d93bc1ff3183", "text": "Wounds are of a variety of types and each category has its own distinctive healing requirements. This realization has spurred the development of a myriad of wound dressings, each with specific characteristics. It is unrealistic to expect a singular dressing to embrace all characteristics that would fulfill generic needs for wound healing. However, each dressing may approach the ideal requirements by deviating from the 'one size fits all approach', if it conforms strictly to the specifications of the wound and the patient. Indeed, a functional wound dressing should achieve healing of the wound with minimal time and cost expenditures. This article offers an insight into several different types of polymeric materials clinically used in wound dressings and the events taking place at cellular level, which aid the process of healing, while the biomaterial dressing interacts with the body tissue. 
Hence, the significance of using synthetic polymer films, foam dressings, hydrocolloids, alginate dressings, and hydrogels has been reviewed, and the properties of these materials that conform to wound-healing requirements have been explored. A special section on bioactive dressings and bioengineered skin substitutes that play an active part in the healing process has been re-examined in this work.", "title": "" }, { "docid": "33ab76f714ca23bdfddecfe436fd1ee2", "text": "A rational agent (artificial or otherwise) residing in a complex changing environment must gather information perceptually, update that information as the world changes, and combine that information with causal information to reason about the changing world. Using the system of defeasible reasoning that is incorporated into the OSCAR architecture for rational agents, a set of reason-schemas is proposed for enabling an agent to perform some of the requisite reasoning. Along the way, solutions are proposed for the Frame Problem, the Qualification Problem, and the Ramification Problem. The principles and reasoning described have all been implemented in OSCAR. keywords: defeasible reasoning, nonmonotonic logic, perception, causes, causation, time, temporal projection, frame problem, qualification problem, ramification problem, OSCAR. This work was supported in part by NSF grant no. IRI-9634106. An early version of some of this material appears in Pollock (1996), but it has undergone substantial change in the present paper.", "title": "" }, { "docid": "e30ae0b5cd90d091223ab38596de3109", "text": "Abstract We describe a consistent hashing algorithm which performs multiple lookups per key in a hash table of nodes. It requires no additional storage beyond the hash table, and achieves a peak-to-average load ratio of 1 + ε with just 1 + 1/ε lookups per key.", "title": "" }, { "docid": "72147e489de9053bf1a4844c2f0de717", "text": "Video Question Answering is a challenging problem in visual information retrieval, which provides the answer to the referenced video content according to the question. However, the existing visual question answering approaches mainly tackle the problem of static image question, which may be ineffective for video question answering due to the insufficiency of modeling the temporal dynamics of video contents. In this paper, we study the problem of video question answering by modeling its temporal dynamics with frame-level attention mechanism. We propose the attribute-augmented attention network learning framework that enables the joint frame-level attribute detection and unified video representation learning for video question answering. We then incorporate the multi-step reasoning process for our proposed attention network to further improve the performance. We construct a large-scale video question answering dataset. We conduct the experiments on both multiple-choice and open-ended video question answering tasks to show the effectiveness of the proposed method.", "title": "" }, { "docid": "40ca946c3cd4c8617585c648de5ce883", "text": "Investigating the incidence, type, and preventability of adverse drug events (ADEs) and medication errors is crucial to improving the quality of health care delivery. ADEs, potential ADEs, and medication errors can be collected by extraction from practice data, solicitation of incidents from health professionals, and patient surveys. 
Practice data include charts, laboratory, prescription data, and administrative databases, and can be reviewed manually or screened by computer systems to identify signals. Research nurses, pharmacists, or research assistants review these signals, and those that are likely to represent an ADE or medication error are presented to reviewers who independently categorize them into ADEs, potential ADEs, medication errors, or exclusions. These incidents are also classified according to preventability, ameliorability, disability, severity, stage, and responsible person. These classifications, as well as the initial selection of incidents, have been evaluated for agreement between reviewers and the level of agreement found ranged from satisfactory to excellent (kappa = 0.32-0.98). The method of ADE and medication error detection and classification described is feasible and has good reliability. It can be used in various clinical settings to measure and improve medication safety.", "title": "" }, { "docid": "c1cdc9bb29660e910ccead445bcc896d", "text": "This paper describes an efficient technique for com' puting a hierarchical representation of the objects contained in a complex 3 0 scene. First, an adjacency graph keeping the costs of grouping the different pairs of objects in the scene is built. Then the minimum spanning tree (MST) of that graph is determined. A binary clustering tree (BCT) is obtained from the MS'I: Finally, a merging stage joins the adjacent nodes in the BCT which have similar costs. The final result is an n-ary tree which defines an intuitive clustering of the objects of the scene at different levels of abstraction. Experimental results with synthetic 3 0 scenes are presented.", "title": "" }, { "docid": "609041388f4b3744d5f1327397bcde7f", "text": "This article reviews ‘event tourism’ as both professional practice and a field of academic study. The origins and evolution of research on event tourism are pinpointed through both chronological and thematic literature reviews. A conceptual model of the core phenomenon and key themes in event tourism studies is provided as a framework for spurring theoretical advancement, identifying research gaps, and assisting professional practice. Conclusions are in two parts: a discussion of implications for the practice of event management and tourism, and implications are drawn for advancing theory in event tourism. r 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9075e2ae2f1345b91738f3d8ac34cfb2", "text": "We explore how well the intersection between our own everyday memories and those captured by our smartphones can be used for what we call autobiographical authentication-a challenge-response authentication system that queries users about day-to-day experiences. Through three studies-two on MTurk and one field study-we found that users are good, but make systematic errors at answering autobiographical questions. Using Bayesian modeling to account for these systematic response errors, we derived a formula for computing a confidence rating that the attempting authenticator is the user from a sequence of question-answer responses. We tested our formula against five simulated adversaries based on plausible real-life counterparts. 
Our simulations indicate that our model of autobiographical authentication generally performs well in assigning high confidence estimates to the user and low confidence estimates to impersonating adversaries.", "title": "" }, { "docid": "4ea0b55901c25bd56ddddb7e4301262a", "text": "Capacity limitation of runway and taxiway in an airport is often the major limiting factor of air traffic operation. The congestions on the apron of an airport cause severe delay on aircraft schedule. This paper proposes an integrated algorithm to solve problems of runway scheduling and taxiway routing simultaneously for both departure and arrival aircrafts. A set partitioning model with side constraint is proposed in which each possible aircraft route in the taxiway and runway is regarded as a decision variable. Beside traditional set partitioning constraints, two types of constraints are proposed to maintain a minimum separation distance between aircrafts in the taxiway and runway. The preliminary results show that our integrated algorithm outperforms the sequential method.", "title": "" }, { "docid": "ac15d2b4d14873235fe6e4d2dfa84061", "text": "Despite strong popular conceptions of gender differences in emotionality and striking gender differences in the prevalence of disorders thought to involve emotion dysregulation, the literature on the neural bases of emotion regulation is nearly silent regarding gender differences (Gross, 2007; Ochsner & Gross, in press). The purpose of the present study was to address this gap in the literature. Using functional magnetic resonance imaging, we asked male and female participants to use a cognitive emotion regulation strategy (reappraisal) to down-regulate their emotional responses to negatively valenced pictures. Behaviorally, men and women evidenced comparable decreases in negative emotion experience. Neurally, however, gender differences emerged. Compared with women, men showed (a) lesser increases in prefrontal regions that are associated with reappraisal, (b) greater decreases in the amygdala, which is associated with emotional responding, and (c) lesser engagement of ventral striatal regions, which are associated with reward processing. We consider two non-competing explanations for these differences. First, men may expend less effort when using cognitive regulation, perhaps due to greater use of automatic emotion regulation. Second, women may use positive emotions in the service of reappraising negative emotions to a greater degree. We then consider the implications of gender differences in emotion regulation for understanding gender differences in emotional processing in general, and gender differences in affective disorders.", "title": "" }, { "docid": "a68ccab91995603b3dbb54e014e79091", "text": "Qualitative models arising in artificial intelligence domain often concern real systems that are difficult to represent with traditional means. However, some promise for dealing with such systems is offered by research in simulation methodology. Such research produces models that combine both continuous and discrete-event formalisms. Nevertheless, the aims and approaches of the AI and the simulation communities remain rather mutually ill understood. Consequently, there is a need to bridge theory and methodology in order to have a uniform language when either analyzing or reasoning about physical systems. This article introduces a methodology and formalism for developing multiple, cooperative models of physical systems of the type studied in qualitative physics. 
The formalism combines discrete-event and continuous models and offers an approach to building intelligent machines capable of physical modeling and reasoning.", "title": "" } ]
scidocsrr
5665eb722830ab89106ef6ad224493e0
Trends in Interactive Knowledge Discovery for Personalized Medicine: Cognitive Science meets Machine Learning
[ { "docid": "73d9461101dc15f93f52d2ab9b8c0f39", "text": "The need for mining structured data has increased in the past few years. One of the best studied data structures in computer science and discrete mathematics are graphs. It can therefore be no surprise that graph based data mining has become quite popular in the last few years.This article introduces the theoretical basis of graph based data mining and surveys the state of the art of graph-based data mining. Brief descriptions of some representative approaches are provided as well.", "title": "" }, { "docid": "5d8f33b7f28e6a8d25d7a02c1f081af1", "text": "Background The life sciences, biomedicine and health care are increasingly turning into a data intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weaklystructured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1[16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900× 10−12m and the Carbon atom approx. 300 pm . A hepatitis virus is relatively large with 45nm = 45× 10−9m and the X-Chromosome much bigger with 7μm = 7× 10−6m . We produce most of the “Big Data” in the omics world, we estimate many Terabytes ( 1TB = 1× 10 Byte = 1000 GByte) of genomics data in each individual, consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1× 1018 Byte ). Last but not least, this “natural” data is then fused together with “produced” data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data etc. these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available “dirty” data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although, one may argue that “Big Data” is a buzz word, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data driven hypotheses generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. 
Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past however with partly unsatisfactory success [22,23] especially in terms of performance [24]. The grand challenge is to make data useful to and useable by the end user [25]. Maybe, the key challenge is interaction, due to the fact that it is the human end user who possesses the problem solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29]. At the same time, human", "title": "" } ]
[ { "docid": "68b8540a3454bfb9e992b5180cd59e4e", "text": "Topic models are one of the most popular methods for learning representations of text, but a major challenge is that any change to the topic model requires mathematically deriving a new inference algorithm. A promising approach to address this problem is autoencoding variational Bayes (AEVB), but it has proven difficult to apply to topic models in practice. We present what is to our knowledge the first effective AEVB based inference method for latent Dirichlet allocation (LDA), which we call Autoencoded Variational Inference For Topic Model (AVITM). This model tackles the problems caused for AEVB by the Dirichlet prior and by component collapsing. We find that AVITM matches traditional methods in accuracy with much better inference time. Indeed, because of the inference network, we find that it is unnecessary to pay the computational cost of running variational optimization on test data. Because AVITM is black box, it is readily applied to new topic models. As a dramatic illustration of this, we present a new topic model called ProdLDA, that replaces the mixture model in LDA with a product of experts. By changing only one line of code from LDA, we find that ProdLDA yields much more interpretable topics, even if LDA is trained via collapsed Gibbs sampling.", "title": "" }, { "docid": "5401143c61a2a0ad2901bd72a086368b", "text": "In this paper we provide an implementation, evaluation, and analysis of PowerHammer, a malware (bridgeware [1]) that uses power lines to exfiltrate data from air-gapped computers. In this case, a malicious code running on a compromised computer can control the power consumption of the system by intentionally regulating the CPU utilization. Data is modulated, encoded, and transmitted on top of the current flow fluctuations, and then it is conducted and propagated through the power lines. This phenomena is known as a ’conducted emission’. We present two versions of the attack. Line level powerhammering: In this attack, the attacker taps the in-home power lines that are directly attached to the electrical outlet. Phase level power-hammering: In this attack, the attacker taps the power lines at the phase level, in the main electrical service panel. In both versions of the attack, the attacker measures the emission conducted and then decodes the exfiltrated data. We describe the adversarial attack model and present modulations and encoding schemes along with a transmission protocol. We evaluate the covert channel in different scenarios and discuss signal-to-noise (SNR), signal processing, and forms of interference. We also present a set of defensive countermeasures. Our results show that binary data can be covertly exfiltrated from air-gapped computers through the power lines at bit rates of 1000 bit/sec for the line level power-hammering attack and 10 bit/sec for the phase level power-hammering attack.", "title": "" }, { "docid": "76778aad4c6fbdb8e4cbb99452a08a7a", "text": "Question-oriented text retrieval, aka natural language-based text retrieval, has been widely used in software engineering. Earlier work has concluded that questions with the same keywords but different interrogatives (such as how, what) should result in different answers. But what is the difference? How to identify the right answers to a question? In this paper, we propose to investigate the \"answer style\" of software questions with different interrogatives. 
Towards this end, we build classifiers in a software text repository and propose a re-ranking approach to refine search results. The classifiers are trained by over 16,000 answers from the StackOverflow forum. Each answer is labeled accurately by its question's explicit or implicit interrogatives. We have evaluated the performance of our classifiers and the refinement of our re-ranking approach in software text retrieval. Our approach results in 13.1% and 12.6% respectively improvement with respect to text retrieval criteria nDCG@1 and nDCG@10 compared to the baseline. We also apply our approach to FAQs of 7 open source projects and show 13.2% improvement with respect to nDCG@1. The results of our experiments suggest that our approach could find answers to FAQs more precisely.", "title": "" }, { "docid": "7f575dd097ac747eddd2d7d0dc1055d5", "text": "It has been widely believed that biometric template aging does not occur for iris biometrics. We compare the match score distribution for short time-lapse iris image pairs, with a mean of approximately one month between the enrollment image and the verification image, to the match score distributions for image pairs with one, two and three years of time lapse. We find clear and consistent evidence of a template aging effect that is noticeable at one year and that increases with increasing time lapse. For a state-of-the-art iris matcher, and three years of time lapse, at a decision threshold corresponding to a one in two million false match rate, we observe an 153% increase in the false non-match rate, with a bootstrap estimated 95% confidence interval of 85% to 307%.", "title": "" }, { "docid": "0f25cfa80ee503aa5012772ac54fb7a3", "text": "Parameter reduction has been an important topic in deep learning due to the everincreasing size of deep neural network models and the need to train and run them on resource limited machines. Despite many efforts in this area, there were no rigorous theoretical guarantees on why existing neural net compression methods should work. In this paper, we provide provable guarantees on some hashing-based parameter reduction methods in neural nets. First, we introduce a neural net compression scheme based on random linear sketching (which is usually implemented efficiently via hashing), and show that the sketched (smaller) network is able to approximate the original network on all input data coming from any smooth and wellconditioned low-dimensional manifold. The sketched network can also be trained directly via back-propagation. Next, we study the previously proposed HashedNets architecture and show that the optimization landscape of one-hidden-layer HashedNets has a local strong convexity property similar to a normal fully connected neural network. We complement our theoretical results with empirical verifications.", "title": "" }, { "docid": "d8800e0285da2d364bf03459ab112503", "text": "As one would expect, these polynomials possess many properties of the Fibonacci sequence which, of course, is just the integral sequence {f (1)}. However, a most surprising result is that f (x) is irreducible over the ring of integers if and only if p is a prime. In contrast, for the Fibonacci sequence, the condition that n be a prime is necessary but not sufficient for the primality of f (1) = F . For instance, F19 = 4181 = 37-113. 
In the present paper, we obtain a series of results including that of Webb and Parberry for the more general but clearly related sequence {u (x,y)} defined by the recursion", "title": "" }, { "docid": "5dbbe0c1087b7eade43362e81f41c614", "text": "Imaging in low light is challenging due to low photon count and low SNR. Short-exposure images suffer from noise, while long exposure can induce blur and is often impractical. A variety of denoising, deblurring, and enhancement techniques have been proposed, but their effectiveness is limited in extreme conditions, such as video-rate imaging at night. To support the development of learning-based pipelines for low-light image processing, we introduce a dataset of raw short-exposure low-light images, with corresponding long-exposure reference images. Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for future work.", "title": "" }, { "docid": "e32c8589a92a92ab8fd876bb760fb98e", "text": "The importance of the social sciences for medical informatics is increasingly recognized. As ICT requires interaction with people and thereby inevitably affects them, understanding ICT requires a focus on the interrelation between technology and its social environment. Socio-technical approaches increase our understanding of how ICT applications are developed, introduced and become a part of social practices. Socio-technical approaches share several starting points: 1) they see health care work as a social, 'real life' phenomenon, which may seem 'messy' at first, but which is guided by a practical rationality that can only be overlooked at a high price (i.e. failed systems). 2) They see technological innovation as a social process, in which organizations are deeply affected. 3) Through in-depth, formative evaluation, they can help improve system design and implementation.", "title": "" }, { "docid": "4043ff8ad86b8268be89d3fa2d9206bb", "text": "Surfing’s progeny, snowboarding and skateboarding, present similar positional, visual, and kinesthetic reafferential aspects. Such aspects lead us to the assumption of a positive knowledge transfer from skateboarding to snowboarding. In this investigation we analyzed the probability of and theories for the transfer effect under field conditions. Students of the experimental group received five skateboarding lessons. They then joined a student control group for a six-day school snowboarding trip. Both groups were videotaped on the second and sixth days of the trip. Experts rated snowboarding performance of subjects pertaining to either of groups on a scale of one (very bad) to ten (excellent) points. Inter-rater reliability was very good. While there were no significant differences between the groups on the second day, the students of the experimental group significantly outperformed students of the control group in snowboarding on the sixth day (M(control)=4.80, SD(control)=2.10; M(treat)=6.56, SD(treat)=2.10; T=-1.78, df=16, p(one-tailed)=.045, d=-.83). 
Given a common underlying structure of skateboarding and snowboarding, skateboarding lessons that develop that structure have a facilitation effect on learning how to snowboard successfully.", "title": "" }, { "docid": "3b6b92759e0f13f8814af5fa34274081", "text": "Information visualization has often focused on providing deep insight for expert user populations and on techniques for amplifying cognition through complicated interactive visual models. This paper proposes a new subdomain for infovis research that complements the focus on analytic tasks and expert use. Instead of work-related and analytically driven infovis, we propose casual information visualization (or casual infovis) as a complement to more traditional infovis domains. Traditional infovis systems, techniques, and methods do not easily lend themselves to the broad range of user populations, from expert to novices, or from work tasks to more everyday situations. We propose definitions, perspectives, and research directions for further investigations of this emerging subfield. These perspectives build from ambient information visualization (Skog et al., 2003), social visualization, and also from artistic work that visualizes information (Viegas and Wattenberg, 2007). We seek to provide a perspective on infovis that integrates these research agendas under a coherent vocabulary and framework for design. We enumerate the following contributions. First, we demonstrate how blurry the boundary of infovis is by examining systems that exhibit many of the putative properties of infovis systems, but perhaps would not be considered so. Second, we explore the notion of insight and how, instead of a monolithic definition of insight, there may be multiple types, each with particular characteristics. Third, we discuss design challenges for systems intended for casual audiences. Finally we conclude with challenges for system evaluation in this emerging subfield.", "title": "" }, { "docid": "ebba58a8365e2c4422d248709bbefd6a", "text": "Inspired by recent results on information-theoretic security, we consider the transmission of confidential messages over wireless networks, in which the legitimate communication partners are aided by friendly jammers. We characterize the security level of a confined region in a quasi-static fading environment by computing the probability of secrecy outage in connection with two new measures of physical-layer security: the jamming coverage and the jamming efficiency. Our analysis for various jamming strategies based on different levels of channel state information provides insight into the design of optimal jamming configurations and shows that a single jammer is not sufficient to maximize both figures of merit simultaneously. Moreover, a single jammer requires full channel state information to provide security gains in the vicinity of the legitimate receiver.", "title": "" }, { "docid": "565941db0284458e27485d250493fd2a", "text": "Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as a Markov Random Field tuned to detect the patterns that context data create, and employ a Belief Propagation mechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. 
Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.", "title": "" }, { "docid": "4a16195478fcb1285ed5e5129a49199d", "text": "BACKGROUND AND PURPOSE\nLittle research has been done regarding the attitudes and behaviors of physical therapists relative to the use of evidence in practice. The purposes of this study were to describe the beliefs, attitudes, knowledge, and behaviors of physical therapist members of the American Physical Therapy Association (APTA) as they relate to evidence-based practice (EBP) and to generate hypotheses about the relationship between these attributes and personal and practice characteristics of the respondents.\n\n\nMETHODS\nA survey of a random sample of physical therapist members of APTA resulted in a 48.8% return rate and a sample of 488 that was fairly representative of the national membership. Participants completed a questionnaire designed to determine beliefs, attitudes, knowledge, and behaviors regarding EBP, as well as demographic information about themselves and their practice settings. Responses were summarized for each item, and logistic regression analyses were used to examine relationships among variables.\n\n\nRESULTS\nRespondents agreed that the use of evidence in practice was necessary, that the literature was helpful in their practices, and that quality of patient care was better when evidence was used. Training, familiarity with and confidence in search strategies, use of databases, and critical appraisal tended to be associated with younger therapists with fewer years since they were licensed. Seventeen percent of the respondents stated they read fewer than 2 articles in a typical month, and one quarter of the respondents stated they used literature in their clinical decision making less than twice per month. The majority of the respondents had access to online information, although more had access at home than at work. According to the respondents, the primary barrier to implementing EBP was lack of time.\n\n\nDISCUSSION AND CONCLUSION\nPhysical therapists stated they had a positive attitude about EBP and were interested in learning or improving the skills necessary to implement EBP. They noted that they needed to increase the use of evidence in their daily practice.", "title": "" }, { "docid": "c06bfd970592c62f952fa98289f9e3b9", "text": "This paper proposes a new inequality-based criterion/constraint with its algorithmic and computational details for obstacle avoidance of redundant robot manipulators. By incorporating such a dynamically updated inequality constraint and the joint physical constraints (such as joint-angle limits and joint-velocity limits), a novel minimum-velocity-norm (MVN) scheme is presented and investigated for robotic redundancy resolution. The resultant obstacle-avoidance MVN scheme resolved at the joint-velocity level is further reformulated as a general quadratic program (QP). Two QP solvers, i.e., a simplified primal-dual neural network based on linear variational inequalities (LVI) and an LVI-based numerical algorithm, are developed and applied for online solution of the QP problem as well as the inequality-based obstacle-avoidance MVN scheme. Simulative results that are based on PA10 robot manipulator and a six-link planar robot manipulator in the presence of window-shaped and point obstacles demonstrate the efficacy and superiority of the proposed obstacle-avoidance MVN scheme. 
Moreover, experimental results of the proposed MVN scheme implemented on the practical six-link planar robot manipulator substantiate the physical realizability and effectiveness of such a scheme for obstacle avoidance of redundant robot manipulator.", "title": "" }, { "docid": "75a1c22e950ccb135c054353acb8571a", "text": "We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.", "title": "" }, { "docid": "92b9469513d1d7ba3b2086230788d4f1", "text": "Floating-point division is a very costly operation in FPGA designs. High-frequency implementations of the classic digit-recurrence algorithms for division have long latencies (of the order of the number fraction bits) and consume large amounts of logic. Additionally, these implementations require important routing resources, making timing closure difficult in complete designs. In this paper we present two multiplier-based architectures for division which make efficient use of the DSP resources in recent Altera FPGAs. By balancing resource usage between logic, memory and DSP blocks, the presented architectures maintain high frequencies is full designs. Additionally, compared to classical algorithms, the proposed architectures have significantly lower latencies. The architectures target faithfully rounded results, similar to most elementary functions implementations for FPGAs but can also be transformed into correctly rounded architectures with a small overhead. The presented architectures are built using the Altera DSP Builder Advanced framework and will be part of the default blockset.", "title": "" }, { "docid": "d95b182517307844faa458e3f4edf0ab", "text": "Scilab and Scicos are open-source and free software packages for design, simulation and realization of industrial process control systems. They can be used as the center of an integrated platform for the complete development process, including running controller with real plant (ScicosHIL: Hardware In the Loop) and automatic code generation for real time embedded platforms (Linux, RTAI/RTAI-Lab, RTAIXML/J-RTAI-Lab). These tools are mature, working alternatives to closed source, proprietary solutions for educational, academic, research and industrial applications. We present, using a working example, a complete development chain, from the design tools to the automatic code generation of stand alone embedded control and user interface program.", "title": "" }, { "docid": "4ab881c788f0d819f12094f5b9589135", "text": "The Global Navigation Satellite Systems (GNSS) suffer from accuracy deterioration and outages in dense urban canyons and are almost unavailable for indoor environments. 
Nowadays, developing indoor positioning systems has become an attractive research topic due to the increasing demands on ubiquitous positioning. WiFi technology has been studied for many years to provide indoor positioning services. The WiFi indoor localization systems based on machine learning approach are widely used in the literature. These systems attempt to find the perfect match between the user fingerprint and pre-defined set of grid points on the radio map. However, fingerprints are duplicated from available Access Points (APs) and interference, which increases the number of matched patterns with the user's fingerprint. In this research, Principal Component Analysis (PCA) is utilized to improve the performance and to reduce the computation cost of the WiFi indoor localization systems based on machine learning approach. All proposed methods were developed and physically realized on an Android-based smart phone using the IEEE 802.11 WLANs. The experimental setup was conducted in a real indoor environment in both static and dynamic modes. The performance of the proposed method was tested using K-Nearest Neighbors, Decision Tree, Random Forest and Support Vector Machine classifiers. The results show that the performance of the proposed method outperforms other indoor localization methods reported in the literature. The computation time was reduced by 70% when using Random Forest classifier in the static mode and by 33% when using KNN in the dynamic mode.", "title": "" } ]
scidocsrr
a6b7d71628f57d0a64e68969f9afca56
Benchmarking Graph Databases on the Problem of Community Detection
[ { "docid": "164b61b3c8e29e19cd6c7be2abf046db", "text": "In recent years, more and more companies provide services that can no longer be achieved efficiently using relational databases. As such, these companies are forced to use alternative database models such as XML databases, object-oriented databases, document-oriented databases and, more recently, graph databases. Graph databases have existed for only a few years. Although there have been some comparison attempts, they are mostly focused on certain aspects only. In this paper, we present a distributed graph database comparison framework and the results we obtained by comparing four important players in the graph databases market: Neo4j, Orient DB, Titan and DEX.", "title": "" }, { "docid": "a69220d5cf0145eb6e2e8b13252e6eea", "text": "Database benchmarks are an important tool for database researchers and practitioners that ease the process of making informed comparisons between different database hardware, software and configurations. Large scale web services such as social networks are a major and growing database application area, but currently there are few benchmarks that accurately model web service workloads.\n In this paper we present a new synthetic benchmark called LinkBench. LinkBench is based on traces from production databases that store \"social graph\" data at Facebook, a major social network. We characterize the data and query workload in many dimensions, and use the insights gained to construct a realistic synthetic benchmark. LinkBench provides a realistic and challenging test for persistent storage of social and web service data, filling a gap in the available tools for researchers, developers and administrators.", "title": "" } ]
[ { "docid": "7bd54a65ce90f0d935857ba0fcb457a5", "text": "Estimating energy costs for an industrial process can be computationally intensive and time consuming, especially as it can involve data collection from different (distributed) monitoring sensors. Industrial processes have an implicit complexity involving the use of multiple appliances (devices/ sub-systems) attached to operation schedules, electrical capacity and optimisation setpoints which need to be determined for achieving operational cost objectives. Addressing the complexity associated with an industrial workflow (i.e. range and type of tasks) leads to increased requirements on the computing infrastructure. Such requirements can include achieving execution performance targets per processing unit within a particular size of infrastructure i.e. processing & data storage nodes to complete a computational analysis task within a specific deadline. The use of ensemblebased edge processing is identifed to meet these Quality of Service targets, whereby edge nodes can be used to distribute the computational load across a distributed infrastructure. Rather than relying on a single edge node, we propose the combined use of an ensemble of such nodes to overcome processing, data privacy/ security and reliability constraints. We propose an ensemble-based network processing model to facilitate distributed execution of energy simulations tasks within an industrial process. A scenario based on energy profiling within a fisheries plant is used to illustrate the use of an edge ensemble. The suggested approach is however general in scope and can be used in other similar application domains.", "title": "" }, { "docid": "e409a2a23fb0dbeb0aa57c89a10d61b1", "text": "Text is still the most prevalent Internet media type. Examples of this include popular social networking applications such as Twitter, Craigslist, Facebook, etc. Other web applications such as e-mail, blog, chat rooms, etc. are also mostly text based. A question we address in this paper that deals with text based Internet forensics is the following: given a short text document, can we identify if the author is a man or a woman? This question is motivated by recent events where people faked their gender on the Internet. Note that this is different from the authorship attribution problem. In this paper we investigate author gender identification for short length, multi-genre, content-free text, such as the ones found in many Internet applications. Fundamental questions we ask are: do men and women inherently use different classes of language styles? If this is true, what are good linguistic features that indicate gender? Based on research in human psychology, we propose 545 psycho-linguistic and gender-preferential cues along with stylometric features to build the feature space for this identification problem. Note that identifying the correct set of features that indicate gender is an open research problem. Three machine learning algorithms (support vector machine, Bayesian logistic regression and AdaBoost decision tree) are then designed for gender identification based on the proposed features. Extensive experiments on large text corpora (Reuters Corpus Volume 1 newsgroup data and Enron e-mail data) indicate an accuracy up to 85.1% in identifying the gender. Experiments also indicate that function words, word-based features and structural features are significant gender discriminators. a 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "80a9489262ee8d94d64dd8e475c060a3", "text": "The effects of social-cognitive variables on preventive nutrition and behavioral intentions were studied in 580 adults at 2 points in time. The authors hypothesized that optimistic self-beliefs operate in 2 phases and made a distinction between action self-efficacy (preintention) and coping self-efficacy (postintention). Risk perceptions, outcome expectancies, and action self-efficacy were specified as predictors of the intention at Wave 1. Behavioral intention and coping self-efficacy served as mediators linking the 3 predictors with low-fat and high-fiber dietary intake 6 months later at Wave 2. Covariance structure analysis yielded a good model fit for the total sample and 6 subsamples created by a median split of 3 moderators: gender, age, and body weight. Parameter estimates differed between samples; the importance of perceived self-efficacy increased with age and weight.", "title": "" }, { "docid": "68a1c87e9931bd2a0f9424de451ebfac", "text": "Recent research in sound simulation has focused on either sound synthesis or sound propagation, and many standalone algorithms have been developed for each domain. We present a novel technique for coupling sound synthesis with sound propagation to automatically generate realistic aural content for virtual environments. Our approach can generate sounds from rigid-bodies based on the vibration modes and radiation coefficients represented by the single-point multipole expansion. We present a mode-adaptive propagation algorithm that uses a perceptual Hankel function approximation technique to achieve interactive runtime performance. The overall approach allows for high degrees of dynamism - it can support dynamic sources, dynamic listeners, and dynamic directivity simultaneously. We have integrated our system with the Unity game engine and demonstrate the effectiveness of this fully-automatic technique for audio content creation in complex indoor and outdoor scenes. We conducted a preliminary, online user-study to evaluate whether our Hankel function approximation causes any perceptible loss of audio quality. The results indicate that the subjects were unable to distinguish between the audio rendered using the approximate function and audio rendered using the full Hankel function in the Cathedral, Tuscany, and the Game benchmarks.", "title": "" }, { "docid": "ecad37ad1097369fd03f0decff2d23dc", "text": "The unique musculoskeletal structure of the human hand brings in wider dexterous capabilities to grasp and manipulate a repertoire of objects than the non-human primates. It has been widely accepted that the orientation and the position of the thumb plays an important role in this characteristic behavior. There have been numerous attempts to develop anthropomorphic robotic hands with varying levels of success. Nevertheless, manipulation ability in those hands is to be ameliorated even though they can grasp objects successfully. An appropriate model of the thumb is important to manipulate the objects against the fingers and to maintain the stability. Modeling these complex interactions about the mechanical axes of the joints and how to incorporate these joints in robotic thumbs is a challenging task. 
This article presents a review of the biomechanics of the human thumb and the robotic thumb designs to identify opportunities for future anthropomorphic robotic hands.", "title": "" }, { "docid": "733885d6ec4ac2f7bce950fb7104773f", "text": "This paper presents a neuro-fuzzy classifer for activity recognition using one triaxial accelerometer and feature reduction approaches. We use a triaxial accelerometer to acquire subjects’ acceleration data and train the neurofuzzy classifier to distinguish different activities/movements. To construct the neuro-fuzzy classifier, a modified mapping-constrained agglomerative clustering algorithm is devised to reveal a compact data configuration from the acceleration data. In addition, we investigate two different feature reduction methods, a feature subset selection and linear discriminate analysis. These two methods are used to determine the significant feature subsets and retain the characteristics of the data distribution in the feature space for training the neuro-fuzzy classifier. Experimental results have successfully validated the effectiveness of the proposed classifier.", "title": "" }, { "docid": "3122b61a0d48888dff488cc41564c820", "text": "In this study, the ensemble classifier presented by Caruana, Niculescu-Mizil, Crew & Ksikes (2004) is investigated. Their ensemble approach generates thousands of models using a variety of machine learning algorithms and uses a forward stepwise selection to build robust ensembles that can be optimised to an arbitrary metric. On average, the resulting ensemble out-performs the best individual machine learning models. The classifier is implemented in the WEKA machine learning environment, which allows the results presented by the original paper to be validated and the classifier to be extended to multi-class problem domains. The behaviour of different ensemble building strategies is also investigated. The classifier is then applied to the spam filtering domain, where it is tested on three different corpora in an attempt to provide a realistic evaluation of the system. It records similar performance levels to that seen in other problem domains and out-performs individual models and the naive Bayesian filtering technique regularly used by commercial spam filtering solutions. Caruana et al.’s (2004) classifier will typically outperform the best known models in a variety of problems.", "title": "" }, { "docid": "efb81d85abcf62f4f3747a58154c5144", "text": "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion. 
Our code is available at https://github.com/sergeytulyakov/mocogan.", "title": "" }, { "docid": "12a3e52c3af78663698e7b907f6ee912", "text": "A novel graph-based language-independent stemming algorithm suitable for information retrieval is proposed in this article. The main features of the algorithm are retrieval effectiveness, generality, and computational efficiency. We test our approach on seven languages (using collections from the TREC, CLEF, and FIRE evaluation platforms) of varying morphological complexity. Significant performance improvement over plain word-based retrieval, three other language-independent morphological normalizers, as well as rule-based stemmers is demonstrated.", "title": "" }, { "docid": "e82cd7c22668b0c9ed62b4afdf49d1f4", "text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.", "title": "" }, { "docid": "4a37742db1c55b877733f53ea95ee3c6", "text": "This paper presents an overview of an intelligence platform we have built to address threat hunting and incident investigation use-cases in the cyber security domain. Specifically, we focus on User and Entity Behavior Analytics (UEBA) modules that track and monitor behaviors of users, IP addresses and devices in an enterprise. Anomalous behavior is automatically detected using machine learning algorithms based on Singular Values Decomposition (SVD). Such anomalous behavior indicative of potentially malicious activity is alerted to analysts with relevant contextual information for further investigation and action. We provide a detailed description of the models, algorithms and implementation underlying the module and demonstrate the functionality with empirical examples.", "title": "" }, { "docid": "1bbb8acdc8b5573647708da7ff0252b6", "text": "I have a ton of questions about layout, design how formal to be in my writing, and Nicholas J. Higham. Handbook of Writing for the Mathematical Sciences. Nick J Higham School of Mathematics and Manchester Institute for Mathematical of numerical algorithms Handbook of writing for the mathematical sciences. (1) Nicholas J. Higham. Handbook of writing for the mathematical sciences. SIAM, 1998. (2) Leslie Lamport. LATEX Users Guide & Reference Manual.", "title": "" }, { "docid": "58fda5b08ffe26440b173f363ca36292", "text": "The dependence on information technology became critical and IT infrastructure, critical data, intangible intellectual property are vulnerable to threats and attacks. Organizations install Intrusion Detection Systems (IDS) to alert suspicious traffic or activity. IDS generate a large number of alerts and most of them are false positive as the behavior construe for partial attack pattern or lack of environment knowledge. Monitoring and identifying risky alerts is a major concern to security administrator. 
The present work is to design an operational model for minimization of false positive alarms, including recurring alarms by security administrator. The architecture, design and performance of model in minimization of false positives in IDS are explored and the experimental results are presented with reference to lab environment.", "title": "" }, { "docid": "45f8c4e3409f8b27221e45e6c3485641", "text": "In recent years, time information is more and more important in collaborative filtering (CF) based recommender system because many systems have collected rating data for a long time, and time effects in user preference is stronger. In this paper, we focus on modeling time effects in CF and analyze how temporal features influence CF. There are four main types of time effects in CF: (1) time bias, the interest of whole society changes with time; (2) user bias shifting, a user may change his/her rating habit over time; (3) item bias shifting, the popularity of items changes with time; (4) user preference shifting, a user may change his/her attitude to some types of items. In this work, these four time effects are used by factorized model, which is called TimeSVD. Moreover, many other time effects are used by simple methods. Our time-dependent models are tested on Netflix data from Nov. 1999 to Dec. 2005. Experimental results show that prediction accuracy in CF can be improved significantly by using time information.", "title": "" }, { "docid": "911545273424b27832310d9869ccb55f", "text": "Current people detectors operate either by scanning an image in a sliding window fashion or by classifying a discrete set of proposals. We propose a model that is based on decoding an image into a set of people detections. Our system takes an image as input and directly outputs a set of distinct detection hypotheses. Because we generate predictions jointly, common post-processing steps such as nonmaximum suppression are unnecessary. We use a recurrent LSTM layer for sequence generation and train our model end-to-end with a new loss function that operates on sets of detections. We demonstrate the effectiveness of our approach on the challenging task of detecting people in crowded scenes1.", "title": "" }, { "docid": "c166a5ac33c4bf0ffe055578f016e72f", "text": "The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. 
Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).", "title": "" }, { "docid": "7e873e837ccc1696eb78639e03d02cae", "text": "Steering is an integral component of adaptive locomotor behavior. Along with reorientation of gaze and body in the direction of intended travel, body center of mass must be controlled in the mediolateral plane. In this study we examine how these subtasks are sequenced when steering is planned early or initiated under time constraints. Whole body kinematics were monitored as individuals were required to change their direction of travel by varying amounts when visually cued either at the beginning of the walk or one stride before. The analyses focused on the transition stride from one travel direction to another. Timing of changes (with respect to first right foot contact) in trunk roll angle, head and trunk yaw angle, and right foot displacement in the mediolateral plane were analyzed. The magnitude of these measures along with right and left foot placement at the beginning and right foot placement at the end of the transition stride were also analyzed. The results show the CNS uses two mechanisms, foot placement and trunk roll motion (piking action about the hip joint in the frontal plane), to move the center of mass towards the new direction of travel in the transition stride, preferring to use the first option when planning can be done early. Control of body center of mass precedes all other changes and is followed by initiation of head reorientation. Only then is the rest of the body reorientation initiated.", "title": "" }, { "docid": "b3da0c6745883ae3da10e341abc3bf4d", "text": "Electrophysiological recording studies in the dorsocaudal region of medial entorhinal cortex (dMEC) of the rat reveal cells whose spatial firing fields show a remarkably regular hexagonal grid pattern (Fyhn et al., 2004; Hafting et al., 2005). We describe a symmetric, locally connected neural network, or spin glass model, that spontaneously produces a hexagonal grid of activity bumps on a two-dimensional sheet of units. The spatial firing fields of the simulated cells closely resemble those of dMEC cells. A collection of grids with different scales and/or orientations forms a basis set for encoding position. Simulations show that the animal's location can easily be determined from the population activity pattern. Introducing an asymmetry in the model allows the activity bumps to be shifted in any direction, at a rate proportional to velocity, to achieve path integration. Furthermore, information about the structure of the environment can be superimposed on the spatial position signal by modulation of the bump activity levels without significantly interfering with the hexagonal periodicity of firing fields. Our results support the conjecture of Hafting et al. (2005) that an attractor network in dMEC may be the source of path integration information afferent to hippocampus.", "title": "" }, { "docid": "16b8a948e76a04b1703646d5e6111afe", "text": "Nanotechnology offers many potential benefits to cancer research through passive and active targeting, increased solubility/bioavailablility, and novel therapies. However, preclinical characterization of nanoparticles is complicated by the variety of materials, their unique surface properties, reactivity, and the task of tracking the individual components of multicomponent, multifunctional nanoparticle therapeutics in in vivo studies. 
There are also regulatory considerations and scale-up challenges that must be addressed. Despite these hurdles, cancer research has seen appreciable improvements in efficacy and quite a decrease in the toxicity of chemotherapeutics because of 'nanotech' formulations, and several engineered nanoparticle clinical trials are well underway. This article reviews some of the challenges and benefits of nanomedicine for cancer therapeutics and diagnostics.", "title": "" }, { "docid": "2f11cc1b08083a999d5624a9600deee9", "text": "Residual Network (ResNet) is the state-of-the-art architecture that realizes successful training of really deep neural network. It is also known that good weight initialization of neural network avoids problem of vanishing/exploding gradients. In this paper, simplified models of ResNets are analyzed. We argue that goodness of ResNet is correlated with the fact that ResNets are relatively insensitive to choice of initial weights. We also demonstrate how batch normalization improves backpropagation of deep ResNets without tuning initial values of weights.", "title": "" } ]
scidocsrr
2c748573b4053bd311ae79c13e71a287
Shiny-phyloseq: Web application for interactive microbiome analysis with provenance tracking
[ { "docid": "06ab903f3de4c498e1977d7d0257f8f3", "text": "BACKGROUND\nThe analysis of microbial communities through DNA sequencing brings many challenges: the integration of different types of data with methods from ecology, genetics, phylogenetics, multivariate statistics, visualization and testing. With the increased breadth of experimental designs now being pursued, project-specific statistical analyses are often needed, and these analyses are often difficult (or impossible) for peer researchers to independently reproduce. The vast majority of the requisite tools for performing these analyses reproducibly are already implemented in R and its extensions (packages), but with limited support for high throughput microbiome census data.\n\n\nRESULTS\nHere we describe a software project, phyloseq, dedicated to the object-oriented representation and analysis of microbiome census data in R. It supports importing data from a variety of common formats, as well as many analysis techniques. These include calibration, filtering, subsetting, agglomeration, multi-table comparisons, diversity analysis, parallelized Fast UniFrac, ordination methods, and production of publication-quality graphics; all in a manner that is easy to document, share, and modify. We show how to apply functions from other R packages to phyloseq-represented data, illustrating the availability of a large number of open source analysis techniques. We discuss the use of phyloseq with tools for reproducible research, a practice common in other fields but still rare in the analysis of highly parallel microbiome census data. We have made available all of the materials necessary to completely reproduce the analysis and figures included in this article, an example of best practices for reproducible research.\n\n\nCONCLUSIONS\nThe phyloseq project for R is a new open-source software package, freely available on the web from both GitHub and Bioconductor.", "title": "" } ]
[ { "docid": "f322c2d3ab7db46feeceec2a6336cf6b", "text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. The existing spatially regularized discriminative correlation filter (SRDCF) method learns partial-target information or background information when experiencing rotation, out of view, and heavy occlusion. In order to reduce the computational complexity by creating a novel method to enhance tracking ability, we first introduce an adaptive dimensionality reduction technique to extract the features from the image, based on pre-trained VGG-Net. We then propose an adaptive model update to assign weights during an update procedure depending on the peak-to-sidelobe ratio. Finally, we combine the online SRDCF-based tracker with the offline Siamese tracker to accomplish long term tracking. Experimental results demonstrate that the proposed tracker has satisfactory performance in a wide range of challenging tracking scenarios.", "title": "" }, { "docid": "de4c44363fd6bb6da7ec0c9efd752213", "text": "Modeling the structure of coherent texts is a task of great importance in NLP. The task of organizing a given set of sentences into a coherent order has been commonly used to build and evaluate models that understand such structure. In this work we propose an end-to-end neural approach based on the recently proposed set to sequence mapping framework to address the sentence ordering problem. Our model achieves state-of-the-art performance in the order discrimination task on two datasets widely used in the literature. We also consider a new interesting task of ordering abstracts from conference papers and research proposals and demonstrate strong performance against recent methods. Visualizing the sentence representations learned by the model shows that the model has captured high level logical structure in these paragraphs. The model also learns rich semantic sentence representations by learning to order texts, performing comparably to recent unsupervised representation learning methods in the sentence similarity and paraphrase detection tasks.", "title": "" }, { "docid": "d4a893a151ce4a3dee0e5fde0ba11b7b", "text": "Software-Defined Radio (SDR) technology has already cleared up passive radar applications. Nevertheless, until now, no work has pointed how this flexible radio could fully and directly exploit pulsed radar signals. This paper aims at introducing this field of study presenting not only an SDR-based radar-detector but also how it could be conceived on a low power consumption device as a tablet, which would make convenient a passive network to identify and localize aircraft as a redundancy to the conventional air traffic control in adverse situations. After a brief approach of the main features of the equipment, as well as of the developed processing script, indoor experiments took place. Their results demonstrate that the processing of pulsed radar signal allows emitters to be identified when a local database is confronted. All this commitment has contributed to a greater proposal of an Electronic Intelligence (ELINT) or Electronic Support Measures (ESM) system embedded on a tablet, presenting characteristics of portability and furtiveness. 
This study is suggested for the areas of Software-Defined Radio, Electronic Warfare, Electromagnetic Devices and Radar Signal Processing.", "title": "" }, { "docid": "c1349662f18e4744920c7f7db93360e7", "text": "We present an approach to learning features that represent the local geometry around a point in an unstructured point cloud. Such features play a central role in geometric registration, which supports diverse applications in robotics and 3D vision. Current state-of-the-art local features for unstructured point clouds have been manually crafted and none combines the desirable properties of precision, compactness, and robustness. We show that features with these properties can be learned from data, by optimizing deep networks that map high-dimensional histograms into low-dimensional Euclidean spaces. The presented approach yields a family of features, parameterized by dimension, that are both more compact and more accurate than existing descriptors.", "title": "" }, { "docid": "50e58087f9a02a4a2d828b9434bdea17", "text": "ÐThis paper concerns an efficient algorithm for the solution of the exterior orientation problem. Orthogonal decompositions are used to first isolate the unknown depths of feature points in the camera reference frame, allowing the problem to be reduced to an absolute orientation with scale problem, which is solved using the SVD. The key feature of this approach is the low computational cost compared to existing approaches. Index TermsÐExterior orientation, pose estimation, absolute orientation, efficient linear method.", "title": "" }, { "docid": "95a380b670afe52b86aa905d7b6e5452", "text": "Objectives To determine the effectiveness of replacing restorations considered to be the cause of an oral lichenoid lesion (oral lichenoid reaction)(OLL).Design Clinical intervention and nine-month follow up.Setting The study was carried out in the University Dental Hospital of Manchester, 1998-2002.Subjects and methods A total of 51 patients, mean age 53 (SD 13) years, who had oral lesions or symptoms suspected to be related to their dental restorations were investigated. Baseline patch tests for a series of dental materials, biopsies and photographs were undertaken. Thirty-nine out of 51 (76%) of patients had their restorations replaced.Results The clinical manifestations of OLL were variable; the majority of OLL were found to be in the molar and retro molar area of the buccal mucosa and the tongue. Twenty-seven (53%) patients had positive patch test reactions to at least one material, 24 of them for one or more mercury compound. After a mean follow up period of nine months, lesions adjacent to replaced restorations completely healed in 16 (42%) patients (10 positive and 6 negative patch tests). Improvement in signs and symptoms were found in 18 (47%) patients (11 positive and 7 negative patch tests).Conclusion OLLs may be elicited by some dental restorations. Replacing restorations adjacent to these lesions is associated with healing in the majority of cases particularly when lesions are in close contact with restorations. A patch test seems to be of limited benefit as a predictor of such reactions.", "title": "" }, { "docid": "0d2a8165acbd9413a0d1e7da9a825c93", "text": "Psychological studies of categorization often assume that all concepts are of the same general kind, and are operated on by the same kind of categorization process. In this paper, we argue against this unitary view, and for the existence of qualitatively different categorization processes. 
In particular, we focus on the distinction between categorizing an item by: (a) applying a category-defining rule to the item vs. (b) determining the similarity of that item to remembered exemplars of a category. We begin by characterizing rule application and similarity computations as strategies of categorization. Next, we review experimental studies that have used artificial categories and shown that differences in instructions or time pressure can lead to either rule-based categorization or similarity-based categorization. Then we consider studies that have used natural concepts and again demonstrated that categorization can be done by either rule application or similarity calculations. Lastly, we take up evidence from cognitive neuroscience relevant to the rule vs. similarity issue. There is some indirect evidence from brain-damaged patients for neurological differences between categorization based on rules vs. that based on similarity (with the former involving frontal regions, and the latter relying more on posterior areas). For more direct evidence, we present the results of a recent neuroimaging experiment, which indicates that different neural circuits are involved when people categorize items on the basis of a rule as compared with when they categorize the same items on the basis of similarity.", "title": "" }, { "docid": "16e1174454d62c69d831effce532bcad", "text": "We report on the quantitative determination of acetaminophen (paracetamol; NAPAP-d(0)) in human plasma and urine by GC-MS and GC-MS/MS in the electron-capture negative-ion chemical ionization (ECNICI) mode after derivatization with pentafluorobenzyl (PFB) bromide (PFB-Br). Commercially available tetradeuterated acetaminophen (NAPAP-d(4)) was used as the internal standard. NAPAP-d(0) and NAPAP-d(4) were extracted from 100-μL aliquots of plasma and urine with 300 μL ethyl acetate (EA) by vortexing (60s). After centrifugation the EA phase was collected, the solvent was removed under a stream of nitrogen gas, and the residue was reconstituted in acetonitrile (MeCN, 100 μL). PFB-Br (10 μL, 30 vol% in MeCN) and N,N-diisopropylethylamine (10 μL) were added and the mixture was incubated for 60 min at 30 °C. Then, solvents and reagents were removed under nitrogen and the residue was taken up with 1000 μL of toluene, from which 1-μL aliquots were injected in the splitless mode. GC-MS quantification was performed by selected-ion monitoring ions due to [M-PFB](-) and [M-PFB-H](-), m/z 150 and m/z 149 for NAPAP-d(0) and m/z 154 and m/z 153 for NAPAP-d(4), respectively. GC-MS/MS quantification was performed by selected-reaction monitoring the transition m/z 150 → m/z 107 and m/z 149 → m/z 134 for NAPAP-d(0) and m/z 154 → m/z 111 and m/z 153 → m/z 138 for NAPAP-d(4). The method was validated for human plasma (range, 0-130 μM NAPAP-d(0)) and urine (range, 0-1300 μM NAPAP-d(0)). Accuracy (recovery, %) ranged between 89 and 119%, and imprecision (RSD, %) was below 19% in these matrices and ranges. A close correlation (r>0.999) was found between the concentrations measured by GC-MS and GC-MS/MS. By this method, acetaminophen can be reliably quantified in small plasma and urine sample volumes (e.g., 10 μL). 
The analytical performance of the method makes it especially useful in pediatrics.", "title": "" }, { "docid": "802af4a1179602c086c4bbf73208ce16", "text": "BACKGROUND\nWe undertook a feasibility study to evaluate feasibility and utility of short message services (SMSs) to support Iraqi adults with newly diagnosed type 2 diabetes.\n\n\nSUBJECTS AND METHODS\nFifty patients from a teaching hospital clinic in Basrah in the first year after diagnosis were recruited to receive weekly SMSs relating to diabetes self-management over 29 weeks. Numbers of messages received, acceptability, cost, effect on glycated hemoglobin (HbA1c), and diabetes knowledge were documented.\n\n\nRESULTS\nForty-two patients completed the study, receiving an average 22 of 28 messages. Mean knowledge score rose from 8.6 (SD 1.5) at baseline to 9.9 (SD 1.4) 6 months after receipt of SMSs (P=0.002). Baseline and 6-month knowledge scores correlated (r=0.297, P=0.049). Mean baseline HbA1c was 79 mmol/mol (SD 14 mmol/mol) (9.3% [SD 1.3%]) and decreased to 70 mmol/mol (SD 13 mmol/mol) (8.6% [SD 1.2%]) (P=0.001) 6 months after the SMS intervention. Baseline and 6-month values were correlated (r=0.898, P=0.001). Age, gender, and educational level showed no association with changes in HbA1c or knowledge score. Changes in knowledge score were correlated with postintervention HbA1c (r=-0.341, P=0.027). All patients were satisfied with text messages and wished the service to be continued after the study. The cost of SMSs was €0.065 per message.\n\n\nCONCLUSIONS\nThis study demonstrates SMSs are acceptable, cost-effective, and feasible in supporting diabetes care in the challenging, resource-poor environment of modern-day Iraq. This study is the first in Iraq to demonstrate similar benefits of this technology on diabetes education and management to those seen from its use in better-resourced parts of the world. A randomized controlled trial is needed to assess precise benefits on self-care and knowledge.", "title": "" }, { "docid": "c3e63d82514b9e9b1cc172ea34f7a53e", "text": "Deep Learning is one of the next big things in Recommendation Systems technology. The past few years have seen the tremendous success of deep neural networks in a number of complex machine learning tasks such as computer vision, natural language processing and speech recognition. After its relatively slow uptake by the recommender systems community, deep learning for recommender systems became widely popular in 2016.\n We believe that a tutorial on the topic of deep learning will do its share to further popularize the topic. Notable recent application areas are music recommendation, news recommendation, and session-based recommendation. The aim of the tutorial is to encourage the application of Deep Learning techniques in Recommender Systems, to further promote research in deep learning methods for Recommender Systems.", "title": "" }, { "docid": "98c72706e0da844c80090c1ed5f3abeb", "text": "Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can “interpolate”: By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. 
In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.", "title": "" }, { "docid": "e678405fd86a3d8a52ecf779ea11758b", "text": "The high carrier mobility of graphene has been exploited in field-effect transistors that operate at high frequencies. Transistors were fabricated on epitaxial graphene synthesized on the silicon face of a silicon carbide wafer, achieving a cutoff frequency of 100 gigahertz for a gate length of 240 nanometers. The high-frequency performance of these epitaxial graphene transistors exceeds that of state-of-the-art silicon transistors of the same gate length.", "title": "" }, { "docid": "58390e457d03dfec19b0ae122a7c0e0b", "text": "A single-fed CP stacked patch antenna is proposed to cover all the GPS bands, including E5a/E5b for the Galileo system. The small aperture size (lambda/8 at the L5 band) and the single feeding property make this antenna a promising element for small GPS arrays. The design procedures and antenna performances are presented, and issues related to coupling between array elements are discussed.", "title": "" }, { "docid": "1f5758d1c9b470c9fa1c30e72f1257b7", "text": "This study aimed to describe the social and cultural etiology of violence against women in Jordan. A sample of houses was randomly selected from all 12 Governorates in Jordan, resulting in a final sample of 1,854 randomly selected women. ANOVA analysis showed significant differences in violence against women as a result of women’s education, F = 4.045, α = 0.003, women who work, F = 3.821, α = 0.001, espouser to violence F = 17.896, α = 0.000, experiencing violence during childhood F = 12.124, α = 0.000, and wife’s propensity to leave the marital relationship F = 12.124, α = 0.000. However, no differences were found in violence against women because of the husband’s education, husband’s work, or having friends who belief in physical punishment of kids. Findings showed women experienced 45 % or witnessed 55 % violence during their childhood. Almost all 98 % of the sample was subjected to at least one type of violence. Twenty-eight percent of the sample believed a husband has the right to control a woman’s behavior and 93 % believed a wife is obliged to obey a husband. After each abusive incidence, women felt insecure, ashamed, frightened, captive and stigmatized.", "title": "" }, { "docid": "72b7e2f1c960d0c5da639fca74aa188a", "text": "Some previous studies (e.g. that carried out by Van Bruggen et al. in 2004) have pointed to a need for additional research in order to firmly establish the usefulness of LSA (latent semantic analysis) parameters for automatic evaluation of academic essays. The extreme variability in approaches to this technique makes it difficult to identify the most efficient parameters and the optimum combination. With this goal in mind, we conducted a high spectrum study to investigate the efficiency of some of the major LSA parameters in small-scale corpora. 
We used two specific domain corpora that differed in the structure of the text (one containing only technical terms and the other with more tangential information). Using these corpora we tested different semantic spaces, formed by applying different parameters and different methods of comparing the texts. Parameters varied included weighting functions (Log-IDF or Log-Entropy), dimensionality reduction (truncating the matrices after SVD to a set percentage of dimensions), methods of forming pseudo-documents (vector sum and folding-in) and measures of similarity (cosine or Euclidean distances). We also included two groups of essays to be graded, one written by experts and other by non-experts. Both groups were evaluated by three human graders and also by LSA. We extracted the correlations of each LSA condition with human graders, and conducted an ANOVA to analyse which parameter combination correlates best. Results suggest that distances are more efficient in academic essay evaluation than cosines. We found no clear evidence that the classical LSA protocol works systematically better than some simpler version (the classical protocol achieves the best performance only for some combinations of parameters in a few cases), and found that the benefits of reducing dimensionality arise only when the essays are introduced into semantic spaces using the folding-in method. *Address correspondence to: José Antonio León, Dpto. de Psicologı́a Básica, Facultad de Psicologı́a, Universidad Autónoma de Madrid, Campus de Cantoblanco, 28049 Madrid, Spain. Tel.: 0034 914975226. Fax: 0034 914975215. E-mail: joseantonio.leon@uam.es Journal of Quantitative Linguistics 2010, Volume 17, Number 1, pp. 1–29 DOI: 10.1080/09296170903395890 0929-6174/10/17010001 2010 Taylor & Francis", "title": "" }, { "docid": "39180c1e2636a12a9d46d94fe3ebfa65", "text": "We present a novel machine learning based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation, and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.", "title": "" }, { "docid": "ec2a377d643326c5e7f64f6f01f80a04", "text": "October 2006 | Volume 3 | Issue 10 | e294 Cultural competency has become a fashionable term for clinicians and researchers. Yet no one can defi ne this term precisely enough to operationalize it in clinical training and best practices. It is clear that culture does matter in the clinic. 
Cultural factors are crucial to diagnosis, treatment, and care. They shape health-related beliefs, behaviors, and values [1,2]. But the large claims about the value of cultural competence for the art of professional care-giving around the world are simply not supported by robust evaluation research showing that systematic attention to culture really improves clinical services. This lack of evidence is a failure of outcome research to take culture seriously enough to routinely assess the cost-effectiveness of culturally informed therapeutic practices, not a lack of effort to introduce culturally informed strategies into clinical settings [3].", "title": "" }, { "docid": "fda80f2f0eb57a101dde880b48a80ba4", "text": "In this paper, we analyze and compare existing human grasp taxonomies and synthesize them into a single new taxonomy (dubbed “The GRASP Taxonomy” after the GRASP project funded by the European Commission). We consider only static and stable grasps performed by one hand. The goal is to extract the largest set of different grasps that were referenced in the literature and arrange them in a systematic way. The taxonomy provides a common terminology to define human hand configurations and is important in many domains such as human-computer interaction and tangible user interfaces where an understanding of the human is basis for a proper interface. Overall, 33 different grasp types are found and arranged into the GRASP taxonomy. Within the taxonomy, grasps are arranged according to 1) opposition type, 2) the virtual finger assignments, 3) type in terms of power, precision, or intermediate grasp, and 4) the position of the thumb. The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition. We also show that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.", "title": "" }, { "docid": "854bd77e534e0bb53953edb708c867b1", "text": "About 60-GHz millimeter wave (mmWave) unlicensed frequency band is considered as a key enabler for future multi-Gbps WLANs. IEEE 802.11ad (WiGig) standard has been ratified for 60-GHz wireless local area networks (WLANs) by only considering the use case of peer to peer (P2P) communication coordinated by a single WiGig access point (AP). However, due to 60-GHz fragile channel, multiple number of WiGig APs should be installed to fully cover a typical target environment. Nevertheless, the exhaustive search beamforming training and the maximum received power-based autonomous users association prevent WiGig APs from establishing optimal WiGig concurrent links using random access. In this paper, we formulate the problem of WiGig concurrent transmissions in random access scenarios as an optimization problem, and then we propose a greedy scheme based on (2.4/5 GHz) Wi-Fi/(60 GHz) WiGig coordination to find out a suboptimal solution for it. In the proposed WLAN, the wide coverage Wi-Fi band is used to provide the control signalling required for launching the high date rate WiGig concurrent links. Besides, statistical learning using Wi-Fi fingerprinting is utilized to estimate the suboptimal candidate AP along with its suboptimal beam direction for establishing the WiGig concurrent link without causing interference to the existing WiGig data links while maximizing the total system throughput. 
Numerical analysis confirms the high impact of the proposed Wi-Fi/WiGig coordinated WLAN.", "title": "" } ]
scidocsrr
8d98fe0e406c631774650d22a447506e
A Look at Parsing and Its Applications
[ { "docid": "1e17455be47fd697a085c8006f5947e9", "text": "We present a simple, but surprisingly effective, method of self-training a two-phase parser-reranker system using readily available unlabeled data. We show that this type of bootstrapping is possible for parsing when the bootstrapped parses are processed by a discriminative reranker. Our improved model achieves an f-score of 92.1%, an absolute 1.1% improvement (12% error reduction) over the previous best result for Wall Street Journal parsing. Finally, we provide some analysis to better understand the phenomenon.", "title": "" } ]
[ { "docid": "411ac34baae4a8f5358dfdad6df8e800", "text": "Bluetooth plays a major role in expanding global spread of wireless technology. This predominantly happens through Bluetooth enabled mobile phones, which cover almost 60% of the Bluetooth market. Although Bluetooth mobile phones are equipped with built-in security modes and policies, intruders compromise mobile phones through existing security vulnerabilities and limitations. Information stored in mobile phones, whether it is personal or corporate, is significant to mobile phone users. Hence, the need to protect information, as well as alert mobile phone users of their incoming connections, is vital. An additional security mechanism was therefore conceptualized, at the mobile phone's user level, which is essential in improving the security. Bluetooth Logging Agent (BLA) is a mechanism that has been developed for this purpose. It alleviates the current security issues by making the users aware of their incoming Bluetooth connections and gives them an option to either accept or reject these connections. Besides this, the intrusion detection and verification module uses databases and rules to authenticate and verify all connections. BLA when compared to the existing security solutions is novel and unique in that it is equipped with a Bluetooth message logging module. This logging module reduces the security risks by monitoring the Bluetooth communication between the mobile phone and the remote device.", "title": "" }, { "docid": "ed6e66574544a2217ff805ac5fb9c9ce", "text": "Data deduplication has been widely adopted in contemporary backup storage systems. It not only saves storage space considerably, but also shortens the data backup time significantly. Since the major goal of the original data deduplication lies in saving storage space, its design has been focused primarily on improving write performance by removing as many duplicate data as possible from incoming data streams. Although fast recovery from a system crash relies mainly on read performance provided by deduplication storage, little investigation into read performance improvement has been made. In general, as the amount of deduplicated data increases, write performance improves accordingly, whereas associated read performance becomes worse. In this paper, we newly propose a deduplication scheme that assures demanded read performance of each data stream while achieving its write performance at a reasonable level, eventually being able to guarantee a target system recovery time. For this, we first propose an indicator called cache aware Chunk Fragmentation Level (CFL) that estimates degraded read performance on the fly by taking into account both incoming chunk information and read cache effects. We also show a strong correlation between this CFL and read performance in the backup datasets. In order to guarantee demanded read performance expressed in terms of a CFL value, we propose a read performance enhancement scheme called selective duplication that is activated whenever the current CFL becomes worse than the demanded one. The key idea is to judiciously write non-unique (shared) chunks into storage together with unique chunks unless the shared chunks exhibit good enough spatial locality. We quantify the spatial locality by using a selective duplication threshold value. 
Our experiments with the actual backup datasets demonstrate that the proposed scheme achieves demanded read performance in most cases at the reasonable cost of write performance.", "title": "" }, { "docid": "c258ca8e7c9d351fc8e380b0af0a529e", "text": "Pervasive technology devices that intend to be worn must not only meet our functional requirements but also our social, emotional, and aesthetic needs. Current pervasive devices such as the PDA or cell phone are more portable than wearable, yet still they elicit strong consumer demand for intuitive interfaces and well-designed forms. Looking to the future of wearable pervasive devices, we can imagine an even greater demand for meaningful forms for objects nestled so close to our bodies. They will need to reflect our tastes and moods, and allow us to express our personalities, cultural beliefs, and values. Digital Jewelry explores a new wearable technology form that is based in jewelry design, not in technology. Through prototypes and meaningful scenarios, digital jewelry offers new ideas to consider in the design of wearable devices.", "title": "" }, { "docid": "4bb98ac4501d3c481aa760c61417730f", "text": "Among different recommendation techniques, collaborative filtering usually suffer from limited performance due to the sparsity of user-item interactions. To address the issues, auxiliary information is usually used to boost the performance. Due to the rapid collection of information on the web, the knowledge base provides heterogeneous information including both structured and unstructured data with different semantics, which can be consumed by various applications. In this paper, we investigate how to leverage the heterogeneous information in a knowledge base to improve the quality of recommender systems. First, by exploiting the knowledge base, we design three components to extract items' semantic representations from structural content, textual content and visual content, respectively. To be specific, we adopt a heterogeneous network embedding method, termed as TransR, to extract items' structural representations by considering the heterogeneity of both nodes and relationships. We apply stacked denoising auto-encoders and stacked convolutional auto-encoders, which are two types of deep learning based embedding techniques, to extract items' textual representations and visual representations, respectively. Finally, we propose our final integrated framework, which is termed as Collaborative Knowledge Base Embedding (CKE), to jointly learn the latent representations in collaborative filtering as well as items' semantic representations from the knowledge base. To evaluate the performance of each embedding component as well as the whole system, we conduct extensive experiments with two real-world datasets from different scenarios. The results reveal that our approaches outperform several widely adopted state-of-the-art recommendation methods.", "title": "" }, { "docid": "6a143e9aab34836fc34ffcd6cc9d1096", "text": "MOTIVATION\nDNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory due to the lack of a systematic framework that can accommodate noise, variability, and low replication often typical of microarray data.\n\n\nRESULTS\nWe develop a Bayesian probabilistic framework for microarray data analysis. 
At the simplest level, we model log-expression values by independent normal distributions, parameterized by corresponding means and variances with hierarchical prior distributions. We derive point estimates for both parameters and hyperparameters, and regularized expressions for the variance of each gene by combining the empirical variance with a local background variance associated with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. Simulations show that these point estimates, combined with a t-test, provide a systematic inference approach that compares favorably with simple t-test or fold methods, and partly compensate for the lack of replication.", "title": "" }, { "docid": "b60555d52e5a8772ba128b184ec6de73", "text": "Standardized 32-bit Cyclic Redundancy Codes provide fewer bits of guaranteed error detection than they could, achieving a Hamming Distance (HD) of only 4 for maximum-length Ethernet messages, whereas HD=6 is possible. Although research has revealed improved codes, exploring the entire design space has previously been computationally intractable, even for special-purpose hardware. Moreover, no CRC polynomial has yet been found that satisfies an emerging need to attain both HD=6 for 12K bit messages and HD=4 for message lengths beyond 64K bits. This paper presents results from the first exhaustive search of the 32-bit CRC design space. Results from previous research are validated and extended to include identifying all polynomials achieving a better HD than the IEEE 802.3 CRC-32 polynomial. A new class of polynomials is identified that provides HD=6 up to nearly 16K bit and HD=4 up to 114K bit message lengths, providing the best achievable design point that maximizes error detection for both legacy and new applications, including potentially iSCSI and application-implemented error checks.", "title": "" }, { "docid": "1314d95642bcb00529f8ef7288fcfce0", "text": "In this paper, we propose a new MAP method more suitable for low signal-to-noise ratio (SNR) measurements. Unlike the conventional MAP method, we treat the projection space as a Gibbs field, and the penalty term we use is defined in projection space. The spatial resolution of our method was studied, and we further modified the method to obtain nearly spatially invariant resolution. Both simulated data and real clinical data were used to validate the method, and future work is discussed at the end of the paper.", "title": "" }, { "docid": "ff7201f0a0d879239c80306d5675bd8b", "text": "BACKGROUND\nInformation in health administrative databases increasingly guides renal care and policy.\n\n\nSTUDY DESIGN\nSystematic review of observational studies.\n\n\nSETTING & POPULATION\nStudies describing the validity of codes for acute kidney injury (AKI) and chronic kidney disease (CKD) in administrative databases operating in any jurisdiction.\n\n\nSELECTION CRITERIA\nAfter searching 13 medical databases, we included observational studies published from database inception through June 2009 that validated renal diagnostic and procedural codes for AKI or CKD against a reference standard.\n\n\nINDEX TESTS\nRenal diagnostic or procedural administrative data codes.\n\n\nREFERENCE TESTS\nPatient chart review, laboratory values, or data from a high-quality patient registry.\n\n\nRESULTS\n25 studies of 13 databases in 4 countries were included. 
Validation of diagnostic and procedural codes for AKI was present in 9 studies, and validation for CKD was present in 19 studies. Sensitivity varied across studies and generally was poor (AKI median, 29%; range, 15%-81%; CKD median, 41%; range, 3%-88%). Positive predictive values often were reasonable, but results also were variable (AKI median, 67%; range, 15%-96%; CKD median, 78%; range, 29%-100%). Defining AKI and CKD by only the use of dialysis generally resulted in better code validity. The study characteristic associated with sensitivity in multivariable meta-regression was whether the reference standard used laboratory values (P < 0.001); sensitivity was 39% lower when laboratory values were used (95% CI, 23%-56%).\n\n\nLIMITATIONS\nMissing data in primary studies limited some of the analyses that could be done.\n\n\nCONCLUSIONS\nAdministrative database analyses have utility, but must be conducted and interpreted judiciously to avoid bias arising from poor code validity.", "title": "" }, { "docid": "7fdc12cbaa29b1f59d2a850a348317b7", "text": "Arhinia is a rare condition characterised by the congenital absence of nasal structures, with different patterns of presentation, and often associated with other craniofacial or somatic anomalies. To date, about 30 surviving cases have been reported. We report the case of a female patient aged 6 years, who underwent internal and external nose reconstruction using a staged procedure: a nasal airway was obtained through maxillary osteotomy and ostectomy, and lined with a local skin flap and split-thickness skin grafts; then the external nose was reconstructed with an expanded frontal flap, armed with an autogenous rib framework.", "title": "" }, { "docid": "12d31865b311f0ad88ef7dd694a2cfc1", "text": "With the advance of wireless communication systems and the increasing importance of other wireless applications, wideband and low-profile antennas are in great demand for both commercial and military applications. Multi-band and wideband antennas are desirable in personal communication systems, small satellite communication terminals, and other wireless applications. Wideband antennas also find applications in Unmanned Aerial Vehicles (UAVs), Counter Camouflage, Concealment and Deception (CC&D), Synthetic Aperture Radar (SAR), and Ground Moving Target Indicators (GMTI). Some of these applications also require that an antenna be embedded into the airframe structure. Traditionally, a wideband antenna in the low-frequency wireless bands can only be achieved with heavily loaded wire antennas, which usually means different antennas are needed for different frequency bands. Recent progress in the study of fractal antennas suggests some attractive solutions for using a single small antenna operating in several frequency bands. The purpose of this article is to introduce the concept of the fractal, review the progress in fractal antenna study and implementation, compare different types of fractal antenna elements and arrays, and discuss the challenges and future of this new type of antenna.", "title": "" }, { "docid": "32c5bbc07cba1aac769ee618e000a4a5", "text": "In this paper we present Jimple, a 3-address intermediate representation that has been designed to simplify analysis and transformation of Java bytecode. We motivate the need for a new intermediate representation by illustrating several difficulties with optimizing the stack-based Java bytecode directly. 
In general, these difficulties are due to the fact that bytecode instructions affect an expression stack, and thus have implicit uses and definitions of stack locations. We propose Jimple as an alternative representation, in which each statement refers explicitly to the variables it uses. We provide both the definition of Jimple and a complete procedure for translating from Java bytecode to Jimple. This definition and translation have been implemented using Java, and finally we show how this implementation forms the heart of the Sable research projects.", "title": "" }, { "docid": "c7d23af5ad79d9863e83617cf8bbd1eb", "text": "Insulin resistance has long been associated with obesity. More than 40 years ago, Randle and colleagues postulated that lipids impaired insulin-stimulated glucose use by muscles through inhibition of glycolysis at key points. However, work over the past two decades has shown that lipid-induced insulin resistance in skeletal muscle stems from defects in insulin-stimulated glucose transport activity. The steatotic liver is also resistant to insulin in terms of inhibition of hepatic glucose production and stimulation of glycogen synthesis. In muscle and liver, the intracellular accumulation of lipids (namely, diacylglycerol) triggers activation of novel protein kinases C with subsequent impairments in insulin signalling. This unifying hypothesis accounts for the mechanism of insulin resistance in obesity, type 2 diabetes, lipodystrophy, and ageing; and the insulin-sensitising effects of thiazolidinediones.", "title": "" }, { "docid": "152701c9297aeaa4eb7d7891e6a08d8a", "text": "End user satisfaction (EUS) is critical to successful information systems implementation. Many EUS studies in the past have attempted to identify the antecedents of EUS, yet most of the relationships found have been criticized for their lack of a strong theoretical underpinning. Today it is generally understood that IS failure is due to psychological and organizational issues rather than technological issues, hence individual differences must be addressed. This study proposes a new model with the objective of extending our understanding of the antecedents of EUS by incorporating three well-founded theories of motivation, namely expectation theory, needs theory, and equity theory. The uniqueness of the model not only recognizes the three different needs (i.e., work performance, relatedness, and self-development) that users may have with IS use, but also the corresponding inputs required from each individual to achieve those needs fulfillments, which have been ignored in most previous studies. This input/needs fulfillment ratio, referred to as equitable needs fulfillment, is likely to vary from one individual to another, and satisfaction will only result for a user if the needs being fulfilled are perceived as “worthy” to obtain. The partial least squares (PLS) method of structural equation modeling was used to analyze 922 survey returns collected from the hotel and airline sectors. The results of the study show that IS end users do have different needs. Equitable work performance fulfillment and equitable relatedness fulfillment play a significant role in affecting the satisfaction of end users. 
The results also indicate that the impact of perceived IS performance expectations on EUS is not as significant as most previous studies have suggested. The conclusion is that merely focusing on the technical soundness of the IS and the way in which it benefits employees may not be sufficient. Rather, the input requirements of users for achieving the corresponding needs fulfillments also need to be examined.", "title": "" }, { "docid": "0ccbc8579a1d6e39c92f8a7acea979bd", "text": "In mental health, the term ‘recovery’ is commonly used to refer to the lived experience of the person coming to terms with, and overcoming the challenges associated with, having a mental illness (Shepherd et al 2008). The term ‘recovery’ has evolved as having a special meaning for mental health service users (Andresen et al 2003) and consistently refers to their personal experiences and expectations for recovery (Slade et al 2008). On the other hand, mental health service providers often refer to a ‘recovery’ framework in order to promote their service (Meehan et al 2008). However, practitioners lean towards a different meaning-in-use, which is better described as ‘clinical recovery’ and is measured routinely in terms of symptom profiles, health service utilisation, health outcomes and global assessments of functioning. These very different meanings-in-use of the same term have the potential to cause considerable confusion to readers of the mental health literature. Researchers have recently identified an urgent need to clarify the recovery concept so that a common meaning can be established and the construct can be defined operationally (Meehan et al 2008, Slade et al 2008). This paper aims to delineate a construct of recovery that can be applied operationally and consistently in mental health. The criteria were twofold: 1. the dimensions need to have a parsimonious and near mutually exclusive internal structure; 2. all stakeholder perspectives and interests, including those of the wider community, need to be accommodated. With these criteria in mind, the literature was revisited to identify possible domains. It was subsequently identified that the recovery literature can be reclassified into components that accommodate the views of service users, practitioners, rehabilitation providers, family and carers, and the wider community. The recovery dimensions identified were clinical recovery, personal recovery, social recovery and functional recovery. Recovery as a concept has gained increased attention in the field of mental health. There is an expectation that service providers use a recovery framework in their work. This raises the question of what recovery means, and how it is conceptualised and operationalised. It is proposed that service providers approach the application of recovery principles by considering systematically individual recovery goals in multiple domains, encompassing clinical recovery, personal recovery, social recovery and functional recovery. This approach enables practitioners to focus on service users’ personal recovery goals while considering parallel goals in the clinical, social, and role-functioning domains. Practitioners can reconceptualise recovery as involving more than symptom remission, and interventions can be tailored to aspects of recovery of importance to service users. 
In order to accomplish this shift, practitioners will require effective assessments, access to optimal treatment and care, and the capacity to conduct recovery planning in collaboration with service users and their families and carers. Mental health managers can help by fostering an organisational culture of service provision that supports a broader focus than that on clinical recovery alone, extending to client-centred recovery planning in multiple recovery domains.", "title": "" }, { "docid": "8df0970ccf314018874ed3f877ec607e", "text": "In graph-based simultaneous localization and mapping, the pose graph grows over time as the robot gathers information about the environment. An ever-growing pose graph, however, prevents long-term mapping with mobile robots. In this paper, we address the problem of efficient information-theoretic compression of pose graphs. Our approach estimates the mutual information between the laser measurements and the map to discard the measurements that are expected to provide only a small amount of information. Our method subsequently marginalizes out the nodes from the pose graph that correspond to the discarded laser measurements. To maintain a sparse pose graph that allows for efficient map optimization, our approach applies an approximate marginalization technique that is based on Chow-Liu trees. Our contributions allow the robot to effectively restrict the size of the pose graph. Alternatively, the robot is able to maintain a pose graph that does not grow unless the robot explores previously unobserved parts of the environment. Real-world experiments demonstrate that our approach to pose graph compression is well suited for long-term mobile robot mapping.", "title": "" }, { "docid": "c128546da3d777d52185afbdca8afbe3", "text": "Compared with labeled data, unlabeled data are significantly easier to obtain. Currently, classification of unlabeled data is an open issue. In this paper, a novel SVM-KNN classification methodology based on semi-supervised learning is proposed; we consider the problem of using a large number of unlabeled data to boost the performance of the classifier when only a small set of labeled examples is available. We use the few labeled data to train a weak SVM classifier and make use of the boundary vectors to improve the weak SVM iteratively by introducing KNN. Using the KNN classifier not only enlarges the number of training examples, but also improves the quality of the new training examples that are transformed from the boundary vectors. Experiments on UCI data sets show that the proposed methodology can evidently improve the accuracy of the final SVM classifier by tuning the parameters, and can reduce the cost of labeling unlabeled examples.", "title": "" }, { "docid": "2cff48b7c30c310e0d334e5983ae8f1f", "text": "In this paper we introduce a low-latency monaural source separation framework using a Convolutional Neural Network (CNN). We use a CNN to estimate time-frequency soft masks which are applied for source separation. We evaluate the performance of the neural network on a database comprising musical mixtures of three instruments (voice, drums and bass) as well as other instruments which vary from song to song. The proposed architecture is compared to a Multilayer Perceptron (MLP), achieving on-par results and a significant improvement in processing time. 
The algorithm was submitted to source separation evaluation campaigns to test its efficiency, and achieved competitive results.", "title": "" }, { "docid": "c41678d57f0f44b7c834e56585456ded", "text": "Movie and TV subtitles contain large amounts of conversational material, but lack an explicit turn structure. This paper presents a data-driven approach to the segmentation of subtitles into dialogue turns. Training data is first extracted by aligning subtitles with transcripts in order to obtain speaker labels. This data is then used to build a classifier whose task is to determine whether two consecutive sentences are part of the same dialogue turn. The approach relies on linguistic, visual and timing features extracted from the subtitles themselves and does not require access to the audiovisual material, although speaker diarization can be exploited when audio data is available. The approach also exploits alignments with related subtitles in other languages to further improve the classification performance. The classifier achieves an accuracy of 78% on a held-out test set. A follow-up annotation experiment demonstrates that this task is also difficult for human annotators.", "title": "" }, { "docid": "944521c30d94122fa1dfe69105db71cd", "text": "The head-related impulse response (HRIR) characterizes the auditory cues created by scattering of sound off a person's anatomy. The experimentally measured HRIR depends on several factors such as reflections from body parts (torso, shoulder, and knees), head diffraction, and reflection/diffraction effects due to the pinna. Structural models (Algazi et al., 2002; Brown and Duda, 1998) seek to establish direct relationships between the features in the HRIR and the anatomy. While there is evidence that particular features in the HRIR can be explained by anthropometry, the creation of such models from experimental data is hampered by the fact that the extraction of the features in the HRIR is not automatic. One of the prominent features observed in the HRIR, and one that has been shown to be important for elevation perception, is the set of deep spectral notches attributed to the pinna. In this paper we propose a method to robustly extract the frequencies of the pinna spectral notches from the measured HRIR, distinguishing them from other confounding features. The method also extracts the resonances described by Shaw (1997). The techniques are applied to the publicly available CIPIC HRIR database (Algazi et al., 2001c). The extracted notch frequencies are related to the physical dimensions and shape of the pinna.", "title": "" }, { "docid": "2f174828265ace6055f83393d1357c25", "text": "Coplanar waveguide (CPW) interdigital capacitor (IDC) configurations on printed circuit board (PCB) and their parametric variations over frequency are studied by simulation using ADS Momentum. The structures are fabricated on printed circuit board using PCB fabrication techniques. The scattering parameters of the structures are measured using a vector network analyzer (VNA). The capacitance is estimated in both cases using an approximate circuit model and simulation. A comparative study of the simulated performance against the measured results is conducted.", "title": "" } ]
scidocsrr